Some clarifications from Monday’s lecture (and HW #5):

- Firstly, Akshay had a question about the Matrix Chernoff bound I mentioned in Lecture today — it seemed to have a weird scaling. The bound I stated said that for independent random $d \times d$ PSD matrices $X_1, \dots, X_n$ with $\|X_i\| \le 1$, writing $\mu := \lambda_{\max}\big(\mathbb{E}\sum_i X_i\big)$,

$$\Pr\Big[ \lambda_{\max}\Big(\sum_i X_i\Big) \ge \mu + t \Big] \le d \cdot e^{-t^2/3\mu}.$$

The worry was that if we scaled each little random variable by some $c > 1$, we would get

$$\Pr\Big[ \lambda_{\max}\Big(\sum_i c X_i\Big) \ge c\mu + ct \Big] \le d \cdot e^{-(ct)^2/3c\mu} = d \cdot e^{-c t^2/3\mu}.$$

So the numerator of the exponent would increase from $t^2$ to $c^2 t^2$, whereas the denominator would go from $3\mu$ to $3c\mu$. As $c$ gets large, we get better and better bounds for free. Super-fishy.

I went over it again, and the bound is *almost* correct: the fix is that the bound holds only for deviations $t \le \mu$. You will prove it in HW#5. (I updated HW #5 to reflect this missing upper bound on $t$; the upper bound on $t$ subtly but crucially comes into play in part 5(c).) This upper bound on $t$ means one can’t get arbitrarily better bounds for free by just scaling up the random variables.

However, you can get a little bit better: if you do try to play this scaling game, I think you can get a slightly better bound of

$$\Pr\Big[ \lambda_{\max}\Big(\sum_i X_i\Big) \ge \mu + t \Big] \le d \cdot e^{-t/3} \qquad \text{for } t \ge \mu.$$

(Follow the argument above but set the scaling $c$ so that the deviation you want is about the rescaled mean, etc.) The resulting bound would improve the number of sampled rows in the sampling-based SVD result from today’s lecture over the weaker bound I claimed.
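As a quick sanity check of a Chernoff-type tail of the form $\Pr[\lambda_{\max}(\sum_i X_i) \ge \mu + t] \le d \cdot e^{-t^2/3\mu}$ for $t \le \mu$ (this form is my reading of the bound discussed above, and the constant $3$ is illustrative), here is a small Monte-Carlo sketch in the simplest diagonal case, where each $X_i$ is a random coordinate projection $e_j e_j^{\mathsf T}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 10, 1000, 2000

# Each X_i = e_j e_j^T for a uniformly random coordinate j: PSD, ||X_i|| <= 1,
# and sum_i X_i is diagonal with multinomially distributed counts.
mu = n / d          # lambda_max of E[sum_i X_i] = (n/d) * I
t = 0.5 * mu        # a deviation inside the claimed valid range t <= mu

exceed = 0
for _ in range(trials):
    counts = np.bincount(rng.integers(0, d, size=n), minlength=d)
    if counts.max() >= mu + t:   # lambda_max of the diagonal sum = max count
        exceed += 1

emp = exceed / trials
bound = d * np.exp(-t**2 / (3 * mu))
print(f"empirical tail {emp:.4f} vs bound {bound:.4f}")
```

In this diagonal case the matrix bound collapses to a scalar Chernoff bound per coordinate plus a union bound over the $d$ coordinates, which is exactly where the dimension factor $d$ in front comes from.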

- Secondly, for any real symmetric matrix $M$, assume the eigenvalues are ordered

$$\lambda_1(M) \ge \lambda_2(M) \ge \cdots \ge \lambda_n(M).$$

The statement I wanted to prove was the following: given $A, B$ real symmetric, we have that for all $i$,

$$|\lambda_i(A+B) - \lambda_i(A)| \le \|B\|.$$

Akshay pointed out that it follows from the Weyl inequalities. These say that for all integers $i, j \ge 1$ such that $i + j - 1 \le n$,

$$\lambda_{i+j-1}(A+B) \le \lambda_i(A) + \lambda_j(B).$$

Hence setting $j = 1$, and applying this to the matrices $A$ and $B$,

$$\lambda_i(A+B) \le \lambda_i(A) + \lambda_1(B) \le \lambda_i(A) + \|B\|.$$

Similarly setting $A' = A+B$ and $B' = -B$, we get

$$\lambda_i(A) = \lambda_i(A' + B') \le \lambda_i(A+B) + \lambda_1(-B) \le \lambda_i(A+B) + \|B\|.$$

Hence,

$$|\lambda_i(A+B) - \lambda_i(A)| \le \|B\|.$$
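Both the Weyl inequalities and the perturbation bound they imply are easy to sanity-check numerically. The following sketch (using NumPy, with a small tolerance for floating-point error) tests them on random symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # random real symmetric
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

def eigs_desc(M):
    """Eigenvalues of a symmetric matrix in descending order."""
    return np.sort(np.linalg.eigvalsh(M))[::-1]

lam_A, lam_B, lam_AB = eigs_desc(A), eigs_desc(B), eigs_desc(A + B)
op_B = np.abs(np.linalg.eigvalsh(B)).max()           # operator norm of B

# Weyl: lambda_{i+j-1}(A+B) <= lambda_i(A) + lambda_j(B); with 0-indexed
# arrays the left-hand index i+j-1 becomes i0 + j0.
weyl_ok = all(lam_AB[i + j] <= lam_A[i] + lam_B[j] + 1e-9
              for i in range(n) for j in range(n) if i + j < n)

# Corollary: |lambda_i(A+B) - lambda_i(A)| <= ||B|| for every i.
gap = np.abs(lam_AB - lam_A).max()
print(weyl_ok, gap <= op_B + 1e-9)
```

Note that for a symmetric matrix $\|B\| = \max(|\lambda_1(B)|, |\lambda_n(B)|)$, which is what the `op_B` line computes.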

- John found Rajendra Bhatia’s book *Matrix Analysis* quite readable.
- A clarification asked after class: all matrices in this course are indeed matrices over the reals.
- There are minor fixes to the HW, in particular to problem #5. Please look at the latest file online, changes are marked in red.
