 Nicholas asked how, in the SoS example where we wanted to write the polynomial as a sum of square polynomials, we could quickly restrict ourselves to . Here's one way. Of course, in general we have to consider the vector , since the squares of these monomials span . This means we must consider a much larger (i.e., ) matrix , and try to write with . But observe two simple facts:
 The first three diagonal terms of are zero, by inspection.
 The bottom principal minor is just our old matrix.
Now we claim that if some diagonal entry $a_{ii}$ of a PSD matrix is zero, then all entries in that row and column are zero too. Indeed, if $a_{ii} = 0$, then for any $j$, the nonnegativity of the determinants of the principal minors (see below), applied to the $2 \times 2$ minor on rows and columns $\{i, j\}$, says $a_{ii} a_{jj} - a_{ij}^2 \geq 0$, so $a_{ij} = 0$. Hence, the problem just collapses to finding such that
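As a quick numerical sanity check of this claim (a sketch using numpy and a made-up PSD matrix, not the one from the example):

```python
import numpy as np

# Hypothetical PSD matrix with a zero diagonal entry: build A = B^T B
# where the first column of B is zero, so A[0,0] = 0.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
B[:, 0] = 0.0
A = B.T @ B          # PSD by construction, with A[0,0] == 0

# The 2x2 principal-minor argument forces the whole row and column to vanish.
assert np.isclose(A[0, 0], 0.0)
assert np.allclose(A[0, :], 0.0) and np.allclose(A[:, 0], 0.0)
```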
which we saw a solution for. Indeed, if we set
then the eigenvalues are , with eigenvectors
Since , you get
which is the SoS representation we wanted.
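More generally, any eigendecomposition of a PSD matrix hands you an SoS representation in exactly this way. Here is a sketch with a hypothetical matrix and monomial vector, not the actual ones from lecture:

```python
import numpy as np

# Toy PSD matrix Q (made up) and a monomial vector z: since
# Q = sum_i lam_i v_i v_i^T with lam_i >= 0, we get
# z^T Q z = sum_i lam_i (v_i . z)^2, which is manifestly a sum of squares.
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eigh(Q)
assert np.all(lam >= 0)                      # Q is PSD

z = np.array([1.3, -0.7])                    # stand-in for the monomial vector
direct = z @ Q @ z
as_sos = sum(lam[i] * (V[:, i] @ z) ** 2 for i in range(len(lam)))
assert np.isclose(direct, as_sos)
```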
 Thanks, C.J.: I'd clean forgotten about the all-important equivalent definition of PSDness: a matrix is PSD if and only if all its principal minors have nonnegative determinants. Recall that a principal minor is obtained by restricting to some set of rows and the same set of columns. One direction of the proof is easy (and thankfully it's the one we need): if a matrix is PSD, then all its principal minors are PSD. (Use the definition.) And the determinant of a principal minor is just the product of its eigenvalues, which is nonnegative for PSD matrices. The other direction is a little more involved; I don't know a one-line proof, so please consult your favorite algebra text. (Or if you know one, catch me and tell me about it.)
 Finally, the point that Goran asked a clarification for: if has max-degree , then the summands need only be polynomials of degree at most . Suppose not. Let be the highest-degree monomial in some , and let contain all the monomials in of degree . Now is a sum of square polynomials, and all its terms (of degree ) also appear in .
Goran had asked: can be identically zero, with all its terms cancelling out? No, and here's one easy way to see it: consider some for which some . Then . (I am sure there are other easy arguments; please tell me about them.)
Lec #24: SDPs I
Lec #19: JL etc.
So two things that we wanted to check during lecture today, and some notes:
 If each $X_i$ is sub-Gaussian with parameter $\sigma_i$, and they are independent, then $\sum_i X_i$ is indeed sub-Gaussian with parameter $\sqrt{\sum_i \sigma_i^2}$, as claimed. Basically, just use the MGF definition to show this. And note the analogy to Gaussians, where the parameter should be the standard deviation, and not the variance.
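Here's a quick Monte-Carlo sanity check of the analogy for actual Gaussians (where the sub-Gaussian parameter equals the standard deviation, so parameters add in quadrature); the variances and the choice of $\lambda$ below are made up:

```python
import numpy as np

# For independent Gaussians X1 ~ N(0, s1^2), X2 ~ N(0, s2^2), the MGF of
# X1 + X2 equals exp(lam^2 (s1^2 + s2^2) / 2): parameters add in quadrature.
rng = np.random.default_rng(1)
s1, s2, lam = 1.0, 2.0, 0.5
x = rng.normal(0.0, s1, 200_000)
y = rng.normal(0.0, s2, 200_000)
mgf = np.mean(np.exp(lam * (x + y)))
bound = np.exp(lam**2 * (s1**2 + s2**2) / 2)
assert mgf <= 1.05 * bound    # sample MGF concentrates near the bound
```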
 I got the sub-exponential equivalence slightly wrong. The MGF-based definition: there exist parameters such that
This is equivalent to there existing some other constants such that
See Theorem 2.2 here for a proof. (Thanks for pointing out that mistake, Tom!)
 For compressive sensing, there are a million resources online. E.g., here’s Terry Tao on the topic.
 I should have emphasized: the blood-testing example we talked about was the simple case where you were guaranteed that the vector was 1-sparse. I.e., there is exactly one person with the disease among a population of $n$ people. Then the binary-search strategy we did used $\log_2 n$ linear tests, which is optimal. (To clarify, for each bit position $i$, the $i$-th test combines together the samples of all people whose index has a $1$ in the $i$-th place, and tests this combination. Now construct a bit vector with a $1$ in the $i$-th position precisely if the $i$-th test came up positive: this is the index of the person with the infection.)
Note this is a nonadaptive strategy: we can write down all the tests upfront, and read off the answer from their results. This is opposed to an adaptive strategy, which would look at the answers to previous tests to decide which sets to test next.
What about the case where there are exactly two people infected? The same strategy does not work. In fact, since there are $\binom{n}{2}$ possible answers and each test gives us one bit, we must perform at least $\log_2 \binom{n}{2} \approx 2 \log_2 n$ tests. Any thoughts on how to do this? An adaptive strategy that uses $O(\log n)$ tests is easy: can you get a nonadaptive strategy? Using randomness?
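For concreteness, here is a sketch (with hypothetical inputs) of the nonadaptive bit-testing strategy for the one-infected-person case:

```python
import math

def bit_tests(samples):
    """Nonadaptive pooled tests: test i mixes everyone whose index has a 1
    in bit i. `samples` is a 0/1 list with exactly one 1 (the sick person)."""
    n = len(samples)
    k = max(1, math.ceil(math.log2(n)))
    sick = samples.index(1)
    # a pooled test comes up positive iff it includes the sick person
    return [(sick >> i) & 1 == 1 for i in range(k)]

def decode(results):
    # read the positive tests off as the binary index of the sick person
    return sum(1 << i for i, pos in enumerate(results) if pos)

samples = [0] * 32
samples[13] = 1
assert decode(bit_tests(samples)) == 13
```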
Matroids
Some of you asked for intuition about matroids: they seem so abstract, how to think about them? Whenever I have to prove a fact about matroids, I think about how I’d prove the fact for graphic matroids. And often (though alas, not always) that proof translates seamlessly to general matroids.
Recall that a graphic matroid is constructed as follows: take a graph. Its edges are the elements of the matroid. A set of elements (edges) is independent if they do not induce a cycle (in the graph-theory sense), i.e., if they form a forest. Now you can check that this definition satisfies the conditions for being a matroid. The bases of the matroid are the spanning trees of G.
So on HW3, the first part asked: suppose I have two spanning trees of a graph G, a red one and a blue one. Show that there exists a bijection F between the red and blue edges so that replacing any red edge e by its matched blue edge F(e) gives a spanning tree. The proof now seems well within reach. And the arguments used to prove it (adding an element/edge creates a cycle, etc.) all generalize to matroids.
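The graphic-matroid independence test itself is easy to implement; here is a sketch using union-find on a made-up example:

```python
def is_independent(edges, n):
    """A set of edges on vertices 0..n-1 is independent in the graphic
    matroid iff it is a forest, i.e., contains no cycle."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # both endpoints already connected: a cycle
            return False
        parent[ru] = rv
    return True

assert is_independent([(0, 1), (1, 2)], 3)          # a path: a forest
assert not is_independent([(0, 1), (1, 2), (2, 0)], 3)  # a triangle: a cycle
```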
Why are we studying matroids? They form a convenient abstraction for many combinatorial structures you often encounter. E.g., spanning trees, linearly independent sets of vectors, matchable vertices in a bipartite graph, and sets of vertices that can be routed to a sink in some vertex-disjoint way are all matroids. E.g., perfect matchings in bipartite graphs can be phrased as finding a common base of two matroids defined on the same ground set. A famous result of Edmonds shows that this matroid intersection problem can be solved for any two matroids. Also, since matroids characterize the greedy algorithm (in a sense you saw in HW1), the fact that the greedy algorithm works for some problem often indicates there may be some deeper structure lurking within that we can exploit.
I was hoping to cover matroids in a lecture later in the course (let’s see how things go), but here is a course by Jan Vondrak that covers many polyhedral aspects of matroids.
A comment about HW3 #5
Pranav and Sidhanth asked me for the motivation behind that problem: why did we not just find a perfect matching in the graph (say, by the blossom algorithm), put weight 0 on the matching edges, and 1s elsewhere?
The reason is that we want to find perfect matchings in parallel. If we had a single perfect matching in the graph, we could check (in parallel) for each edge whether it was in this perfect matching. (How? Use Lovasz's algorithm via Tutte's theorem on G and then on G - e. This requires computing determinants, which is doable in parallel.) And then (in parallel) output all edges that belong to the matching.
But if there are many perfect matchings, perhaps every edge e finds that G - e still has a perfect matching. Which of these should we output? We don't want to do things sequentially, so we seem stuck.
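To make the determinant idea concrete, here is a sketch of the randomized Tutte-matrix test (the random value range and threshold are my own choices, not from lecture):

```python
import numpy as np

def has_perfect_matching(n, edges, trials=5):
    """Lovász's randomized test, sketched: the Tutte matrix of G is
    nonsingular as a matrix of indeterminates iff G has a perfect matching;
    substituting random values detects this with high probability."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        T = np.zeros((n, n))
        for (u, v) in edges:
            x = float(rng.integers(1, 1000))
            T[u, v], T[v, u] = x, -x     # skew-symmetric substitution
        if abs(np.linalg.det(T)) > 1e-6:
            return True
    return False

assert has_perfect_matching(4, [(0, 1), (2, 3)])              # two disjoint edges
assert not has_perfect_matching(4, [(0, 1), (0, 2), (0, 3)])  # a star: none
```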
This is why Ketan Mulmuley, Umesh Vazirani, and Vijay Vazirani came up with the isolation lemma. Suppose we choose random weights for all edges. Whp there is a unique min-weight matching. So now each edge e's subproblem is: does e belong to the unique min-weight matching? This can also be done in parallel, using a slight extension of the Lovasz idea. See the MVV paper for details.
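Here is a quick simulation of the isolation lemma on a made-up set family (weights uniform in {1,...,2m} over a ground set of size m, which gives uniqueness with probability at least 1/2):

```python
import random

def isolation_frequency(sets, m, trials=500):
    """Estimate how often the min-weight set in the family is unique when
    each of the m ground-set elements gets a random weight in {1,...,2m}."""
    unique = 0
    for _ in range(trials):
        w = [random.randint(1, 2 * m) for _ in range(m)]
        totals = sorted(sum(w[e] for e in s) for s in sets)
        unique += len(totals) == 1 or totals[0] < totals[1]
    return unique / trials

# hypothetical family of "matchings" over ground set {0, 1, 2}
family = [{0, 1}, {1, 2}, {0, 2}]
assert isolation_frequency(family, 3) >= 0.5
```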
The next question arises: do we really need randomization? This has been open for some time, and people have tried to reduce the number of random bits needed. The naive approach uses O(log m) bits per edge, so O(m log m) random bits overall. In this question you proved a bound of O(log^2 m) bits, which is much better. It remains an open problem to remove the randomness altogether.
Lec #17: Ellipsoid and Interior-Point
Hi all: since we covered most of the short-step method today, it may be easiest if you looked over the proof yourself. We can discuss it in office hours on Tuesday, if you'd like. I'd like to start off talking about concentration bounds on Monday.
Some other notes about Ellipsoid and interior-point methods.
 We talked about how a separation oracle for a convex body gives (via Ellipsoid) an algorithm for optimization over that body. In fact, the two problems of separation and optimization are equivalent! So, given an algorithm for max-weight perfect matching in general graphs, you also get an algorithm for finding minimum odd cuts. You will give a direct algorithm for the latter problem in the upcoming HW.
 In fact, something even more surprising is known. Suppose we are given a point inside the body , and values such that . And we are also given a membership oracle, which, given a point , just outputs whether or not . Then we can still optimize over . The idea is to use the membership oracle to sample points from the body and generate a (weak) separation oracle.
(It is important that you be given a point inside , else you have no chance, even if you know a bounding ball , and that has some nontrivial ball within it. There is too much volume in for you to go searching with just a membership oracle: you'll pretty much need exponential time to find a point inside .)
Of course, this three-way equivalence between membership, separation, and optimization requires care to make precise and to prove. See the Grötschel-Lovász-Schrijver book for all the details.
 There are many different ways to implement interior-point methods. We saw a primal-dual analysis that was pretty much from first principles (with a little bit swept under the rug, and even those details can be found in the Matousek-Gaertner book). Matousek and Gaertner also give a different way to find a starting point, different from the one we outlined in lecture.
Sadly, we did not talk about Newton's method, or self-concordance, or local norms and Dikin ellipsoids, which form the basis for the "modern" treatment of interior-point methods. If your interest is piqued, you should check out a dedicated optimization course (Tepper and MLD both offer one), or have a look at one of the books listed on the course webpage.
Also, another interior-point algorithm that is very approachable (and has a short self-contained exposition) is Renegar's algorithm: here are notes by Ryan from our LP/SDP course.
Lec #16: Ellipsoid and Center-of-Gravity
We will talk a bit more about the Ellipsoid algorithm (and separation vs optimization) on Friday, maybe start talking about the Newton method, and then move on to more details on interior point techniques on Monday.
A couple things about Ellipsoid:
 One point to emphasize: each step of Ellipsoid cuts the current ellipsoid into two, and builds a new ellipsoid around the relevant half. Suppose , where is the unit ball. Then is some ellipsoid with volume at most times the volume of the unit ball. Now since any ellipsoid is a linear transformation of the ball, if , where is the associated linear transformation, then . But for any body , the volume of is . So
In fact, you can imagine that at each step we are just transforming the current ellipsoid back to a ball, making a cut, and then transforming things back. One problem, of course, is that at each step we make the transformation more complicated. Indeed, in the notation above, if , then . So the numbers involved can get larger at each step: we may need to round the numbers to control how big they get. These and other numerical issues are at the heart of Khachiyan's proof that the algorithm runs in polynomial time.
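For concreteness, here is a sketch of the textbook central-cut update (sign conventions vary across sources, and the instance is made up; this is not necessarily the exact form from lecture):

```python
import numpy as np

def ellipsoid_step(x, A, a):
    # One central-cut step: the returned ellipsoid E(x', A') contains the
    # half of E(x, A) = {y : (y-x)^T A^{-1} (y-x) <= 1} with a.(y-x) <= 0.
    n = len(x)
    b = A @ a / np.sqrt(a @ A @ a)
    x_new = x - b / (n + 1)
    A_new = (n * n / (n * n - 1.0)) * (A - (2.0 / (n + 1)) * np.outer(b, b))
    return x_new, A_new

x, A = np.zeros(3), np.eye(3)            # start from the unit ball
x2, A2 = ellipsoid_step(x, A, np.array([1.0, 0.0, 0.0]))

# volume scales with sqrt(det A); one step shrinks it by < e^{-1/(2(n+1))}
ratio = np.sqrt(np.linalg.det(A2) / np.linalg.det(A))
assert ratio < np.exp(-1 / (2 * (3 + 1)))
```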
 My apologies: the alternate version of Ellipsoid that I thought used boxes in fact uses simplices. The analysis that it runs in polynomial time is due to Boris Yamnitsky and (Leonid) Levin. The main idea is that half of some simplex can be contained in another simplex of volume at most times the volume of . (A proof appears in Vasek Chvatal's book.) Note this factor is worse than the Ellipsoid factor of by another factor of , but the numerical calculations in the algorithm do not require square roots. The original paper (also here) is not quite kosher, since it can lead to the size of the numbers blowing up: this report by Bartels gives an example, and also suggests rounding approaches to control the problem. Finally, here are some notes on the simplices algorithm by Yossi Azar (parts 1, 2), which I have not had a chance to go over in detail.
And a couple words about the centerofgravity algorithm:
The center-of-gravity definition: it's the natural extension of the discrete case. Indeed, if we have objects in $\mathbb{R}^n$, the $i$-th having location $x_i$ and mass $m_i$, the center of gravity (or the center of mass, or centroid) is defined as
$$ \bar{x} = \frac{\sum_i m_i x_i}{\sum_i m_i}. $$
The continuous analog of this, where we have a general measure $\mu$ over a body $K$ (basically replacing sums by integrals), is
$$ \bar{x} = \frac{\int_K x \, d\mu(x)}{\mu(K)}. $$
The denominator is the total measure of $K$. (In class I was implicitly assuming the uniform measure over $K$, which is given by $d\mu(x) = dx / \mathrm{vol}(K)$.)
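In code, the discrete definition is just a weighted average; the masses and locations below are made up:

```python
import numpy as np

# discrete center of mass: sum_i(m_i x_i) / sum_i(m_i)
xs = np.array([[0.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])       # locations of three objects in the plane
ms = np.array([1.0, 2.0, 1.0])    # their masses
centroid = (ms[:, None] * xs).sum(axis=0) / ms.sum()
assert np.allclose(centroid, [0.5, 0.25])
```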
The in Grunbaum’s theorem (that each hyperplane through the centroid of a convex body contains at least fraction of the mass on either side) is best possible for convex bodies. And the proof is clever but not difficult See Grunbaum’s (very short) paper for examples and proof. Or these notes by Jon Kelner or Santosh Vempala.
What happens if we don’t have the uniform measure over a convex body, but a more general distribution? Then things change quite a bit. E.g., consider equal point masses at the vertices of an dimensional simplex. No matter which point you choose, you can find a hyperplane through it that contains only a single point (which is of the mass) on one side. Grunbaum actually shows (in the same paper) that you can find a point that ensures at least fraction of the mass on either side.
Lec #12: Notes on solving LPs
Notes for today’s lecture:
 Tightness of the Hedge guarantee. Consider the basic Hedge analysis, which says that for any , we have regret at most . Now if we were to set by balancing those terms, the regret bound would be . This is tight, up to the constant factor.
Let's see why in the case . Suppose we again have two experts, and we are trying to predict a fair coin toss. I.e., each time the loss vector is either or , with equal probability. So our expected gain is at least . But after coin tosses, with constant probability we have more flips of one type than the other, and indeed, the expected gain of one of the static experts is . So our regret cannot be less than even for two experts. Similarly, for experts one can show that is necessary.
 Larger Range of Loss Vectors. For the setting where loss/gain functions could be in , we claimed an algorithm with average regret less than as long as . We left it as an exercise in HW3.
In fact, you can prove something slightly weaker for the asymmetric setting where losses are in , where . In the handwritten notes on the webpage, I show how to use a guarantee for Hedge to get
as long as . The constants are worse, and there's a term hitting the "best expert" term, but the analysis is mechanical.
You can use this guarantee along with the shortest-path oracle to get an iteration bound for approximate maximum flow, since the gains will be in the range . More details below.
 The max-flow part was fast, so here are some more details. We wrote the LP and plugged it into the multiplicative-weights framework. Since we had a constraint for each edge , the "average" constraint looked like:
Flipping the summations, we get
If we denote , the optimal solution is to send all flow along a shortest path, where the edge lengths are . We can find this using Dijkstra even though we cannot write the massive LP down. Since the "easy" constraints were , we send flow along this shortest path. Now we update the probabilities (edge lengths), find another shortest path, push flow, and repeat. At each step the gains will be in the range .
So we can use the asymmetric-losses analysis above. After iterations, taking the "average" flow , we have that for each edge ,
(How? Chase through the LP-solving analysis we did in lecture, but use the above asymmetric analysis instead of the standard symmetric one.)
Finally, this flow is not feasible, since it may violate edge capacities. So scale down. I.e., define the flow : it has value and satisfies all the edge constraints. Voilà.
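Here is a minimal sketch of the whole loop on a tiny made-up instance (two disjoint unit-capacity s-t paths, target flow value 2); it only shows the mechanics of route-on-shortest-path, reweight, and average, not the actual analysis:

```python
# Hypothetical 4-edge graph with two disjoint s-t paths s-a-t and s-b-t,
# all capacities 1, target flow value F = 2. Each round: route all F units
# on the currently shortest path (lengths = weights), then multiplicatively
# bump the weights of the congested edges, and finally average the flows.
EPS = 0.1
T = 1000
F = 2.0
paths = [("sa", "at"), ("sb", "bt")]
w = {e: 1.0 for e in ("sa", "at", "sb", "bt")}
total = {e: 0.0 for e in w}

for _ in range(T):
    best = min(paths, key=lambda p: sum(w[e] for e in p))
    for e in best:
        total[e] += F                  # route everything on the shortest path
        w[e] *= (1 + EPS * F)          # penalize the edges we congested
avg = {e: total[e] / T for e in w}

# the averaged flow splits across the two paths and nearly respects capacities
assert max(avg.values()) <= 1.05
```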
 Next lecture we will do the improvement using electrical flows.