For this discussion, imagine that the SDP is in the form
$$\max\{\, C \bullet X \;:\; A_j \bullet X = b_j \;\; \forall j \in [m],\;\; X \succeq 0 \,\},$$
where $C, A_1, \ldots, A_m$ are symmetric matrices and $C \bullet X := \sum_{i,j} C_{ij} X_{ij}$.
Guru pointed out a major omission in the runtime of Ellipsoid for SDPs: even when the input numbers are all of size $\mathrm{poly}(n)$, SDPs can have solution values of size $2^{2^{\Omega(n)}}$, i.e., doubly exponential. Note that the solution values for LPs are only singly exponential, so this is much, much worse: we can’t even write down the solution in polynomial time. See the Gärtner and Matousek book for a detailed discussion of the issues here. (Remember, the CMU library gives you access to the electronic version of the book.)
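A quick way to see this blow-up (a sketch; the function name is mine): the classic construction uses the constraints $x_1 \ge 2$ and $x_{i+1} \ge x_i^2$, each of which is expressible as a $2\times 2$ PSD block $\begin{pmatrix}1 & x_i\\ x_i & x_{i+1}\end{pmatrix} \succeq 0$, so any feasible solution must have $x_n \ge 2^{2^{n-1}}$.

```python
# Classic doubly-exponential construction: x_1 >= 2 and x_{i+1} >= x_i^2.
# Each constraint is a 2x2 PSD block [[1, x_i], [x_i, x_{i+1}]] >= 0.
# Taking every inequality tight gives the smallest feasible x_n.
def min_feasible(n):
    x = 2
    for _ in range(n - 1):
        x = x * x  # x_{i+1} = x_i^2
    return x

for n in [2, 4, 6]:
    # bit lengths 3, 9, 33: the number of bits itself grows exponentially
    print(n, min_feasible(n).bit_length())
```

So even writing down $x_n$ in binary takes $2^{n-1} + 1$ bits.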
What’s the fix? The actual runtime is polynomial not only in the length of the input, but also in $\log R$, where $R$ is an upper bound on the Frobenius norm of the optimal solution $X^\star$. Thankfully, for most combinatorial optimization problems, this bound $R$ is small. E.g., for max-cut, each $X_{ii} = 1$, and since a PSD matrix with unit diagonal has $|X_{ij}| \le 1$, we get $\|X^\star\|_F \le n$; hence $R = n$ in this case.
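To sanity-check the max-cut bound numerically (a toy sketch; the helper name and the random unit vectors are my own illustration): a max-cut SDP solution is a Gram matrix $X_{ij} = \langle v_i, v_j\rangle$ of unit vectors, so every entry has absolute value at most $1$ and the Frobenius norm is at most $n$.

```python
import math
import random

def maxcut_gram_frobenius(n, seed=0):
    """Build a random Gram matrix of n unit vectors (a feasible
    max-cut SDP solution) and return its Frobenius norm."""
    rng = random.Random(seed)
    vecs = []
    for _ in range(n):
        v = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        vecs.append([c / norm for c in v])  # unit vector
    # X[i][j] = <v_i, v_j>, so X is PSD with X_ii = 1
    X = [[sum(a * b for a, b in zip(vi, vj)) for vj in vecs] for vi in vecs]
    return math.sqrt(sum(X[i][j] ** 2 for i in range(n) for j in range(n)))

n = 8
fro = maxcut_gram_frobenius(n)
print(fro)  # always between sqrt(n) (from the diagonal) and n
```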
There is another concern, though, which I knowingly swept under the rug (and will largely continue to do so): the Ellipsoid algorithm requires that the problem has an optimal solution that is “$\varepsilon$-deep” inside the feasible set, and if so it returns a solution that satisfies all the equality constraints but is only $\varepsilon$-close to being feasible with respect to the PSD constraint (and with value within an additive $\varepsilon$ of the optimum). This means that we usually need to do some post-processing to make the solution PSD (and hence can only satisfy the equality constraints approximately). However, for now we will assume these issues away.
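One simple post-processing step (an illustration with my own naming, not necessarily the exact fix intended here): if the returned symmetric $X$ has smallest eigenvalue at least $-\varepsilon$, then $X + \varepsilon I$ is PSD, at the cost of perturbing the diagonal, and hence any equality constraints involving it, by $O(\varepsilon)$.

```python
import math

def eigvals_2x2(M):
    # Eigenvalues of a symmetric 2x2 matrix via the quadratic formula.
    a, b, d = M[0][0], M[0][1], M[1][1]
    mean = (a + d) / 2.0
    disc = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return mean - disc, mean + disc

def shift_to_psd(M, eps):
    # X + eps*I shifts every eigenvalue up by eps.
    return [[M[0][0] + eps, M[0][1]], [M[1][0], M[1][1] + eps]]

# An almost-feasible solution: eigenvalues -0.01 and 2.01.
X = [[1.0, 1.01], [1.01, 1.0]]
Y = shift_to_psd(X, 0.01)  # now PSD, but the diagonal is 1.01, not 1
```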
A different approach is to use multiplicative-weights-based algorithms for SDPs, which return solutions that are PSD matrices, but which have additive $\varepsilon$ errors for both the objective function value and the equality constraints. These algorithms require bounds on the trace of $X$, and have runtimes that depend on the maximum Frobenius norm of the $C$ and $A_j$ matrices, and also on $1/\varepsilon$ instead of $\log(1/\varepsilon)$. More on this in the Gärtner/Matousek book.
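The core update in these methods is the matrix multiplicative-weights rule, which maintains a trace-one density matrix $X_t \propto \exp(-\eta \sum_{s < t} M_s)$ for symmetric feedback matrices $M_s$; note $X_t$ is automatically PSD, which is why these solvers never violate the PSD constraint. A minimal pure-Python sketch (helper names are mine; a truncated Taylor series stands in for a real matrix exponential, which is fine for tiny matrices):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def mat_exp(A, terms=30):
    # Truncated Taylor series: I + A + A^2/2! + ... (toy-sized inputs only)
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = mat_mul(term, A)
        term = [[x / k for x in row] for row in term]  # A^k / k!
        result = mat_add(result, term)
    return result

def mmw_density(feedback, eta):
    """X proportional to exp(-eta * sum(feedback)), normalized to trace 1.

    The exponential of a symmetric matrix is PSD, so X is always a
    feasible density matrix regardless of the feedback."""
    n = len(feedback[0])
    S = [[0.0] * n for _ in range(n)]
    for M in feedback:
        S = mat_add(S, M)
    E = mat_exp([[-eta * S[i][j] for j in range(n)] for i in range(n)])
    tr = sum(E[i][i] for i in range(n))
    return [[E[i][j] / tr for j in range(n)] for i in range(n)]

# Penalizing direction e_1 shifts weight onto the orthogonal direction.
X = mmw_density([[[1.0, 0.0], [0.0, 0.0]]], eta=1.0)
```

The $1/\varepsilon$ dependence in the runtime comes from how many such update rounds are needed before the averaged $X_t$ satisfies all constraints to within $\varepsilon$.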