## Lec #4: Notes

Some more notes about today’s lecture:

• We know that ${APSP(n) \leq O(\log n) \cdot MSP(n)}$, and in a forthcoming homework we will remove this log factor. The reduction in the other direction is not that mysterious: given two matrices ${A}$ and ${B}$, can you write down an ${O(n)}$-vertex graph ${G}$ such that computing APSP in ${G}$ gives you the min-sum product of ${A}$ and ${B}$? Please post answers in the comments below.
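To make the first direction concrete, here is a minimal Python sketch of the ${APSP(n) \leq O(\log n) \cdot MSP(n)}$ reduction: since shortest paths use fewer than ${n}$ edges, ${O(\log n)}$ rounds of min-sum "squaring" of the weight matrix compute all distances. (The function names `min_plus_product` and `apsp_by_squaring` are my own, for illustration.)

```python
def min_plus_product(A, B):
    """One min-sum product: C[i][j] = min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_by_squaring(W):
    """APSP from the weight matrix W (W[i][i] = 0, inf for non-edges).

    After t squarings, D holds shortest distances over paths with at most
    2^t edges; since n-1 edges suffice, O(log n) min-sum products do it.
    """
    n = len(W)
    D, hops = W, 1
    while hops < n - 1:
        D = min_plus_product(D, D)
        hops *= 2
    return D
```

Each call to `min_plus_product` costs one ${MSP(n)}$, and the loop runs ${O(\log n)}$ times, which is exactly the claimed bound.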
• If you’ve not seen Strassen’s algorithm before, it is an algorithm for multiplying two ${n \times n}$ matrices in time ${O(n^{\log_2 7}) \approx O(n^{2.81})}$. It’s quite simple to state, and one can think of it as a two-dimensional version of Karatsuba’s algorithm for multiplying two numbers. Mike Paterson has a very beautiful geometric interpretation of the sub-problems Strassen comes up with, and of how they relate to Karatsuba.
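For reference, here is a bare-bones Python sketch of Strassen’s recursion (assuming ${n}$ is a power of two; real implementations pad, and switch to the naive product for small blocks). The seven products ${M_1, \dots, M_7}$ are the standard Strassen combinations; one multiplication is saved per level, giving the ${\log_2 7}$ exponent.

```python
def strassen(A, B):
    """Multiply n x n matrices (lists of lists), n a power of two."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M):  # split M into its four h x h quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)

    # the seven recursive products (instead of the naive eight)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))

    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([C11[i] + C12[i] for i in range(h)] +
            [C21[i] + C22[i] for i in range(h)])
```

The recurrence is ${T(n) = 7\,T(n/2) + O(n^2)}$, which solves to ${O(n^{\log_2 7})}$.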

The running time for matrix multiplication was later improved by the Coppersmith-Winograd algorithm, which gives ${\omega = 2.376}$. Small improvements in the lower-order digits of ${\omega}$ were since given by Andrew Stothers and (our own) Virginia Vassilevska-Williams.

To answer Goran’s question, I looked over the original CW paper: it is presented as being similar to Strassen’s algorithm, in that it breaks the matrices into smaller blocks and recurses on them. But the recursion is quite mysterious, at least to me. Recently, Cohn, Kleinberg, Szegedy, and Umans gave group-theoretic approaches that use some of the CW ideas, along with other seemingly orthogonal ones, to match the CW bound. They also give conjectures that would imply ${\omega = 2}$.

• I was mistaken in claiming that for general directed graphs, ${(1+\epsilon)}$-approximate APSP is as hard as computing the exact APSP, even for small values of ${\epsilon > 0}$. One advantage of allowing approximation is that one can round all edge weights to powers of ${(1+\epsilon)}$, which gives a lot of room to play with. There are many positive results in this setting, e.g., this paper of Uri Zwick, and a talk by him.
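The rounding trick in the last bullet can be sketched in a few lines of Python (the function name `round_up_weight` is mine): round each weight up to the nearest power of ${(1+\epsilon)}$, so every edge, and hence every path and every shortest-path distance, stretches by at most a ${(1+\epsilon)}$ factor, while only ${O(\log_{1+\epsilon} W)}$ distinct weight values remain for weights in ${[1, W]}$.

```python
import math

def round_up_weight(w, eps):
    """Smallest power of (1+eps) that is >= w (assumes w > 0).

    Per edge: w <= rounded <= (1+eps) * w, so summing along any path,
    every distance is preserved up to a (1+eps) factor.
    """
    k = math.ceil(math.log(w, 1 + eps))
    return (1 + eps) ** k
```

Working with so few distinct weight classes is what gives approximate-APSP algorithms their room to play.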