Two things I wanted to re-emphasize about today’s lecture:
- Yu asked, “where does the algorithm use the undirectedness?” The undirectedness was used in Claim 2 towards the end. In particular, for the first part of the claim, we used that for any neighbor z of y (in G), d(x,y) <= d(x,z) + 1 (because z can reach y in one hop). In the second part, we used that for a neighbor z of y (in G), d(x,z) <= d(x,y) + 1 (because y can reach z in one hop). Neither inequality is guaranteed if the graph is directed, since an edge from z to y does not imply an edge from y to z.
But maybe there’s a clever way to extend this idea for directed graphs. Feel free to throw out other ideas, we can discuss them in the comments…
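To see the two inequalities concretely, here is a small sanity check (my own sketch, not from lecture): BFS from every vertex of a small undirected graph and verify that the distances of any two adjacent vertices y, z from any source x differ by at most 1, which is exactly the two inequalities combined.

```python
# Sanity check (illustration only): in an undirected graph, for every
# source x and every edge {y, z}, the BFS distances satisfy
#   d(x,y) <= d(x,z) + 1   and   d(x,z) <= d(x,y) + 1,
# i.e. |d(x,y) - d(x,z)| <= 1.
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def check_claim(adj):
    for x in adj:
        d = bfs_dist(adj, x)
        for y in adj:
            for z in adj[y]:
                if y in d and z in d:
                    assert abs(d[y] - d[z]) <= 1  # both inequalities at once
    return True

# A small undirected example: a 5-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
adj = {v: set() for v in range(5)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)  # undirectedness: every edge is present in both directions

print(check_claim(adj))  # True
```

If you drop the line that inserts the reverse edges, the assertion can fail, which is another way to see where undirectedness enters.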
- Once we know the claim, one naive way to go from D to d would be this: for each pair x and y, iterate over all neighbors z of y and check the condition. That would take too much time: O(n^3) overall. The matrix multiply DA instead tells us, for each pair (x,y), whether the “average” D(x,z)-value over the neighbors z of y is at least D(x,y), or strictly less; after the single multiply, each such check is a constant-time lookup.
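Here is a sketch of that recovery step (my own code, using the notation from lecture): given the distance matrix D of the squared graph and the adjacency matrix A of G, one matrix product decides for every pair (x,y) whether d(x,y) = 2D(x,y) or 2D(x,y) - 1, instead of looping over the neighbors of y for every pair.

```python
# Sketch of the D -> d recovery step via one matrix multiply.
# Assumes D is the distance matrix of the squared graph G^2 and A is the
# (0/1, symmetric) adjacency matrix of G.
import numpy as np

def recover(D, A):
    deg = A.sum(axis=0)  # deg[y] = number of neighbors of y
    S = D @ A            # S[x, y] = sum of D(x, z) over neighbors z of y
    # If the average D(x, z) over neighbors z of y is >= D(x, y),
    # then d(x, y) = 2*D(x, y); if it is strictly less, d(x, y) = 2*D(x, y) - 1.
    # Comparing sums avoids dividing: S < D * deg  <=>  average < D(x, y).
    return 2 * D - (S < D * deg)

# Tiny example: the path 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
# Distance matrix of its square (extra edges 0-2 and 1-3):
D = np.array([[0, 1, 1, 2],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [2, 1, 1, 0]])
d = recover(D, A)
print(d)  # distances in the path: d[0,3] == 3, d[0,2] == 2, etc.
```

The whole check costs one n-by-n matrix multiply plus O(n^2) comparisons, which is where the savings over the naive O(n^3) loop come from.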
Edit: answers to two more questions:
- Euiwoong’s question about the fastest SSSP algorithm with negative weights: if you don’t have a bound on the weights, then Bellman-Ford still seems the best. If you know that the weights are integers and the minimum edge weight in the graph is -N, then Goldberg has an O(m sqrt(n) log N)-time scaling algorithm. (See Uri Zwick’s notes.)
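For reference, a standard Bellman-Ford sketch (my own code, not from lecture): O(nm) time, handles negative edge weights, and detects negative cycles reachable from the source.

```python
# Standard Bellman-Ford: single-source shortest paths with negative
# edge weights allowed, in O(n*m) time.
def bellman_ford(n, edges, src):
    """edges: list of directed (u, v, w) triples; vertices are 0..n-1."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):              # n-1 rounds of relaxation suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # early exit once distances settle
            break
    # One extra pass: any further improvement implies a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

edges = [(0, 1, 4), (0, 2, 2), (2, 1, -3), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, -1, 2, 0]
```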
- Yan’s question about parallelizability of Seidel’s algorithm: since it’s a bunch of matrix multiplies, and a small amount of extra work, it should be easily parallelizable.
See you on Friday!