Lecture #5

We didn’t get time to see Fredman’s “trick” after all. What Fredman shows in this paper is the following result:

The min-sum product (MSP) of an n \times m matrix and an m \times n matrix can be computed using only O(m^2 n \log n) comparisons.
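For concreteness, here is a minimal reference implementation of the min-sum product itself (the function name and shapes are our own; this is the naive version, not Fredman's comparison-efficient procedure):

```python
# Min-sum product (a.k.a. min-plus or tropical product):
# C[i][j] = min over k of A[i][k] + B[k][j],
# where A is n x m and B is m x p.
def min_sum_product(A, B):
    n, m = len(A), len(A[0])
    assert len(B) == m, "inner dimensions must agree"
    p = len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Example: min_sum_product([[0, 2], [3, 1]], [[1, 5], [0, 4]]) == [[1, 5], [1, 5]]
```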

(He uses m instead of our \ell.) Using the same approach of breaking the MSP of two n \times n matrices into “slabs” of width m, we can compute this MSP using O((n/m) \cdot m^2 n \log n + n^3/m) = O(n^2 m \log n + n^3/m) comparisons. Setting m = \sqrt{n/\log n} balances the two terms and gives a total of O(n^{2.5} \sqrt{\log n}) comparisons.
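The slab decomposition can be sketched as follows. This is only an illustration of the decomposition itself: each slab product is computed naively here, whereas Fredman's procedure would handle it with O(m^2 n \log n) comparisons. The entrywise-min combination step at the end is what accounts for the n^3/m term.

```python
def msp(A, B):
    # Naive min-sum product of an n x m matrix with an m x p matrix.
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def msp_by_slabs(A, B, m):
    # Split the n x n MSP into ceil(n/m) products of an n x m slab of A
    # with an m x n slab of B, then combine the results entrywise by min.
    n = len(A)
    C = [[float('inf')] * n for _ in range(n)]
    for s in range(0, n, m):
        Aslab = [row[s:s + m] for row in A]   # n x (at most m)
        Bslab = B[s:s + m]                    # (at most m) x n
        P = msp(Aslab, Bslab)
        for i in range(n):
            for j in range(n):
                C[i][j] = min(C[i][j], P[i][j])
    return C
```

On small inputs, `msp_by_slabs(A, B, m)` agrees with `msp(A, B)` for any slab width m; the point of the decomposition is only to let a comparison-efficient subroutine replace `msp` on the narrow slabs.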

Note that this bounds only the number of comparisons, not the actual running time to write down the answer. But it shows that the comparisons are not the bottleneck.

In the paper, he then optimizes things to remove the \sqrt{\log n} term, and uses the trick to get an algorithm whose runtime actually beats n^3 by a little bit. But I wanted to highlight the proof of the above fact, which is one paragraph long, with a very clever idea:
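That paragraph is not reproduced in this post, but the idea, as it is usually presented (our reconstruction, not Fredman's exact wording), hinges on rearranging each comparison so that all of them can be resolved by a single sort:

```latex
a_{ik} + b_{kj} \;\le\; a_{ik'} + b_{k'j}
\quad\Longleftrightarrow\quad
a_{ik} - a_{ik'} \;\le\; b_{k'j} - b_{kj}.
```

There are at most n m^2 numbers of the form a_{ik} - a_{ik'} and at most n m^2 of the form b_{k'j} - b_{kj}; sorting all of them together takes O(n m^2 \log(n m^2)) = O(m^2 n \log n) comparisons (for m \le n), after which the outcome of every comparison the algorithm needs can be read off from the sorted order with no further comparisons.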


A natural question: can you beat this bound and get \widetilde{O}(n^2) comparisons?
