OK, sorry, I was being silly.
We wanted to compute the MSP (min-sum product) of an $n \times d$ matrix $A$ and a $d \times n$ matrix $B$. I.e., we wanted $C[i,j] = \min_k \left( A[i,k] + B^T[j,k] \right)$, say. (Notice that I’ve transposed $B$.) In words, this is matrix multiplication where $+$ plays the role of multiplication and $\min$ plays the role of addition.
If $k^*$ is the minimizer of this expression, then for every $k$, $A[i,k^*] + B^T[j,k^*] \le A[i,k] + B^T[j,k]$,
which is equivalent to $A[i,k^*] - A[i,k] \le B^T[j,k] - B^T[j,k^*]$.
Indeed, the signs are opposite for the $A$ and $B$ entries: one side of the inequality involves only $A$-entries and the other only $B$-entries. No worries.
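To fix notation, here is the definition computed directly, as a minimal Python sketch (the function name `msp` is my own, not from the lecture):

```python
def msp(A, B):
    """Min-sum product: C[i][j] = min over k of A[i][k] + B[k][j].
    Naive evaluation of the definition: O(n * d * n) additions and comparisons."""
    n, d, m = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(d)) for j in range(m)]
            for i in range(n)]
```

For example, `msp([[0, 1], [2, 3]], [[0, 5], [1, 0]])` returns `[[0, 1], [2, 3]]`.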
Now, for each possible pair of indices $k \ne k'$, do the following. Take the $2n$ numbers $A[i,k] - A[i,k']$ for each $i \in [n]$, and $B^T[j,k'] - B^T[j,k]$ for each $j \in [n]$, and sort them with $O(n \log n)$ comparisons.
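This sorting step might look like the following sketch (the name `difference_lists` and the `(value, tag, index)` encoding are my own; tag 0 marks an $A$-entry and tag 1 a $B$-entry, so on a tie the $A$-entry sorts first, matching the $\le$ in the inequality above):

```python
import itertools

def difference_lists(A, B):
    """For each ordered pair (k, kp) of inner indices, sort the 2n numbers
    A[i][k] - A[i][kp]  (one per row i of A)  and
    B[kp][j] - B[k][j]  (one per column j of B)."""
    n, d, m = len(A), len(B), len(B[0])
    lists = {}
    for k, kp in itertools.permutations(range(d), 2):
        items = [(A[i][k] - A[i][kp], 0, i) for i in range(n)]
        items += [(B[kp][j] - B[k][j], 1, j) for j in range(m)]
        items.sort()  # O(n log n) comparisons per pair; ties fall to tag 0
        lists[(k, kp)] = items
    return lists
```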
How many comparisons have we done overall? We enumerated over pairs $k, k' \in [d]$, and for each pair we did $O(n \log n)$ comparisons. Total of $O(d^2 n \log n)$. We claim we don’t need to do any more: we already have all the information to compute the MSP.
How? Take a pair $(i,j)$. Find some index $k$ such that for every $k' \ne k$, the entry $A[i,k] - A[i,k']$ precedes the entry $B^T[j,k'] - B^T[j,k]$ in the sorted list for the pair $(k,k')$. Such a $k$ exists, since the minimizer $k^*$ satisfies exactly these inequalities. This can be done by just looking at the positions of the entries belonging to $i$ and $j$ in the sorted lists. Now you can set $C[i,j] = A[i,k] + B^T[j,k]$. Done.
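The whole argument fits in a short sketch (names are my own; this illustrates the counting argument rather than an optimized implementation — phase 2 still does lookups, but performs no new comparisons between matrix entries):

```python
import itertools

def fredman_msp(A, B):
    """Min-sum product via Fredman's trick.
    Phase 1 does all comparisons between matrix entries, inside the sorts.
    Phase 2 reads off the answer using only the recorded ranks."""
    n, d, m = len(A), len(B), len(B[0])
    # Phase 1: for each ordered pair (k, kp), sort the difference list and
    # remember each entry's rank. Tag 0 = A-entry, tag 1 = B-entry; on ties
    # the A-entry sorts first, matching the <= in the inequality above.
    posA, posB = {}, {}
    for k, kp in itertools.permutations(range(d), 2):
        items = [(A[i][k] - A[i][kp], 0, i) for i in range(n)]
        items += [(B[kp][j] - B[k][j], 1, j) for j in range(m)]
        items.sort()
        for rank, (_, tag, idx) in enumerate(items):
            (posA if tag == 0 else posB)[(k, kp, idx)] = rank
    # Phase 2: for each (i, j), any index k whose A-entry precedes the
    # B-entry in every pair's sorted list attains the minimum.
    C = [[None] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for k in range(d):
                if all(posA[(k, kp, i)] < posB[(k, kp, j)]
                       for kp in range(d) if kp != k):
                    C[i][j] = A[i][k] + B[k][j]
                    break
    return C
```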
Let’s also complete the analysis for Fredman’s trick. We’d broken down the $n \times n$ MSP into $n/d$ different mini-MSPs, which together take $O\left(\frac{n}{d} \cdot d^2 n \log n\right) = O(d n^2 \log n)$ time. Now we can combine them together by taking the component-wise minima. Each minimum takes $n/d - 1$ comparisons since there are $n/d$ mini-MSPs. And there are $n^2$ entries. So a total of $O(n^3/d)$ comparisons. Balancing the two by setting $d = \sqrt{n/\log n}$ gives us $O(n^{2.5} \sqrt{\log n})$ comparisons.
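The combining step can be sketched like this, with each mini-MSP computed naively as a stand-in for the sorting trick (the function name and block layout are my own):

```python
def blocked_msp(A, B, d):
    """Split the inner dimension of an n x n MSP into blocks of size d,
    compute each n x d by d x n mini-MSP (naively here, where the sorting
    trick would be plugged in), and combine them by component-wise minima."""
    n = len(A)
    C = [[float('inf')] * n for _ in range(n)]
    for k0 in range(0, n, d):
        block = range(k0, min(k0 + d, n))
        for i in range(n):
            for j in range(n):
                mini = min(A[i][k] + B[k][j] for k in block)  # mini-MSP entry
                C[i][j] = min(C[i][j], mini)                  # combine step
    return C
```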
Fredman’s paper gives a better analysis to get rid of the log term, and uses it to get an algorithm with runtime better than $O(n^3)$ by a $\left(\frac{\log n}{\log\log n}\right)^{1/3}$ factor. But we’ll see a larger improvement in the next lecture.