A few loose ends from yesterday’s lecture:

- The formula for computing the inverse of a submatrix from the inverse of the whole matrix is the following: if $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ is invertible and its inverse is partitioned conformally as $M^{-1} = \begin{pmatrix} W & X \\ Y & Z \end{pmatrix}$, then $A^{-1} = W - XZ^{-1}Y$, provided $Z$ is invertible (from this survey by Pawel Parys). A good explanation of the Mucha and Sankowski $O(n^\omega)$-time algorithm is in Marcin Mucha's thesis.

- I looked at the Tutte paper again. The reason he was introducing the Tutte matrix (not his terminology) was to prove Tutte's theorem (again, not his terminology). Namely, that a graph $G$ has a perfect matching if and only if, for every subset of vertices $S \subseteq V(G)$, the number of odd components of $G - S$ is at most $|S|$. (Note that this is a special case of the Tutte-Berge formula.) Pretty cool that this 5-page paper introduced two important concepts.
- The proof of the isolation lemma went a little fast towards the end, so let me say it again. We wanted to bound the probability that an edge $e$ is confused, i.e., that the min-weight perfect matching containing $e$ has the same weight as the min-weight perfect matching not containing $e$, by $\frac{1}{2m}$. So let's give weights to all edges apart from $e$. We can now read off the weight of the min-weight perfect matching not containing $e$; say this is some constant $c$. Moreover, if the weight of $e$ is eventually set to $w_e$, then since the weights of all other edges are known, the min-weight perfect matching containing $e$ will have weight $c' + w_e$, where $c'$ is another constant depending on the weights of the other edges. So, for $e$ to be confused, we must have $c = c' + w_e$, i.e., $w_e = c - c'$. And the chance this happens is at most $\frac{1}{2m}$, since we are choosing the weight for each edge independently and uniformly from $\{1, 2, \ldots, 2m\}$. Done.

  We will see how to use the isolation lemma in tomorrow's lecture on smoothed analysis. (Slight change in lecture order, by the way.)
- By the way, the really useful part of the isolation lemma is that the weights are small: linear in the number of edges, even though the number of matchings can be exponential. E.g., if you assigned the weight of edge $e_i$ to be $2^i$, you would again get that the min-weight matching is unique. But the weights would be exponentially large, which is undesirable in many contexts.
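Not part of the lecture, but the submatrix-inverse identity from the first bullet is easy to sanity-check numerically. A minimal sketch (the block sizes and the use of NumPy are my choices): partition $M^{-1}$ conformally with $M$ into blocks $W, X, Y, Z$ and check that $A^{-1} = W - XZ^{-1}Y$.

```python
import numpy as np

# Sanity check of the submatrix-inverse identity: if
#   M = [[A, B], [C, D]]  and  M^{-1} = [[W, X], [Y, Z]],
# then A^{-1} = W - X Z^{-1} Y, provided Z is invertible.
# (A random Gaussian matrix is invertible, with invertible blocks,
# with probability 1, so no special handling is needed here.)
rng = np.random.default_rng(0)
n, k = 6, 3  # A is the top-left k-by-k block of the n-by-n matrix M
M = rng.standard_normal((n, n))
Minv = np.linalg.inv(M)

A = M[:k, :k]
W, X = Minv[:k, :k], Minv[:k, k:]
Y, Z = Minv[k:, :k], Minv[k:, k:]

lhs = np.linalg.inv(A)
rhs = W - X @ np.linalg.inv(Z) @ Y
print(np.allclose(lhs, rhs))
```

This identity is what lets the algebraic matching algorithms recompute inverses of submatrices cheaply instead of inverting from scratch.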
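Tutte's theorem is also small enough to brute-force. The sketch below (my own code, not from the paper; the graph size and representation are my choices) checks, for every graph on four vertices, that a perfect matching exists exactly when the odd-component condition holds for all subsets $S$:

```python
import itertools

def has_perfect_matching(n, edges):
    # brute-force: try to match the smallest unmatched vertex each time
    def rec(unmatched):
        if not unmatched:
            return True
        u = min(unmatched)
        return any(rec(unmatched - {u, v})
                   for v in unmatched if (min(u, v), max(u, v)) in edges)
    return rec(frozenset(range(n)))

def odd_components(n, edges, S):
    # count connected components of G - S with an odd number of vertices
    seen, odd = set(S), 0
    for s in range(n):
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for a, b in edges:
                v = b if a == u else a if b == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    stack.append(v)
        odd += size % 2
    return odd

def tutte_condition(n, edges):
    # for every S: number of odd components of G - S is at most |S|
    return all(odd_components(n, edges, S) <= len(S)
               for k in range(n + 1)
               for S in itertools.combinations(range(n), k))

# check the equivalence on all 64 graphs with vertex set {0, 1, 2, 3}
all_edges = list(itertools.combinations(range(4), 2))
for r in range(len(all_edges) + 1):
    for es in itertools.combinations(all_edges, r):
        assert has_perfect_matching(4, set(es)) == tutte_condition(4, set(es))
print("equivalence verified on all 4-vertex graphs")
```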
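Finally, a quick experiment (again my own sketch; the choice of $K_4$ and the trial count are mine) illustrating the isolation lemma: drawing each edge weight independently and uniformly from $\{1, \ldots, 2m\}$ makes the min-weight perfect matching unique most of the time; the lemma guarantees this happens with probability at least $1/2$.

```python
import itertools
import random

# Isolation lemma experiment on K4: weights drawn uniformly from
# {1, ..., 2m} should isolate a unique min-weight perfect matching
# with probability at least 1/2.
random.seed(1)
edges = list(itertools.combinations(range(4), 2))  # the m = 6 edges of K4
m = len(edges)

# the three perfect matchings of K4, listed by hand
matchings = [
    [(0, 1), (2, 3)],
    [(0, 2), (1, 3)],
    [(0, 3), (1, 2)],
]

trials, unique = 10000, 0
for _ in range(trials):
    w = {e: random.randint(1, 2 * m) for e in edges}
    weights = sorted(sum(w[e] for e in M) for M in matchings)
    if weights[0] < weights[1]:  # strict minimum => unique minimizer
        unique += 1

print(unique / trials)  # fraction of trials with a unique minimizer
```

On this tiny example the empirical fraction comes out well above the $1/2$ guarantee; the lemma's bound is a worst case over all set systems.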