We will talk a bit more about the Ellipsoid algorithm (and separation vs optimization) on Friday, maybe start talking about the Newton method, and then move on to more details on interior point techniques on Monday.
A couple things about Ellipsoid:
One point to emphasize: the step of Ellipsoid cuts the current ellipsoid into two halves, and builds a new ellipsoid around the relevant half. Suppose $E \supseteq B \cap H$, where $B$ is the unit ball, $H$ is a halfspace whose bounding hyperplane passes through the center of $B$, and $E$ is the smallest such ellipsoid. Then $E$ has volume at most $e^{-1/(2(n+1))}$ times the volume of the unit ball. Now since any ellipsoid is a linear transformation of the ball, if $\hat{E} = A(B)$, where $A$ is the associated linear transformation, then $A(B \cap H) = \hat{E} \cap A(H)$ is the corresponding half of $\hat{E}$. But for any body $K$, the volume of $A(K)$ is $|\det A|$ times the volume of $K$. So volume ratios are preserved by $A$, and half of any ellipsoid can likewise be covered by an ellipsoid of at most $e^{-1/(2(n+1))}$ times its volume.
In fact, you can imagine that at each step we are just transforming the current ellipsoid back to a ball, making a cut, and then transforming things back. One problem, of course, is that at each step we make the transformation more complicated. Indeed, in the notation above, if $E_k = A_k(B)$ is the current ellipsoid and the cut-and-cover step applied to the ball gives the ellipsoid $A'(B)$, then $E_{k+1} = (A_k \circ A')(B)$. So the numbers involved can get larger at each step: we may need to round the numbers to control how big they get. These and other numerical issues are at the heart of Khachiyan’s proof that the algorithm runs in polynomial time.
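As a sanity check on that shrink factor: the minimum-volume ellipsoid containing a half of the unit ball has semi-axis $n/(n+1)$ along the cut normal and $n/\sqrt{n^2-1}$ in the remaining $n-1$ directions, so the volume ratio is just the product of the semi-axes. A short Python sketch (the function name is mine) verifying that this product stays below $e^{-1/(2(n+1))}$:

```python
import math

def ellipsoid_shrink_ratio(n):
    """Volume of the minimum-volume ellipsoid containing half of the
    unit ball in R^n, divided by the volume of the ball itself.
    The covering ellipsoid has semi-axis n/(n+1) along the cut normal
    and n/sqrt(n^2 - 1) in the other n-1 directions; the volume ratio
    is the product of the semi-axes."""
    return (n / (n + 1)) * (n / math.sqrt(n * n - 1)) ** (n - 1)

for n in range(2, 50):
    ratio = ellipsoid_shrink_ratio(n)
    bound = math.exp(-1 / (2 * (n + 1)))
    assert ratio < bound < 1   # per-step volume shrinkage, per the text
```

For $n = 2$ the ratio is $(2/3)\cdot(2/\sqrt{3}) \approx 0.77$, comfortably below $e^{-1/6} \approx 0.85$; as $n$ grows both tend to $1 - \Theta(1/n)$.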
My apologies: the alternate version of Ellipsoid I thought used boxes, in fact, uses simplices. The analysis that it runs in polynomial time is due to Boris Yamnitsky and (Leonid) Levin. The main idea is that half of any simplex $\Delta$ can be contained in another simplex of volume at most $e^{-\Theta(1/n^2)}$ times the volume of $\Delta$. (A proof appears in Vasek Chvatal’s book.) Note this factor is worse than the Ellipsoid factor of $e^{-1/(2(n+1))}$ by another factor of $n$ in the exponent, but the numerical calculations in the algorithm definition do not require the use of square roots. The original paper (also here) is not quite kosher, since it can lead to the size of the numbers blowing up: this report by Bartels gives an example, and also suggests rounding approaches to control the problem. Finally, there are some notes on the simplices algorithm by Yossi Azar (parts 1, 2), which I have not had a chance to go over in detail.
And a couple words about the center-of-gravity algorithm:

The center-of-gravity definition. It’s the natural extension of the discrete case. Indeed, if we have $k$ objects in $\mathbb{R}^n$, the $i$-th one having location $x_i$ and mass $m_i$, the center of gravity (or the center of mass, or centroid) is defined as
$$c = \frac{\sum_{i=1}^{k} m_i\, x_i}{\sum_{i=1}^{k} m_i}.$$
The continuous analog of this, where we have a general measure $\mu$ over $\mathbb{R}^n$ (basically replacing sums by integrals), is
$$c = \frac{\int x \, d\mu(x)}{\int d\mu(x)}.$$
The denominator $\int d\mu(x) = \mu(\mathbb{R}^n)$ is the total measure. (In class I was implicitly assuming the uniform measure over the convex body $K$, which is given by $d\mu(x) = \mathbf{1}[x \in K]\,dx$.)
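The discrete formula is easy to compute directly; here is a minimal Python sketch (the helper name is mine):

```python
def centroid(points, masses):
    """Discrete center of gravity: (sum_i m_i * x_i) / (sum_i m_i)."""
    total = sum(masses)
    dim = len(points[0])
    return tuple(sum(m * p[d] for p, m in zip(points, masses)) / total
                 for d in range(dim))

# Equal masses at the vertices of a triangle: the centroid is just the
# average of the vertices.
print(centroid([(0, 0), (1, 0), (0, 1)], [1, 1, 1]))  # (1/3, 1/3)
```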
The $1/e$ in Grunbaum’s theorem (that each hyperplane through the centroid of a convex body has at least a $1/e$ fraction of the mass on either side) is best possible for convex bodies. And the proof is clever but not difficult; see Grunbaum’s (very short) paper for examples and the proof, or these notes by Jon Kelner or Santosh Vempala.
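A standard near-extremal example is a cone: in $\mathbb{R}^n$, the cross-sectional mass at height $x \in [0,1]$ above the apex is proportional to $x^{n-1}$, the centroid sits at height $n/(n+1)$, and the apex side of the hyperplane through the centroid carries exactly a $(n/(n+1))^n$ fraction of the mass, which decreases to $1/e$ as $n \to \infty$. A quick numeric sketch (the function name is mine):

```python
import math

def cone_apex_fraction(n):
    """For the solid cone in R^n (apex at height 0, base at height 1),
    cross-sectional mass at height x is proportional to x**(n-1), so the
    centroid is at height n/(n+1) and the fraction of mass on the apex
    side of the hyperplane through it is (n/(n+1))**n."""
    return (n / (n + 1)) ** n

for n in range(1, 200):
    # Grunbaum's 1/e bound holds, and the cone approaches it from above.
    assert cone_apex_fraction(n) >= 1 / math.e

print(cone_apex_fraction(100))  # already close to 1/e ~ 0.368
```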
What happens if we don’t have the uniform measure over a convex body, but a more general distribution? Then things change quite a bit. E.g., consider $n+1$ equal point masses at the vertices of an $n$-dimensional simplex. No matter which point you choose, you can find a hyperplane through it that contains only a single point mass (which is a $\frac{1}{n+1}$ fraction of the mass) on one side. Grunbaum actually shows (in the same paper) that you can find a point that ensures at least a $\frac{1}{n+1}$ fraction of the mass on either side.
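To make the simplex example concrete in the plane (a small sketch; the setup is mine): take three equal masses at a triangle’s vertices and a vertical line through their centroid. It leaves exactly one mass, i.e. a $1/(n+1) = 1/3$ fraction, strictly on one side:

```python
# Three equal point masses at the vertices of a 2-simplex (triangle).
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
cx = sum(v[0] for v in verts) / 3  # centroid x-coordinate = 1/3

# The vertical line x = cx passes through the centroid; count the
# masses strictly to its right.
right = [v for v in verts if v[0] > cx]
print(len(right), len(right) / len(verts))  # 1 mass -> 1/(n+1) = 1/3
```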