User:IssaRice/Extreme value theorem

From Machinelearning

Working through the proof of the extreme value theorem in Pugh's book by filling in the parts he doesn't spell out.

Let $f : [a,b] \to \mathbf{R}$ be continuous. For $x \in [a,b]$, define $V_x = f([a,x]) = \{f(t) : a \le t \le x\}$ to be the image of $f$ up to and including $x$.

Let $M = \sup\{f(x) : x \in [a,b]\}$ and $X = \{x \in [a,b] : \sup V_x < M\}$.
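These definitions can be sanity-checked numerically. The following sketch (not part of the proof; the example function $f(x) = \sin x$ on $[0,3]$ is an assumption chosen for illustration) approximates $M$, each $\sup V_x$, and the set $X$ on a finite grid:

```python
import math

# Assumed example for illustration: f(x) = sin(x) on [a, b] = [0, 3].
a, b = 0.0, 3.0
n = 1000
grid = [a + (b - a) * i / n for i in range(n + 1)]
vals = [math.sin(x) for x in grid]

M = max(vals)  # approximates M = sup{f(x) : x in [a, b]}

# sup_V[i] approximates sup V_{grid[i]} = sup f([a, grid[i]]) via a running max.
sup_V = []
running = -math.inf
for v in vals:
    running = max(running, v)
    sup_V.append(running)

# X approximates {x in [a, b] : sup V_x < M}.
X = [x for x, s in zip(grid, sup_V) if s < M]
c = max(X)  # approximates c = sup X; should be close to pi/2, where sin peaks
```

On the grid, $a \in X$ (since $f(a) = 0 < M$) and $\sup X$ lands just below $\pi/2$, the point where $\sin$ attains its maximum — matching the role $c$ plays in the proof.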

Now suppose $f(a) < M$. Then $a \in X$, since $\sup V_a = f(a) < M$. We already know that $X$ is bounded above, for instance by the number $b$. We can thus take the least upper bound of $X$, say $c = \sup X$. We already know $f(c) \le M$, so if we can just eliminate the possibility that $f(c) < M$, we will be done.

So suppose $f(c) < M$. We can choose $\epsilon > 0$ with $\epsilon < M - f(c)$.[note 1] By continuity at $c$, there exists a $\delta > 0$ such that $|t - c| < \delta$ implies $|f(t) - f(c)| < \epsilon$, and hence $f(t) < f(c) + \epsilon < M$ for all such $t$.

Now take any $t \in [a,c]$. If $t \le c - \delta/2$, then since $t < c = \sup X$, the number $t$ is not an upper bound of $X$, so there exists some $x \in X$ with $t < x$. This means $\sup V_t \le \sup V_x < M$, so $t \in X$; in particular $c - \delta/2 \in X$, i.e. $\sup V_{c - \delta/2} < M$. Otherwise, if $c - \delta/2 \le t \le c$, then $|t - c| \le \delta/2 < \delta$, so $f(t) < f(c) + \epsilon < M$.

So now what can we say about $\sup V_c$? We want to say $\sup V_c < M$. We can do this by exhibiting a number $M' < M$ such that $f(t) \le M'$ for all $t \in [a,c]$; that way, $\sup V_c \le M' < M$. The number $M' = \max(\sup V_{c - \delta/2},\, f(c) + \epsilon)$ works: it satisfies $M' < M$ because both terms in the maximum are less than $M$; for $t \le c - \delta/2$ we have $f(t) \le \sup V_{c - \delta/2} \le M'$; and for $c - \delta/2 \le t \le c$ we have $f(t) < f(c) + \epsilon \le M'$. Hence $\sup V_c \le M' < M$, i.e. $c \in X$.
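The strict-inequality bookkeeping here is easy to get wrong, so it can help to plug in concrete numbers. The values below ($M = 5$, $f(c) = 3$, $\sup V_{c - \delta/2} = 4.2$) are assumptions chosen purely for illustration:

```python
# Assumed concrete numbers for illustration only: suppose M = 5 and f(c) = 3.
M_val, fc = 5.0, 3.0
eps = (M_val - fc) / 2       # any choice with 0 < eps < M - f(c) works; here 1.0
bound_near_c = fc + eps      # bounds f(t) on |t - c| < delta; strictly below M
sup_V_left = 4.2             # assumed value of sup V_{c - delta/2}; strictly below M
M_prime = max(sup_V_left, bound_near_c)  # a single bound valid on all of [a, c]
print(eps, bound_near_c, M_prime)        # prints: 1.0 4.0 4.2
```

Both candidate bounds stay strictly below $M$, so their maximum $M'$ does too — which is exactly what forces $\sup V_c < M$.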

Moreover, if $c < b$, the same bounds push past $c$: any $t$ with $c \le t \le \min(c + \delta/2,\, b)$ satisfies $|t - c| < \delta$, so $f(t) < f(c) + \epsilon < M$, giving $\sup V_t \le \max(\sup V_c,\, f(c) + \epsilon) < M$. Then a point of $X$ lies strictly to the right of $c$, contradicting $c = \sup X$. Therefore, $c = b$, which implies that $M = \sup V_b = \sup V_c < M$, a contradiction. So the assumption that $f(c) < M$ was false, and we conclude $f(c) = M$.
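The case $c = b$ is not pathological: for a strictly increasing function it is exactly what happens. The sketch below (an assumed example, $f(x) = x^2$ on $[0,1]$, for illustration only) shows that then $X$ fills out $[a,b)$ on a grid and $\sup X$ sits at $b$, where $f$ attains $M$:

```python
# Assumed example: f(x) = x^2, increasing on [a, b] = [0, 1], so M = f(b).
a, b = 0.0, 1.0
f = lambda x: x * x
grid = [a + (b - a) * i / 1000 for i in range(1001)]

M = max(f(x) for x in grid)  # = f(b) = 1, attained only at the right endpoint

# X = {x : sup V_x < M}; for increasing f, sup V_x = f(x), so X omits only b.
X = [x for x in grid if max(f(t) for t in grid if t <= x) < M]
c = max(X)  # just below b on the grid; in the continuum, c = b and f(c) = M
```

Here $b \notin X$ and the grid approximation of $c = \sup X$ sits one grid step below $b$, with $f(c)$ approaching $M$ — consistent with the conclusion $f(c) = M$.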

Notes

  1. It is important here that $\epsilon$ is chosen strictly less than $M - f(c)$, not equal to it. Choosing $\epsilon = M - f(c)$ would be too weak: it gives only $f(t) < f(c) + \epsilon = M$, so we would not be able to conclude $\sup V_c < M$, rather only that $\sup V_c \le M$.