User:IssaRice/Linear algebra/Riesz representation theorem: Difference between revisions

From Machinelearning
Revision as of 23:46, 8 January 2019

Let's take the case where <math>V = \mathbf R^n</math> and the inner product is the usual dot product. What does the Riesz representation theorem say in this case? It says that if we have a linear functional <math>T : \mathbf R^n \to \mathbf R</math>, then we can write <math>T</math> as <math>Tv = v \cdot u</math> for some vector <math>u \in \mathbf R^n</math>. But we already know (from the correspondence between matrices and linear transformations) that we can represent <math>T</math> as a 1-by-n matrix. And <math>v</math> can be thought of as an n-by-1 matrix. And now the dot product is the same thing as matrix multiplication!
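As a quick numerical sketch of this (using a made-up linear functional, not anything from the theorem itself): the representing vector <math>u</math> can be read off coordinate by coordinate as <math>u_i = T(e_i)</math>, and then <math>Tv = v \cdot u</math> for every <math>v</math>.

```python
def T(v):
    # a hypothetical linear functional T : R^3 -> R (chosen for illustration)
    return 2 * v[0] - v[1] + 5 * v[2]

def standard_basis(n):
    # e_1, ..., e_n as lists of coordinates
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def dot(v, u):
    return sum(x * y for x, y in zip(v, u))

n = 3
# the representing vector: u_i = T(e_i)
u = [T(e) for e in standard_basis(n)]
v = [1.0, 4.0, -2.0]

print(u)          # [2.0, -1.0, 5.0]
print(T(v))       # -12.0
print(dot(v, u))  # -12.0, agreeing with T(v)
```

The point is that nothing about <math>T</math> beyond linearity was used: evaluating on the standard basis always recovers the vector <math>u</math> that the theorem promises.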

If <math>\sigma = (e_1, \ldots, e_n)</math> is the standard basis of <math>\mathbf R^n</math> and <math>(1)</math> is the standard basis of <math>\mathbf R</math>, then <math>Tv = [Tv]_{(1)} = [T]_{\sigma,(1)} [v]_\sigma = [v]_\sigma \cdot [T]_{\sigma,(1)} = v \cdot [T]_{\sigma,(1)}</math>, where in the last two steps we identify the 1-by-n matrix <math>[T]_{\sigma,(1)}</math> with a vector in <math>\mathbf R^n</math>.

So in the case <math>V = \mathbf R^n</math> we can understand the Riesz representation theorem as saying something we already knew. What the Riesz representation theorem does is extend this same sort of "representability" to all finite-dimensional inner product spaces <math>V</math> and all linear functionals <math>T : V \to \mathbf F</math>.

[https://www.youtube.com/watch?v=LyGKycYT2v0&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=9 This video] talks about this for <math>\mathbf R^2</math>.