User:IssaRice/Linear algebra/Riesz representation theorem

Let's take the case where <math>V = \mathbf R^n</math> and the inner product is the usual dot product. What does the Riesz representation theorem say in this case? It says that if we have a linear functional <math>T : \mathbf R^n \to \mathbf R</math>, then we can write <math>Tv</math> as <math>v \cdot w</math> for some vector <math>w \in \mathbf R^n</math>. But we already know (from the correspondence between matrices and linear transformations) that we can represent <math>T</math> as a 1-by-n matrix. And <math>v</math> can be thought of as an n-by-1 matrix. And now the dot product is the same thing as the matrix multiplication!
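
As a concrete (made-up) illustration: take <math>n = 2</math> and the functional <math>T(x_1, x_2) = 3x_1 + 5x_2</math>. Its matrix with respect to the standard bases is <math>\begin{pmatrix} 3 & 5 \end{pmatrix}</math>, and the representing vector is <math>w = (3, 5)</math>: indeed <math>Tv = 3v_1 + 5v_2 = (v_1, v_2) \cdot (3, 5) = v \cdot w</math> for every <math>v = (v_1, v_2)</math>.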

If <math>\sigma = (e_1, \ldots, e_n)</math> is the standard basis of <math>\mathbf R^n</math> and <math>(1)</math> is the standard basis of <math>\mathbf R</math>, then <math>Tv = [Tv]^{(1)} = [T]_\sigma^{(1)} [v]^\sigma = [T]_\sigma^{(1)} \cdot [v]^\sigma = [v]^\sigma \cdot [T]_\sigma^{(1)} = v \cdot [T]_\sigma^{(1)}</math>.
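
The same computation can be checked numerically. Below is a minimal sketch (assuming NumPy; the particular functional <math>T</math> and test vector <math>v</math> are made up for illustration) showing that the matrix product <math>[T]_\sigma^{(1)} [v]^\sigma</math> and the dot product <math>v \cdot w</math> agree, where <math>w</math> is just the matrix <math>[T]_\sigma^{(1)}</math> read as a vector in <math>\mathbf R^n</math>.

<syntaxhighlight lang="python">
import numpy as np

# A made-up linear functional T : R^3 -> R, stored as its 1-by-3 matrix
# [T]_sigma^{(1)} with respect to the standard bases.
T_matrix = np.array([[2.0, -1.0, 4.0]])

# The representing vector w is that matrix read as a vector in R^3,
# i.e. w_i = T(e_i).
w = T_matrix.flatten()

v = np.array([1.0, 3.0, -2.0])  # an arbitrary test vector

via_matrix_product = (T_matrix @ v.reshape(-1, 1)).item()  # [T][v]
via_dot_product = np.dot(v, w)                             # v . w

print(via_matrix_product, via_dot_product)  # both print -9.0
</syntaxhighlight>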

So in the case <math>V = \mathbf R^n</math> we can understand the Riesz representation theorem as saying something we already knew. What the Riesz representation theorem does is extend this same sort of "representability" to all finite-dimensional inner product spaces <math>V</math> and all linear functionals <math>T : V \to \mathbf F</math>.
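
One standard way to produce <math>w</math> in the general finite-dimensional case (sketched here as a note, with the assumption that <math>(e_1, \ldots, e_n)</math> is an ''orthonormal'' basis of <math>V</math>): take <math>w = \overline{T(e_1)}\, e_1 + \cdots + \overline{T(e_n)}\, e_n</math>. Then <math>\langle v, w\rangle = \sum_i T(e_i) \langle v, e_i\rangle = T\!\left(\sum_i \langle v, e_i\rangle e_i\right) = Tv</math>, which recovers the <math>\mathbf R^n</math> picture above, where <math>w</math> was the matrix of <math>T</math> read as a vector.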