User:IssaRice/Linear algebra/Linear transformation vs matrix views
Given an <math>m \times n</math> matrix <math>A</math>, we can define a linear map <math>L_A : \mathbb R^n \to \mathbb R^m</math> by <math>L_A(x) = Ax</math>.
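For example, if <math>A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \end{pmatrix}</math>, then <math>L_A : \mathbb R^3 \to \mathbb R^2</math> sends <math>(x_1, x_2, x_3)</math> to <math>(x_1 + 2x_2,\ x_2 + 3x_3)</math>.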
Given a linear map <math>T : V \to W</math>, it is not immediately possible to get a corresponding matrix. We must choose some basis <math>\beta = (v_1, \ldots, v_n)</math> for <math>V</math> and a basis <math>\gamma</math> for <math>W</math>. Then we can get a matrix <math>[T]_\beta^\gamma</math> by setting the <math>j</math>th column to be <math>T(v_j)</math> written in the basis <math>\gamma</math>.
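For example, let <math>T : \mathbb R^2 \to \mathbb R^2</math> be the map <math>T(x,y) = (y,x)</math>, and let <math>\gamma</math> be the standard basis. Taking <math>\beta</math> to also be the standard basis gives <math>[T]_\beta^\gamma = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}</math>, but taking <math>\beta = ((1,1),(1,-1))</math> gives <math>[T]_\beta^\gamma = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}</math>, since <math>T(1,1) = (1,1)</math> and <math>T(1,-1) = (-1,1)</math>. The same map is represented by different matrices depending on the choice of bases.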
We would hope that any property we attribute to a linear map is independent of the matrix we use to represent it. For instance, if <math>T</math> is called "injective", then it should be injective regardless of what matrix we use to represent it. Similarly, given any matrix <math>A</math> that is injective, any of the linear maps that <math>A</math> represents should be injective.
Examples of other properties like this: injective, surjective, bijective, rank, diagonalizable
On the other hand, a property like "the sum of the columns is equal to such and such" is not invariant in this way.
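For example, take the identity map on <math>\mathbb R^2</math> with <math>\gamma</math> the standard basis. With <math>\beta</math> also the standard basis its matrix is <math>\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}</math>, whose columns each sum to 1; with <math>\beta = ((1,0),(1,1))</math> its matrix is <math>\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}</math>, whose second column sums to 2. Same map, different column sums.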
I think the root of the confusion is that for these invariant properties, it is possible to define them given either the matrix or the map. So then there are two definitions floating around, and I don't see people showing that they are equivalent in general.
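For instance, the rank of a linear map is usually defined as <math>\dim(\operatorname{im} T)</math>, while the rank of a matrix is usually defined in terms of its columns (the maximum number of linearly independent columns); the fact that these agree whenever <math>A = [T]_\beta^\gamma</math> is itself a small theorem.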
We can think of a linear map as an equivalence class of matrices, or we can think of a matrix as an equivalence class of linear maps. Then we can phrase these invariance results as basically saying that these properties are well-defined. The difference seems to be that here we want to show equivalence, so we need to do it in both directions (?).
The actual proofs are pretty tedious (is my guess).
Let's run with injectivity as an example.
Definition. A linear map <math>T : V \to W</math> is injective iff <math>T(x) = T(y)</math> implies <math>x = y</math> for all <math>x, y \in V</math>.
Definition. An <math>m \times n</math> matrix <math>A</math> is injective iff <math>Ax = Ay</math> implies <math>x = y</math> for all <math>x, y \in \mathbb R^n</math>.
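By linearity, both definitions can be restated: <math>T</math> is injective iff <math>T(x) = 0</math> implies <math>x = 0</math>, and <math>A</math> is injective iff <math>Ax = 0</math> implies <math>x = 0</math>. So for example <math>A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}</math> is not injective, since <math>A(0,1)^T = 0 = A(0,0)^T</math>.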
We want to say that these are basically the same thing. How do we express that? Some ideas:
(1) if <math>T</math> is injective and <math>\beta,\gamma</math> are any bases, then <math>[T]_\beta^\gamma</math> is injective
(2) if <math>A</math> is injective and <math>\beta,\gamma</math> are any bases, then for any <math>T</math> such that <math>A = [T]_\beta^\gamma</math> we have that <math>T</math> is injective
I think these can be combined into:
(3) for all <math>T,A,\beta,\gamma</math> such that <math>A = [T]_\beta^\gamma</math>: T injective iff A injective.
Potential proof strategy that might not be so tedious: basically imagine that A is always written in the standard basis. Then consider <math>[T]_\sigma^{\sigma'}</math> where <math>\sigma,\sigma'</math> are the standard bases in R^n and R^m. Then <math>A = [I]_\gamma^{\sigma'}[T]_\beta^\gamma[I]_\sigma^\beta</math>.
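Since the change-of-coordinate matrices <math>[I]_\gamma^{\sigma'}</math> and <math>[I]_\sigma^\beta</math> are invertible, multiplying by them does not change the rank, and hence does not change injectivity. Here is a quick numerical sanity check of that idea (not a proof): randomly chosen matrices stand in for <math>[T]_\beta^\gamma</math> and the two change-of-coordinate matrices, and the helper <code>is_injective</code> just checks that the rank equals the number of columns.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

m, n = 3, 2
A = rng.standard_normal((m, n))   # stands in for one matrix representation of T

# Random square matrices are invertible with probability 1; these stand in for
# the change-of-coordinate matrices [I]_gamma^{sigma'} and [I]_sigma^beta.
Q = rng.standard_normal((m, m))
P = rng.standard_normal((n, n))

B = Q @ A @ P                     # the representation with respect to other bases

def is_injective(M):
    # A matrix is injective iff its null space is trivial, i.e. its rank
    # equals its number of columns.
    return np.linalg.matrix_rank(M) == M.shape[1]

print(is_injective(A), is_injective(B))  # the two answers agree (here: True True)
</syntaxhighlight>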