User:IssaRice/Linear algebra/Classification of operators

From Machinelearning
Let <math>V</math> be a finite-dimensional inner product space, and let <math>T : V \to V</math> be a linear transformation. Then in the table below, the statements within the same row are equivalent. Below, we consider only complex operators, or the complexification of a real operator.
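As a quick illustration of why we pass to complex scalars, here is a minimal numpy sketch (the matrix is our own example, not from the table): the 90-degree rotation of the real plane has no real eigenvalues, but over the complex numbers it has eigenvalues <math>\pm i</math>.

```python
import numpy as np

# 90-degree rotation of the real plane: no real eigenvalues.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# numpy computes eigenvalues over the complex numbers, i.e. it implicitly
# works with the complexification of this real operator.
eigenvalues = np.linalg.eigvals(R)
print(sorted(eigenvalues, key=lambda z: z.imag))  # approximately [-1j, +1j]
```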


{| class="sortable wikitable"
|-
! Operator kind !! Description in terms of eigenvectors !! Description in terms of diagonalizability !! Geometric interpretation !! Algebraic property !! Notes !! Examples
|-
| <math>T</math> is diagonalizable || There exists a basis of <math>V</math> consisting of eigenvectors of <math>T</math> || <math>T</math> is diagonalizable (there exists a basis <math>\beta</math> of <math>V</math> with respect to which <math>[T]_\beta^\beta</math> is a diagonal matrix) || || || The diagonalizing basis is not unique: the vectors can be reordered, and scaling an eigenvector by a nonzero scalar yields another eigenvector. However, since <math>T</math> has at most <math>\dim V</math> distinct eigenvalues, the diagonal matrix is unique up to the order of its entries. This equivalence holds even if <math>V</math> is merely a vector space over an arbitrary field; no inner product is needed. || If <math>T</math> is the identity map, then every non-zero vector <math>v \in V</math> is an eigenvector of <math>T</math> with eigenvalue <math>1</math>, since <math>Tv = 1v</math>. Thus every basis <math>\beta = (v_1,\ldots,v_n)</math> diagonalizes <math>T</math>, and the matrix of <math>T</math> with respect to <math>\beta</math> is the identity matrix.
|-
| <math>T</math> is normal || There exists an orthonormal basis of <math>V</math> consisting of eigenvectors of <math>T</math> || <math>T</math> is diagonalizable using an orthonormal basis || || <math>TT^* = T^*T</math> || Every operator can be written as <math>T = S + A</math>, where <math>S = (T + T^*)/2</math> is self-adjoint and <math>A = (T - T^*)/2</math> is anti-self-adjoint; <math>T</math> is normal precisely when <math>S</math> and <math>A</math> commute, in which case they are simultaneously diagonalizable using a single orthonormal basis. ||
|-
| <math>T</math> is self-adjoint (aka Hermitian) || There exists an orthonormal basis of <math>V</math> consisting of eigenvectors of <math>T</math> with real eigenvalues || <math>T</math> is diagonalizable using an orthonormal basis and the diagonal entries are all real || || <math>T = T^*</math> || || The orthogonal projection onto a subspace of <math>V</math> is self-adjoint; its eigenvalues are <math>0</math> and <math>1</math>.
|-
| <math>T</math> is anti-self-adjoint (aka skew-Hermitian or anti-Hermitian) || There exists an orthonormal basis of <math>V</math> consisting of eigenvectors of <math>T</math> with purely imaginary eigenvalues || <math>T</math> is diagonalizable using an orthonormal basis and the diagonal entries are all purely imaginary || || <math>T^* = -T</math> || || The 90-degree rotation of the real plane has matrix <math>\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}</math>, which is anti-self-adjoint; its complexification has eigenvalues <math>\pm i</math>.
|-
| <math>T</math> is an isometry (aka unitary in a complex vector space, or orthogonal in a real vector space) || There exists an orthonormal basis of <math>V</math> consisting of eigenvectors of <math>T</math> whose eigenvalues all have absolute value 1 || <math>T</math> is diagonalizable using an orthonormal basis and the diagonal entries all have absolute value 1 || || <math>T^*T = TT^* = I</math>, i.e. <math>T^* = T^{-1}</math> || The eigenvector characterization requires the scalar field to be the complex numbers: a real isometry such as a rotation of the plane need not have any real eigenvalues. ||
|-
| <math>T</math> is positive (positive semidefinite) || There exists an orthonormal basis of <math>V</math> consisting of eigenvectors of <math>T</math> with nonnegative real eigenvalues || <math>T</math> is diagonalizable using an orthonormal basis and the diagonal entries are all nonnegative real numbers || The polar decomposition says an arbitrary operator can be written as a positive operator followed by an isometry: the positive operator stretches or shrinks along some orthogonal directions, producing a tilted ellipse, and the isometry then rotates that ellipse. A positive operator is one that does not require the second step: it simply scales along some orthogonal "coordinate axes". || <math>T = T^*</math> and <math>\langle Tv, v \rangle \geq 0</math> for all <math>v \in V</math> || ||
|}
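The algebraic characterizations in the table, and the polar decomposition mentioned in the last row, can be checked numerically. Below is a minimal sketch using numpy; the matrices are our own illustrative constructions, not anything prescribed by the table.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Self-adjoint: H = H*, so the eigenvalues are real and the eigenvectors
# (columns of h_vecs) form an orthonormal basis.
H = A + A.conj().T
h_vals, h_vecs = np.linalg.eigh(H)
assert np.allclose(h_vals.imag, 0.0)
assert np.allclose(h_vecs.conj().T @ h_vecs, np.eye(3))

# Anti-self-adjoint: S* = -S, so all eigenvalues are purely imaginary.
S = A - A.conj().T
assert np.allclose(np.linalg.eigvals(S).real, 0.0, atol=1e-10)

# Isometry (unitary): Q*Q = I, so all eigenvalues have absolute value 1.
Q, _ = np.linalg.qr(A)
assert np.allclose(Q.conj().T @ Q, np.eye(3))
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)

# Positive: P = A*A is self-adjoint with nonnegative eigenvalues.
P = A.conj().T @ A
assert np.min(np.linalg.eigvalsh(P)) >= -1e-10

# Polar decomposition via the SVD A = U diag(s) Vh:
# A = (U @ Vh) @ (Vh* diag(s) Vh) = isometry @ positive.
U, s, Vh = np.linalg.svd(A)
isometry = U @ Vh
positive = Vh.conj().T @ np.diag(s) @ Vh
assert np.allclose(isometry @ positive, A)
```

Note that the isometry factor is computed as <math>UV^*</math> from the SVD, so the positive factor <math>V \Sigma V^*</math> scales along the orthogonal directions given by the columns of <math>V</math>, matching the geometric description in the last row.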
Acknowledgments: Thanks to Philip B. for feedback on this page.

Latest revision as of 13:45, 28 December 2021
