Some Theorems about Eigenvalues

Matrix with Distinct Eigenvalues
Usually, it is not easy to determine whether a square matrix is diagonalizable. But if an n x n matrix has n distinct eigenvalues, then the matrix is diagonalizable, thanks to the following theorem:[br][br][u]Theorem[/u]: If [math]v_1,v_2,\ldots,v_r[/math] are eigenvectors that correspond to distinct eigenvalues [math]\lambda_1,\lambda_2,\ldots,\lambda_r[/math] of an n x n matrix [math]A[/math], then the set [math]\left\{v_1,v_2,\ldots,v_r\right\}[/math] is linearly independent.[br][br]We can prove the theorem by induction: when [math]r=1[/math], the theorem holds trivially because the eigenvector [math]v_1[/math] is non-zero. Assume the theorem is true when [math]r=p[/math]. Now consider the case [math]r=p+1[/math] and suppose, for contradiction, that [math]\left\{v_1,v_2,\ldots,v_{p+1}\right\}[/math] is linearly dependent. Since the first [math]p[/math] vectors are linearly independent by the induction hypothesis, [math]v_{p+1}[/math] must be a linear combination of them:[br][br][math]v_{p+1}=c_1v_1+c_2v_2+\cdots+c_pv_p[/math] ----(1)[br][br]for some real numbers [math]c_1,c_2,\ldots,c_p[/math]. Applying the matrix [math]A[/math] to both sides, we get[br][br][math]Av_{p+1}=c_1Av_1+c_2Av_2+\cdots+c_pAv_p\Rightarrow\lambda_{p+1}v_{p+1}=c_1\lambda_1v_1+c_2\lambda_2v_2+\cdots+c_p\lambda_pv_p[/math] ----(2)[br][br]Multiplying (1) by [math]\lambda_{p+1}[/math] and subtracting (2), we get[br][br][math]0=c_1\left(\lambda_{p+1}-\lambda_1\right)v_1+c_2\left(\lambda_{p+1}-\lambda_2\right)v_2+\cdots+c_p\left(\lambda_{p+1}-\lambda_p\right)v_p[/math][br][br]Since [math]\left\{v_1,v_2,\ldots,v_p\right\}[/math] is linearly independent, [math]c_i\left(\lambda_{p+1}-\lambda_i\right)=0[/math] for [math]i=1,\ldots,p[/math]. As all the eigenvalues are distinct, we have [math]c_1=c_2=\cdots=c_p=0[/math], which by (1) implies [math]v_{p+1}=0[/math]. This contradicts the fact that [math]v_{p+1}[/math], being an eigenvector of [math]A[/math], is non-zero.[br][br]Hence, [math]\left\{v_1,v_2,\ldots,v_{p+1}\right\}[/math] is linearly independent.[br]
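The theorem can also be checked numerically: when an n x n matrix has n distinct eigenvalues, the n eigenvectors are linearly independent, so the matrix P whose columns are those eigenvectors has full rank and is invertible, which is exactly what diagonalization [math]A=PDP^{-1}[/math] requires. Below is a minimal sketch using NumPy; the 3 x 3 matrix is a made-up example (not from the text) whose eigenvalues 3 and 3 ± √3 are all distinct.

```python
import numpy as np

# Hypothetical example: a 3x3 matrix with three distinct eigenvalues
# (they are 3, 3 + sqrt(3), and 3 - sqrt(3)).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors

# Distinct eigenvalues => the eigenvectors are linearly independent,
# i.e. the matrix of eigenvectors has full rank n.
assert len(set(np.round(eigvals, 8))) == 3       # eigenvalues are distinct
assert np.linalg.matrix_rank(eigvecs) == 3       # eigenvectors independent

# Hence A is diagonalizable: A = P D P^{-1}.
P, D = eigvecs, np.diag(eigvals)
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```

Any other matrix with distinct eigenvalues would work equally well here; the assertions would fail only for a matrix with a repeated eigenvalue that lacks enough independent eigenvectors.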
Cayley-Hamilton Theorem
Given an n x n matrix [math]A[/math], we already know that its eigenvalues satisfy the characteristic equation [math]p(\lambda)=\det(A-\lambda I)=0[/math]. The following famous theorem says that the matrix [math]A[/math] itself also satisfies the characteristic equation:[br][br][b][u]Cayley-Hamilton theorem[/u][/b]: [math]p\left(A\right)=0[/math].[br][br]Recall the [url=https://www.geogebra.org/m/rsamte2c#material/m5ww8s3k]example[/url] [math]A=\begin{pmatrix}2&3\\3&-6\end{pmatrix}[/math]. Its characteristic equation is [math]p(\lambda)=\lambda^2+4\lambda-21=0[/math]. By the Cayley-Hamilton theorem, we have[br][br][math]A^2+4A-21I=0[/math][br][br](Note: the constant term of the polynomial is regarded as the scalar multiple of the identity matrix by that constant.)[br][br]The full proof of the Cayley-Hamilton theorem is beyond the scope of this course. However, we can easily prove the special case in which [math]A[/math] is a diagonalizable matrix i.e. [math]A=PDP^{-1}[/math], where [math]D[/math] is a diagonal matrix. First of all, notice that for any non-negative integer [math]k[/math],[br][br][math]A^k=(PDP^{-1})^k=\underbrace{(PDP^{-1})(PDP^{-1})\cdots (PDP^{-1})}_{k \ \text{times}}=PD^kP^{-1}[/math][br][br]because each adjacent pair of factors [math]P^{-1}P[/math] cancels. And for the diagonal matrix [math]D=\begin{pmatrix}\lambda_1&&&\\&\lambda_2&&\\&&\ddots&\\&&&\lambda_n\end{pmatrix}[/math], [math]D^k=\begin{pmatrix}\lambda_1^k&&&\\&\lambda_2^k&&\\&&\ddots&\\&&&\lambda_n^k\end{pmatrix}[/math].[br][br]Applying [math]p[/math] to [math]A[/math] term by term therefore gives [math]p(A)=Pp(D)P^{-1}=P\begin{pmatrix}p(\lambda_1)&&&\\&p(\lambda_2)&&\\&&\ddots&\\&&&p(\lambda_n)\end{pmatrix}P^{-1}[/math]. Since [math]p\left(\lambda\right)[/math] is the characteristic polynomial, [math]p\left(\lambda_i\right)=0[/math] for [math]i=1,\ldots,n[/math]. In other words, [math]p\left(A\right)=P0P^{-1}=0[/math].[br]
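Both the theorem and the special-case proof can be verified numerically for the 2 x 2 example above. The sketch below (using NumPy; the variable names are ours, not from the text) checks that [math]p(A)=A^2+4A-21I[/math] is the zero matrix, and that it equals [math]Pp(D)P^{-1}[/math] with [math]p(\lambda_i)=0[/math] for each eigenvalue.

```python
import numpy as np

# The 2x2 example from the text, with characteristic polynomial
# p(lambda) = lambda^2 + 4*lambda - 21.
A = np.array([[2.0, 3.0],
              [3.0, -6.0]])

# Cayley-Hamilton: substituting A into its own characteristic polynomial
# gives the zero matrix (the constant term becomes -21 times the identity).
pA = A @ A + 4 * A - 21 * np.eye(2)
assert np.allclose(pA, 0)

# The special-case proof: A = P D P^{-1}, so p(A) = P p(D) P^{-1},
# and p(D) is diagonal with entries p(lambda_i) = 0.
eigvals, P = np.linalg.eig(A)            # eigenvalues are 3 and -7
pD = np.diag(eigvals**2 + 4 * eigvals - 21)
assert np.allclose(pD, 0)                             # each p(lambda_i) = 0
assert np.allclose(P @ pD @ np.linalg.inv(P), pA)     # p(A) = P p(D) P^{-1}
```

Note that this matrix is symmetric with two distinct eigenvalues (3 and -7), so it is diagonalizable and the special-case argument applies to it directly.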
