
Section 4.1 Eigenvalues and Eigenvectors

We jump right into the definition, which you have probably seen previously in your first course in linear algebra.

Definition 4.1.1.

Let \(A\) be an \(n\times n\) matrix. A number \(\lambda\) is called an eigenvalue of \(A\) if there exists a nonzero vector \(\xx\) such that
\begin{equation*} A\xx = \lambda\xx\text{.} \end{equation*}
Any such vector \(\xx\) is called an eigenvector associated to the eigenvalue \(\lambda\text{.}\)
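For example, a quick computational check of this definition in SymPy (a minimal sketch; the matrix, vector, and value below are illustrative choices, not taken from the text) might look like this:

    from sympy import Matrix

    A = Matrix([[2, 1], [1, 2]])   # an illustrative 2x2 matrix
    x = Matrix([1, 1])             # candidate eigenvector
    lam = 3                        # candidate eigenvalue

    # x is an eigenvector for lam precisely when A*x equals lam*x
    print(A * x == lam * x)        # True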

Remark 4.1.2.

You might reasonably wonder: where does this definition come from? And why should I care? We are assuming that you saw at least a basic introduction to eigenvalues in your first course on linear algebra, but that course probably focused on mechanics. Possibly you learned that diagonalizing a matrix lets you compute powers of that matrix.
But why should we be interested in computing powers (in particular, large powers) of a matrix? An important context comes from the study of discrete linear dynamical systems (en.wikipedia.org/wiki/Linear_dynamical_system), as well as Markov chains (en.wikipedia.org/wiki/Markov_chain), where the evolution of a state is modelled by repeated multiplication of a vector by a matrix.
When we’re able to diagonalize our matrix using eigenvalues and eigenvectors, not only does it become easy to compute powers of a matrix, it also enables us to see that the entire process is just a linear combination of geometric sequences! If you have completed Section 2.5, you probably will not be surprised to learn that the polynomial roots you found are, in fact, eigenvalues of a suitable matrix.
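To preview the computational payoff, here is a minimal sketch (the matrix is an illustrative choice; diagonalization itself is taken up later in this chapter) of how a diagonalization \(A=PDP^{-1}\) turns a large matrix power into powers of the diagonal entries:

    from sympy import Matrix

    A = Matrix([[1, 2], [2, 1]])     # illustrative matrix with eigenvalues -1 and 3
    P, D = A.diagonalize()           # A = P*D*P**(-1), with D diagonal

    # Powers of a diagonal matrix are just powers of its entries,
    # so A**10 = P * D**10 * P**(-1)
    print(P * D**10 * P.inv() == A**10)   # True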

Remark 4.1.3.

Eigenvalues and eigenvectors can just as easily be defined for a general linear operator \(T:V\to V\text{.}\) In this context, an eigenvector \(\xx\) is sometimes referred to as a characteristic vector (or characteristic direction) for \(T\text{,}\) since the property \(T(\xx)=\lambda \xx\) simply states that the transformed vector \(T(\xx)\) is parallel to the original vector \(\xx\text{.}\) Some linear algebra textbooks that focus more on general linear transformations frame this topic in the context of invariant subspaces for a linear operator.
A subspace \(U\subseteq V\) is invariant with respect to \(T\) if \(T(\uu)\in U\) for all \(\uu\in U\text{.}\) Note that if \(\xx\) is an eigenvector of \(T\text{,}\) then \(\spn\{\xx\}\) is an invariant subspace. To see this, note that if \(T(\xx)=\lambda \xx\) and \(\yy=k\xx\text{,}\) then
\begin{equation*} T(\yy)=T(k\xx)=kT(\xx)=k(\lambda \xx)=\lambda(k\xx)=\lambda\yy\text{.} \end{equation*}

Exercise 4.1.4.

Note that if \(\xx\) is an eigenvector of the matrix \(A\) corresponding to the eigenvalue \(\lambda\text{,}\) then we have
\begin{equation} (A-\lambda I_n)\xx=\zer\text{,}\tag{4.1.1} \end{equation}
where \(I_n\) denotes the \(n\times n\) identity matrix. Thus, if \(\lambda\) is an eigenvalue of \(A\text{,}\) any corresponding eigenvector is an element of \(\nll(A-\lambda I_n)\text{.}\)
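As a small illustration (the matrix and eigenvalue below are assumptions made for this sketch, not taken from the text), SymPy's nullspace method computes exactly this null space:

    from sympy import Matrix, eye

    A = Matrix([[4, 1], [2, 3]])         # illustrative matrix; 5 is one of its eigenvalues
    E = (A - 5 * eye(2)).nullspace()     # basis for null(A - 5*I_2)
    print(E)                             # one basis vector: Matrix([[1], [1]])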

Definition 4.1.5.

For any real number \(\lambda\) and \(n\times n\) matrix \(A\text{,}\) we define the eigenspace \(E_\lambda(A)\) by
\begin{equation*} E_\lambda(A) = \nll (A-\lambda I_n)\text{.} \end{equation*}
Since we know that the null space of any matrix is a subspace, it follows that eigenspaces are subspaces of \(\R^n\text{.}\)
Note that \(E_\lambda(A)\) can be defined for any real number \(\lambda\text{,}\) whether or not \(\lambda\) is an eigenvalue. However, the eigenvalues of \(A\) are distinguished by the property that there is a nonzero solution to (4.1.1). Furthermore, we know that (4.1.1) can only have nontrivial solutions if the matrix \(A-\lambda I_n\) is not invertible. We also know that \(A-\lambda I_n\) is non-invertible if and only if \(\det (A-\lambda I_n) = 0\text{.}\) This gives us the following theorem.

Theorem 4.1.6.

The following statements are equivalent for an \(n\times n\) matrix \(A\) and a real number \(\lambda\text{:}\)
  1. \(\lambda\) is an eigenvalue of \(A\text{.}\)
  2. Equation (4.1.1) has a nontrivial solution; that is, \(E_\lambda(A)\neq\{\zer\}\text{.}\)
  3. \(\det(A-\lambda I_n)=0\text{.}\)

Strategy.

To prove a theorem involving a “the following are equivalent” statement, a good strategy is to show that the first implies the second, the second implies the third, and the third implies the first. The ideas needed for the proof are given in the paragraph preceding the theorem. See if you can turn them into a formal proof.
The polynomial \(c_A(x)=\det(xI_n -A)\) is called the characteristic polynomial of \(A\text{.}\) (Note that \(\det(x I_n-A) = (-1)^n\det(A-x I_n)\text{.}\) We choose this order so that the coefficient of \(x^n\) is always 1.) The equation
\begin{equation} \det(xI_n - A) = 0\tag{4.1.2} \end{equation}
is called the characteristic equation of \(A\text{.}\) The solutions to this equation are precisely the eigenvalues of \(A\text{.}\)
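In SymPy, one way to carry out this computation (a minimal sketch with an illustrative matrix) is to form \(xI_n - A\) symbolically and factor its determinant:

    from sympy import Matrix, symbols, eye, factor, solve

    x = symbols('x')
    A = Matrix([[1, 2], [2, 1]])

    p = (x * eye(2) - A).det()    # the characteristic polynomial det(xI - A)
    print(factor(p))              # factors as (x - 3)*(x + 1)
    print(solve(p, x))            # the eigenvalues -1 and 3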

Remark 4.1.7.

A careful study of eigenvalues and eigenvectors relies heavily on polynomials. An interesting fact is that we can plug any square matrix into a polynomial! Given the polynomial \(p(x) = a_0+a_1x+a_2 x^2 + \cdots + a_nx^n\) and an \(n\times n\) matrix \(A\text{,}\) we define
\begin{equation*} p(A) = a_0I_n+a_1A+a_2A^2+\cdots+a_nA^n\text{.} \end{equation*}
Note the use of the identity matrix in the first term, since it doesn’t make sense to add a scalar to a matrix.
One interesting aspect of this is the relationship between the eigenvalues of \(A\) and the eigenvalues of \(p(A)\text{.}\) For example, if \(A\) has the eigenvalue \(\lambda\text{,}\) see if you can prove that \(A^k\) has the eigenvalue \(\lambda^k\text{.}\)
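The following sketch (with an illustrative matrix and polynomial) shows the pattern: if \(\lambda\) is an eigenvalue of \(A\text{,}\) then \(p(\lambda)\) is an eigenvalue of \(p(A)\text{.}\)

    from sympy import Matrix, eye

    A = Matrix([[1, 2], [2, 1]])    # illustrative matrix with eigenvalues -1 and 3

    # p(x) = 2 + x + x**2, evaluated at the matrix A
    pA = 2 * eye(2) + A + A**2
    print(A.eigenvals())            # the eigenvalues 3 and -1, each with multiplicity 1
    print(pA.eigenvals())           # the eigenvalues 14 and 2, i.e. p(3) and p(-1)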

Exercise 4.1.8.

Recall that a matrix \(B\) is said to be similar to a matrix \(A\) if there exists an invertible matrix \(P\) such that \(B = P^{-1}AP\text{.}\) Much of what follows concerns the question of whether or not a given \(n\times n\) matrix \(A\) is diagonalizable.

Definition 4.1.9.

An \(n\times n\) matrix \(A\) is said to be diagonalizable if \(A\) is similar to a diagonal matrix.
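In SymPy, the diagonalize method tests this directly (a minimal sketch with an illustrative matrix); it returns \(P\) and \(D\) with \(A = PDP^{-1}\text{,}\) and raises an error if the matrix is not diagonalizable:

    from sympy import Matrix

    A = Matrix([[4, 1], [2, 3]])
    P, D = A.diagonalize()         # P invertible, D diagonal, A = P*D*P**(-1)
    print(D)                       # diagonal entries are the eigenvalues 2 and 5
    print(A == P * D * P.inv())    # True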
The following results will frequently be useful.

Theorem 4.1.10.

If \(A\) and \(B\) are similar matrices, then they have the same determinant, trace, and characteristic polynomial.

Proof.

The first two follow directly from properties of the determinant and trace. For the last, note that if \(B = P^{-1}AP\text{,}\) then
\begin{equation*} P^{-1}(xI_n-A)P = P^{-1}(xI_n)P-P^{-1}AP = xI_n - B\text{,} \end{equation*}
so \(xI_n-B\sim xI_n-A\text{,}\) and therefore \(\det(xI_n-B)=\det(xI_n-A)\text{.}\)
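A quick numerical check of these invariants (a sketch; the matrices are illustrative choices):

    from sympy import Matrix, symbols

    x = symbols('x')
    A = Matrix([[1, 2], [2, 1]])
    P = Matrix([[1, 1], [0, 1]])     # any invertible matrix will do
    B = P.inv() * A * P              # B is similar to A

    print(A.det() == B.det())        # True
    print(A.trace() == B.trace())    # True
    print(A.charpoly(x).as_expr() == B.charpoly(x).as_expr())   # True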

Example 4.1.11.

Determine the eigenvalues and eigenvectors of \(A = \bbm 0\amp 1\amp 1\\1\amp 0\amp 1\\1\amp 1\amp 0\ebm\text{.}\)
Solution.
We begin with the characteristic polynomial. We have
\begin{align*} \det(xI_n - A) \amp =\det\bbm x \amp -1\amp -1\\-1\amp x \amp -1\\-1\amp -1\amp x\ebm\\ \amp = x \begin{vmatrix}x \amp -1\\-1\amp x\end{vmatrix} +1\begin{vmatrix}-1\amp -1\\-1\amp x\end{vmatrix} -1\begin{vmatrix}-1\amp x\\-1\amp -1\end{vmatrix}\\ \amp = x(x^2-1)+(-x-1)-(1+x)\\ \amp = x(x-1)(x+1)-2(x+1)\\ \amp = (x+1)[x^2-x-2]\\ \amp = (x+1)^2(x-2)\text{.} \end{align*}
The roots of the characteristic polynomial are our eigenvalues, so we have \(\lambda_1=-1\) and \(\lambda_2=2\text{.}\) Note that the first eigenvalue comes from a repeated root. This is typically where things get interesting. If an eigenvalue does not come from a repeated root, then there will only be one (independent) eigenvector that corresponds to it. (That is, \(\dim E_\lambda(A)=1\text{.}\)) If an eigenvalue is repeated, it could have more than one eigenvector, but this is not guaranteed.
We find that \(A-(-1)I_n = \bbm 1\amp 1\amp 1\\1\amp 1\amp 1\\1\amp 1\amp 1\ebm\text{,}\) which has reduced row-echelon form \(\bbm 1\amp 1\amp 1\\0\amp 0\amp 0\\0\amp 0\amp 0\ebm\text{.}\) Solving for the nullspace, we find that there are two independent eigenvectors:
\begin{equation*} \xx_{1,1}=\bbm 1\\-1\\0\ebm, \quad \text{ and } \quad \xx_{1,2}=\bbm 1\\0\\-1\ebm\text{,} \end{equation*}
so
\begin{equation*} E_{-1}(A) = \spn\left\{\bbm 1\\-1\\0\ebm, \bbm 1\\0\\-1\ebm\right\}\text{.} \end{equation*}
For the second eigenvalue, we have \(A-2I_n = \bbm -2\amp 1\amp 1\\1\amp -2\amp 1\\1\amp 1\amp -2\ebm\text{,}\) which has reduced row-echelon form \(\bbm 1\amp 0\amp -1\\0\amp 1\amp -1\\0\amp 0\amp 0\ebm\text{.}\) An eigenvector in this case is given by
\begin{equation*} \xx_2 = \bbm 1\\1\\1\ebm\text{.} \end{equation*}
In general, if the characteristic polynomial can be factored as
\begin{equation*} c_A(x)=(x-\lambda)^mq(x)\text{,} \end{equation*}
where \(q(x)\) is not divisible by \(x-\lambda\text{,}\) then we say that \(\lambda\) is an eigenvalue of multiplicity \(m\text{.}\) In the example above, \(\lambda_1=-1\) has multiplicity 2, and \(\lambda_2=2\) has multiplicity 1.
The eigenvects command in SymPy takes a square matrix as input, and outputs a list of tuples (one for each eigenvalue). For a given eigenvalue, the corresponding tuple has the form (eigenvalue, multiplicity, eigenvectors). Using SymPy to solve Example 4.1.11 looks as follows:
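A minimal version of that computation might look like this:

    from sympy import Matrix

    A = Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
    print(A.eigenvects())
    # Result: eigenvalue -1 with multiplicity 2 and eigenvectors [-1, 1, 0], [-1, 0, 1];
    #         eigenvalue  2 with multiplicity 1 and eigenvector  [1, 1, 1]

Note that the eigenvectors returned for \(\lambda_1=-1\) are scalar multiples of the ones found above; any nonzero scalar multiple of an eigenvector is again an eigenvector.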
An important result about multiplicity is the following.

Theorem 4.1.12.

If \(\lambda\) is an eigenvalue of multiplicity \(m\) of an \(n\times n\) matrix \(A\text{,}\) then \(\dim E_\lambda(A)\leq m\text{.}\)
To prove Theorem 4.1.12 we need the following lemma, which we've borrowed from Section 5.5 of Nicholson's textbook.

Lemma 4.1.13.

Let \(\{\xx_1,\ldots, \xx_k\}\) be a linearly independent set of eigenvectors of an \(n\times n\) matrix \(A\text{,}\) with \(A\xx_i=\lambda_i\xx_i\) for each \(i\text{,}\) and extend it to a basis \(\{\xx_1,\ldots, \xx_n\}\) of \(\R^n\text{.}\) If \(P = \bbm \xx_1\amp \cdots \amp \xx_n\ebm\text{,}\) then
\begin{equation*} P^{-1}AP = \bbm D \amp B\\0\amp A_1\ebm\text{,} \end{equation*}
where \(D\) is the \(k\times k\) diagonal matrix with diagonal entries \(\lambda_1,\ldots, \lambda_k\text{.}\)

Proof.

We have
\begin{align*} P^{-1}AP \amp = P^{-1}A\bbm \xx_1\amp \cdots \amp \xx_n\ebm\\ \amp =\bbm (P^{-1}A)\xx_1\amp \cdots \amp (P^{-1}A)\xx_n\ebm\text{.} \end{align*}
For \(1\leq i\leq k\text{,}\) we have
\begin{equation*} (P^{-1}A)(\xx_i) = P^{-1}(A\xx_i) = P^{-1}(\lambda_i\xx_i)=\lambda_i(P^{-1}\xx_i)\text{.} \end{equation*}
But \(P^{-1}\xx_i\) is the \(i\)th column of \(P^{-1}P = I_n\text{,}\) which proves the result.
We can use Lemma 4.1.13 to prove that \(\dim E_\lambda(A)\leq m\) as follows. Suppose \(\{\xx_1,\ldots, \xx_k\}\) is a basis for \(E_\lambda(A)\text{.}\) Then this is a linearly independent set of eigenvectors, so our lemma guarantees the existence of a matrix \(P\) such that
\begin{equation*} P^{-1}AP = \bbm \lambda I_k \amp B\\0\amp A_1\ebm\text{.} \end{equation*}
Let \(\tilde{A}=P^{-1}AP\text{.}\) On the one hand, since \(\tilde{A}\sim A\text{,}\) we have \(c_A(x)=c_{\tilde{A}}(x)\text{.}\) On the other hand,
\begin{equation*} \det(xI_n-\tilde{A}) = \det\bbm (x-\lambda)I_k \amp -B\\0 \amp xI_{n-k}-A_1\ebm = (x-\lambda)^k\det(xI_{n-k}-A_1)\text{.} \end{equation*}
This shows that \(c_A(x)\) is divisible by \((x-\lambda)^k\text{.}\) Since \(m\) is the largest integer such that \(c_A(x)\) is divisible by \((x-\lambda)^m\text{,}\) we must have \(\dim E_\lambda(A)=k\leq m\text{.}\)
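The inequality \(\dim E_\lambda(A)\leq m\) can be strict. Here is a small sketch (with an illustrative matrix) of a repeated eigenvalue whose eigenspace has dimension smaller than its multiplicity:

    from sympy import Matrix, eye

    A = Matrix([[1, 1], [0, 1]])            # illustrative matrix
    print(A.eigenvals())                    # {1: 2} -- eigenvalue 1 has multiplicity 2
    print(len((A - eye(2)).nullspace()))    # 1 -- but dim E_1(A) = 1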
Another important result is the following. The proof is a bit tricky: it requires mathematical induction, and a couple of clever observations.

Theorem 4.1.14.

If \(\vv_1,\ldots, \vv_k\) are eigenvectors of \(A\) corresponding to distinct eigenvalues \(\lambda_1,\ldots, \lambda_k\text{,}\) then the set \(\{\vv_1,\ldots, \vv_k\}\) is linearly independent.

Proof.

The proof is by induction on the number \(k\) of distinct eigenvalues. Since eigenvectors are nonzero, any set consisting of a single eigenvector \(\vv_1\) is independent. Suppose, then, that any set of eigenvectors corresponding to \(k-1\) distinct eigenvalues is linearly independent, and let \(\vv_1,\ldots, \vv_k\) be eigenvectors corresponding to distinct eigenvalues \(\lambda_1,\ldots, \lambda_k\text{.}\)
Consider the equation
\begin{equation*} c_1\vv_1+c_2\vv_2+\cdots +c_k\vv_k=\zer\text{,} \end{equation*}
for scalars \(c_1,\ldots, c_k\text{.}\) Multiplying both sides by the matrix \(A\text{,}\) we have
\begin{align} A(c_1\vv_1+c_2\vv_2+\cdots +c_k\vv_k) \amp = A\zer\tag{4.1.3}\\ c_1A\vv_1+c_2A\vv_2+\cdots + c_kA\vv_k \amp = \zer\tag{4.1.4}\\ c_1\lambda_1\vv_1+c_2\lambda_2\vv_2+\cdots + c_k\lambda_k\vv_k \amp =\zer\text{.}\tag{4.1.5} \end{align}
On the other hand, we can also multiply both sides by the eigenvalue \(\lambda_1\text{,}\) giving
\begin{equation} \zer = c_1\lambda_1\vv_1 + c_2\lambda_1\vv_2+\cdots + c_k\lambda_1\vv_k\text{.}\tag{4.1.6} \end{equation}
Subtracting (4.1.6) from (4.1.5), the first terms cancel, and we get
\begin{equation*} c_2(\lambda_2-\lambda_1)\vv_2+\cdots + c_k(\lambda_k-\lambda_1)\vv_k=\zer\text{.} \end{equation*}
By hypothesis, the set \(\{\vv_2,\ldots, \vv_k\}\) of \(k-1\) eigenvectors is linearly independent. We know that \(\lambda_j-\lambda_1\neq 0\) for \(j=2,\ldots, k\text{,}\) since the eigenvalues are all distinct. Therefore, the only way this linear combination can equal zero is if \(c_2=0,\ldots, c_k=0\text{.}\) This leaves us with \(c_1\vv_1=\zer\text{,}\) but \(\vv_1\neq \zer\text{,}\) so \(c_1=0\) as well.
Theorem 4.1.14 tells us that vectors from different eigenspaces are independent. In particular, a union of bases from each eigenspace will be an independent set. Therefore, Theorem 4.1.12 provides an initial criterion for diagonalization: if the dimension of each eigenspace \(E_\lambda(A)\) is equal to the multiplicity of \(\lambda\text{,}\) then \(A\) is diagonalizable.
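As a sketch of this criterion in SymPy, using the matrix from Example 4.1.11, we can compare each eigenvalue's multiplicity with the dimension of its eigenspace:

    from sympy import Matrix

    A = Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])

    for lam, mult, vects in A.eigenvects():
        print(lam, mult, len(vects))     # each eigenspace dimension equals the multiplicity
    print(A.is_diagonalizable())         # True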
Our focus in the next section will be on diagonalization of symmetric matrices, and soon we will see that for such matrices, eigenvectors corresponding to different eigenvalues are not just independent, but orthogonal.

Exercises

1.

Find the characteristic polynomial of the matrix \(A = {\left[\begin{array}{ccc} 1 \amp -2 \amp 0\cr 0 \amp 4 \amp -4\cr -3 \amp 1 \amp 0 \end{array}\right]}.\)

2.

Find the three distinct real eigenvalues of the matrix \(B = {\left[\begin{array}{ccc} -1 \amp 4 \amp 7\cr 0 \amp -4 \amp -8\cr 0 \amp 0 \amp 7 \end{array}\right]}.\)

3.

The matrix \(A={\left[\begin{array}{ccc} -8 \amp -4 \amp -12\cr -4 \amp -8 \amp -12\cr 4 \amp 4 \amp 8 \end{array}\right]}\) has two real eigenvalues, one of multiplicity \(1\) and one of multiplicity \(2\text{.}\) Find the eigenvalues and a basis for each eigenspace.

4.

The matrix \(A={\left[\begin{array}{cccc} 5 \amp 2 \amp -14 \amp 2\cr -2 \amp 1 \amp 5 \amp -2\cr 1 \amp 1 \amp -4 \amp 1\cr 1 \amp 1 \amp -7 \amp 4 \end{array}\right]}\) has two distinct real eigenvalues \(\lambda_1 \lt \lambda_2\text{.}\) Find the eigenvalues and a basis for each eigenspace.

5.

The matrix
\begin{equation*} A = \left[\begin{array}{ccc} 2 \amp 1 \amp 0\cr -9 \amp -4 \amp 1\cr k \amp 0 \amp 0\cr \end{array}\right] \end{equation*}
has three distinct real eigenvalues if and only if
\(\underline{\qquad} \lt k \lt \underline{\qquad}\text{.}\)

6.

The matrix
\begin{equation*} A=\left[\begin{array}{cccc} 4 \amp -4 \amp -8 \amp -4\cr -2 \amp 2 \amp 4 \amp 2\cr 2 \amp -2 \amp -4 \amp -2\cr 0 \amp 0 \amp 0 \amp 0 \end{array}\right] \end{equation*}
has two real eigenvalues \(\lambda_1 \lt \lambda_2\text{.}\) Find these eigenvalues, their multiplicities, and the dimensions of their corresponding eigenspaces.
The smaller eigenvalue \(\lambda_1 = \underline{\qquad}\) has multiplicity \(\underline{\qquad}\) and the dimension of its corresponding eigenspace is \(\underline{\qquad}\text{.}\)
The larger eigenvalue \(\lambda_2 = \underline{\qquad}\) has multiplicity \(\underline{\qquad}\) and the dimension of its corresponding eigenspace is \(\underline{\qquad}\text{.}\)

7.

Suppose \(A\) is an invertible \(n\times n\) matrix and \(\vec{v}\) is an eigenvector of \(A\) with associated eigenvalue \(3\text{.}\) Convince yourself that \(\vec{v}\) is an eigenvector of the following matrices, and find the associated eigenvalues.
  1. The eigenvalue of the matrix \(A^{8}\) is \(\underline{\qquad}\text{.}\)
  2. The eigenvalue of the matrix \(A^{-1}\) is \(\underline{\qquad}\text{.}\)
  3. The eigenvalue of the matrix \(A - 3 I_n\) is \(\underline{\qquad}\text{.}\)
  4. The eigenvalue of the matrix \(-3 A\) is \(\underline{\qquad}\text{.}\)

8.

Let
\begin{equation*} \vec{v}_1={\left[\begin{array}{c} 0\cr -3\cr -1 \end{array}\right]}, \vec{v}_2 = {\left[\begin{array}{c} -3\cr 3\cr 0 \end{array}\right]}, \vec{v}_3 = {\left[\begin{array}{c} -1\cr 0\cr 1 \end{array}\right]} \end{equation*}
be eigenvectors of the matrix \(A\) which correspond to the eigenvalues \(\lambda_1 = -1\text{,}\) \(\lambda_2 = 0\text{,}\) and \(\lambda_3 = 4\text{,}\) respectively, and let
\begin{equation*} \vec{x}={\left[\begin{array}{c} 2\cr 3\cr 3 \end{array}\right]}. \end{equation*}
Express \(\vec{x}\) as a linear combination \(\vec{x} =a\vec{v}_1 + b\vec{v}_2 +c\vec{v}_3\text{,}\) and find \(A\vec{x}\text{.}\)

9.

Recall that similarity of matrices is an equivalence relation; that is, the relation is reflexive, symmetric and transitive.
Verify that \(A={\left[\begin{array}{cc} 0 \amp 1\cr 1 \amp -1 \end{array}\right]}\) is similar to itself by finding a \(T\) such that \(A = T^{-1} A T\text{.}\)
We know that \(A\) and \(B={\left[\begin{array}{cc} 1 \amp -1\cr 1 \amp -2 \end{array}\right]}\) are similar since \(A = P^{-1} B P\) where \(P = {\left[\begin{array}{cc} 1 \amp -1\cr 2 \amp -3 \end{array}\right]}\text{.}\)
Verify that \(B\sim A\) by finding an \(S\) such that \(B = S^{-1} A S\text{.}\)
We also know that \(B\) and \(C={\left[\begin{array}{cc} -3 \amp 5\cr -1 \amp 2 \end{array}\right]}\) are similar since \(B = Q^{-1} C Q\) where \(Q = {\left[\begin{array}{cc} 1 \amp 1\cr 1 \amp 0 \end{array}\right]}\text{.}\)
Verify that \(A\sim C\) by finding an \(R\) such that \(A = R^{-1} C R\text{.}\)