Section 4.3 Project: Eigenvalues and diagonalization
Let \(A\) be a symmetric matrix. According to the Real Spectral Theorem, there exists an orthogonal matrix \(P\) such that \(P^TAP\) is diagonal, with the entries on the diagonal given by the eigenvalues of \(A\text{.}\)
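As a quick illustration of the theorem (a sketch only, using an assumed \(2\times 2\) symmetric matrix rather than the project's \(A\)), we can build \(P\) from unit eigenvectors and check that \(P^TP=I\) and \(P^TAP\) is diagonal:

```python
import sympy as sy

# A small symmetric matrix chosen for illustration (not the project's A)
A = sy.Matrix([[2, 1], [1, 2]])

# Collect unit eigenvectors of A as the columns of P
vecs = [v.normalized() for (lam, mult, basis) in A.eigenvects() for v in basis]
P = sy.Matrix.hstack(*vecs)

print(P.T * P)      # the identity matrix
print(P.T * A * P)  # a diagonal matrix whose entries are the eigenvalues
```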
Now, find the eigenvalues of \(A\text{.}\) Yes, we could jump straight to the answer, but let's go through the steps anyway. First, compute the characteristic polynomial of \(A\text{.}\) You may wish to refer to Subsection B.3.2 for the correct syntax to use.
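For example (a sketch using an assumed \(2\times 2\) matrix in place of the project's \(A\)), the characteristic polynomial \(\det(xI-A)\) can be computed as follows:

```python
import sympy as sy

x = sy.symbols('x')
# An assumed example matrix, standing in for the project's A
A = sy.Matrix([[2, 1], [1, 2]])

# charpoly computes the characteristic polynomial det(xI - A)
p = A.charpoly(x)
print(p.as_expr())  # x**2 - 4*x + 3
```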
Recall that if \(\vv\) is an eigenvector of \(A\) corresponding to the eigenvalue \(\lambda\text{,}\) then \(\vv\) belongs to the nullspace of \(\lambda I-A\text{,}\) where \(I\) is the identity matrix. You can create an \(n\times n\) identity matrix in SymPy using the syntax eye(n).
For each eigenvalue \(\lambda\) found in part (b), compute the nullspace of \(\lambda I-A\text{.}\) You will want to refer to these nullspaces later, so give each one a name. For example, if your first eigenvalue was 7, you could enter something like
E1 = (7*sy.eye(5)-A).nullspace()
E1
to get the first eigenspace. Three code cells are provided below. If you are in Jupyter, you can add more cells by clicking the \(+\) button in the toolbar.
Finally, let's check our work. Use the command A.eigenvects() to compute the eigenvalues and eigenvectors in one step, and confirm that the results match what you found above.
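As a sketch of what to expect (again with an assumed example matrix, not the project's \(A\)), each entry in the output of eigenvects() is a tuple containing an eigenvalue, its algebraic multiplicity, and a basis for the corresponding eigenspace:

```python
import sympy as sy

A = sy.Matrix([[2, 1], [1, 2]])  # assumed example matrix

# Each tuple is (eigenvalue, algebraic multiplicity, basis of eigenspace)
for lam, mult, basis in A.eigenvects():
    print(lam, mult, basis)
```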
Recall that the eigenvectors corresponding to distinct eigenvalues are automatically orthogonal. But this is not true of repeated eigenvalues, and there's a good chance you've found that one of your eigenvalues has multiplicity greater than 1.
A matrix \(P\) is orthogonal if \(P^TP=I\text{,}\) in which case the columns of \(P\) form an orthonormal basis. (So we are looking for an orthonormal basis of eigenvectors.)
For any 1-dimensional eigenspaces, you will just need to find a unit eigenvector. For example, if \(\vv = (2,1,0,3,1)\) is an eigenvector for an eigenvalue of multiplicity 1, then you will want to use the eigenvector \(\uu = \frac{1}{\sqrt{15}}(2,1,0,3,1)\text{,}\) since \(\len{\vv} = \sqrt{15}\text{.}\)
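In SymPy, this normalization can be done by dividing by the norm (a sketch, using the eigenvector from the example above):

```python
import sympy as sy

v = sy.Matrix([2, 1, 0, 3, 1])  # the eigenvector from the example above
u = v / v.norm()                # equivalently, v.normalized()
print(u.norm())                 # 1
```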
For any higher-dimensional eigenspaces, you can use the Gram-Schmidt algorithm. Recall that the optional argument True will produce unit vectors: given a basis B for an eigenspace, the command sy.GramSchmidt(B,True) will produce an orthonormal basis.
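For instance (a sketch with an assumed basis for a 2-dimensional eigenspace, not one from the project):

```python
import sympy as sy

# An assumed basis for a 2-dimensional eigenspace
B = [sy.Matrix([1, 1, 0]), sy.Matrix([1, 0, 1])]

# Passing True as the second argument normalizes the output vectors
U = sy.GramSchmidt(B, True)
print(U)
```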
Subsection 4.3.2 The power method for dominant eigenvalues
For the problem you just completed, you may have noticed that there was one eigenvalue that was larger in absolute value than all the other eigenvalues. Such an eigenvalue is called a dominant eigenvalue.
If you complete Section 4.6, you will see that the state of such systems is ultimately given by a sum of certain geometric series (involving eigenvalues of a matrix), and that the long-term behaviour of the system is approximately geometric, and governed by the dominant eigenvalue.
Let's see if this is true. Below, you are given an initial guess x0 and an empty list L. You want to populate L with vectors of the form \(A^k\xx_0\text{,}\) for \(0\leq k\leq 10\text{.}\) This is most easily done using a for loop. For details on syntax, see Subsubsection 4.7.2.1.
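One way to set up such a loop (a sketch only: the matrix A and initial guess x0 here are assumed examples, not the ones provided in the project) is to repeatedly multiply the last entry of the list by \(A\text{:}\)

```python
import sympy as sy

A = sy.Matrix([[2, 1], [1, 2]])  # assumed example matrix
x0 = sy.Matrix([1, 0])           # assumed initial guess
L = [x0]                         # L[0] = A^0 x0 = x0

# Append A^k x0 for k = 1, ..., 10
for k in range(10):
    L.append(A * L[-1])

print(L[10])
```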
Now let's check our work. How can we tell whether the vector L[10] is (approximately) a multiple of the dominant eigenvector? One option is to divide each entry in L[10] by the smallest (in absolute value) nonzero entry in L[10]. How does this compare to the eigenvector you originally found for the dominant eigenvalue?
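This rescaling might be done as follows (a sketch, with an assumed matrix and initial guess standing in for the project's, and v standing in for L[10]):

```python
import sympy as sy

A = sy.Matrix([[2, 1], [1, 2]])  # assumed example matrix
x0 = sy.Matrix([1, 0])           # assumed initial guess
v = A**10 * x0                   # stands in for L[10]

# Divide by the entry of smallest absolute value among the nonzero entries
m = min((e for e in v if e != 0), key=abs)
print(v / m)  # entries close to those of a dominant eigenvector
```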
Finally, suppose we didn't know what the dominant eigenvalue was, and we wanted to find it. Note that if \(\xx\) is a dominant eigenvector, then \(A\xx = \lambda\xx\text{,}\) where \(\lambda\) is the dominant eigenvalue. Then \(\dfrac{\xx\cdot (A\xx)}{\xx\cdot \xx} = \dfrac{\xx\cdot (\lambda\xx)}{\xx\cdot \xx} = \lambda\text{,}\) so for \(\xx_k = A^k\xx_0\) approximately equal to a dominant eigenvector, the Rayleigh quotient \(r_k = \dfrac{\xx_k\cdot (A\xx_k)}{\xx_k\cdot \xx_k}\) should be approximately equal to \(\lambda\text{.}\)
Following the approach in Subsubsection 4.7.2.1, compute the Rayleigh quotients \(r_k\) for \(1\leq k\leq 10\text{,}\) and comment on how well they approximate the dominant eigenvalue of \(A\text{.}\)
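A sketch of the computation (again with an assumed matrix and initial guess, whose dominant eigenvalue is 3):

```python
import sympy as sy

A = sy.Matrix([[2, 1], [1, 2]])  # assumed example matrix
x0 = sy.Matrix([1, 0])           # assumed initial guess
L = [x0]
for k in range(10):
    L.append(A * L[-1])

# Rayleigh quotients r_k = (x_k . A x_k)/(x_k . x_k), with x_k = A^k x0
for k in range(1, 11):
    xk = L[k]
    r = xk.dot(A * xk) / xk.dot(xk)
    print(k, float(r))  # the values approach 3, the dominant eigenvalue here
```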
If your numbers seem wrong, it might be for the following reason: when we write our initial guess x0 as a linear combination of the eigenvectors, the coefficient of the dominant eigenvector has to be nonzero. Is that the case here? To check, note that if \(\vec{c}\) is the vector of coefficients, then we must have \(P\vec{c}=\xx_0\text{.}\) Can you solve this equation for \(\vec{c}\text{?}\)
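One way to solve for \(\vec{c}\) in SymPy (a sketch with an assumed matrix and initial guess; P is built from unit eigenvectors as in the first part of the project):

```python
import sympy as sy

A = sy.Matrix([[2, 1], [1, 2]])  # assumed example matrix
x0 = sy.Matrix([1, 0])           # assumed initial guess

# Columns of P are unit eigenvectors of A
vecs = [v.normalized() for (lam, mult, basis) in A.eigenvects() for v in basis]
P = sy.Matrix.hstack(*vecs)

# Solve P c = x0 for the coefficient vector c; since P is orthogonal,
# c = P.T * x0 would give the same answer
c = P.solve(x0)
print(c)  # for this example, every entry of c is nonzero
```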