The eigenvalue problem

The point of this chapter is (1) that diagonal matrices are awesome, and (2) to explain the conditions under which a given matrix is diagonalizable. As to why one would want to diagonalize a matrix, see (1).

Diagonal matrices

A diagonal matrix is a square matrix D satisfying D_{i,j} = 0 unless i = j. Diagonal matrices have the nicest possible algebraic properties. If A and B are diagonal matrices of size n, then

  • A + B, AB, and BA are diagonal. More to the point, AB = BA!
  • \det A = A_{1,1}A_{2,2}\cdots A_{n,n}, which is zero if any diagonal entry of A is. (Both facts are checked numerically in the sketch after this list.)
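
Here's a quick numerical sanity check of both properties (a minimal numpy sketch; A and B below are arbitrary examples):

    import numpy as np

    # Two arbitrary diagonal matrices of size 3.
    A = np.diag([2.0, -1.0, 5.0])
    B = np.diag([3.0, 4.0, -2.0])

    # Sums and products of diagonal matrices are diagonal, and they commute.
    assert np.allclose(A @ B, B @ A)

    # The determinant is the product of the diagonal entries.
    assert np.isclose(np.linalg.det(A), 2.0 * (-1.0) * 5.0)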

For example, let

B = \left[ \begin{array}{cccc} -2 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 7 & 0 \\ 0 & 0 & 0 & 4 \end{array} \right].

First, B is diagonal. Second, \det B = (-2)(1)(7)(4) = -56.

Finally, observe that for any standard basis vector {\mathbf e}_i, we have B{\mathbf e}_i = B_{i,i}{\mathbf e}_i. In other words, all a diagonal matrix does to the basis vectors is scale them. (This will be important later.)

 \left[ \begin{array}{cccc} -2 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 7 & 0 \\ 0 & 0 & 0 & 4 \end{array} \right] \left[ \begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ 7 \\ 0 \end{array} \right].
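
This is easy to check numerically (a small numpy sketch of the computation above):

    import numpy as np

    B = np.diag([-2.0, 1.0, 7.0, 4.0])
    e3 = np.array([0.0, 0.0, 1.0, 0.0])  # the third standard basis vector

    # B scales e3 by its third diagonal entry, B_{3,3} = 7.
    assert np.allclose(B @ e3, 7.0 * e3)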

Similar matrices

Recall that all matrices represent linear transformations written in a given basis. If a matrix is square, it represents a linear operator (a linear map from a space to itself, up to isomorphism). If two square matrices A and B represent the same linear operator, then they are similar and we write A \sim B.

From another point of view, the only difference between A and B is that they act on the vector space in terms of different bases. Therefore, there is a change of basis matrix P such that PA = BP: either you act on the space and then change the basis, or you change the basis and then act on the space; the result is the same. Solving for A, an equivalent definition of similarity is the existence of an invertible matrix P such that A = P^{-1}BP.
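
Here is that equivalence in a minimal numpy sketch (the diagonal matrix B and the change of basis matrix P are arbitrary choices; any invertible P works):

    import numpy as np

    # An arbitrary diagonal matrix B and an arbitrary invertible change of basis P.
    B = np.diag([3.0, -1.0])
    P = np.array([[1.0, 2.0],
                  [1.0, 3.0]])  # det P = 1, so P is invertible

    # A represents the same operator as B, written in a different basis.
    A = np.linalg.inv(P) @ B @ P

    # The two formulations of similarity agree: PA = BP.
    assert np.allclose(P @ A, B @ P)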

An obvious application of this concept is to ask whether a given square matrix is diagonalizable, that is, similar to a diagonal matrix.

Eigenvectors and eigenvalues

Remember that a diagonal matrix just scales the basis elements. Let’s put some math into that sentence. If T is a linear operator on V, then a nonzero vector {\mathbf v} is an eigenvector of T with eigenvalue \lambda if

T({\mathbf v}) = \lambda {\mathbf v}.

The eigenvectors with eigenvalue \lambda, together with the zero vector, form a vector space,

E_\lambda = \{ {\mathbf v} \in V \; : \; T({\mathbf v}) = \lambda {\mathbf v}\},

called the eigenspace of \lambda, which is a subspace of V.

Therefore, T acts diagonally on its eigenvectors. If we could write the whole space in terms of the eigenvectors, then we could find a diagonal matrix for T.

T is diagonalizable if V has a basis consisting of eigenvectors of T.
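
One way to test this numerically (a sketch using numpy; the matrix A is an arbitrary example): np.linalg.eig returns eigenvectors as the columns of a matrix, and those eigenvectors form a basis exactly when that matrix has full rank.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])  # an arbitrary example

    # Columns of V are eigenvectors; w holds the matching eigenvalues.
    w, V = np.linalg.eig(A)

    # The eigenvectors form a basis exactly when V has full rank,
    # and in that case V^{-1} A V is diagonal.
    if np.linalg.matrix_rank(V) == A.shape[0]:
        D = np.linalg.inv(V) @ A @ V
        assert np.allclose(D, np.diag(w))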

How do you find eigenvalues for a square matrix A, then? We must solve the equation

A{\mathbf x} = \lambda {\mathbf x}

for {\mathbf x} \neq {\mathbf 0} and \lambda. First, write \lambda {\mathbf x} = \lambda (I {\mathbf x}); now we can subtract and factor out {\mathbf x}.

A{\mathbf x} - \lambda I {\mathbf x} = {\mathbf 0}

(A - \lambda I){\mathbf x} = {\mathbf 0}.

If {\mathbf x} is a nonzero vector that (A - \lambda I) destroys, then A - \lambda I is singular, so \lambda must satisfy \det (\lambda I - A) = 0. The determinant \det (\lambda I - A) is a polynomial in \lambda, called the characteristic polynomial of A. Its roots are the eigenvalues of A. Once you have the eigenvalues, you can solve A{\mathbf x} = \lambda {\mathbf x} for {\mathbf x}.
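
Here's the whole recipe carried out numerically (a minimal numpy sketch; A is an arbitrary example, and the null space of A - \lambda I is read off from the SVD):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])  # an arbitrary example

    # np.poly(A) gives the coefficients of det(lambda I - A),
    # here lambda^2 - 7 lambda + 10; its roots are the eigenvalues.
    eigenvalues = np.roots(np.poly(A))  # the roots 5 and 2, in some order

    # For each eigenvalue, solve (A - lambda I) x = 0: the null space is
    # spanned by the right singular vector of the smallest singular value.
    for lam in eigenvalues:
        _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
        x = Vt[-1]
        assert np.allclose(A @ x, lam * x)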
