Coordinatization


Introduction

At long last (with this section plus the final paper extra credit), we will understand why matrix multiplication has the odd but extremely effective definition that it does. Aside from matrices of the same size forming a vector space and their action on {\mathbf R}^n being a linear transformation, it isn't obvious what either of the last two sections really means for matrix theory and the study of real column vectors. This penultimate section will finally get at the relationship between matrices and linear transformations before applying this isomorphism in Section 7 to look at the eigenvalue problem, which Google exploits to the tune of all of their earthly riches.

So if you understand these next two sections, you are in line to receive all of Google’s money.*

Coordinatization of vector spaces

All n-dimensional vector spaces are isomorphic to {\mathbf R}^n.

No doubt you have already had some inkling that in polynomial vector spaces only the coefficients matter, in matrix vector spaces only the entries matter, in column vector spaces only the components matter, and so on. First, we will formalize this notion: if V is a vector space with finite dimension n and ordered basis (the order matters) \mathscr{B} = \{ {\mathbf b}_1, {\mathbf b}_2, \ldots, {\mathbf b}_n \}, then V is isomorphic to {\mathbf R}^n by the map

[ \sum_{i=1}^n \alpha_i {\mathbf b}_i ]_{\mathscr{B}} = \left[ \begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{array} \right].

Note that [{\mathbf b}_i]_\mathscr{B} = {\mathbf e}_i. Since [ \cdot ]_\mathscr{B} sends a basis to a basis, per the preceding section we know it is an isomorphism. This theorem gives mathematical rigor to our growing feeling that it’s really only the coefficients of the basis vectors that matter.
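
To make the coordinate map concrete, here is a minimal sketch in Python; the ordered basis \{1, x, x^2, x^3\} for P_3(x) and the function name coords are illustrative choices, not anything fixed by the text.

import numpy as np

# Coordinatize P_3(x) with respect to the ordered basis {1, x, x^2, x^3}:
# the polynomial a_0 + a_1 x + a_2 x^2 + a_3 x^3 goes to (a_0, a_1, a_2, a_3)^T.
def coords(poly_coeffs):
    """Return the coordinate vector [p]_B as a column vector in R^4."""
    return np.array(poly_coeffs, dtype=float).reshape(-1, 1)

# p(x) = 2 + 3x - x^3 has coordinate vector (2, 3, 0, -1)^T.
print(coords([2, 3, 0, -1]))

# Each basis vector goes to the corresponding standard basis vector,
# e.g. [x]_B = e_2.
print(coords([0, 1, 0, 0]))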

e.g. By the preceding theorem and the transitivity of isomorphism, the vector spaces {\mathbf R}^4, P_3(x), and M_{2\times 2}({\mathbf R}) are all isomorphic, or algebraically indistinguishable as vector spaces.
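
As a quick instance of this indistinguishability (the ordering of the matrix entries below is an illustrative convention, not one the text fixes), coordinatizing M_{2\times 2}({\mathbf R}) with respect to the matrix units \{E_{11}, E_{12}, E_{21}, E_{22}\} simply reads the entries into a column vector in {\mathbf R}^4, and the vector space operations carry over:

import numpy as np

# Coordinatize M_{2x2}(R) with respect to the ordered basis of matrix units
# {E_11, E_12, E_21, E_22}: read the entries row by row into a vector in R^4.
def coords_matrix(A):
    return A.reshape(-1, 1)

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [-1., 0.]])

# The map is linear, so [2A + B] = 2[A] + [B].
print(np.allclose(coords_matrix(2*A + B), 2*coords_matrix(A) + coords_matrix(B)))  # True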

Coordinatization of linear maps

First, we note that for vector spaces V and W, the set

\text{Hom}(V,W) = \{ T:V \to W \; : \; T \text{ is linear} \}

is itself a vector space. Verify for yourself, or with your class notes, that this is in fact a vector space. This fact is required to make sense of the following statement:

Every linear map between finite-dimensional vector spaces is a matrix.

If V and W have dimensions m and n respectively, then \text{Hom}(V,W) \simeq M_{n \times m}({\mathbf R}). The map that gives the isomorphism is defined as follows. Let V have ordered basis \mathscr{B} = \{ {\mathbf b}_1, {\mathbf b}_2, \ldots, {\mathbf b}_m \} and W have ordered basis \mathscr{C} = \{ {\mathbf c}_1, {\mathbf c}_2, \ldots, {\mathbf c}_n \}. Then T \in \text{Hom}(V,W) is assigned the matrix [T]_{\mathscr{B},\mathscr{C}} whose i-th column is [T({\mathbf b}_i)]_\mathscr{C}, that is, T applied to {\mathbf b}_i and then written in the basis \mathscr{C}.

Observe that there are m members of \mathscr{B}, so [T]_{\mathscr{B}, \mathscr{C}} has m columns, and that [T({\mathbf b}_i)]_{\mathscr{C}} has n components, so the matrix has n rows.
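
Here is a short sketch of this recipe in Python; the differentiation map D = d/dx : P_3(x) \to P_2(x) and the monomial bases are my own illustrative choices.

import numpy as np

# D = d/dx : P_3(x) -> P_2(x), with ordered bases
# B = {1, x, x^2, x^3} for the domain and C = {1, x, x^2} for the codomain.
def D(p):
    """Differentiate a polynomial given by its coefficients (a_0, a_1, a_2, a_3)."""
    return [i * p[i] for i in range(1, len(p))]

# Column i of [D]_{B,C} is D(b_i) written in the basis C.
basis_B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
columns = [D(b) for b in basis_B]            # each D(b_i) has 3 coefficients
matrix_D = np.array(columns, dtype=float).T  # stack as columns: 3 rows, 4 columns
print(matrix_D)
# [[0. 1. 0. 0.]
#  [0. 0. 2. 0.]
#  [0. 0. 0. 3.]]
# dim V = 4 gives 4 columns and dim W = 3 gives 3 rows, as the dimension count predicts.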

e.g. The identity map I_V:V \to V, where both the domain and codomain are written in the same n-element basis, corresponds to the identity matrix I_n. If the domain and codomain are written in different bases, then the corresponding matrix is called a change of basis matrix and will play an integral role in diagonalizing matrices in the next section.
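
For instance (this particular basis is an illustrative choice), take V = {\mathbf R}^2 with ordered basis \mathscr{B} = \left\{ \left[ \begin{array}{c} 1 \\ 1 \end{array} \right], \left[ \begin{array}{c} 1 \\ -1 \end{array} \right] \right\} and let \mathscr{C} be the standard basis. Since I_V({\mathbf b}_i) = {\mathbf b}_i, the i-th column of the change of basis matrix is just {\mathbf b}_i expressed in the standard basis:

[I_V]_{\mathscr{B},\mathscr{C}} = \left[ \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right].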

Consequences in matrix algebra

Many facts from matrix algebra can now be verified in the light of linear transformations. First of all, if T:V \to W, U:W \to X, \dim V = m, \dim W = n, and \dim X = k with respective bases \mathscr{B}, \mathscr{C}, and \mathscr{D}, then

[UT]_{\mathscr{B},\mathscr{D}} = [U]_{\mathscr{C},\mathscr{D}}[T]_{\mathscr{B},\mathscr{C}}.

Notice that [U]_{\mathscr{C},\mathscr{D}} is a matrix with k rows and n columns and that [T]_{\mathscr{B},\mathscr{C}} is a matrix with n rows and m columns. Therefore, matrix multiplication is possible if and only if the corresponding linear composition makes sense.
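
To see this composition rule in action, here is a small numerical check, again using differentiation maps chosen only for illustration: with T = d/dx : P_3(x) \to P_2(x) and U = d/dx : P_2(x) \to P_1(x) in the monomial bases, the product [U]_{\mathscr{C},\mathscr{D}}[T]_{\mathscr{B},\mathscr{C}} should equal the matrix of the second derivative UT : P_3(x) \to P_1(x).

import numpy as np

# Matrices of d/dx in the monomial bases, built column by column as in the sketch above.
T = np.array([[0, 1, 0, 0],      # d/dx : P_3 -> P_2  (3 x 4)
              [0, 0, 2, 0],
              [0, 0, 0, 3]], dtype=float)
U = np.array([[0, 1, 0],         # d/dx : P_2 -> P_1  (2 x 3)
              [0, 0, 2]], dtype=float)

# The composition UT is the second derivative P_3 -> P_1. Directly:
# d^2/dx^2 sends 1 and x to 0, x^2 to 2, and x^3 to 6x.
UT_direct = np.array([[0, 0, 2, 0],
                      [0, 0, 0, 6]], dtype=float)

# [UT]_{B,D} = [U]_{C,D} [T]_{B,C}: a (2 x 3)(3 x 4) = 2 x 4 product.
print(np.allclose(U @ T, UT_direct))   # True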

Here are a few more connections between matrix algebra and linear transformations. Let [T]_{\mathscr{B},\mathscr{C}} = A.

  • If A is a square matrix, then T is a linear transformation whose domain and codomain have the same dimension and are hence isomorphic.
  • The invertibility of A corresponds exactly to the invertibility of T.
  • If A is square, then the kernel of T is nontrivial if and only if the determinant of A is zero (a quick numerical check appears after this list).
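
The determinant statement can be checked numerically; the singular matrix below is an illustrative example rather than anything from the text.

import numpy as np

# A singular 2 x 2 matrix: its second column is twice its first.
A = np.array([[1., 2.],
              [2., 4.]])
print(np.linalg.det(A))        # 0.0 (up to floating-point error)

# A nonzero kernel vector: A (2, -1)^T = 0, so the corresponding T has nontrivial kernel.
v = np.array([2., -1.])
print(A @ v)                   # [0. 0.]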

In extra credit 4, you will venture into the vortex and verify, through a frankly disgusting (but helpful) amount of computation, that matrix multiplication makes absolutely perfect sense.


* This is a lie.
