# What is a determinant?

Inspired by this Reddit thread which was inspired by this MathOverflow thread. Prerequisites: matrix multiplication. The original version of the post was missing the minus sign in the definition of the determinant; thanks to my friend Josh Zelinsky for politely pointing that out.

A determinant is the real number $\det A$ associated to an $n \times n$ matrix $A = (a_{i,j})$, in other words

$A = \left[ \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{array} \right],$

such that

$\det A = \displaystyle \sum_{\sigma \in S_n} \text{sgn}(\sigma)\, a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n, \sigma(n)},$

where $\text{sgn}(\sigma)$ is $+1$ or $-1$ according to whether the permutation $\sigma$ is even or odd.

This definition, while correct and more concise than the expansion-by-cofactors algorithm used to compute it (which I gave as the definition in MTH 237 because like fun am I introducing permutations to use once in a month-long course), is completely nonilluminating. In English, the definition says that you multiply together every entry in a path across the matrix such that in each path you never repeat rows or columns, then add or subtract the resulting products according to the sign of each path's permutation. (One such product would be $a_{1,2}a_{3,4}a_{2,3}a_{4,1}$; notice each row and column is used only once.) This still tells us nothing about what a determinant means or why one gives a shit, and since determinants are one of the most useful tools by which to understand linear operators / square matrices, it’s a pretty big gap to ignore.
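If you'd rather see the sum run than read it, here is a small Python sketch (the function names are my own) that evaluates the permutation-sum definition directly, computing each sign by counting inversions:

```python
import math
from itertools import permutations

def sign(perm):
    # Sign of a permutation: +1 if the number of inversions
    # (out-of-order pairs) is even, -1 if it is odd.
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    # Permutation-sum definition: for each permutation, take one
    # entry from every row (row i, column p[i]), multiply them,
    # and add the products with their signs.
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

For a 2-by-2 matrix this collapses to the familiar two-term formula; the cost grows like $n!$, which is one reason nobody computes large determinants this way.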

It is necessary to add an interpretation to our (definition, algorithm) pair.

If this all starts to sound a bit abstract, skip ahead to the examples and come back after.

Consider what multiplying an arbitrary real number $x$ by a fixed real number $\lambda$ (Greek “lambda”) does. If $\lambda = 0$, it destroys the whole number line, collapsing it into its origin, zero. If $\lambda < 0$, it flips the number line over; positive numbers become negative and negative numbers become positive. Finally, the magnitude of $\lambda$, the number $|\lambda|$, determines how much the length of the unit interval $[0,1]$ gets stretched.

Let’s kick this up to a two-dimensional plane, where we’ll stay for the rest of the article. In this plane, we replace our arbitrary number $x$ with a vector $[x,y]$, where $x$ gives the horizontal position and $y$ gives the vertical position of the vector. Our fixed transformation $\lambda$ is going to be replaced with a square matrix

$A = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]$

and rather than number multiplication, our matrix does this to $[x,y]$:

$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]\left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} ax + by \\ cx + dy \end{array} \right].$

You may recognize this as regular old matrix multiplication, but more illustrative for us, it is the action on the plane by the matrix $A$. In other words, $[x,y]$ goes in, $[ax + by, cx + dy]$ comes out.
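The action itself is tiny in code; a sketch (the helper name is my own invention):

```python
def act(A, v):
    # Apply a 2x2 matrix [[a, b], [c, d]] to a vector [x, y]:
    # the output is [a*x + b*y, c*x + d*y].
    (a, b), (c, d) = A
    x, y = v
    return [a*x + b*y, c*x + d*y]
```

Feeding in $[1,0]$ returns the first column of $A$, and feeding in $[0,1]$ returns the second, which is worth keeping in mind for the examples below.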

Let’s look back at the properties of our number multiplication and see how each comes through in the new two-dimensional setting.

• $A$ will either destroy information or it won’t.
• $A$ will either flip the space inside-out or it won’t.
• $A$ will stretch the unit square whose edges are the vectors $[1, 0]$ and $[0, 1]$ by some factor.

It will turn out that the determinant is a single number that encodes whether our matrix destroys or flips and how much it stretches by, just like the number $\lambda$ was able to encode all of this information for regular real-number multiplication. First, a consequence of the definition of the determinant and the algorithm used to compute it is that

$\det A = ad - bc.$

Using this easy formula, let’s do one example that shows each property.

e.g. 1. Suppose

$A = \left[ \begin{array}{cc} 3 & 1 \\ 1 & 2 \end{array} \right].$

All of the vectors in the unit square, in gray, are stretched and sheared such that they are now inside the blue parallelogram. (The reader can verify this using matrix multiplication; multiplying by $A$ any vector whose horizontal and vertical components are between zero and one will give you a vector inside the blue parallelogram.) The area of the parallelogram is 5, which is coincidentally the same as

$\det A = 3 \cdot 2 - 1 \cdot 1 =5.$

The matrix $A$ stretches the plane by a factor of 5, and the matrix’s determinant tells us that.
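One quick way to check the area claim (a sketch; the helper name is my own): the unit square's edge vectors $[1,0]$ and $[0,1]$ map to the columns of $A$, and the parallelogram spanned by two vectors has area equal to the absolute value of their 2D cross product.

```python
def parallelogram_area(u, v):
    # Area of the parallelogram spanned by u and v: the absolute
    # value of the 2D cross product u[0]*v[1] - u[1]*v[0].
    return abs(u[0]*v[1] - u[1]*v[0])

# The columns of A = [[3, 1], [1, 2]] are the images of [1,0] and [0,1].
area = parallelogram_area([3, 1], [1, 2])
```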

e.g. 2. Suppose

$B = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right].$

If we think of rightwardness $[1,0]$ as our “first” direction and upwardness $[0,1]$ as our “second,” then the regular plane is oriented counterclockwise; we travel from $[1,0]$ to $[0,1]$ as we travel from 3 o’clock backwards to 12 o’clock. Using matrix multiplication,

$\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right]\left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right]$

and

$\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right]\left[ \begin{array}{c} 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right].$

The matrix $B$ switches our directions, so the plane is now oriented clockwise. This reversal of direction is just like the reversal of direction we get on the number line when we multiply by -1! In fact:

$\det B = 0 \cdot 0 - 1 \cdot 1 = -1.$

If a matrix changes the plane’s orientation, its determinant is negative. Since the matrix didn’t stretch the unit square, just flipped it over, its determinant is just -1.
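The flip can be detected numerically with the signed version of the cross product (a sketch, names my own): it is positive when the first image vector turns counterclockwise into the second, negative when it turns clockwise.

```python
def signed_area(u, v):
    # 2D cross product: positive if u-to-v turns counterclockwise
    # (the plane's usual orientation), negative if clockwise.
    return u[0]*v[1] - u[1]*v[0]

# B swaps the basis vectors, so its columns are [0,1] and [1,0].
orientation = signed_area([0, 1], [1, 0])
```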

e.g. 3. Finally, let’s have

$C = \left[ \begin{array}{cc} 1 & 2 \\ 2 & 4 \end{array} \right].$

This matrix sends every vector in the unit square, shown in gray again, onto the red line. The area of a line is zero, but more importantly, we’ve lost a whole dimension! Like multiplying by zero, the matrix $C$ destroys something: not everything, but the whole second dimension in our unit square. This gives us a clue that our determinant will be zero. Well,

$\det C = 1 \cdot 4 - 2 \cdot 2 = 0.$

So it is.
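To watch the collapse happen, a small sketch (names my own): every output of $C$ satisfies $y = 2x$, so no matter what you feed in, the whole plane lands on a single line.

```python
def act_C(v):
    # Apply C = [[1, 2], [2, 4]] to [x, y].
    x, y = v
    return [x + 2*y, 2*x + 4*y]

# Every output [u, w] satisfies w = 2*u: the image is the line y = 2x.
outputs = [act_C(v) for v in ([1, 0], [0, 1], [0.5, 0.25], [-3, 7])]
```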

A definition is a mathematically precise statement of the form “all X are Y and all Y are X.” It is a statement that fully characterizes an object and tells us exactly what it can and cannot do. Like a card in a game like Magic (which was not coincidentally made by a mathematician), a definition gets to be combined with other definitions to produce additional results. For example, if the determinant of a matrix is zero, that matrix is said to be singular, and has no multiplicative inverse. The definition of the determinant is that gobbledygook from the top of the article.

An algorithm is a process by which an object may be computed and is not necessarily the same as the definition. We compute determinants typically by breaking the matrix up along a row or column into smaller determinants. Though this “definition” is suitable for an undergraduate course because of its conceptual simplicity (you have to dip your toe into abstract algebra to fully understand the regular definition), it is a little long to write down, and at shortest it requires two sums (one for a row and one for a column) and a proof that the choice of row or column doesn’t change the determinant.

An interpretation is not at all mathematically precise and it is not good for computation. Rather, interpretation gives you a mental image of the object you are dealing with, something more palatable to a human brain. If anything, it serves as a bridge to the definition, allowing more reliable information storage.

It’s my opinion that all three are equally important, and that the third doesn’t see nearly enough play.