Systems of linear equations


Take the lines 4x + 3y = 6 and x + 7y = 14. Where do these lines intersect? We find a point (x,y) that satisfies both equations. In algebra, you learned a couple of methods by which you can find this point. You may solve one of the equations for x and substitute it back into the second equation (the substitution method). Or you may multiply equations by a scalar and add them together so that the coefficient on one of the variables becomes zero (the elimination method).
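If you'd rather let a computer do the arithmetic, here is a quick sketch in Python (using NumPy — an assumption on our part; the ideas are language-agnostic, and we'll use Octave later in the exercises):

```python
import numpy as np

# Coefficient matrix and right-hand side for
#   4x + 3y = 6
#   1x + 7y = 14
A = np.array([[4.0, 3.0],
              [1.0, 7.0]])
b = np.array([6.0, 14.0])

# np.linalg.solve performs a variant of elimination internally.
point = np.linalg.solve(A, b)
print(point)  # the intersection point (x, y)
```

Plugging the printed point back into both equations confirms it satisfies them.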

Another method, which largely relies on the elimination method, is to consolidate the numbers involved in the system into what is called an augmented matrix. We ignore the x and y—who cares what the variables are?—and write the system as follows:

\left[ \begin{array}{cc|c} 4 & 3 & 6 \\ 1 & 7 & 14 \end{array} \right].

Arrays of numbers like this one have the valuable property that computer programs really, really like them. The more ones and zeroes we can put onto the left side of the bar, the simpler our system becomes and the easier it is to read its solution.

Reduced row echelon form

An augmented matrix that has been simplified as much as possible is said to be in reduced row echelon form and has the following three properties.

  1. The first nonzero entry in each row, called a pivot, is 1.
  2. Each pivot occurs strictly to the left of each pivot below it.
  3. Each pivot is the only nonzero entry in its column.
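The three rules are mechanical enough to check by program. The helper below is a hypothetical sketch in Python (not part of any library) that transcribes the list above literally; note that it deliberately ignores the extra convention, common in other treatments, that all-zero rows must sit at the bottom, since our three rules don't mention it.

```python
def is_rref(rows):
    """Check the three reduced-row-echelon-form rules on a list of rows."""
    prev_pivot_col = -1
    for i, row in enumerate(rows):
        nonzero = [j for j, v in enumerate(row) if v != 0]
        if not nonzero:              # all-zero row: no pivot to check
            continue
        j = nonzero[0]               # column of this row's first nonzero entry
        if row[j] != 1:              # rule 1: the pivot must be 1
            return False
        if j <= prev_pivot_col:      # rule 2: strictly right of the pivot above
            return False
        if any(rows[k][j] != 0 for k in range(len(rows)) if k != i):
            return False             # rule 3: only nonzero entry in its column
        prev_pivot_col = j
    return True
```

For instance, `is_rref([[1, 0, 0], [0, 1, 2]])` is true, while the original augmented matrix `[[4, 3, 6], [1, 7, 14]]` fails rule 1 immediately.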

We will render the example matrix into reduced row echelon form and see how easy the corresponding system is to read. To make the first entry in the first row a one, we divide the row by 4. This is fine because it corresponds to dividing the first equation by 4, and as we know, it’s fine to multiply both sides of an equation by the same number.

\left[ \begin{array}{cc|c} 1 & 3/4 & 3/2 \\ 1 & 7 & 14 \end{array} \right] \begin{array}{c} R1 = R1/4 \\ \end{array}

We can also add multiples of rows to each other, just like you can add equations together. To make the first 1 in the second row a zero, we can just subtract the first row from the second one.

\left[ \begin{array}{cc|c} 1 & 3/4 & 3/2 \\ 0 & 25/4 & 25/2 \end{array} \right] \begin{array}{c} \\ R2 = R2 - R1 \end{array}

Now the first column satisfies the rules of reduced row echelon form: its first entry is a 1, and all other entries are 0. We move to the right and down (since the pivot in the first row must occur to the left of all lower pivots); the 25/4 must be turned into a 1. How? We divide by 25/4 or, equivalently, multiply by 4/25.

\left[ \begin{array}{cc|c} 1 & 3/4 & 3/2 \\ 0 & 1 & 2 \end{array} \right] \begin{array}{c} \\ R2 = (4/25)R2 \end{array}

We have made the first nonzero entry in the second row into a one, but it is not the only nonzero entry in its column. We must make the 3/4 into a zero. That will be accomplished by multiplying the second row by 3/4 and subtracting it from the first row.

\left[ \begin{array}{cc|c} 1 & 0 & 0 \\ 0 & 1 & 2 \end{array} \right] \begin{array}{c} R1 = R1 - (3/4)R2 \\ \end{array}

This matrix is in reduced row echelon form. Most importantly, we can translate it back into a system of equations. That system is now

\begin{cases} 1x + 0y = 0 \\ 0x + 1y = 2 \end{cases}

or, simpler, x = 0 and y = 2. If you plug (0,2) back into the original two equations, you will see that this is a solution.
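The whole sequence of row operations can be replayed by machine. A minimal sketch in Python with NumPy (an assumption; the same steps run verbatim in Octave) mirrors the four operations above:

```python
import numpy as np

# The augmented matrix for 4x + 3y = 6 and x + 7y = 14.
M = np.array([[4.0, 3.0, 6.0],
              [1.0, 7.0, 14.0]])

M[0] = M[0] / 4              # R1 = R1/4
M[1] = M[1] - M[0]           # R2 = R2 - R1
M[1] = (4 / 25) * M[1]       # R2 = (4/25)R2
M[0] = M[0] - (3 / 4) * M[1] # R1 = R1 - (3/4)R2

print(M)  # the reduced row echelon form
```

The printed matrix matches the hand computation: identity on the left, the solution (0, 2) on the right.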

Infinitely many solutions

A system of equations has infinitely many solutions when, loosely speaking, there is not enough information to nail down the value of each variable. Algebraically speaking, there are too few independent equations in the system. We’ll delve further into the idea of linear independence later on, but for a list of equations to be independent, it must be impossible to build any one of them out of the others. As an example, let’s find the intersection of the planes

 3x + y + 3z = 2 \\  6x + 4y + z = 0 \\  3x + 3y - 2z = -2

Can’t see it? The third equation is the difference between the second and the first. When we encode the system into an augmented matrix

\left[ \begin{array}{ccc|c} 3 & 1 & 3 & 2 \\ 6 & 4 & 1 & 0 \\ 3 & 3 & -2 & -2 \end{array} \right]

and whittle it into reduced row echelon form

\left[ \begin{array}{ccc|c} 1 & 0 & 11/6 & 4/3 \\ 0 & 1 & -5/2 & -2 \\ 0 & 0 & 0 & 0 \end{array} \right]

it reveals to us that one of our equations was but a straw puppet, held aloft by the other two. (In other words: nothing to see here.) The row of all zeroes signals a redundant equation. How do we read this solution, then? Remember that this corresponds to the equations

\begin{cases} x + (11/6)z = 4/3 \\ y - (5/2)z = -2 \end{cases}

You already knew, perhaps from calculus, that the intersection of two non-parallel planes is a line. This line is traced out, as z ranges over all real numbers, by the column vector

\left[ \begin{array}{c} 4/3 - (11/6)z \\ (5/2)z - 2 \\ z \end{array} \right]

(which you used to write with angle brackets in multivariable calculus!) or, even better, the parametric equation

\left[ \begin{array}{c} 4/3 \\ -2 \\ 0 \end{array} \right] + z\left[ \begin{array}{c} -11/6 \\ 5/2 \\ 1 \end{array}\right]

which was broken up using the rules of addition and scalar multiplication of vectors from calculus. The parametric equation is a useful representation of the line. The constant vector is where the line “starts” (think about the y-intercept of a two-dimensional line), and the vector multiplied by z is the direction the line travels in (its slope).

This example illustrates the general pattern: when a system has infinitely many solutions, as many variables as necessary (like z here) are left free to take all appropriate values (usually all real numbers), and the rest (like x and y here) are expressed in terms of the free variables.
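A computer algebra system will confirm the whole computation at once. The sketch below uses Python with SymPy (an assumption on our part); SymPy's `Matrix.rref` returns the reduced matrix together with the tuple of pivot columns, so the zero row and the free variable both show up in the output.

```python
from sympy import Matrix

# Augmented matrix for the three planes.
M = Matrix([[3, 1, 3, 2],
            [6, 4, 1, 0],
            [3, 3, -2, -2]])

# rref() returns (reduced matrix, indices of the pivot columns).
R, pivot_cols = M.rref()
print(R)
print(pivot_cols)  # pivots in columns 0 and 1; column 2 (z) has none, so z is free
```

SymPy works in exact rational arithmetic, so the 11/6 and -5/2 come out exactly as in the hand computation, and the bottom row is identically zero.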

No solution

Sometimes your system will not have a solution. The lines 2x + 2y = 0 and 2x + 2y = 1 do not intersect, as they are parallel. Therefore something should go wrong when we try to row-reduce the augmented matrix

\left[ \begin{array}{cc|c} 2 & 2 & 0 \\ 2 & 2 & 1 \end{array} \right]

We get

 \left[ \begin{array}{cc|c} 1 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]

which, translating back from the augmented matrix, seems to say (in the second equation) that 0x + 0y = 1. Of course, 0x + 0y = 0 for every x and y, never 1. If a system has no solution, we call it inconsistent, and row reduction will illuminate that with a contradiction like this one.
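The same check works by machine. In the sketch below (Python with SymPy, an assumption as before), the telltale sign of inconsistency is that a pivot lands in the augmented column — the last column of the matrix — which is exactly the row reading 0 = 1.

```python
from sympy import Matrix

# Augmented matrix for the parallel lines 2x + 2y = 0 and 2x + 2y = 1.
M = Matrix([[2, 2, 0],
            [2, 2, 1]])

R, pivot_cols = M.rref()
print(R)           # bottom row reads 0x + 0y = 1, a contradiction
print(pivot_cols)  # a pivot in column 2, the augmented column
```

Whenever the pivot-column tuple includes the index of the augmented column, the system is inconsistent.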


Given a system of m linear equations in n variables, there are three possible outcomes. If m = n and all m equations are independent, i.e. no one of them can be built out of the others through addition and subtraction of multiples, then there is a single solution to the system. The left-hand side of the augmented matrix can be manipulated into the identity matrix: ones on the diagonal and zeros everywhere else.

If the system is redundant, row reduction will reveal this by turning one or more rows into rows of all zeros. In that case, not every variable can be assigned a unique value. Those that can’t are free variables, allowed to take any allowable value, and the remaining variables are written as functions of them.

Finally, if the system is inconsistent, row-reduction will also show this by making one of the rows contradictory, usually by asserting something nonsensical like 0 = 1.
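These three outcomes can be told apart without reading off the reduced matrix by hand, using the standard rank criterion: a system is consistent exactly when the coefficient matrix and the augmented matrix have the same rank, and a consistent system has a unique solution exactly when that rank equals the number of variables. A hypothetical classifier in Python with NumPy (our assumption; `classify` is not a library function) makes this concrete:

```python
import numpy as np

def classify(A, b):
    """Classify the system A x = b as 'unique', 'infinite', or 'none'.

    Consistent iff rank(A) == rank([A|b]); a consistent system has a
    unique solution iff that common rank equals the number of variables.
    """
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])        # the augmented matrix [A|b]
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    if r < r_aug:
        return "none"                    # a pivot fell in the augmented column
    return "unique" if r == A.shape[1] else "infinite"
```

Running it on the three systems of this section reproduces each outcome: the two intersecting lines are "unique", the three planes are "infinite", and the parallel lines are "none".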

Moving forward, we will abstract the idea of matrices and learn what kind of algebraic structure (addition, multiplication, so forth) they have. We will learn that they have a lot of very elegant properties that contribute to understanding linear systems and the many applications in which they are found.


In Octave, run

A = floor(10*rand(m,n))

with appropriate integers m and n to kick up an example of a matrix to row-reduce. For example, if you would like to solve a system that may have a unique solution for a single output, let m be some number, let n = m + 1, and draw the bar before the n-th column. Run

rref(A)

to check yourself. If you would like to make sure the numbers are nice and the solution is unique, run

A = floor(10*rand(n,n))
x = [x1; x2; ...; xn]   % semicolons make x a column vector, so A*x works
b = A*x

where x1, x2, …, xn are your chosen solutions, and row-reduce [A|b]. Use

rref([A, b])

to check yourself.

Or, y’know, rent the textbook.