Orthogonality


Mathematics is the language of reality, and so eight or nine times a day I find myself casually using a word I learned in topology or linear algebra in a completely different setting. My friends are threatening not to speak to me anymore. But these are good words! If only I had a way to explain the words I’ve come to love without asking my friends to take a math major first—oh right. I have a blog. (The post proper is aimed at an audience with maybe high school math. The footnotes are for mathematicians or an interested reader who wants to know more. The exercises are for anyone with the listed prerequisite.)

Two things are orthogonal if they are perpendicular. But why don’t I just use the word “perpendicular” with retail customers and at the bar? Why risk losing an audience with a slightly harder word that means effectively the same thing?

Shade of meaning is everything. Where perpendicular lines point in completely different directions on a piece of paper, orthogonal objects point in completely different directions in every possible sense. There is no setting in which the two objects are connected, correlated, or compatible.

Let’s define some words.

In geometry, lines are perpendicular if they meet at right angles. The ur-example is a completely horizontal line and a completely vertical line, as shown. Any rotation of this figure will also exhibit perpendicular lines.

In vector calculus, directions in the plane are given by vectors: pairs of numbers (x, y), where x gives the horizontal movement and y gives the vertical movement. The completely horizontal line is in the direction of the vector (1,0), and a purely vertical line follows (0,1). Some other line might be in the direction of the vector (5,3) (pictured), which, much like plotting points in high school, tells you to go right five units every time you go up three. [1]

Determining whether two vectors are perpendicular is as easy as multiplying their horizontal coordinates together and adding that to the product of their vertical coordinates. This operation is called the inner product (for the plane) and goes like this:

\langle (a,b) ; (c,d) \rangle = ac + bd

The inner product is zero when two vectors are perpendicular.

Don’t let the funny-looking angle brackets and semicolons (really only there to differentiate from the commas inside the vectors) scare you off: multiply the first number by the first number (ac), multiply the second number by the second number (bd), and add the results. This may be the easiest thing a calculus student learns to do.
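For instance, take the vector (5,3) from before and the vector (-3,5), a pair I picked so the arithmetic comes out nicely:

\langle (5,3) ; (-3,5) \rangle = 5 \cdot (-3) + 3 \cdot 5 = -15 + 15 = 0,

so those two vectors are perpendicular.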

The take-away here is that the inner product measures how much two objects are pointed in the same direction. If that measure comes up zero, they are orthogonal.

Developing mathematics—remember, we are building tools to describe the natural world—is about consistency and generality. When we build a new toy like an inner product, we need to make sure it matches what we already know. Are (1,0) and (0,1) perpendicular with this new definition, that perpendicular vectors have an inner product of zero? Let’s find out:

\langle (1,0) ; (0,1) \rangle = 1 \cdot 0 + 0 \cdot 1 = 0 + 0 = 0.

Exercise. You were once told that if a line has a slope of m, its perpendicular line has a slope of -1/m. Prove that the vectors (1,m) and (1, -1/m) are perpendicular like I just showed you.

Score! So, our new toy is consistent. What does it mean for it to be general? Ah, we finally made it to orthogonality.


Vectors don’t need to be confined to only two dimensions. It won’t surprise you to learn that a vector representing three-dimensional position in space has two “ground” components—forward and back, usually given by x; left and right, usually denoted y—and an up-and-down component usually called z.

How do you tell if there’s a right angle between two three-dimensional lines (x,y,z) and (u,v,w)? Same way: their inner product

\langle (x,y,z) ; (u,v,w) \rangle = xu + yv + zw = 0.
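For example, the vectors (1,2,2) and (2,1,-2) (another pair I chose so the numbers work out) satisfy

\langle (1,2,2) ; (2,1,-2) \rangle = 1 \cdot 2 + 2 \cdot 1 + 2 \cdot (-2) = 2 + 2 - 4 = 0,

so they meet at a right angle, even though neither is especially easy to picture.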

Exercise. Are lines in the directions (1, 3, 1) and (2, -1, 1) perpendicular? How about (1,1,0) and (2,0,1)?

It’s better than that. In two or three dimensions, the inner product tells you not just whether two vectors are orthogonal; it also gives you a measure of how parallel they are! For those of you who know trigonometry, \langle u ; v \rangle is proportional to the cosine of the angle between u and v. [2] For those of you who don’t, that’s fine; the take-away is the same as before: the inner product measures how much two objects point in the same direction, and if that measure comes up zero, they are orthogonal.
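If you’d rather let a computer do the multiplying, here is a minimal sketch in Python using the numpy library (the two vectors are mine, chosen to make a tidy 45-degree angle):

import numpy as np

# Two plane vectors: (1, 0) points due east, (1, 1) points northeast.
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

# The inner product: multiply matching coordinates and add the results.
inner = u @ v  # 1*1 + 0*1 = 1

# Divided by the two vectors' lengths, the inner product becomes the
# cosine of the angle between u and v.
cosine = inner / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine)                         # 0.7071..., which is cos(45 degrees)
print(np.degrees(np.arccos(cosine)))  # 45.0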

In linear algebra and functional analysis, we study spaces that don’t have our usual idea of angles. Sometimes the objects we study aren’t even lines—they may be matrices, polynomials, or other functions. [3] But we still want to know how much two objects mutually dovetail. If you can find a big enough collection of mutually orthogonal vectors—in other words, a bunch of vectors for which every two are orthogonal—then every object you need to talk about can be expressed in terms of this collection. It’s like coordinates: everything is represented by one number taken along each member of the collection, the same way we use a pair of perpendicular axes to plot points in the plane. How about that?
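In the plane, (1,0) and (0,1) are exactly such a collection, and the number taken along each of them recovers the usual coordinates:

(x, y) = x \cdot (1,0) + y \cdot (0,1).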

On these alien worlds with lines that aren’t lines, what’s an angle? Well, an angle is an inner product! At this level of abstraction, we generalize the notion of an angle to just match the inner product concept that worked pretty well at the lower levels. Our vectors may not exactly be lists of numbers anymore—especially if we are working in infinitely many dimensions [4] instead of two or three—so the usual multiply-and-add-the-coordinates approach will fail. Other approaches work, but must satisfy certain rules [5], and these inner products still calculate “how much” two of your vectors travel together. And if they have nothing to do with each other? They’re orthogonal.

Exercise. [Prerequisite: calculus.] Continuous real-valued functions on the interval [-\pi, \pi] are vectors, and here

\langle f ; g \rangle = \displaystyle\int_{-\pi}^\pi f(x)g(x) \; \text{d}x

is a satisfactory inner product [6]. Prove that \cos x and \sin x are orthogonal. (This is the starting point for the very applicable area of mathematics called Fourier analysis, which views functions as combinations of waves.)

Exercise. [Prerequisite: complex numbers.] Using the rules in [5], with scalars coming from the real numbers, discover an inner product of two complex numbers and determine whether 1 and i are orthogonal.


In many settings, both concrete and abstract, there is a tool called an inner product (sometimes a dot product in calculus) that measures the “parallelness” of two objects—in more sophisticated language, to what degree these objects are lined up, similar, or compatible. The word “perpendicular” is generally reserved for lines on a plane, or lines in general. But if you really want to break up with your boyfriend properly, in infinitely many dimensions, “orthogonal” is the word you’re looking for.


Footnotes:

[1] How am I getting away with conflating vectors and lines here? Mostly, it’s because I don’t want to overwhelm new readers with too much detail. A line through the origin is the same thing as the set of all multiples of a single vector. Other lines are pushed out to a certain point and then identified with a span of vectors. Let’s imagine that the point where the lines we’re talking about meet is the origin, and then everything is okay.

[2] What does cosine have to do with whether two lines are parallel? Remember that the cosine of an angle of zero (pointing horizontally rightward) is 1, but as your angle becomes less horizontal and more vertical, its cosine shrinks and its sine grows. The inner product treats u as the horizontal axis and measures the angle between that line and v.
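In symbols, for vectors in the plane or in space,

\langle u ; v \rangle = \|u\| \, \|v\| \cos\theta,

where \|u\| and \|v\| are the lengths of the vectors and \theta is the angle between them. Parallel vectors give \cos\theta = 1, and perpendicular ones give \cos\theta = 0.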

[3] Well, they kind of are. From a certain point of view these objects are linear, not bendy, and that linearity is required to have an inner product at all (see [5]).

[4] As you do.

[5] An inner product with scalars (numbers, usually) \lambda coming from a set with division defined (a field, like the real or complex numbers) must have:

  • \overline{\langle u; v \rangle} = \langle v ; u \rangle where \bar{z} is the complex conjugate,
  • \langle \lambda u + v; w \rangle = \lambda \langle u ; w \rangle + \langle v ; w \rangle where \lambda is any scalar, and
  • \langle u ; u \rangle \ge 0 with equality if and only if u = 0.

[6] If you were unlucky enough to learn your calculus from me, then you know this is exactly the infinite-dimensional version of multiplying your coordinates and adding.
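To see why, chop the interval [-\pi, \pi] into little pieces of width \Delta x; the integral is then approximately

\displaystyle\sum_i f(x_i) g(x_i) \, \Delta x,

which is exactly “multiply the coordinates and add the results,” with one coordinate f(x_i) for every sample point x_i.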

Author: Douglas Weathers

math teacher, pop culture junkie, universe enthusiast

2 thoughts on “Orthogonality”

  1. From my experience in functional analysis and all things applied, up to Hilbert space (isn’t that the highest one can go with the definition of angles and orthogonality? Well, it might not be, but “Hilbert is the most abstract spatial structure that REALLY MATTERS,” says me), “orthogonality” and “perpendicularity” are used interchangeably.
