A while ago I promised a series of posts about inner product spaces and visualization, and how, in particular, the right inner product space can help build a nice visualization of trajectory data. But before we get to that, I want to start with a quick refresher of orthogonality and least squares.

Let’s think of the simplest possible geometric problem in which this arises. I give you a point $p \in \mathbb{R}^n$ and ask you to give me back the closest point to it, constrained to lie on the line $\{kv : k \in \mathbb{R}\}$ spanned by a fixed vector $v$. You simply write out the squared distance as a function of $k$ and minimize it:

$$\frac{d}{dk}\,\lVert p - kv \rVert^2 = -2\, v \cdot (p - kv) = 0 \quad\Longrightarrow\quad k = \frac{p \cdot v}{v \cdot v}.$$

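As a quick numerical sketch of the direct minimization (the names $p$ and $v$, and their values, are made-up examples):

```python
import numpy as np

# Made-up example data: a point p and a line direction v.
p = np.array([3.0, 1.0])
v = np.array([2.0, 0.0])

# Closed-form minimizer of ||p - k v||^2: k = (p . v) / (v . v).
k = np.dot(p, v) / np.dot(v, v)

# Sanity check against a brute-force search over candidate values of k.
ks = np.linspace(-10, 10, 200001)
dists = np.linalg.norm(p - ks[:, None] * v, axis=1)
k_brute = ks[np.argmin(dists)]
print(k, k_brute)  # both ≈ 1.5
```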
This is trivial enough. However, it is in a sense the wrong way to go about solving this problem. I want to show a superficially harder way to solve it that shows us a lot more of the geometry of Euclidean spaces. Let’s say I give you a candidate point $c = kv$ on the line. This point could be the minimizer of the distance, or it could not. How can we know? Intuitively, we can jiggle $k$ a little bit and see if it changes things. If it does, we move in the direction that helps. If it doesn’t, we’re close to a minimum. There are two important vectors here: the direction in which the candidate point can move (this is the tangent vector), and the vector that points from $c$ to $p$, which is the direction where $c$ would like to go if it could (this is the error vector). As long as the error vector has a component along the tangent direction, it is still possible for $c$ to improve a little, by moving as much in that direction as it can. In particular, when the direction to $p$ is orthogonal to the directions in which $c$ can move, we will necessarily be at a minimum.

This is the better way of looking at this problem: instead of trying to minimize the (squared) distance, try to find the point whose error vector is orthogonal to the tangent vector. Denoting the error vector as $e = p - kv$, we look for $k$ such that $e \cdot v = 0$ (the tangent space at a point of a line going through the origin is spanned by exactly the vector that defines the line). This gives us

$$(p - kv) \cdot v = 0 \quad\Longrightarrow\quad k = \frac{p \cdot v}{v \cdot v}.$$

Notice how much simpler the calculation is: no mention of distances or derivatives. However, the most important point about the calculation is that we never had to write out those dot products in coordinates: we used only the dot product’s linearity in its vector arguments to compute $k$. “But dot products are easy,” you say! Why go through this trouble to avoid writing out the simplest vector expressions we can think of?
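As a sanity check, the orthogonality condition can be verified numerically (the vectors below, and the names $p$, $v$, $e$, are made-up examples):

```python
import numpy as np

# Made-up example data.
p = np.array([3.0, 1.0, 2.0])
v = np.array([1.0, 2.0, 2.0])

# Orthogonality condition: (p - k v) . v = 0  =>  k = (p . v) / (v . v).
k = np.dot(p, v) / np.dot(v, v)
e = p - k * v                  # error vector at the optimal point

print(np.dot(e, v))            # 0.0: the error is orthogonal to the tangent
```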

It turns out that the common dot product everyone is familiar with is far from the only inner product we can define. In particular, any map $\langle \cdot, \cdot \rangle$ taking pairs of vectors to real numbers is a (real) inner product as long as it satisfies the following properties:

- Symmetry: $\langle u, v \rangle = \langle v, u \rangle$
- Linearity in the first argument: $\langle au + bw, v \rangle = a \langle u, v \rangle + b \langle w, v \rangle$
- Positive-definiteness: $\langle v, v \rangle \ge 0$, with equality iff $v = 0$

It is trivial to see that, given vectors $u$ and $v$, $\langle u, v \rangle = u^T v$ is an inner product. Here’s a slightly more complicated inner product: $\langle u, v \rangle_A = u^T A v$, where $A$ is a symmetric, positive-definite matrix. But why should we care about these weirder inner products? The main point is that different inner products correspond to different notions of distance between points in space, and some distances are more useful than others.
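Here is a small sketch of such a weighted inner product $u^T A v$ (the matrix below is a made-up example, built as $B^T B + I$ to guarantee it is symmetric and positive-definite):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric, positive-definite A as B^T B + I.
B = rng.standard_normal((3, 3))
A = B.T @ B + np.eye(3)

def inner(u, v):
    # The weighted inner product <u, v>_A = u^T A v.
    return u @ A @ v

u = rng.standard_normal(3)
v = rng.standard_normal(3)

# The inner product axioms hold (numerically):
print(np.isclose(inner(u, v), inner(v, u)))   # True (symmetry)
print(inner(u, u) > 0)                        # True (positive-definiteness)

# The projection recipe from before works verbatim with <., .>_A:
k = inner(u, v) / inner(v, v)
print(np.isclose(inner(u - k * v, v), 0.0))   # True (error orthogonal under A)
```

Note that the closest-point calculation never needed to know which inner product it was using: swapping the dot product for $\langle \cdot, \cdot \rangle_A$ changes the answer, not the derivation.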

Let’s think of distances between (grayscale) images. Treating each image as a vector of pixel intensities, we can define the standard inner product between two images $f$ and $g$ as

$$\langle f, g \rangle = \sum_{i,j} f_{ij}\, g_{ij},$$

with the induced distance $d(f, g) = \sqrt{\langle f - g, f - g \rangle}$.

However, this distance is not particularly useful. Consider the following examples:

It should be easy to see that all of these images stand at the same distance from each other. However, this is really not what one would intuitively expect. This distance is also not particularly useful when one cares about the distances between *different points in the same image*. One would hope that portions of the images that are close together would be factored into the distance metric, so that the distance between the first and the last image is greater than the distance between the first and the second. In the next series of posts, I will show how the right inner product makes talking about these distances meaningful, and finally how to make an inner product that works for trajectory data.
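To make the problem concrete, here is a tiny sketch (with a made-up one-dimensional “image” consisting of a single bright pixel) showing that the pixelwise distance is blind to how far a feature has moved:

```python
import numpy as np

def dist(f, g):
    # Distance induced by the standard pixelwise inner product.
    return np.sqrt(np.sum((f - g) ** 2))

def bump(pos, n=8):
    # A toy 1-D "image": all black except one bright pixel at pos.
    img = np.zeros(n)
    img[pos] = 1.0
    return img

a, b, c = bump(0), bump(1), bump(6)

# A shift by one pixel and a shift by six pixels look identical:
print(dist(a, b), dist(a, c))  # both sqrt(2)
```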