Back when I went to that excellent MSRI graduate summer school (video of lectures available at the link), one professor had us try a few seemingly-random exercises in order to motivate some definitions. The exercises by themselves were pretty fun, so I thought I’d share them and maybe get to the definitions (which I have since used in my research).
Linear Algebra is really hard, and it’s one of the first courses you take after calculus (if you take courses after calculus). I’m often surprised by how much I use it and how applicable it is (I’m pretty sure my husband also uses it often). A few months after I first saw an eigenvalue, I ran into an old friend who told me “I still don’t know what eigenvalues are useful for.” Fast forward five or six years, and now I see eigenvalues EVERYWHERE! My senior seminar in undergrad was basically a semester all about eigenvalues (spectral analysis). Anyways, this is all to say I still don’t feel very proficient at linear algebra, but I am very appreciative of it. I hope you feel that way about lots of math/baking too!
In these exercises, we’ll live in vector spaces. We’ll do a “definition” by example here: Euclidean space, or real space, is an example of a vector space. Instead of thinking of coordinates as points in space, we think of them as little arrows that point to a point and have a specific length/magnitude (the distance of the point to the origin).

The purple point is (2,3). The orange vector has magnitude $\sqrt{13}$ and is pointing at the point (2,3).
Just as groups are the fundamental objects of study in algebra, vector spaces are the fundamental objects of study in linear algebra. Vector spaces are abstract sets with two operations (instead of the one that groups have): you can add vectors, and you can scalar multiply them (which means make them longer or shorter). There are a bunch of axioms that vector spaces have to satisfy, but you can just think of arrows in Euclidean space.
Next we need to understand the concept of an inner product. This is an operation where you put in two vectors and get out a number (in our case a real number, but in general complex, or in any “field,” in quotes because I haven’t ever defined what a field is). In our example, if you have one vector that points to (2,3) and another that points to (9,11), you could do 2*9 + 3*11 = 18 + 33 = 51 and get 51 as the inner product of the two. This is written $\langle (2,3), (9,11) \rangle = 51$. So we multiply all the matching coordinates, and then we add up all those products.
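(If you like checking these things by computer, here’s a tiny sanity check of that arithmetic in Python with numpy; the vectors are just the ones from the example.)

```python
import numpy as np

a = np.array([2, 3])
b = np.array([9, 11])

# The Euclidean inner product: multiply matching coordinates, then add them up.
print(np.dot(a, b))      # 51
print(2 * 9 + 3 * 11)    # 51, the same computation by hand
```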
There are a few axioms that inner products satisfy:
- Conjugate symmetry: $\langle a, b \rangle = \overline{\langle b, a \rangle}$. In our case, since we’re in the reals and not the complex numbers, we can ignore the bar and read this as plain symmetry. From our definition this is true.
- Positive-definiteness: $\langle a, a \rangle \geq 0$ for any $a$, and $\langle a, a \rangle = 0$ only if $a$ is $0$. In our case, $\langle a, a \rangle$ will be a sum of squares, which is only 0 if every single coordinate is 0.
- Linearity in the first argument: $\langle \lambda a + b, c \rangle = \lambda \langle a, c \rangle + \langle b, c \rangle$. You can/should check this is true for our example (there’s a small numeric check right after this list).
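Here’s that small numeric check, a sketch that spot-checks the three axioms for the Euclidean inner product (the particular vectors and the scalar are arbitrary choices, nothing special):

```python
import numpy as np

a = np.array([2.0, 3.0])
b = np.array([9.0, 11.0])
c = np.array([-1.0, 4.0])
lam = 2.5  # an arbitrary real scalar

# Symmetry (no conjugate needed since we're over the reals):
print(np.dot(a, b) == np.dot(b, a))                   # True

# Positive-definiteness: <a, a> is a sum of squares.
print(np.dot(a, a) >= 0)                              # True

# Linearity in the first argument:
print(np.isclose(np.dot(lam * a + b, c),
                 lam * np.dot(a, c) + np.dot(b, c)))  # True
```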
From our definition of inner product over Euclidean space, we can come up with a formula for the magnitude of a vector (remember, that’s the distance from the origin to the point that we’re pointing at). We have $\langle (x,y), (x,y) \rangle = x^2 + y^2$. By the Pythagorean theorem, we already know the distance of a point to the origin: it’s the square root of this quantity! So we have that the magnitude of a vector is the square root of its inner product with itself. In symbols, we write $\|a\| = \sqrt{\langle a, a \rangle}$. Any time you have an inner product, you can define a norm $\|\cdot\|$ in this way.
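In code, the norm comes straight out of the inner product; here’s a two-line sketch (np.linalg.norm is numpy’s built-in that computes the same thing):

```python
import numpy as np

v = np.array([2.0, 3.0])

# Magnitude = square root of the vector's inner product with itself.
print(np.sqrt(np.dot(v, v)))   # 3.6055... = sqrt(13)
print(np.linalg.norm(v))       # the built-in gives the same number
```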
Here’s the setup: let’s say you have a bunch of unit vectors (vectors with magnitude 1) and you take all of the pairwise inner products. So if you have four unit vectors a, b, c, d, you’ll have all these inner products, which we’ll arrange in a matrix:

$$A = \begin{pmatrix} \langle a,a \rangle & \langle a,b \rangle & \langle a,c \rangle & \langle a,d \rangle \\ \langle b,a \rangle & \langle b,b \rangle & \langle b,c \rangle & \langle b,d \rangle \\ \langle c,a \rangle & \langle c,b \rangle & \langle c,c \rangle & \langle c,d \rangle \\ \langle d,a \rangle & \langle d,b \rangle & \langle d,c \rangle & \langle d,d \rangle \end{pmatrix}$$
That’s all fine and dandy and well and reasonable. Here’s the exercise: if I give you a matrix A, can you tell me if you can make A out of some unit vectors in this way with some inner product?
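To make the setup concrete, here’s a sketch that builds such a matrix out of four random unit vectors in the plane, using the ordinary Euclidean inner product (the dimension, the number of vectors, and the random seed are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Four random unit vectors in R^2: draw random vectors and normalize them.
vectors = rng.normal(size=(4, 2))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

# A[i, j] = <v_i, v_j>: the matrix of all pairwise inner products.
A = vectors @ vectors.T
print(np.round(A, 3))
```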
Notice that in the question, we didn’t say a particular inner product (we’re using one example, remember?). It just asks: if you have a matrix, can you find an inner product and some unit vectors that make it? To take a few stabs at this problem, we’ll have to use all those axioms of inner products. Definitely we need A to be symmetric (that means the entry in the 2nd row, 3rd column is equal to the entry in the 3rd row, 2nd column, and so on) by property 1.
Property 2 and our discussion above say that all the diagonal entries have to be 1 (because we started with unit vectors).
Those first two conditions are necessary for A to even be a candidate, but just because A satisfies those conditions doesn’t mean that there’s a collection of unit vectors that form it. (These are “necessary” but not “sufficient” conditions.)
The actual answer is that A must be non-negative definite: this means that if A is n rows by n columns and you pick any collection of n complex numbers $c_1, \dots, c_n$, then $\sum_{i,j=1}^{n} A_{ij} c_i \overline{c_j} \geq 0$ (the sum is not negative). Parsing the sum: take each entry of A and multiply it by the complex number corresponding to its row and by the conjugate of the complex number corresponding to its column, then add up all of these products. If this number is non-negative no matter which complex numbers you pick, then yes, your matrix can arise from a collection of unit vectors.
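Numerically, one way to test this condition (a sketch, under the assumption that A is real and symmetric) is to check eigenvalues: a symmetric matrix is non-negative definite exactly when all of its eigenvalues are non-negative. The random complex vector below only spot-checks the defining sum for one choice of numbers; the eigenvalue test settles it for all choices. The matrix here is just a made-up candidate.

```python
import numpy as np

A = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])   # a made-up symmetric candidate with 1s on the diagonal

# Spot-check the defining sum for one random choice of complex numbers c_i:
rng = np.random.default_rng(1)
c = rng.normal(size=3) + 1j * rng.normal(size=3)
quad = np.sum(A * np.outer(c, np.conj(c)))   # sum over i, j of A_ij * c_i * conj(c_j)
print(quad.real >= -1e-12)                   # non-negative (up to rounding) for this c

# The full test: every eigenvalue of the symmetric matrix is >= 0.
print(np.all(np.linalg.eigvalsh(A) >= -1e-12))
```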
Done!
Just kidding! Did you think I’d just throw an answer at you and not explain it? That’s not how we do!
As always, the great question in math is why? Just getting a random condition thrown at you shouldn’t feel satisfying (well, it might, but too bad I’m taking away your satisfaction).
Let’s find out why the condition is necessary:
The norm squared of any vector is non-negative, so if you add up all your unit vectors, take the norm of that, and square it, you get a non-negative number. Even if you throw random complex scalar multiples onto those vectors first, the norm squared will still be non-negative. So in equations, we have $\left\| \sum_i c_i v_i \right\|^2 = \left\langle \sum_i c_i v_i, \sum_j c_j v_j \right\rangle \geq 0$. If we unpack this using linearity (and conjugate symmetry), we get $\sum_{i,j} \langle v_i, v_j \rangle \, c_i \overline{c_j} = \sum_{i,j} A_{ij} c_i \overline{c_j} \geq 0$.
So we get the condition is necessary (we didn’t even use the unit-ness of the vectors here).
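Just to see the two sides of that equation agree, here’s a numerical sketch with some random unit vectors and random complex coefficients (nothing about the specific numbers matters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Some unit vectors and their matrix A of pairwise inner products.
vectors = rng.normal(size=(4, 3))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
A = vectors @ vectors.T

# Random complex coefficients c_i.
c = rng.normal(size=4) + 1j * rng.normal(size=4)

# Left side: the norm squared of the combination sum_i c_i v_i.
combo = (c[:, None] * vectors).sum(axis=0)
lhs = np.vdot(combo, combo).real

# Right side: sum over i, j of A_ij * c_i * conj(c_j).
rhs = np.sum(A * np.outer(c, np.conj(c))).real

print(np.isclose(lhs, rhs), lhs >= 0)   # True True
```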
How about sufficiency? Well, we have to come up with an inner product and some unit vectors. So take some formal symbols $e_1, \dots, e_n$, one for each row of A. Any vector in their span will be written $x = \sum_i x_i e_i$. We’ll define an inner product on this space by $\langle x, y \rangle = \sum_{i,j} A_{ij} x_i \overline{y_j}$. Go check that this inner product satisfies the three axioms above. Great! It’s an inner product (this relies on the fact that A is non-negative definite). Now look at the vectors $e_1, \dots, e_n$ themselves: we get $\langle e_i, e_j \rangle = A_{ij}$, and since the diagonal entries of A are 1, each $e_i$ is a unit vector. So you can form A from the inner products of the $e_i$’s. That’s it!
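If you’d rather have honest coordinates for those unit vectors than the abstract construction above, here’s one concrete realization (a sketch, not the construction from the exercise): factor $A$ as $V^T V$ using an eigendecomposition, and the rows of $V^T$ are vectors whose pairwise Euclidean inner products are exactly the entries of $A$; the 1s on the diagonal make them unit vectors. The function name is mine, not standard.

```python
import numpy as np

def unit_vectors_from_gram(A):
    """Given a non-negative definite symmetric matrix A with 1s on the diagonal,
    return vectors whose pairwise Euclidean inner products are the entries of A."""
    eigvals, Q = np.linalg.eigh(A)
    eigvals = np.clip(eigvals, 0.0, None)   # wash out tiny negative rounding errors
    V = np.diag(np.sqrt(eigvals)) @ Q.T     # chosen so that V.T @ V equals A
    return V.T                              # row i is the vector for row/column i of A

A = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
vecs = unit_vectors_from_gram(A)
print(np.allclose(vecs @ vecs.T, A))                   # recovers A
print(np.allclose(np.linalg.norm(vecs, axis=1), 1.0))  # and the rows are unit vectors
```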
Congratulations! That was a hard exercise session! We did it!
This reminds me of an interesting open problem I spent a few weeks puzzling over (before giving up). I wonder if you have any insight.
Given a square, non-negative, symmetric matrix $M$, determine whether $M$ is the square of the adjacency matrix of an undirected graph.
A variant that may or may not be equivalent: given a non-negative symmetric matrix, does it have a non-negative symmetric square root?
To clarify: the square here is the standard operation of squaring a matrix (multiplying it by itself), so each entry of the square is a Euclidean inner product of a row with a column.
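If you want to poke at tiny cases by computer, one option is brute force: run through every simple undirected graph on $n$ vertices and see whether some adjacency matrix squares to the given matrix. This obviously doesn’t scale (there are $2^{n(n-1)/2}$ graphs), but here’s a sketch; the function name is made up.

```python
import numpy as np
from itertools import combinations, product

def is_square_of_adjacency_matrix(M):
    """Brute force: is there a simple undirected graph on n vertices whose
    adjacency matrix B (symmetric, 0/1, zero diagonal) satisfies B @ B == M?"""
    n = M.shape[0]
    edges = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(edges)):
        B = np.zeros((n, n), dtype=int)
        for (i, j), bit in zip(edges, bits):
            B[i, j] = B[j, i] = bit
        if np.array_equal(B @ B, M):
            return True, B
    return False, None

# Example: the square of the path graph on 3 vertices is recognized.
P = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
ok, B = is_square_of_adjacency_matrix(P @ P)
print(ok)   # True
print(B)    # an adjacency matrix whose square is P @ P
```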
Looking at it for a few minutes, I don’t have anything, but do you want to write a post about this problem and your stabs at it? =D