A Linear Algebra Exercise

13 Aug

Back when I went to that excellent MSRI graduate summer school (video of lectures available at the link), one professor had us try a few seemingly-random exercises in order to motivate some definitions.  The exercises by themselves were pretty fun, so I thought I’d share them and maybe get to the definitions (which I have since used in my research).

Linear algebra is really hard, and it's one of the first courses you take after calculus (if you take courses after calculus).  I'm always surprised by how often I use it and how applicable it is (I'm pretty sure my husband also uses it often).  A few months after I first saw an eigenvalue, I ran into an old friend who told me "I still don't know what eigenvalues are useful for."  Fast forward five or six years, and now I see eigenvalues EVERYWHERE!  My senior seminar in undergrad was basically a semester all about eigenvalues (spectral analysis).  Anyways, this is all to say I still don't feel very proficient at linear algebra, but I am very appreciative of it.  I hope you feel that way about lots of math/baking too!

In these exercises, we’ll live in vector spaces.  We’ll do a “definition” by example here: Euclidean space, or real space, is an example of a vector space.  Instead of thinking of coordinates as points in space, we think of them as little arrows that point to a point and have a specific length/magnitude (the distance of the point to the origin).

The purple point is (2,3).  The orange vector has magnitude (square root of 13) and is pointing at the point (2,3).

Just as groups are the fundamental object of study in algebra, vector spaces are the fundamental object of study in linear algebra.  A vector space is an abstract set with two operations (instead of the one that groups have): you can add vectors and you can scalar multiply them (which means make them longer or shorter).  There are a bunch of axioms that vector spaces have to satisfy, but you can just think of arrows in Euclidean space.

Next we need to understand the concept of an inner product.  This is an operation where you put in two vectors and get out a number (in our case real, but in general complex, or in any "field," which stays in quotes because I haven't ever defined what a field is).  In our example, if you have one vector that points to (2,3) and another that points to (9,11), you compute 2*9 + 3*11 = 18 + 33 = 51, so 51 is the inner product of the two.  This is written \langle (2,3), (9,11) \rangle = 51.  So we multiply all the matching coordinates, and then we add up all those products.
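
If it helps to see that computation in code, here's a minimal sketch in Python (NumPy is just a convenient choice of mine, not something from the post):

```python
import numpy as np

a = np.array([2, 3])
b = np.array([9, 11])

# Multiply matching coordinates and add up the products.
inner = np.dot(a, b)   # 2*9 + 3*11 = 51
print(inner)           # 51
```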

There’s a few axioms that inner products satisfy:

  1. Conjugate symmetry: \langle b, a \rangle = \overline{\langle a, b \rangle}.  In our case, since we're in the reals and not the complex numbers, we can ignore the bar and read this as plain symmetry.  From our definition this is true.
  2. Positive-definiteness: \langle a, a \rangle \geq 0 for any a, and it equals 0 only if a is the zero vector.  In our case, \langle a, a \rangle will be a sum of squares, which is only 0 if every single coordinate is 0.
  3. Linearity in the first argument: \langle xa+c, b\rangle = x\langle a, b \rangle + \langle c, b \rangle, where x is a scalar and a, b, c are vectors.  You can/should check this is true for our example (a quick numerical check is sketched right after this list).
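
Here's that quick numerical spot-check of the linearity axiom for our Euclidean example (the particular vectors and scalar are arbitrary choices of mine):

```python
import numpy as np

a, b, c = np.array([2, 3]), np.array([9, 11]), np.array([-1, 4])
x = 7  # any scalar

# Linearity in the first argument: <xa + c, b> = x<a, b> + <c, b>.
lhs = np.dot(x * a + c, b)
rhs = x * np.dot(a, b) + np.dot(c, b)
print(lhs == rhs)   # True (both sides are 392)
```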

From our definition of inner product over Euclidean space, we can come up with a formula for the magnitude of a vector (remember, that’s the distance from the origin to the point that we’re pointing at).   We have \langle (a_1, a_2, \ldots, a_n), (a_1,\ldots, a_n) \rangle = a_1^2+a_2^2+\ldots+a_n^2.  By the Pythagorean theorem, we already know the distance of a point to the origin: it’s the square root of this quantity!  So we have that the magnitude of a vector is the square root of its inner product with itself.  In symbols, we write \| a \| = \sqrt{\langle a, a \rangle}.  Any time you have an inner product, you can define a norm \| \cdot \| in this way.
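
And the norm formula in the same sketch style, still assuming the Euclidean inner product from our running example:

```python
import numpy as np

a = np.array([2, 3])

# The norm is the square root of the inner product of a vector with itself.
norm_a = np.sqrt(np.dot(a, a))
print(norm_a)               # 3.6055... , i.e. the square root of 13
print(np.linalg.norm(a))    # the same number, computed by NumPy directly
```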

Here's the setup: let's say you have a bunch of unit vectors (vectors with magnitude 1) and you take all of the pairwise inner products.  So if you have four unit vectors a, b, c, d, you'll have all these inner products, which we'll arrange in a matrix:

\left( \begin{array}{cccc} \langle a, a \rangle & \langle a, b \rangle & \langle a, c \rangle & \langle a, d \rangle \\    \langle b, a \rangle & \langle b, b\rangle & \langle b, c \rangle & \langle b, d \rangle \\    \langle c, a \rangle & \langle c, b\rangle & \langle c, c \rangle & \langle c, d \rangle \\    \langle d, a \rangle & \langle d, b\rangle & \langle d, c \rangle & \langle d, d \rangle \end{array} \right)
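
If you want to play with this, here's one way to build such a matrix in Python; the four unit vectors are an arbitrary choice of mine, picked by angle in the plane, and the inner product is the Euclidean one from our example:

```python
import numpy as np

# Four unit vectors in the plane, one for each chosen angle.
angles = [0.0, 0.7, 1.9, 2.8]
vectors = [np.array([np.cos(t), np.sin(t)]) for t in angles]

# Entry (i, j) of the matrix is the inner product of vector i with vector j.
A = np.array([[np.dot(u, v) for v in vectors] for u in vectors])
print(np.round(A, 3))   # symmetric, with 1's down the diagonal
```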

That’s all fine and dandy and well and reasonable.  Here’s the exercise: if I give you a matrix A, can you tell me if you can make A out of some unit vectors in this way with some inner product?  

Notice that in the question, we didn't specify a particular inner product (we're using one example, remember?).  The question just asks: given a matrix, can you find some inner product and some unit vectors that produce it?  So to take a few stabs at this problem, we'll have to use all those axioms of inner products.  Definitely we need A to be symmetric (that means the entry in the 2nd row, 3rd column is equal to the entry in the 3rd row, 2nd column, etc.) by property 1.

Property 2 and our discussion above say that all the diagonal entries have to be 1 (because we started with unit vectors).

Those first two conditions are necessary for A to even be a candidate, but just because A satisfies those conditions doesn’t mean that there’s a collection of unit vectors that form it.  (These are “necessary” but not “sufficient” conditions.)

The actual answer is that A must be non-negative definite: this means that if A is an n-by-n matrix and you pick any collection of complex numbers c_1, \ldots, c_n, the sum \sum_{i,j} a_{ij}c_i\bar{c_j} is greater than or equal to zero (it's not negative).  Parsing the sum expression: take each entry of A and multiply it by the complex number corresponding to its row and by the conjugate of the complex number corresponding to its column, then add up all of these products.  If this number is non-negative for every choice of the c_i's, then yes, your matrix can arise from a collection of unit vectors.
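
Here's a hedged sketch of how you might test a candidate matrix numerically.  The specific matrix is made up for illustration, and the eigenvalue test at the end is a standard equivalent criterion for non-negative definiteness of a symmetric real matrix (the post states the condition in terms of the sum instead):

```python
import numpy as np

# A candidate matrix: symmetric, with 1's on the diagonal.
A = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])

print(np.allclose(A, A.T))            # symmetric (property 1)
print(np.allclose(np.diag(A), 1.0))   # unit vectors give 1's on the diagonal

# The sum from the text, for one particular choice of complex numbers c_i.
c = np.array([1 + 2j, -0.5j, 3.0])
s = sum(A[i, j] * c[i] * np.conj(c[j]) for i in range(3) for j in range(3))
print(s.real >= 0)   # True for this choice (the imaginary parts cancel)

# The condition has to hold for EVERY choice of c.  For a symmetric real
# matrix, that is equivalent to all eigenvalues being >= 0.
print(np.all(np.linalg.eigvalsh(A) >= -1e-12))   # True: A is non-negative definite
```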

Done!

Just kidding!  Did you think I’d just throw an answer at you and not explain it?  That’s not how we do!

As always, the great question in math is: why?  Just getting a random condition thrown at you shouldn't feel satisfying (well, it might, but too bad, I'm taking away your satisfaction).

Let’s find out why the condition is necessary:

A norm is never negative, and squaring a non-negative number keeps it non-negative, so if you take any combination of your unit vectors, take the norm of that, and then square it, you get a non-negative number.  Even if you throw random complex scalar multiples onto those vectors, the norm squared will still be non-negative.  So in equations, we have 0 \leq \| \sum_{i=1}^n c_iv_i \|^2.  If we unpack this definition, we get

0 \leq \| \sum_{i=1}^n c_iv_i \|^2

= \langle \sum_i c_iv_i, \sum_j c_jv_j \rangle

= \sum_{i,j} \langle v_i,v_j \rangle c_i \bar{c_j}

= \sum_{i,j} a_{ij} c_i \bar{c_j}

So we get that the condition is necessary (and we didn't even use the unit-ness of the vectors here).
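
If you'd like to see that computation verified numerically, here's a small sketch; the unit vectors and complex scalars are arbitrary choices of mine:

```python
import numpy as np

# Some unit vectors and their matrix A of pairwise inner products.
angles = [0.3, 1.1, 2.0]
V = np.array([[np.cos(t), np.sin(t)] for t in angles])   # rows are the v_i
A = V @ V.T

# Arbitrary complex scalars c_i.
c = np.array([1.5 - 2j, 0.4j, -1.0 + 1j])

# Left-hand side: the norm squared of the combination sum_i c_i v_i.
combo = (c[:, None] * V).sum(axis=0)
lhs = np.vdot(combo, combo).real

# Right-hand side: the double sum  sum_{i,j} a_{ij} c_i conj(c_j).
rhs = sum(A[i, j] * c[i] * np.conj(c[j]) for i in range(3) for j in range(3)).real

print(np.isclose(lhs, rhs), lhs >= 0)   # True True
```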

How about sufficiency?  Well, we have to come up with an inner product and some unit vectors.  So take some basis vectors u_i (just formal symbols for now).  Any vector will be written \sum t_i u_i.  We'll define an inner product on this space by \langle \sum t_iu_i, \sum s_ju_j \rangle_A = \sum a_{ij} t_i \bar{s_j}.  Go check that this inner product satisfies the three axioms above.  Great!  It's an inner product (this relies on the fact that A is non-negative definite).  Since the diagonal entries of A are all 1, each u_i is a unit vector for this inner product, and \langle u_i, u_j \rangle_A = a_{ij}, so the u_i's form A in exactly the way we wanted.  That's it!
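
The construction above is abstract, but for a concrete sanity check you can turn a non-negative definite matrix with 1's on the diagonal back into honest Euclidean unit vectors.  This is not the construction from the proof; it's a numerical illustration using a Cholesky factorization, which assumes the matrix is actually positive definite (the semidefinite case needs an eigendecomposition instead):

```python
import numpy as np

# A symmetric, non-negative definite matrix with 1's on the diagonal.
A = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])

# Factor A = L L^T; the rows of L serve as our vectors.
L = np.linalg.cholesky(A)

# Their pairwise Euclidean inner products recover A, and each row is a unit vector.
print(np.allclose(L @ L.T, A))                      # True
print(np.allclose(np.linalg.norm(L, axis=1), 1.0))  # True
```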

Congratulations!  That was a hard exercise session!  We did it!

3 Responses to “A Linear Algebra Exercise”

  1. j2kun August 13, 2015 at 9:19 pm #

    This reminds me of an interesting open problem I spent a few weeks puzzling over (before giving up). I wonder if you have any insight.

    Given a square, nonnegative symmetric matrix B \in \mathbb{Z}^{n \times n}, determine if B is the square of the adjacency matrix of an undirected graph.

    A variant that may or may not be equivalent: given a non-negative symmetric matrix, does it have non-negative symmetric square root?

    • j2kun August 13, 2015 at 9:20 pm #

      To clarify: the square is the standard operation of squaring a matrix, so it’s the Euclidean inner product.

    • yenergy August 18, 2015 at 2:24 pm #

      Looking at it for a few minutes I don’t have anything, but also do you want to write a post about this problem and your stabs at it? =D?
