Tag Archives: graph

Phylogenetic trees

2 Aug

I just listened to a two hour talk on phylogenetic trees, and they seem fun enough that I thought I’d share them with you!  Sorry I literally forgot to post last week, and then I realized I did not take notes on the stuff I wanted to post about (more pictures by Aaron Fenyes)- here’s a photo that I took and wanted to explain:


Something twisted representation something skein algebra something unit tangent bundle

Instead, you’ll read about the basics behind the research of my friend Gillian (no website), which is being supported somehow by people who want to know about cancer.  First, some background: the field of study of phylogenetic trees is inspired by and informs applications in evolutionary biology.  All of the life-forms today (or at some slice of time) trace back to a root life-form at time 0.  Each current life-form follows a series of branchings away from the root to get to where it is now.

Then we can represent this evolution via a labeled rooted binary tree: the root represents the root life-form at time 0, and each branching of the tree represents a different evolution.  The labels mark which life-form is which.  Of course this model isn’t perfect (hybridization is a thing, where two different species evolve separately from the same ancestor, then meet again and make one species.  If we were to represent this information in a graph, it’d make a cycle and not be a tree), but it’s been fruitful.


The rooted binary tree of the wikipedia picture: node 0 is the root life-form, then 1-7 are the life-forms at our current time.

Now let’s mathify this.  We’d like to encode the evolutionary information into our tree.  We’ve already decided that all life-forms will end at the same time (now), so if we just assign lengths to each of the non-leaf edges this will automatically determine the lengths of the leaf edges.  A leaf in a tree is a vertex with only one neighbor, and we call the edge leading to that vertex a leaf-edge.  Let’s call the non-leaf edges interior edges.  In the picture above, we have 5 non-leaf edges, which determine a tree with 7 leaves.  Using this exact configuration of labels and edges, we have five degrees of freedom: we can make those interior edges whatever lengths we want, as long as they are positive numbers.  So in math-terms, the set of phylogenetic trees (aka rooted, binary, labeled trees) in this configuration forms a positive orthant of \mathbb{R}^5.  You can smoothly change any one of the edges to a slightly longer or shorter length, and still have a phylogenetic tree with the same combinatorial data.
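If you like code better than pictures, here’s a tiny Python sketch of that dimension count (the edge names and lengths below are made up for illustration):

```python
def orthant_dimension(num_leaves):
    """A rooted binary tree with n leaves has n - 2 interior edges."""
    return num_leaves - 2

# The tree pictured: 7 leaves, so 5 interior edge lengths, all positive.
# These edge names and lengths are hypothetical.
interior_lengths = {"e1": 0.7, "e2": 1.3, "e3": 0.2, "e4": 2.1, "e5": 0.9}

assert orthant_dimension(7) == len(interior_lengths)
assert all(length > 0 for length in interior_lengths.values())
```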


This is from the paper I’m writing, but it does show that in 3D, there are 8 orthants cut by the three axes (red dot is the origin).  The pink box represents a single orthant.

What about phylogenetic trees with different combinatorial data?  Say, with different labels or different branching, but the same number of leaves and the same number of interior edges?  First we need to figure out what we mean by ‘different’.  For instance, the following picture from the seminal paper in this field shows three trees that don’t immediately look the same, but we don’t count as different:

Why aren’t they different?  Because they encode the same data for each life-form: reading from node 0 we see that first 1 branches off, then 2, then 3 and 4 in all three cases.  There’s some combinatorics here with partitions that you can do (one can label a tree with a set of partitions).  However, changing the labels so that first 2 branches off, then 1, then 3 and 4 will be a different phylogenetic tree.  In fact I can smoothly go from one to the other in the space that we’re creating: first I shrink the length of the green edge below to zero, which takes us to the middle tree (not binary!), and then extend the blue edge.


Shrink the colored edges to get the same tree in the middle (not a binary tree)

We’re going to add these non-binary trees with one less edge length to our space.  Remember the tree on the left has an entire positive orthant, and the tree on the right has an entire positive orthant.  Shrinking the green length to zero means that we’re moving to one side of the left orthant: so we add this axis to our space (we have \{(x,y)\in \mathbb{R}^2: \ x,y\geq 0\} instead of strictly greater than 0).  We can glue the green and blue orthants together along this axis.  Here’s a picture from the paper:


Notice that they also have the origin filled in, with a tree with no interior edges.  This is the cone point of this space.  Now we’re finally ready to describe the space of phylogenetic trees: within each combinatorial structure/labeling, we have a Euclidean orthant in (n-2) dimensions.  Then these orthants are glued together along their axes in a specific way, and all of them are connected to the cone point.  This is called BHV(n), short for Billera-Holmes-Vogtmann space (in the paper they call it T(n) but that’s confusing to everyone else).  Here’s the picture of T(4):


Each triangle represents an infinite orthant

There are 15 different orthants glued together in this picture, because the number of labelled rooted binary trees on n leaves is (2n-3)!!.  The double !! means you only multiply the odds, a.k.a. (2n-3)(2n-5)(2n-7)… This is also known as Schroeder’s fourth problem, which as far as I can tell was open for 100 years.  Pretty cool!
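Here’s a quick sanity check of that count in Python:

```python
def double_factorial(k):
    """k!! = k * (k-2) * (k-4) * ... down to 1 or 2."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def num_orthants(n):
    """Number of labelled rooted binary trees on n leaves: (2n - 3)!!"""
    return double_factorial(2 * n - 3)

print(num_orthants(4))  # 15, the orthants in the picture of BHV(4)
print(num_orthants(7))  # 10395
```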

If you truncate BHV(n) so it’s not infinite (just pick some compact bound), then it forms a nonpositively curved cube complex, and we love those!  CAT(0) cube complexes are great.  I haven’t blogged too much about them (first terrible post and then those truncated Haglund notes) but they are the basis of all that I do and the number one thing I talk about when I give math talks.  Whoops!  The gist is that you glue cubes together in not-terrible ways, and then the resulting complex has great and fun properties (like you can cut it in half the way you want to).

That’s about all I have to say about this!  Gillian is working on some stuff about putting a probability measure on BHV(n) [you can’t do it with certain conditions], embedding it into a small enough Euclidean space that still preserves some of its features, and finding an isometrically embedded copy of the phylogenetic tree inside BHV(n) instead of just the coordinate point.  Also, fun fact to prove to yourself (actually please don’t scoop my friend), find the automorphism group of BHV(n)!  It’s just the symmetric group on some number that has to do with n (n+1 or something like that; I can’t remember and didn’t take notes).

Again, the main reference for this is the seminal paper that should also be accessible as it’s meant for biologists and statisticians.

Minimum rank of graphs with loops

28 Jun

A few months ago I was one of two Distinguished Graduate Speakers at this awesome conference, Underrepresented Students in Topology and Algebra Research Symposium (USTARS).  The conference started in 2011, when a bunch of advanced grad students and postdocs (I don’t think anyone on the committee was a professor yet) decided that there needed to be a place for underrepresented students to see other people like them doing the math they did.  And now they’re all professors and still organizing this traveling conference!  So many people of color!  So many women!  So many homosexual people!  (Not so many trans people…) So many first-generation college students!  So great!  Too many slides!  (I pretty actively hate slide talks unless there are pictures that are impossible to make on a chalk board.)


Credit: Erik Insko for this picture of me giving a talk at USTARS 2016!

Anyway, I wanted to blog about some math I saw there, based on a talk by Chassidy Bozeman, a grad student at Iowa State and a fellow EDGE-r.  The talk is from a paper that resulted from her graduate research course, which Iowa offers to intermediate grad students and gets them started on research (hence the eight authors on that paper).  I thought it was fun and approachable!  And I have one crumpled page of two-month old notes, so we’ll see how this goes.

First, let’s remember what a graph is: a collection of vertices (dots) and edges (lines between dots).  A graph is simple if there’s at most one edge between any two vertices, and if no vertex has an edge to itself (aka a loop).  A loop graph allows loops, but still no multiple edges.


Left is a simple graph, right is a loop graph

You can associate an infinite family of symmetric matrices to a loop graph.  These square matrices will have the number of vertices of the graph as the number of columns and rows, and the entry a_{ij} will be 0 if and only if there is no edge between the corresponding vertices i and j in the graph.  The rest of the entries will just be real nonzero numbers.  This infinite family is useful for defining the minimum rank of a loop graph: it’s the minimum rank over all of this infinite family of matrices.  The rank of a matrix is a measure of how large it is.  For definition by example, \left( \begin{array}{cc} 1 & 0 \\ 1 & 0 \end{array} \right) has rank 1, and \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) has rank 2.
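If you want to play with this numerically, here’s a sketch using numpy: it builds random symmetric matrices with the zero pattern of a made-up loop graph and takes the smallest rank it sees.  Random sampling only gives an upper bound on the minimum rank, but for generic entries it tends to find it:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_matrix_for(loop_graph, size):
    """Random symmetric matrix whose zero pattern matches the loop graph.

    loop_graph is a list of edges {i, j} (a loop is {i}); entries are
    zero exactly where there is no edge.
    """
    A = np.zeros((size, size))
    for edge in loop_graph:
        i, j = min(edge), max(edge)
        value = rng.uniform(0.5, 2.0) * rng.choice([-1, 1])
        A[i, j] = A[j, i] = value
    return A

# Hypothetical loop graph: a path 0-1-2 with a loop at vertex 0.
graph = [{0}, {0, 1}, {1, 2}]

min_rank_estimate = min(
    np.linalg.matrix_rank(random_matrix_for(graph, 3)) for _ in range(200)
)
print(min_rank_estimate)
```

For this particular graph every such matrix has determinant -a_{00} a_{12}^2 \neq 0, so the estimate is always 3: full rank, consistent with the fact that its spanning generalized cycle (the loop at 0 plus the edge between 1 and 2) is unique.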

So the goal of Chassidy’s talk was to characterize all loop graphs whose minimum rank is the full size of the graph.  A couple more definitions before the theorem: a generalized cycle is a subgraph whose connected components are either a loop, a single edge, or a cycle.  It’s spanning if it touches every vertex in the graph.  Spanning generalized cycles don’t always exist!


Components of a generalized cycle: loop, edge, cycle


No spanning generalized cycle, womp womp

Theorem: A loop graph has minimum rank equal to its size if and only if there exists a unique spanning generalized cycle.

Quick nonexample: here’s a picture of a loop graph that has more than one spanning generalized cycle, so from the theorem we know it doesn’t have minimum rank four.


This is theorem 3.1 in their paper.  It’s also comically the end of my notes on this talk.  Here are a few facts they use to prove the theorem:

  • If H is an induced subgraph of G (that means the vertices of H are a subset of the vertices of G, and its edges are all the edges that appear in G within that subset), then its minimum rank is bounded above by the minimum rank of G.  Succinctly, mr(H)\leq mr(G).
  • If G is a disjoint union of a bunch of connected components G_i, then its minimum rank is the sum of the minimum ranks of those components: mr(G)=\sum mr(G_i).
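The second fact is easy to see numerically: a disjoint union of graphs corresponds to a block-diagonal matrix, and ranks of blocks add.  A quick numpy check (with made-up matrices):

```python
import numpy as np

# Two components, each realized by a symmetric matrix:
A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # rank 2

# A disjoint union of graphs corresponds to a block-diagonal matrix,
# and the rank of a block-diagonal matrix is the sum of the block ranks.
union = np.zeros((4, 4))
union[:2, :2] = A
union[2:, 2:] = B

assert np.linalg.matrix_rank(union) == (
    np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B)
)
```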

Here’s a picture of my notes!  Read the actual paper if you want to know more!


The fundamental theorem of geometric group theory, Part II: proof

1 Oct

A refresher from last week: If a group G acts on a proper geodesic metric space X properly discontinuously and cocompactly by isometries, then G is quasi-isometric to X.  Moreover, G is finitely generated.

Yes, I put “proper geodesic metric space” in italics because I forgot it in the statement of the theorem last week.  [Aside: actually, you only need “length space” there, and proper geodesic will follow by Hopf-Rinow.  But let’s skip that and just go with proper geodesic.]  I also added the second sentence (which isn’t really a moreover, it comes for free during the proof).

At the end of last week I defined proper: closed balls are compact. A space is geodesic if there is a shortest path between any two points which realizes the distance between those points.  For instance, the plane is geodesic: you can just draw a line between any two points.  But if you take away the origin from the plane, it’s not geodesic anymore.  The distance between (1,1) and (-1,-1) is 2\sqrt{2}, but the line should go through the origin.  There is no geodesic between those two points in this space.

Now we have all our words, so let’s prove the theorem!  I’ll italicize everywhere we use one of the conditions of the theorem.

Since our action is cocompact, we have a compact set K so that translates of it tile X.  Pick a point inside K, call it x_0, and a radius R so that K is entirely contained inside a ball of radius R/3 centered at x_0.  For notation, this ball will be labelled B(x_0,R/3).

Schematic: K is the red square, special point is the yellow dot, yellow circle is radius R/3, lighter circle is radius R. Cartoon on right illustrates this relationship

We’ll pick a subset of the group G: Let A =\{ g\in G: g.B(x_0,R)\cap B(x_0,R) \neq \emptyset\}.  X is proper, so closed balls are compact.  Since the action is properly discontinuous, this means that A is finite.  [Reminder: properly discontinuous means that only finitely many group elements translate compact sets to intersect themselves.]

Now we’ll show that G is finitely generated, and it’s generated by A.  Choose some group element g in G.  Draw a geodesic in between your special point x_0 and its g-translate g.x_0.  Now we’ll mark some points on that geodesic: mark a point every R/3 away from x_0, all the way to the end of the geodesic.  You’ll have [(length of the segment)/(R/3) rounded up] many points marked.  Let’s call that number n.
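That rounding formula in code, just to be concrete:

```python
import math

def num_marked_points(geodesic_length, R):
    """Points marked every R/3 along the geodesic, rounding up at the end."""
    return math.ceil(geodesic_length / (R / 3))

print(num_marked_points(10.0, 3.0))  # 10 points spaced exactly R/3 = 1 apart
print(num_marked_points(10.5, 3.0))  # 11: the last gap is shorter than R/3
```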


There are n blue points, and they’re all R/3 away from each other. Notice the last blue point might be closer to g.x_0, but it’s definitely less than or equal to R/3 away.

Here’s the clever part.  Remember that K tiles X by G-translates (cocompactness), so each of those blue points lives inside a G-translate of K.  Since x_0 lives inside K, that means there’s a nearby translate of x_0 to each blue point.  And since K fits inside a R/3 ball, each translate is less than or equal to R/3 away from its blue point.

The green points are translates of x_0: I also colored x_0 and g.x_0. The yellow circle indicates that the green point is within R/3 of its blue point.

We can bound how far apart the consecutive green points are from each other: each one is within R/3 of its blue point, which are all R/3 apart from their neighbors.  So the green points are at most R/3+R/3+R/3= R from each other.

Middle portion is exactly R/3 long. So by the triangle inequality, the green points are less than or equal to R from each other.

Remember that the green points represent G-translates of x_0.  In the picture above I numbered them g_0.x_0=x_0,g_1.x_0,g_2.x_0,\ldots g_n.x_0=g.x_0.  We just said that d(g_1.x_0,g_2.x_0)\leq R.  Since G acts by isometries, this means that d(g_2^{-1}g_1.x_0,x_0)\leq R.  So g_2^{-1}g_1 lives inside our set A that we defined above: it moves x_0 within R of itself.

Here’s a bit of cleverness: we can write g=g_n=g_0^{-1}g_1\cdot g_1^{-1}g_2 \cdots g_{n-1}^{-1}g_n, because all of the middle terms cancel out and we’d be left with g_0^{-1}\cdot g_n = 1\cdot g = g.  But each of those two-letter terms lives in A, so we just wrote g as a product of elements in A.  That means that A generates G.  We said above that A is finite, so G is finitely generated.
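You can watch the telescoping happen numerically.  Here random invertible matrices stand in for the group elements (purely an illustration; any group would do):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for g_0 = identity, g_1, ..., g_n: random invertible matrices
# (diagonally dominant, so they are guaranteed invertible).
n = 5
gs = [np.eye(3)] + [
    rng.integers(-2, 3, (3, 3)) + 10 * np.eye(3) for _ in range(n)
]

# The two-letter pieces g_{i-1}^{-1} g_i ...
pieces = [np.linalg.inv(gs[i - 1]) @ gs[i] for i in range(1, n + 1)]

# ... multiply back to g = g_n, because all the middle terms cancel.
product = np.eye(3)
for piece in pieces:
    product = product @ piece

assert np.allclose(product, gs[-1])
```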

That was the “moreover” part of the theorem.  The main thing is to show that G is quasi-isometric to X.  Let’s try the function g\mapsto g.x_0.

Above, we wrote g as a product of elements of A, so that means that the length of g is at most n.  In other words, d_G(1,g)\leq n.  Now we’d like to bound it by d_X(x_0,g.x_0).  We found n by dividing the geodesic into pieces, so we have n\leq \frac{d_X(x_0,g.x_0)}{R/3}+1, where we added a 1 for the rounding.  So we have one side of the quasi-isometry: d_G(g_1,g_2)\leq \frac{3}{R}d_X(g_1.x_0,g_2.x_0)+1 (using the action by isometries).

Now we need to bound the other side, which will be like stepping back through our clever argument.  Let M be the maximum distance that an element of A translates x_0.  In symbols, M=\max_{a\in A} d_X(x_0,a.x_0).  Choose some g in G, with length n.  That means we can write g as a product of n elements in A: g=a_1\cdots a_n.  Each a_i moves x_0 at most M.  If we step between consecutive partial products, we have d(a_1\cdots a_i.x_0,\ a_1\cdots a_{i+1}.x_0)=d(x_0,a_{i+1}.x_0)\leq M.  There are n steps from x_0 to g.x_0, and each step contributes at most M to the distance.  So d_X(x_0,g.x_0)\leq M\, d_G(1,g).

With bounds on both sides, we can just pick the larger number to get our actual quasi-isometry.  We also need the function to be quasi-onto, but it is because the action is cocompact so there are translates of x_0 all over the place.


The fundamental theorem of geometric group theory (part I), topology

24 Sep

I love the phrase “THE fundamental theorem of…” It’s so over the top and hyperbolic, which is unlike most mathematical writing you’ll run into.  So you know that it’s important if you run into the fundamental theorem of anything.  By now we all have some background on geometric group theory: you’ll want to know what a group action is and what a quasi-isometry is.  (Refresher: a group G acts on a space X if each group element g gives a homeomorphism of the space X to itself.  A quasi-isometry between two spaces X and Y is a function f so that distances between points get stretched by a controlled scaling amount + an additive error term.)  We say a group G is quasi-isometric to a space X if its Cayley graph is quasi-isometric to X.  Remember, a Cayley graph is a picture you can draw from a group if you know its generators.

Still from Wikipedia: a Cayley graph of the symmetries of a square

There are several more terms we’ll want to know to understand the theorem, but I’ll just do one more before we start.  We say a group G acts on a space X by isometries if it acts on X, and each homeomorphism is actually an isometry (it preserves distance).  So for instance, the integers acting on the real line by multiplication isn’t by isometries, because each homeomorphism spreads the line out (so the homeomorphism of the reals to themselves given by 3 is x \mapsto 3x, which stretches out distances).  But if the action is defined by addition, then you’re okay: x\mapsto x+3 preserves distances.
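Here’s that comparison as a quick Python check (toy functions on the real line):

```python
def stretch(x):       # the action of the integer 3 by multiplication
    return 3 * x

def translate(x):     # the action of the integer 3 by addition
    return x + 3

def preserves_distance(f, x, y):
    return abs(f(x) - f(y)) == abs(x - y)

print(preserves_distance(stretch, 1, 2))    # False: distances get tripled
print(preserves_distance(translate, 1, 2))  # True
```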

Under the red function, f(2)-f(1)=6-3=3, but 2-1=1, so this isn’t an isometry. Under the green function, f(2)-f(1)=5-4=1, which is equal to 2-1. This is always true, so this is an isometry.

So here’s the fundamental theorem:

If a group G acts properly discontinuously, cocompactly, and by isometries on a proper metric space X, then G is quasi-isometric to X. 

You can parse this so far as “If a group G acts by isometries on a space X with condition condition condition, then G is quasi-isometric to X.”  Putting aside the conditions for now, how would we prove such a theorem?  Well, to show something is quasi-isometric, you need to come up with a function so that the quasi-isometry condition holds: for all x,y in X, we need \frac{1}{K} d_G(f(x),f(y))-C\leq d_X(x,y) \leq K d_G(f(x),f(y))+C.

So let’s deal with those conditions!  An action is cocompact if there’s some compact subset S of X so that G-translates of S cover all of X.  Remember, each element g in G gives an isometry of X, so it’ll send S to some isometric copy of itself somewhere else in X.  In our example above, the integer 3 will send the compact subset [5,7] to the isometric copy [8,10].  In fact, our example action is cocompact: you can keep using [5,7] as your compact set, and notice that any point on the real line will eventually be covered by a translate of [5,7].  For instance, -434.32 is covered by [-435,-433], which is the image of [5,7] under the isometry given by -440.
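A little Python sketch of “every point is covered by some translate” for this action (the choice of g below is mine; other choices work too, like the -440 in the text):

```python
import math

# The integers act on the real line by translation.  Translates of the
# compact set [5, 7] cover every point: to cover x, pick g with
# 5 + g <= x <= 7 + g.

def covering_translate(x):
    g = math.floor(x) - 6      # one valid choice among several
    assert 5 + g <= x <= 7 + g
    return g

print(covering_translate(-434.32))
```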

This action is also cocompact. Here I have the plane, conveniently cut up with an integer lattice. Can you figure out what the action is? Hint: the red square is a unit square, and the pink squares are supposed to be various translates of it.

G acts on X properly discontinuously if for any two points x,y in X, they each have a neighborhood U_x, U_y so that only finitely many g make g.U_x\cap U_y\neq\emptyset.  Let’s look at our example action again.  If I take the points 4365.234 and 564.54 in the real line, I’d like to find neighborhoods around them.  Let’s choose the intervals [4365,4366] and [564,565].  The only integers that make these hit each other are -3802, -3801, and -3800.  In particular, 3 is finite, so this indicates proper discontinuity.  If we actually wanted to prove the action is properly discontinuous, we’d want to show this is possible for all points, not just these two specific ones I chose.
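You can find the finitely many offending integers with a brute-force search (a toy check over a big finite window):

```python
def translates_that_hit(interval_x, interval_y, search_range):
    """Integers g with (interval_x + g) meeting interval_y."""
    a, b = interval_x
    c, d = interval_y
    return [g for g in search_range if a + g <= d and b + g >= c]

hits = translates_that_hit((4365, 4366), (564, 565), range(-5000, 5000))
print(hits)  # [-3802, -3801, -3800]
```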

Schematic of proper discontinuity: only finitely many g will send the yellow oval to hit the blue blob

Finally, a metric space X is proper if all closed balls are compact.  Balls are intuitively defined: they’re all the points that are at a fixed distance or less from your center.  In the plane, balls are disks centered around points.  And compact… well, aw shucks, I haven’t defined compact and we’ve been using it!  Time for some topology.  We’ll prove this theorem next time around; this post is just definitions and background.  (Sorry for the cliffhanger, but it should be clear what we’re going to do next time: make a function, show it’s a quasi-isometry.)

Just like groups are the fundamental object of study in algebra, open sets are the fundamental object of study in topology.  You’re already familiar with one type of open set, the open interval (say, (6,98), which includes all numbers between 6 and 98 but doesn’t include 6 and 98).  I just described another above: balls.  So, open disks in the plane are open sets.  Sets are closed if their complement is open: that is, the rest of the space minus that set is open.  In the real line example, [6,74] is closed because (-\infty,6)\cup(74,\infty) is open (it’s the union of infinitely many open sets, say (-\infty,6) with (74,76) with (75,77) with (76,78) and on and on).

Notice that I haven’t yet defined what an open set is.  That’s because it’s a choice- you can have the same space have different topologies on it if you use different definitions of open sets.  I’ll point you to these wikipedia articles for more examples on that.

A set is compact if every covering of it by open sets has a finite subcover.  That means that any time you write your set S as a union of open sets, you can choose finitely many of those and still be able to hit all the points of S.  From above, the set (74,\infty) is not compact, because you can’t get rid of any of the sets in that infinite covering and still cover the set.  On the real line, a set is compact if it’s closed and bounded (this is the Heine-Borel theorem, a staple of real analysis).

So that’s enough for today.  More next time (like a proof!)  Also, I’m using my husband’s surface to blog this, which means I did all the pictures using my fingers.  It’s like finger painting.  What d’you think?  Better than usual pictures, or worse?

Ramsey theory: most accessible math post yet

23 Jul

Every time I see the ‘categories’ bar on the right side of this blog I get a guilty twinge because there’s so much baking and so little math.  I like math, I really do!  I just find it easier to blog about baking, which I do to relax, than math, which I do as my job.

During my senior year of college we had this awesome program where interested seniors could give a 20-minute scholarly presentation to their peers. We’d spent four years getting to know each others’ party habits, laundry hours, cleanliness, etc. etc., and finally had a chance to find out what everyone did during those hours when we weren’t together.  I decided to talk about graph theory, which I’ve introduced in this previous post.  I basically gave this blog post as my presentation, only with a powerpoint and being dressed up in front of an audience instead of being in pajamas on my couch.  Also, funny note, that post is called Part I but I never got around to Part II… maybe eventually!  I’ll have to re-learn Schreier’s theorem.

ANOTHER TANGENT: I by NO MEANS have an encyclopedic knowledge of mathematical facts.  Some people do, but I suspect a lot of mathematicians are like me.  We “learn” something like all of us did in high school, studying it for a few weeks, taking a test or using it some way, and then forgetting it completely a few weeks later.  But some part of it stays with us, and the next time we need that fact or technique it’s easier to re-learn it.  For instance, I think it’s taken me four tries to learn linear algebra, and if you had me take a linear algebra test tomorrow I would maaaaybe get a C.  But if you told me about it and gave me a week to study I could probably ace it.  This is not because I’m smarter than the students in the class, whoever they are, but because I’ve been exposed to the material before and have acquired the skills to learn algorithms/facts/theorems quickly for short-term memory.

OK so back to Ramsey theory!  Let’s say you’re a manager of a pretty big department of some company, and you need a committee of three people.  And, in order for this committee to work well together, you have this requirement: either

  1. None of the people on the committee have ever worked with anyone else on the committee before (pairwise strangers) OR
  2. All three people have worked with every other person before (so pairwise, they’ve all worked together before)

And you start going through your employee records, and you look at a single person and you think darnit!  How many of these records will I have to look at to guarantee that I’ll get a committee?!

Duh-duh-DUHHHH math to the rescue!

This is the way we start approaching such problems: first, model it.  I’ll say each employee is a vertex of my graph, and draw a red edge between two vertices if they haven’t worked together before, and draw a blue edge if they have.  So here’s what we’re looking for:

  1. A red triangle (so three people who are pairwise strangers)
  2. A blue triangle (so three people who have all worked together before)

Next, try baby examples.  Below I drew a blue line between two students, and a blue line between two administrators, and red lines between everything else.

No triangles! Guess we’ll have to TRI again

So we don’t have any triangles.  So if you look at just 4 employee records, you aren’t guaranteed to find a committee (you still might find one, obviously).

What about five?  Well, here’s a picture proof that 5 doesn’t work:


Not satanism, just math! Though this problem can certainly bedevil you

(Convince yourself this is a proof)

What about six?  Could that be the answer?  I can certainly draw an example with six (also I could draw one with five and four and three) but is it guaranteed?  Hint: the answer is yes!

Let’s prove it.  I love this proof so much; I share it with all my non-math friends.  It’s my party trick.  That, and petting the cat.  Does that count as a party trick?

To start we need a little fact called the pigeonhole principle.  It says if you have n boxes and n+1 pigeons, at least one box must contain at least two pigeons.

One of my favorite pictures of all time.

This picture illustrates the principle with n=9: the top left box contains two pigeons.  You could also have one box holding 10 pigeons and 9 boxes with zero in them.  The point of the principle is that there is always one box with at least two pigeons.

Let’s put this principle to work!  If I’m looking at one employee’s records and comparing with five others, I have two boxes: red (has not worked together) and blue (has), with five pigeons.  By the pigeonhole principle, at least one of these boxes contains three pigeons (think about this for a minute before continuing.

Get it?  Got it?  Good.  Ask in the comments if you don’t).

Here’s the picture:

At least three of the edges have the same color. I arbitrarily picked blue.

NOTE: I colored the two other edges black.  These guys could be blue or red (so the possibilities are 5R, 4R1B, 3R2B, 2R3B, 1R4B, or 5B).  In all of the possibilities, there’s a color that has at least three lines in it.
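If you don’t trust the pigeonhole principle, you can just check all 32 possibilities by brute force:

```python
from itertools import product

# Every red/blue coloring of 5 edges puts at least 3 edges in one color.
colorings = list(product("RB", repeat=5))
always_three_same = all(
    c.count("R") >= 3 or c.count("B") >= 3 for c in colorings
)
print(always_three_same)  # True
```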

Now let’s look at those three guaranteed vertices that connect to our original one via edges of the same color (let’s say it’s blue).  If I want to guarantee the existence of a blue or red triangle, I must prove it has to exist.  E.g. instead of assuming the edges between these three vertices are blue, which would automatically make a blue triangle, I should assume they’re red (we’re going for worst-case scenario).

Filling it up

And here’s the end of the proof: there’s one edge left from that picture.  What color can it be?


If the dotted edge is red, then we’ve found our committee no. 1.  If it’s blue, then we’ve used option 2.

So we’ve proved that six is the number!  In math speak, since this is a fundamental problem in Ramsey theory, we say R(3,3)=6.  This is a Ramsey number (named after the mathematician Frank Ramsey) for red three and blue three (I’ve never heard anyone say it like that but the quote at the bottom makes no sense otherwise).
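In fact R(3,3)=6 is small enough to verify by brute force: check every red/blue coloring of the edges of the complete graphs on 5 and 6 vertices (2^10 and 2^15 colorings respectively):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each pair of vertices (as a frozenset) to 'R' or 'B'."""
    return any(
        coloring[frozenset((a, b))]
        == coloring[frozenset((b, c))]
        == coloring[frozenset((a, c))]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_has_triangle(n):
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product("RB", repeat=len(edges))
    )

print(every_coloring_has_triangle(5))  # False: the pentagon picture escapes
print(every_coloring_has_triangle(6))  # True: R(3,3) = 6
```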

I’ll leave you with a funny quote from Paul Erdos on the problem of Ramsey numbers, taken from wikiquote.

“Suppose aliens invade the earth and threaten to obliterate it in a year’s time unless human beings can find the Ramsey number for red five and blue five. We could marshal the world’s best minds and fastest computers, and within a year we could probably calculate the value. If the aliens demanded the Ramsey number for red six and blue six, however, we would have no choice but to launch a preemptive attack.”

I love clipart almost as much as I love stock photos

Intrinsically knotted graphs on 21 edges

4 Apr

I was skimming through http://www.arxiv.org the other day and found this paper by a student named Barsotti and a professor named Mattman.  Barsotti is/was an undergraduate at CSU Chico and this is his honors thesis, while Mattman is a professor at CSU Chico.  Weirdly enough it looks like Mattman also directed the undergraduate theses of two of my colleagues, Arielle Leitner from UCSB  (wow, nice website!  way better than mine) and Ryan Ottman.  Small world.

So let’s talk a little bit about this paper, shall we?  I gave a half hour talk on it in our little seminar on Tuesday.  Graph theory is one of those fields that is ridiculously useful.  Computer scientists and anyone who plays with data love graph theory.  Knot theory is also surprisingly useful, since math biology is blowing up right now with knotting of DNA and molecules and such.  This paper lies in the intersection of graph theory and knot theory.

Graph theory.  From my last math post you know what a graph is: a collection of vertices with some edges connecting them.  We’ll be talking about a certain operation you can do to graphs, taking a minor.  A minor of a graph is a graph that you derive from your original graph.  There are three different ways to find minors of a graph: 1) delete a vertex and all the edges connected to it, 2) delete an edge, 3) contract an edge.  The picture shows examples of all three of these.  For number 3, you delete an edge, and you put the two vertices at the end of it together.  Note that the new vertex has 4 edges attached to it, or has degree 4, while the original two each had degree 3.
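Here’s a sketch of the three operations in Python, with a graph stored as a set of edges (this simple encoding drops any loops created by contraction, which is fine for simple graphs):

```python
# Edges are frozensets of two vertices; vertex names are arbitrary.

def delete_vertex(edges, v):
    return {e for e in edges if v not in e}

def delete_edge(edges, e):
    return edges - {frozenset(e)}

def contract_edge(edges, e):
    u, v = tuple(e)
    merged = set()
    for edge in edges - {frozenset(e)}:
        edge = frozenset(u if x == v else x for x in edge)
        if len(edge) == 2:       # drop loops created by the contraction
            merged.add(edge)
    return merged

square = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
triangle = contract_edge(square, (0, 1))   # contracting one side of a square
print(sorted(tuple(sorted(e)) for e in triangle))
```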

The big one is ok, but the other three are jail bait

So there’s this big old theorem of Robertson and Seymour, called the Graph Minor Theorem, which says a couple of things.  For one, any infinite collection of graphs must contain one graph that is a minor of another.  Another way to think of the theorem: if you have a collection of graphs which is closed under taking minors (i.e. for any graph in your collection, all of its minors are also in the collection), then you can characterize this collection by a finite set of forbidden minors.

A quick example is the collection of all trees (graphs with no cycles or loops in them).  In our picture, 1 and 2 are both trees, while the original graph and 3 each have a loop and so aren’t trees.  Trees are closed under taking minors, since taking a minor of a tree can never create a new loop.  Then the theorem says there is a finite set of graphs which cannot appear as minors of any tree, and that the set of trees is characterized by not having these as minors.  So to be a tree, it’s necessary and sufficient, as mathematicians like to say, to have no forbidden minor as a minor.  In the case of trees, there’s exactly one forbidden minor:

Lady C… er no I meant Sir Cle

This is a loop with a single vertex on it.  If you’re a graph that’s not a tree, you can do a finite sequence of minor operations (for example, contract all the edges in a cycle) until you end up with a single vertex with a loop.  If you are a tree, you can never do this.  Robertson and Seymour’s proof of the graph minor theorem spanned 20 papers.  It’s pretty insane.  But this is how you use it!
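
Checking whether a graph contains a cycle (and hence the forbidden loop as a minor) is easy to do by hand with a standard union-find pass; this is my own sketch, not anything from the papers.

```python
def has_cycle(vertices, edges):
    """Union-find cycle check: returns True at the first edge that closes a cycle."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True   # u and v were already connected, so this edge closes a cycle
        parent[ru] = rv   # otherwise, merge their components
    return False

# A small tree has no cycle; adding one extra edge creates one.
assert not has_cycle({1, 2, 3, 4}, [(1, 2), (2, 3), (2, 4)])
assert has_cycle({1, 2, 3, 4}, [(1, 2), (2, 3), (2, 4), (3, 4)])
```

When `has_cycle` returns True, the witnessing cycle is exactly what you’d contract, edge by edge, down to the single-vertex loop.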

So a fun thing that undergraduates and professors do is try to take a family closed under minors and find the finite forbidden minor set.  In the case of a tree, there’s a single forbidden minor.  To talk about other graphs and forbidden minors, let’s switch gears for a moment and go to knot theory.

Trefoil! Versus treplasticwrap. That was a way worse name

Knot theory.  A knot is an embedding of the circle in some crazy way in some crazy place.  Right now we’ll just talk about knots embedded in three-space (the world we live in, \mathbb{R}^3).  Embedding is a technical word, but just think of it as putting something somewhere.  In fact, think of a knot as a piece of string: lay one end on the ground, throw the other end all over it (looping in and out), and then glue the two ends together.  Or use a bunch of ropes, so long as you glue all the ends together to make one continuous loop.


Not a knot… not yet

In this picture, we’d have to glue a gray end and a green end together, and do the same with the other pair, to have an actual knot.  Or you could glue the green ends together and the grey ends together, in which case you would have two knots linked together.

Lots of knots secretly look very simple, like a circle:

Even crazy complicated ones.  Look at this diagram to see what I mean about ‘secretly looking like’:

The Culprit

This diagram is from a paper by Henrich and Kauffman from 2010 (disclaimer: Kauffman is at my school).  [I don’t know why I put in that disclaimer.]

But the trefoil, above, for example, doesn’t secretly look like a circle.  No matter how you pull and prod the strands around, you can’t get a plain old circle again.

This brings us back to graph theory.  A graph is intrinsically knotted if, no matter how you draw it in \mathbb{R}^3, you end up with a not-boring knot.  This paper I just read lists all the intrinsically knotted graphs with 21 edges (as you’d expect from the title).  The proof uses some results in the spirit of Robertson and Seymour’s graph minor theorem.  For instance, it uses the fact that planar graphs are characterized by two forbidden minors, K_5 and K_{3,3}: this is Wagner’s theorem, the minor-flavored version of Kuratowski’s theorem, and I’ll do another post on it sometime.  The proof uses a lot of cool little tricks, but this is just an intro post so I’ll save those for another time.
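
As a taste of how edge counts enter arguments like this (not the paper’s actual technique, just a classical fact): Euler’s formula implies a simple planar graph on v \geq 3 vertices has at most 3v - 6 edges, so sometimes counting edges alone certifies non-planarity.

```python
from itertools import combinations

def exceeds_planar_bound(num_vertices, num_edges):
    """True if the edge count alone already rules out planarity (needs v >= 3)."""
    return num_vertices >= 3 and num_edges > 3 * num_vertices - 6

# K_5, the complete graph on 5 vertices, has C(5,2) = 10 edges,
# and 10 > 3*5 - 6 = 9, so K_5 cannot be planar.
k5_edges = len(list(combinations(range(5), 2)))
assert exceeds_planar_bound(5, k5_edges)

# K_{3,3} has 6 vertices and 9 edges, and 9 <= 3*6 - 6 = 12, so this crude
# test can't rule it out; you need the bipartite refinement e <= 2v - 4
# (or a forbidden-minor argument) to see that K_{3,3} is non-planar.
assert not exceeds_planar_bound(6, 9)
```

This is only a necessary condition, which is exactly why forbidden-minor characterizations are so much stronger.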

In case you were wondering, there are 14 intrinsically knotted graphs with 21 edges.  That’s the takeaway!

Cayley graphs, trees, Schreier’s theorem Part I (actually just groups)

6 Mar

So that’s a big agenda, but hey, dream big, right?  I’m giving a talk in our geometry, topology, and dynamics graduate seminar tomorrow on Schreier’s theorem, so I thought I’d try to write a blog post or three about it.   And then maybe blog about some cranberry cookies I made tonight after preparing my talk.

First, we’ve got to talk about the fundamental object of abstract algebra, pretty much the first thing you learn about as a math major: groups.  Abstractly, a group is a collection of objects with some operation between them, where the operation satisfies a good number of properties: doing the operation to any two objects (or elements) results in another element of the group (this is called closure); there’s an identity element so that if you operate it with any other element, you get that other element back; the operation is associative, which means you can move parentheses around; and each element has an inverse that takes you back to the identity.  Concretely, here are some groups:

  • The integers, with addition as your group operation.  Adding any two integers gives you an integer (closure); adding 0 to any number gives you that number back, so 0 is the identity; addition is associative; and x + (-x) = 0, so every element has an inverse.
  • Symmetries of a square.  This means all the ways you can turn, flip, and change a square.  The group operation is doing one symmetry, and then another (like flipping, and then turning).  What’s interesting about this group is that it can be generated by just two different symmetries: rotating to the right by 90 degrees, and flipping vertically.   You can convince yourself of this by taking a piece of paper and numbering the corners 1, 2, 3, 4 and figuring out how many ways there are total to get the square back again.  Or you can try drawing a non-symmetric smiley face in Paint!

    Symmetries of a square.  Red guy is totally random

  • Another is \mathbb{Z}/ n\mathbb{Z}, which is also written as \mathbb{Z}_n.  This can be read as the integers modulo n, which means every time you hit n, you jump back to 0.  It’s exactly how we count the hours up to 11, and then 12, and then we’re back at 1 again.  So our way of telling time is \mathbb{Z}_{12}.  If I tell you that it’s 10 o’clock now, and ask you what time it’ll be in 70 hours, and you answer 8 o’clock, you just did modular arithmetic.  You figured out that the closest multiple of 12 below 70 was 60, so you had to add 10 to our time; but since you’re in the US you knew the answer couldn’t be 20 o’clock, and you got 8 o’clock.  Good for you, you just did some group theory!

    Flava Flav gets group theory
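
The clock computation above fits in a few lines of Python: this is just a sketch of \mathbb{Z}_n with addition mod n (the function names are mine), with the hours stored as 0 through 11.

```python
def add_mod(n):
    """Return the group operation of Z/nZ: addition, wrapping around at n."""
    return lambda a, b: (a + b) % n

clock = add_mod(12)

# 10 o'clock plus 70 hours: 80 mod 12 is 8, i.e. 8 o'clock.
later = clock(10, 70)
assert later == 8

# Spot-check the group axioms on a few elements:
assert clock(5, 0) == 5                                 # 0 is the identity
assert clock(7, 12 - 7) == 0                            # the inverse of a is n - a
assert clock(clock(3, 4), 5) == clock(3, clock(4, 5))   # associativity
```

Closure comes for free: `%` always lands back in 0 through 11.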

Okay, but I’m interested in geometric objects and pictures we can draw besides these little smiley face squares.  So how can I encode the data in our examples (the integers, the integers mod n, and the reflections of a square) into a geometric object?  I’ll use a diagram that computer scientists use a lot, called a graph.

Building our graph is like when we built our curve complex last time we did math.  We’ve got vertices, each of which represents a group element.  To connect two group elements with edges, we’ve got to figure out a generating set for our group.  This is a subset of the group elements, where if you operate with them over and over again, you end up with all of the group.  For instance, I can use {1,-1} as my generating set for the integers: I can add 1 to itself n times to get any positive integer n, and similarly for the negative integers.  And we used the rotation r and the vertical flip s in our smiley face picture above to generate the symmetries of a square.

So to connect two vertices, we draw an edge if we can operate by one of our generating set to get from one vertex to another.  So the picture of the integers looks like:

Here, we use little dots for the vertices, and arrows for the edges.  I very obviously didn’t make this picture.


This picture is from wikipedia!  It’s the picture for those rotations of the square.  Wikipedia used a letter ‘F’ instead of a smiley face.

So we can make these pictures for any group, given some generating set.  (Aside: generating sets aren’t unique.  For instance, you can generate the integers with \pm 2, \pm 3 instead of using 1, -1.  Different generating sets give you different pictures.)  This picture is called the Cayley graph of the group with respect to the generating set S.
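
Building a Cayley graph is mechanical enough to sketch in code.  Here’s my own toy version (representations and names are mine, not from any particular source) for the symmetries of the square: each symmetry is a tuple recording where the corners 0, 1, 2, 3 go, r is the 90-degree rotation, f the flip, and we walk outward from the identity, drawing an edge from g to s \cdot g for each generator s.

```python
def compose(p, q):
    """Do q first, then p; a symmetry is a tuple sending corner i to p[i]."""
    return tuple(p[q[i]] for i in range(len(q)))

r = (1, 2, 3, 0)          # rotate the corners one step
f = (1, 0, 3, 2)          # a flip, swapping corners 0<->1 and 2<->3
identity = (0, 1, 2, 3)

# Walk outward from the identity, one generator at a time, recording edges.
elements = {identity}
edges = set()
frontier = [identity]
while frontier:
    g = frontier.pop()
    for s in (r, f):
        h = compose(s, g)
        edges.add(frozenset((g, h)))
        if h not in elements:
            elements.add(h)
            frontier.append(h)

assert len(elements) == 8   # the 8 symmetries of the square
```

The vertices found are exactly the 8 symmetries from the smiley-face picture, and the same loop works for any finite group once you pick a generating set, which is the whole point: the Cayley graph depends on the group *and* on S.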

A graph without any loops/cycles/ways to get from one vertex back to itself is called a tree.  Boom, we’ve got two vocabulary words from the title down!  In Part II, I’ll actually explain what Schreier’s theorem is and maybe give some idea as to its proof.
