
The fundamental theorem of geometric group theory (part I), topology

24 Sep

I love the phrase “THE fundamental theorem of…” It’s so over the top and hyperbolic, which is unlike most mathematical writing you’ll run into.  So you know that it’s important if you run into the fundamental theorem of anything.  By now we all have some background on geometric group theory: you’ll want to know what a group action is and what a quasi-isometry is.  (Refresher: a group G acts on a space X if each group element g gives a homeomorphism of the space X to itself, compatibly with the group operation.  A quasi-isometry between two spaces X and Y is a function f so that distances between points get stretched by at most a controlled scaling factor plus an additive error term.)  We say a group G is quasi-isometric to a space X if its Cayley graph is quasi-isometric to X.  Remember, a Cayley graph is a picture you can draw from a group if you know its generators. Still from Wikipedia: a Cayley graph of the symmetries of a square

There are several more terms we’ll want to know to understand the theorem, but I’ll just do one more before we start.  We say a group G acts on a space X by isometries if it acts on X, and each of those homeomorphisms is actually an isometry (it preserves distance).  So for instance, the integers acting on the real line by multiplication isn’t by isometries, because each map spreads the line out (the map of the reals to themselves given by 3 is $x \mapsto 3x$, which stretches out distances).  But if the action is defined by addition, then you’re okay: $x\mapsto x+3$ preserves distances. Under the red function, f(2)-f(1)=6-3=3, but 2-1=1, so this isn’t an isometry.
Under the green function, f(2)-f(1)=5-4=1, which is equal to 2-1. This is always true, so this is an isometry.
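To make the red/green distinction concrete, here’s a quick numerical check (my own sketch, not from the post) that addition by 3 preserves distances while multiplication by 3 stretches them:

```python
# Two candidate ways the integer 3 could act on the real line.
def times_three(x):   # the "red" map: stretches distances, not an isometry
    return 3 * x

def plus_three(x):    # the "green" map: an isometry
    return x + 3

def distance(x, y):
    return abs(x - y)

# Multiplication stretches the gap between 1 and 2 from 1 to 3...
print(distance(times_three(1), times_three(2)))  # 3
# ...while addition leaves the gap alone.
print(distance(plus_three(1), plus_three(2)))    # 1
```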

So here’s the fundamental theorem:

If a group G acts properly discontinuously, cocompactly, and by isometries on a proper metric space X, then G is quasi-isometric to X.

You can parse this so far as “If a group G acts by isometries on a space X with condition condition condition, then G is quasi-isometric to X.”  Putting aside the conditions for now, how would we prove such a theorem?  Well, to show something is quasi-isometric, you need to come up with a function so that the quasi-isometry condition holds: for all x,y in X, we need $\frac{1}{K} d_G(f(x),f(y))-C\leq d_X(x,y) \leq K d_G(f(x),f(y))+C$.

So let’s deal with those conditions!  An action is cocompact if there’s some compact subset S of X so that G-translates of S cover all of X.  Remember, each element g in G gives an isometry of X, so it’ll send S to some isometric copy of itself somewhere else in X.  In our example above, the integer 3 will send the compact subset [5,7] to the isometric copy [8,10].  In fact, our example action is cocompact: you can keep using [5,7] as your compact set, and notice that any point on the real line will eventually be covered by a translate of [5,7].  For instance, -434.32 is covered by [-435,-433], which is the image of [5,7] under the isometry given by -440. This action is also cocompact. Here I have the plane, conveniently cut up with an integer lattice. Can you figure out what the action is? Hint: the red square is a unit square, and the pink squares are supposed to be various translates of it.
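For the integers-acting-by-addition example, you can even compute the covering translate directly.  This little sketch (mine, not from the post) finds, for any point p on the line, an integer g whose translate [5,7]+g contains p:

```python
import math

# For the Z-action on R by addition, every point p lies in some translate
# [5, 7] + g of the compact set [5, 7].  Solving 5 + g <= p <= 7 + g gives
# p - 7 <= g <= p - 5, so one valid integer choice is g = floor(p - 5).
def covering_translate(p):
    g = math.floor(p - 5)
    assert 5 + g <= p <= 7 + g  # sanity check: p really is covered
    return g

print(covering_translate(-434.32))  # -440, matching the example above
```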

G acts on X properly discontinuously if for any two points x,y in X, they each have a neighborhood $U_x, U_y$ so that only finitely many g make $g.U_x\cap U_y\neq\emptyset$.  Let’s look at our example action again.  If I take the points 4365.234 and 564.54 in the real line, I’d like to find neighborhoods around them.  Let’s choose the intervals [4365,4366] and [564,565].  The only integers that make these hit each other are -3802, -3801, and -3800.  In particular, 3 is finite, so this indicates proper discontinuity.  If we actually wanted to prove the action is properly discontinuous, we’d want to show this is possible for all numbers, not just these two specific ones I chose. Schematic of proper discontinuity: only finitely many g will send the yellow oval to hit the blue blob
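You can brute-force this count (a sketch of mine, not from the post): list every integer g for which the translated interval meets the other one.  With closed intervals there are exactly three, since the translates can also just touch at an endpoint:

```python
# Count the integers g for which [a, a+1] + g meets [b, b+1].
# The closed intervals overlap iff a + g <= b + 1 and a + 1 + g >= b.
def overlapping_translates(a, b, search=10**4):
    return [g for g in range(-search, search + 1)
            if a + g <= b + 1 and a + 1 + g >= b]

print(overlapping_translates(4365, 564))  # [-3802, -3801, -3800]
```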

Finally, a metric space X is proper if all closed balls are compact.  Balls are intuitively defined: they’re all the points that are at a fixed distance or less from your center.  In the plane, balls are disks (filled-in circles) centered around points.  And compact… well, aw shucks, I haven’t defined compact and we’ve been using it!  Time for some topology.  We’ll prove this theorem next time around; this post is just definitions and background.  (Sorry for the cliffhanger, but it should be clear what we’re going to do next time: make a function, show it’s a quasi-isometry.)

Just like groups are the fundamental object of study in algebra, open sets are the fundamental object of study in topology.  You’re already familiar with one type of open set, the open interval (say, (6,98), which includes all numbers between 6 and 98 but doesn’t include 6 and 98).  I just described another above: balls.  So, open disks in the plane are open sets.  Sets are closed if their complement is open: that is, the rest of the space minus that set is open.  In the real line example, [6,74] is closed because $(-\infty,6)\cup(74,\infty)$ is open (it’s the union of infinitely many open sets, say (74,76) with (75,77) with (76,78) and on and on).

Notice that I haven’t yet defined what an open set is.  That’s because it’s a choice: you can have the same space with different topologies on it if you use different definitions of open sets.  I’ll point you to these Wikipedia articles for more examples on that.

A set is compact if every covering of it by open sets has a finite subcover.  That means that any time you write your set S as a union of open sets, you can choose finitely many of those and still be able to hit all the points of S.  From above, the set $(74,\infty)$ is not compact, because you can’t get rid of any of the sets in that infinite covering and still cover the set.  On the real line, a set is compact if it’s closed and bounded (this is the Heine-Borel theorem, a staple of real analysis).

So that’s enough for today.  More next time (like a proof!)  Also, I’m using my husband’s surface to blog this, which means I did all the pictures using my fingers.  It’s like finger painting.  What d’you think?  Better than usual pictures, or worse?

Quick post: research updates of friends

18 Aug

I noticed a few papers up on arXiv last week that correspond to some old posts, so I thought I’d make a quick note that these people are still doing math research and maybe you are curious about it!

We last saw Federica Fanoni and Hugo Parlier when they explored kissing numbers, and they gave an upper bound on the number of systoles (shortest closed curves) that a surface with cusps can have.  This time they give a lower bound on the number of curves that fill such a surface.  Remember, filling means that if you cut along all the curves, you end up with a pile of disks (and disks with holes in them).  So you can check out that paper here.

Last time we saw Bill Menasco, he was working with Joan Birman and Dan Margalit to show that efficient geodesics exist in the curve complex.  This new paper up on arXiv was actually cited in that previous paper: it explains the software that a bunch of now-grad students put together with Menasco when they were undergrads in Buffalo, NY (UB and Buffalo State) during this incredible-sounding undergrad research opportunity.  Looks like the grant is over, but how amazing was that: years of undergrads working for an entire year on real research, with a seminar and a semester of preparation, and then getting to TA a differential equations class at the end of your undergraduate career.  Wow.  I’m so impressed.  I got sidetracked: the software they made calculates distances in the curve complex, and the paper explains the math behind it and includes lots of pretty pictures.

My friend Jeremy did a guest post about baklava and torus knots a long time ago, and of course he’s got his own wildly popular blog.  He also has a bunch of publications up on arXiv, including one from this summer.  They’re all listed in computer science but have a bunch of (not-pure) math in them.

The paper I worked on over that summer at Tufts with Moon Duchin, her student Andrew Sánchez (note to self: I need a good looking website I should text Andrew), my old friend Matt Cordes, and graduate student superstar Turbo Ho is up on arXiv and has been submitted: it’s on random nilpotent quotients.

Moon and Andrew and others from that summer have another paper which has been accepted to a journal; it’s also about random groups and is here.  It was super cool: I saw a talk at MSRI during my graduate summer school there, John Mackay (also a coauthor on that paper) was in the audience, and this result came up organically during the talk.  Pretty great!

There’s another secret project from that summer which isn’t out yet, but I just checked two of the three co-authors’ webpages and they had three and four papers out in 2015 (!!!)  That’s so many papers!  So I don’t know when the secret project will be out, but I’ll post about it when it is.

A Linear Algebra Exercise

13 Aug

Back when I went to that excellent MSRI graduate summer school (video of lectures available at the link), one professor had us try a few seemingly-random exercises in order to motivate some definitions.  The exercises by themselves were pretty fun, so I thought I’d share them and maybe get to the definitions (which I have since used in my research).

Linear algebra is really hard, and it’s one of the first courses you take after calculus (if you take courses after calculus).  I’m often surprised by how much I use it and how applicable it is (I’m pretty sure my husband also uses it often).  A few months after I first saw an eigenvalue, I ran into an old friend who told me “I still don’t know what eigenvalues are useful for.”  Fast forward five or six years, and now I see eigenvalues EVERYWHERE!  My senior seminar in undergrad was basically a semester all about eigenvalues (spectral analysis).  Anyways, this is all to say I still don’t feel very proficient at linear algebra, but I am very appreciative of it.  I hope you feel that way about lots of math/baking too!

In these exercises, we’ll live in vector spaces.  We’ll do a “definition” by example here: Euclidean space, or real space, is an example of a vector space.  Instead of thinking of coordinates as points in space, we think of them as little arrows that point to a point and have a specific length/magnitude (the distance of the point to the origin). The purple point is (2,3). The orange vector has magnitude (square root of 13) and is pointing at the point (2,3).

Just as groups are the fundamental object of study in algebra, vector spaces are the fundamental object of study in linear algebra. A vector space is an abstract set (its elements are called vectors) with two operations (instead of the one that groups have): you can add vectors and you can scalar multiply them (which means make them longer or shorter).  There are a bunch of axioms that vector spaces have to satisfy, but you can just think of arrows in Euclidean space.

Next we need to understand the concept of an inner product.  This is an operation where you put in two vectors and get out a number (in our case a real number, but generally complex, or from any “field”: in quotes because I haven’t ever defined what a field is).  In our example, if you have one vector that points to (2,3) and another that points to (9,11), you could do 2*9 + 3*11 = 18 + 33 = 51 and get 51 as the inner product of the two.  This is written $\langle (2,3), (9,11) \rangle =51$.  So we multiply all the matching coordinates, and then we add up all those products.
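The multiply-matching-coordinates-and-add recipe is short enough to write down directly (my own sketch, not from the post):

```python
# The standard (Euclidean) inner product: multiply matching
# coordinates and add up the products.
def inner(a, b):
    assert len(a) == len(b)  # vectors must live in the same space
    return sum(x * y for x, y in zip(a, b))

print(inner((2, 3), (9, 11)))  # 51, as in the worked example
```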

There’s a few axioms that inner products satisfy:

1. Conjugate symmetry: $\langle b, a \rangle = \bar{\langle a, b \rangle}$  In our case, since we’re in the reals and not the complex numbers, we can ignore the bar and read this as symmetry.  From our definition this is true.
2. Positive-definiteness: $\langle a, a \rangle \geq 0$ for any a, and equals 0 only if a is 0.  In our case, $\langle a, a \rangle$ will be a sum of squares, which is only 0 if every single coordinate is 0.
3. Linearity in the first argument: $\langle ax+c, b\rangle = x\langle a, b \rangle + \langle c, b \rangle$.  You can/should check this is true for our example.

From our definition of inner product over Euclidean space, we can come up with a formula for the magnitude of a vector (remember, that’s the distance from the origin to the point that we’re pointing at).   We have $\langle (a_1, a_2, \ldots, a_n), (a_1,\ldots, a_n) \rangle = a_1^2+a_2^2+\ldots+a_n^2$.  By the Pythagorean theorem, we already know the distance of a point to the origin: it’s the square root of this quantity!  So we have that the magnitude of a vector is the square root of its inner product with itself.  In symbols, we write $\| a \| = \sqrt{\langle a, a \rangle}$.  Any time you have an inner product, you can define a norm $\| \cdot \|$ in this way.

Here’s the exercise: let’s say you have a bunch of unit vectors (vectors with magnitude 1) and you take all of the pairwise inner products.  So if you have four unit vectors a, b, c, d, you’ll have all these inner products, which we’ll arrange in a matrix: $\left( \begin{array}{cccc} \langle a, a \rangle & \langle a, b \rangle & \langle a, c \rangle & \langle a, d \rangle \\ \langle b, a \rangle & \langle b, b\rangle & \langle b, c \rangle & \langle b, d \rangle \\ \langle c, a \rangle & \langle c, b\rangle & \langle c, c \rangle & \langle c, d \rangle \\ \langle d, a \rangle & \langle d, b\rangle & \langle d, c \rangle & \langle d, d \rangle \end{array} \right)$
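Building that matrix of pairwise inner products is mechanical, so here’s a small sketch (mine, not from the post) with two unit vectors in the plane; the 60-degree angle is an arbitrary choice for illustration:

```python
import math

def inner(a, b):
    # standard Euclidean inner product
    return sum(x * y for x, y in zip(a, b))

def gram(vectors):
    # the matrix of all pairwise inner products <v_i, v_j>
    return [[inner(v, w) for w in vectors] for v in vectors]

# Two unit vectors in the plane, 60 degrees apart.
u = (1.0, 0.0)
v = (math.cos(math.pi / 3), math.sin(math.pi / 3))
A = gram([u, v])
# Diagonal entries are 1 (unit vectors); off-diagonal entries are
# cos(60 degrees) = 0.5, and the matrix is symmetric.
print(A)
```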

That’s all fine and dandy and well and reasonable.  Now here’s the exercise: if I give you a matrix A, can you tell me whether A can be made out of some unit vectors in this way, with some inner product?

Notice that in the question, we didn’t say a particular inner product (we’re using one example, remember?).  This just asks: if you have a matrix, can you find an inner product and some unit vectors that make it?  To take a few stabs at this problem, we’ll have to use all those axioms of inner products.  For starters, A must be symmetric (that means the entry in the 2nd row, 3rd column is equal to the entry in the 3rd row, 2nd column, etc.) by property 1.

Property 2 and our discussion above say that all the diagonal entries have to be 1 (because we started with unit vectors).

Those first two conditions are necessary for A to even be a candidate, but just because A satisfies those conditions doesn’t mean that there’s a collection of unit vectors that form it.  (These are “necessary” but not “sufficient” conditions.)

The actual answer is that A must be non-negative definite: this means that if A is n rows by n columns and you pick any collection of complex numbers $c_1,\ldots,c_n$, the sum $\sum_{i,j} a_{ij}c_i\bar{c_j}$ is greater than or equal to zero (it’s not negative).  Parsing the sum expression: take each entry of A and multiply it by the complex number corresponding to its row and by the conjugate of the complex number corresponding to its column.  Add up all of these products.  If this number is non-negative for every choice of the $c_i$, then yes, your matrix can arise from a collection of unit vectors.
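That quadratic-form sum is easy to evaluate directly.  Here’s a sketch of mine (not from the post) that computes it, and exhibits a symmetric matrix with 1s on the diagonal that still fails the condition:

```python
def quad_form(A, c):
    # sum over i, j of A[i][j] * c[i] * conjugate(c[j])
    n = len(A)
    return sum(A[i][j] * c[i] * c[j].conjugate()
               for i in range(n) for j in range(n))

# Gram matrix of two identical unit vectors: the form stays >= 0.
assert quad_form([[1, 1], [1, 1]], (1 + 0j, -1 + 0j)).real >= 0

# Symmetric, with 1s on the diagonal, but NOT non-negative definite:
# with c = (1, -1) the sum is 1 - 2 - 2 + 1 = -2 < 0, so no collection
# of unit vectors can produce this matrix.
print(quad_form([[1, 2], [2, 1]], (1 + 0j, -1 + 0j)).real)  # -2.0
```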

Done! Just kidding!  Did you think I’d just throw an answer at you and not explain it?  That’s not how we do! As always, the great question in math is why? Just getting a random condition thrown at you shouldn’t feel satisfying (well, it might, but too bad I’m taking away your satisfaction).

Let’s find out why the condition is necessary:

The magnitude of any vector is non-negative, so if you take any combination of your unit vectors, the norm of that combination squared is at least zero.  Even if you throw random complex scalar multiples onto those vectors, the norm squared will still be non-negative.  So in equations, we have $0 \leq \| \sum_{i=1}^n c_iv_i \|^2 = \langle \sum c_iv_i, \sum c_jv_j \rangle = \sum \langle v_i,v_j \rangle c_i \bar{c_j} = \sum a_{ij} c_i \bar{c_j}$

So we get the condition is necessary (didn’t even use the unit-ness of the vectors here).

How about sufficiency?  Well, we have to come up with an inner product and some unit vectors.  So take some vectors $u_i$; any vector in their span can be written $\sum t_i u_i$.  We’ll define an inner product on this space by $\langle \sum t_iu_i, \sum t_ju_j \rangle_A = \sum a_{ij} t_i \bar{t_j}$.  Go check that this inner product satisfies the three axioms above.  Great!  It’s an inner product (this relies on the fact that A is non-negative definite).  By definition, $\langle u_i, u_j \rangle_A = a_{ij}$ (take coefficient vectors with a single 1 in the i-th or j-th slot and zeros elsewhere), and the diagonal entries $a_{ii}=1$ say exactly that the $u_i$ are unit vectors.  So A is formed from the inner products of the $u_i$s.  That’s it! Congratulations!  That was a hard exercise session!  We did it!
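As a 2×2 sanity check of sufficiency (my own illustration, not from the exercise), take a symmetric matrix with unit diagonal and off-diagonal entry r with |r| ≤ 1, which makes it non-negative definite; we can exhibit honest unit vectors whose standard inner products reproduce it:

```python
import math

# Candidate matrix A = [[1, r], [r, 1]] with |r| <= 1.
r = 0.5
u1 = (1.0, 0.0)
u2 = (r, math.sqrt(1 - r * r))  # unit vector at angle arccos(r) from u1

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

assert abs(inner(u1, u1) - 1) < 1e-12   # unit vector
assert abs(inner(u2, u2) - 1) < 1e-12   # unit vector: r^2 + (1 - r^2) = 1
assert abs(inner(u1, u2) - r) < 1e-12   # off-diagonal entry recovered
print("A = [[1, r], [r, 1]] realized by unit vectors")
```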

30 Jul

This talk happened in March and I still remember it (and I was super sleep deprived at the time too).  Immediately after the talk, another grad student and I were chatting in the hallway and marveling at how good it was.  He said something like “I feel like a better person for having gone to that talk.”

A few days later, I ran into the speaker and told her that I had loved her talk, and she said “I’m super unintimidating so feel free to email me or ask me if you have any questions.”

During the talk, at one point she said (again, up to sleep-deprived memory coarseness)

“It’s more important that you learn something than that I get through my talk.  There’s no point in rushing through the material if you don’t take something away from this.”

All of these quotes are to say that this was probably the best talk I’ve seen (and I’ve seen lots of talks).  Particularly because of that last quote above.  Speaker put audience before ego, and that is a rare and beautiful thing (the other contender for best talk I’ve ever seen was by someone who recently won a big award for giving incredible talks).

The good news is that this is something anyone can do – mathematics at this level is a matter of practice and good habits, and not “talent” or “genius”.

OK, done fangirling!  On to the math!

We’ll be talking about a property of groups, so brush up from a previous blog post or wikipedia.  First we need to define a (total) ordering on a group: a binary relation ≤ that satisfies three properties (which you’d expect an ordering to satisfy):

1. Transitivity: if a≤b and b≤c, then a≤c
2. Totality: for any a, b in the group, a≤b and/or b≤a
3. Antisymmetry: if a≤b and b≤a, then a=b.

A few examples and nonexamples:

• The usual ≤ (less than or equal to) on the real numbers is an ordering.  For the rest of this post, I’ll freely switch between using ≤ for the order on the group and for the usual order on the real numbers (it should be clear from context when we’re talking about real numbers).
• Comparing heights of people is not an ordering: it’s not antisymmetric (see picture)
• Ordering words in the dictionary is an ordering: if you’re both before/at the same place and after/at the same place as me, then we must be the same word.
• Consider the group $\mathbb{Z}_2\times\mathbb{Z}_2$, which you can think of as a collection of ordered pairs $\{ (0,0), (0,1), (1,0), (1,1)\}$.  If we define an ordering by (x,y)≤(a,b) if x+y≤a+b, then we’d break antisymmetry.  If we defined it by (x,y)≤(a,b) if x<a and y≤b, then we’d break totality (couldn’t compare (0,1) and (1,0)). Top: reals are good to go. Middle: just because we’re the same height doesn’t mean we’re the same person! Bottom: (0,1) and (1,0) don’t know what to do

• Can you come up with a relation that breaks transitivity but follows totality and antisymmetry?
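The examples above can be checked by brute force over the four elements of $\mathbb{Z}_2\times\mathbb{Z}_2$.  Here’s a sketch of mine (not from the post): the compare-coordinate-sums relation breaks antisymmetry, while dictionary (lexicographic) order passes totality and antisymmetry:

```python
from itertools import product

elements = list(product((0, 1), repeat=2))  # Z_2 x Z_2 as ordered pairs

def leq_sum(p, q):
    # the "compare coordinate sums" relation from the example above
    return p[0] + p[1] <= q[0] + q[1]

# Antisymmetry fails: (0,1) <= (1,0) and (1,0) <= (0,1), but they differ.
assert leq_sum((0, 1), (1, 0)) and leq_sum((1, 0), (0, 1))

def leq_lex(p, q):
    # dictionary (lexicographic) order: a genuine total order on pairs
    return p <= q

# Totality: every pair of elements is comparable.
assert all(leq_lex(p, q) or leq_lex(q, p) for p in elements for q in elements)
# Antisymmetry: mutual <= forces equality.
assert all(not (leq_lex(p, q) and leq_lex(q, p)) or p == q
           for p in elements for q in elements)
print("lexicographic order passes totality and antisymmetry")
```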

Notational bit: we say that a<b if a≤b and a does not equal b.

Now we say a group is left orderable if it has a total order which is invariant under left multiplication: this means that a<b implies ga<gb for every g in the group.

Let’s go back to the reals.  If you use multiplication (like 3*2=6) as the group operation, then the usual ordering is not a left(-invariant) order: 2<3, but if you multiply both sides by -2 you get -4<-6, which isn’t true.  However, if you use addition (like 3+2=5) as the group operation, then you see that the reals are left orderable: 2<3 implies 2+x<3+x for every real x.  This is a good example of the fact that a group is a set and a binary operation.  It doesn’t make sense to say the real numbers are left orderable; you need to include what the group operation is.
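A quick numerical version of that contrast (my own sketch, not from the post):

```python
# Left-invariance check on the reals: under addition, a < b implies
# g + a < g + b for every g; under multiplication, g * a < g * b
# fails whenever g is negative.
a, b = 2, 3
assert all(g + a < g + b for g in (-2, 0, 7))  # addition: order preserved
assert not (-2 * a < -2 * b)                   # multiplication: -4 > -6
print("addition is left-invariant for <; multiplication is not")
```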

Here’s an interesting example of a left orderable group: the group of (orientation-preserving) homeomorphisms on the real numbers. (Orientation preserving means that a<b implies f(a)<f(b), all in the reals sense.)  If you don’t feel like clicking the link to prev. post, just think of functions.  To prove that the group is left orderable, we just have to come up with a left-invariant order.  Suppose you have two homeomorphisms g and f defined on the reals.  If g(0)<f(0), then say g<f.  If f(0)<g(0), then say f<g.  If g(0)=f(0), then don’t define your order yet.  If g(1)<f(1), then say g<f.  And so on, using 2, 3, 4…  Looks like a good left order, right?  WRONG!

If g and f agree on all the integers, they could still be different functions.  So we haven’t defined our order.  We need a better left order.  What can we do?  I know, let’s use a fact!

FACT: the rationals (numbers that can be written as fractions) are countable and dense (roughly, wherever you look in the reals, you’re either looking at a rational or there are a bunch in your peripheral vision).

So now we do the same thing, but using the rationals.  Enumerate them (remember, they’re countable) so use $q_1,q_2,q_3\ldots$ in place of 1,2,3… above.  It’s another fact that if g and f agree on all rationals, then they’re equal to each other.  Let’s make sure we have an ordering:

1. Transitivity: If f≤g and g≤h, then there are rationals in our enumeration where each pair first disagrees (say $q_2$ for the pair f,g and $q_3$ for the pair g,h), with $f(q_2)<g(q_2)$ and $g(q_3)<h(q_3)$.  If we had to go further along the enumeration to compare g and h, that means $g(q_2)=h(q_2)$.  So $f(q_2)<h(q_2)$, and f and h agree at every earlier rational, so f≤h.  (The case where $q_3$ comes first in the enumeration is symmetric.)
2. Totality: if I have two different homeomorphisms, then there has to be a rational somewhere where they don’t agree, by the second fact.
3. Antisymmetry: We sidestepped this by defining < instead of ≤.  But it works.
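The comparison procedure can be sketched in code (my own illustration, not from the talk), using a tiny finite prefix of an enumeration of the rationals; for genuinely different orientation-preserving homeomorphisms, we scan until the first disagreement:

```python
from fractions import Fraction

# A few terms of an enumeration of the rationals.  (A real enumeration
# goes on forever; this prefix happens to be enough for these two maps.)
rationals = [Fraction(0), Fraction(1), Fraction(-1), Fraction(1, 2)]

def less_than(f, g):
    # Compare f and g at q_1, q_2, ... until they first disagree.
    for q in rationals:
        if f(q) != g(q):
            return f(q) < g(q)
    return False  # agree on all listed rationals: not strictly less

f = lambda x: x          # identity: an orientation-preserving homeo of R
g = lambda x: x ** 3     # cubing: also an orientation-preserving homeo

# f and g agree at 0, 1, and -1; they first differ at 1/2,
# where (1/2)^3 = 1/8 < 1/2, so g < f in this order.
print(less_than(g, f))  # True
```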

Here’s a “classical” THEOREM: If G is a countable group, then it is left orderable if and only if it has an injective homomorphism to $\text{Homeo}_+(\mathbb{R})$.

Remember, injective means that each output matches to exactly one input.  Since we showed that there’s a left order on the group of orientation-preserving homeomorphisms on the reals, we’ve already proven one direction: if you have an injection, then take your order on G from the order on the homeomorphisms that you inject into.  So if h is your injection and g, k are your group elements, say that g<k if h(g)<h(k) in $\text{Homeo}_+(\mathbb{R})$.

One thing Mann does in her paper is come up with an example of an uncountable group that doesn’t do what the theorem says (she also does other stuff).  Pretty cool, huh?  Remarkably, the paper seems pretty self-contained.  If you can read this blog, you could probably do good work getting through that paper (with lots of time), which is more than I can say for most papers (which require lots of background knowledge).

That brings me to the “also”: I’ve been quite tickled to be asked about applying to grad school/what grad school entails a handful of times, some of those times by people who found me via this blog.  So please email me if you’re interested in whatever I have to say on the subject!  I’ve applied to grad school twice and have friends in many different departments and areas.  I hear I can be helpful.

Efficient geodesics in the curve complex

15 Jul

I have a not-secret love affair with blogging the curve complex: I (intro), II (dead ends), III (connected).  I’m surprised I didn’t blog the surprising and cute and wonderful proof that the curve complex is hyperbolic, which came out two years ago.  Maybe I’ll do that next math post (but I have a large backlog of math I want to blog).  Anyways, I was idly scrolling through arXiv (where mathematicians put their papers before they’re published) and saw a new paper by the two who did the dead ends paper, plus a new co-author.  So I thought I’d tell you about it!

If you don’t remember or know what the curve complex is, you’d better check out that blog post I (intro) above (it is also here in case you didn’t want to reread the last paragraph).  Remember that we look at curves (loops) up to homotopy, or wriggling.  In this post we’ll also talk about arcs, which have two different endpoints (so they’re lines instead of loops), still defined up to homotopy.

The main thing we’ll be looking at in this post are geodesics, which are the shortest paths between two points in a space.  There might be more than one geodesic between two points, like in the taxicab metric.  In fact, in the curve complex there are infinitely many geodesics between any two points. It’s easy to get metrics messed up, but the taxicab metric is pretty straightforward: there are lots of geodesics between the red star and the starting point.  I guess if you’re an alien crossed with a UFO crossed with a taxi then maybe the metric is difficult (but I totally nailed portraits of UFO-taxi-aliens)
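In the taxicab picture you can even count the geodesics.  Modeling them as lattice paths that only step right or up (a discrete sketch of mine, not from the post), the number of shortest paths from (0,0) to (m,n) is a binomial coefficient:

```python
import math

# In the taxicab metric, a geodesic from (0,0) to (m,n) (with m, n >= 0)
# is a lattice path using m right-steps and n up-steps in some order,
# so the number of distinct geodesics is C(m+n, m).
def taxicab_geodesics(m, n):
    return math.comb(m + n, m)

print(taxicab_geodesics(2, 3))  # 10 distinct shortest paths
```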

Infinity is sort of a lot, so we’ll be considering specific types of geodesics instead.  First we need a little bit more vocabulary.  Let’s say I give you an arc and a simple (doesn’t self intersect) closed curve (loop) in a surface, and you wriggle them around up to homotopy.  If you give me a drawing of the two of them, I’ll tell you that they’re in minimal position if the drawings you give me intersect the least number of times of all such drawings. All three toruses have the same red and green homotopy classes of curves, but only the top right is in minimal position – you can homotope the red curve in the other two pictures to decrease the number of times red and green intersect.  I just couldn’t make a picture w/out a cute blushing square.

If you have three curves a, b, c all in minimal position with each other, then a reference arc for a,b,c is an arc which is in minimal position with b, and whose interior is disjoint from both a and c. Green is a reference arc for red, orange, yellow: its interior doesn’t hit red or yellow, and it intersects orange once.  Notice that it starts and ends in different points, unlike the loops.  (This picture is on a torus) [Also red and yellow aren’t actually in minimal position; why not?]

Now if you give me a series of curves on a surface, I can hop over to the curve complex of that surface and see that series as a path.  If the path $v_0,v_1,\ldots,v_n$ is a geodesic, then we say it is initially efficient if any choice of reference arc for $v_0,v_1,v_n$ intersects $v_1$ at most n-1 times.  The geodesic $v_0,v_1,\ldots,v_n$ is an efficient geodesic if all of these geodesics are initially efficient: $(v_0,\ldots, v_n), (v_1,\ldots,v_n),\ldots,(v_{n-3},\ldots,v_n)$.

In this paper, Birman, Margalit, and Menasco prove that efficient geodesics always exist if $v_0,v_n$ have distance at least three.  Note that there are a bunch of choices for reference arcs, even in the picture above, and at first glance that “bunch” looks like “infinitely many,” which sort of puts us back where we started (infinity is a lot).  Turns out that there are only finitely many reference arcs we have to consider as long as $d(v_0,v_n)\geq 3$.  Remember, if you’ve got two curves that are distance at least three from each other, they have to fill the surface: that means if you cut along both of them, you’ll end up with a big pile of topological disks.  In this case, they take this pile and make them actual polygons with straight sides labeled by the cut curves.  A bit more topology shows that you only end up with finitely many reference arcs that matter (essentially, there are only finitely many interesting polygons, and then there are only so many ways to draw lines across a polygon).

So the main theorem of the paper is that efficient geodesics exist.  The reason we’d care about them is the second part of the theorem: there are at most $n^{6g-6}$ curves that can appear as the first vertex in such a geodesic, which means that there are finitely many efficient geodesics between any two vertices where they exist.

I DID NOT MAKE THIS PICTURE IT IS FROM BIRMAN, MARGALIT, MENASCO. But look at how cool it is!!! Look at this picture!
The red curve and blue curve are both vertices in the curve complex, and they have distance 4 in the curve complex, and here they are on a surface! So pretty! If you feel like wikipedia-ing, check out one of the authors on this paper. Birman got her Ph.D. when she was 41 and is still active today (she’s 88 and a badass and I want to be as cool as she is when I grow up).

Universal acylindrical actions

25 Jun

I’m at a fantastic summer graduate school at MSRI (the Mathematical Sciences Research Institute, a.k.a. “math heaven”) right now and re-met a friend I’d seen at a few earlier conferences. I saw that she’d posted a preprint up on arXiv recently, so I thought I’d try to blog about it!

Remember that a group is a collection of elements paired with some kind of operation between them (the integers with addition, rational numbers with multiplication, symmetries of a square with composition). For that operation, you put in two group elements and get another group element out.

You can imagine different functions with different inputs and outputs. Like you might have a function where you put in Yen and late night, and it outputs pumpkin. Or you could put one group element in, and a location, and get a different location [like if you put in the group element -2 to the location (3,3), maybe you get (1,1)]. More precisely, a group action on a space is a homomorphism* which takes in a group element and a point in the space and outputs a (possibly different) point on that space. For instance, you can give an action of the integers on the circle by saying that n rotates the circle by $n\pi/2$. Each integer rotates the circle by pi/2 times the integer. Looks like circle is getting a little sick of the action…

In the picture above, if you input the integer 2 and the original purple dot, you get the new location of the dot (180 degrees from its original location, aka pi away). If you say the original purple dot is location x and the new location is y, the notation is that 2.x=y.
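As a quick numerical sketch of this rotation action (my own, not from the post), record points of the circle as angles mod 2π:

```python
import math

# The Z-action on the circle where the integer n rotates by n * pi/2.
# Points of the circle are recorded as angles in [0, 2*pi).
def act(n, theta):
    return (theta + n * math.pi / 2) % (2 * math.pi)

# The integer 2 moves a point halfway around: 180 degrees, i.e. pi.
assert abs(act(2, 0.0) - math.pi) < 1e-12
# The integer 4 is a full turn, so every point comes back to itself.
print(act(2, 0.0))
```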
*A homomorphism is a function that respects the group operation: f(xy)=f(x)f(y).

We say a space is hyperbolic if it locally “looks like” hyperbolic space (there’s a particularly nice function between it and hyperbolic space). The title of Carolyn’s paper is “Not all acylindrically hyperbolic groups have universal acylindrical actions,” so we need to learn what “acylindrical” means (look, we’ve already learned what groups and actions are, and we know the words “not”, “all”, and “have”! We’re doing great!)

Here’s the precise definition, and then I’ll break it down:

An action of a group on a hyperbolic space is called acylindrical if, for any $\epsilon >0$, there exist numbers M,N>0 such that for every pair of points x,y with distance d(x,y)>M, the number of group elements that translate both x,y by less than epsilon is bounded by N: $|\{g: d(x,g.x)\leq \epsilon, d(y,g.y)\leq \epsilon\}| \leq N$.

Here’s the non math-y intuition for this: if you have a pool noodle and you spin one end around, the other one generally will fly away from where it used to be.

Here’s the math-y intuition for this: choose two points that are M-far apart. Make a little $\epsilon$-circle around each, then connect the two with a cylinder. The condition says that only a few group elements preserve the cylinder (that means that when g acts on all the points in the cylinder, it maps them back into other points in the cylinder). So if you have a bunch (perhaps infinitely many) of elements that preserve one circle, most of them send the other circle/rest of the cylinder away.

A group is called acylindrically hyperbolic if you can find a hyperbolic space on which the group acts acylindrically. In practice, such groups actually act on a whole bunch of different spaces acylindrically. Now suppose that you’ve got an element g in G and you want to see how that particular element acts.
We say g is loxodromic if you can find a space X and a point s in it so that the map $\mathbb{Z}\to X$ that sends an integer to the orbit of the point, $n\mapsto g^n.s$, is a quasi-isometry– roughly, if you draw all the points that s gets mapped to if you apply g over and over again, you get something that looks like a line.

The older tree is the same as the younger tree up to scaling (multiplication) and adding some constants (the leaves). This is an example of a quasi-isomeTREE. [Also pretend both trees go on forever.]

Just for fun, here’s a picture of something that’s not a quasi-isometry:

The ribbon on the right goes on forever in both directions, so it’s not quasi-isometric to the tree

You might’ve noticed above that we say an element is loxodromic if we can find some space on which it acts in this particular way. But we also said that a group can act on several different spaces. So even if an element acts loxodromically on one space, that doesn’t necessarily mean it acts loxodromically on another space (even if the group acts on that other space). We actually call an element generalized loxodromic if there exists some space on which it acts loxodromically. Then if you can find a single acylindrical action in which all generalized loxodromic elements are, in fact, loxodromic, you’ve found a universal acylindrical action. So this paper gives an example of an acylindrically hyperbolic group that doesn’t have such an action.

Blog notes: For the summer I’m going to blog every Thursday (day was chosen arbitrarily). Also, I went back and tagged all the gluten-free recipes as gluten-free. And you should know that whenever I mention a person in this blog by name or link to them, that means that I admire them/am inspired by them.

Cool earrings and the Pythagorean Theorem

4 Jun

I’m so bad at suspense/surprises. I wanted to write this post and say LOOK AT THE COOL THING AT THE END SURPRISE but instead I put the cool thing in the title.
In any case, last weekend I went to my college reunion and saw a dear friend who is doing an incredibly cool summer project about teaching and very thoughtfully teaches math to high school students during the school year. She was wearing an amazing pair of earrings, which are the topic of this post.

https://instagram.com/p/3Ug9CvpM06/

Let’s look at them separately and then together. First, the top, with the square embedded in the larger square. We’re going to use some variables (which we talked about in this old post). Notice that the triangles on the four sides are all identical right triangles. Let’s label the sides of them: a for the short side, b for the longer leg, and c for the hypotenuse.

I LOVE visual proofs. Personally I find them much more convincing than lists of equations with no pictures. Spoiler alert: that’s what we’re doing right now (visual proof, not a list of equations)!

Since the inner gray square is a square, and all of its sides are labeled by c, the area of the inner gray square must be $c^2$.

Once, during a calculus test, a student asked me for the formula for the area of a triangle. I got mildly upset and said to think about it for a few minutes/come back to the question. Eventually he got it, but I think it’s because another TA told him the formula. Anyways, the area of the blue triangle is half of the base times the height, so it’s $\frac{1}{2}ab$. With four blue triangles, we have a total of 2ab area from the triangles. So the total area of the larger square is $c^2+2ab$.

But I can also look at the outside square. Its sides are made up of one short blue leg and one long blue leg, so the large square side length is a+b, which means the outside square area is $(a+b)^2$. Then the first earring gives me the equation $c^2+2ab=(a+b)^2$.

Now let’s look at the second earring. I’m going to use the same variables since they’re the same triangles. Here, all four pink triangles are identical.
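Before working through the second earring, the first earring’s bookkeeping can be double-checked numerically (a quick sketch of my own, using a 3-4-5 right triangle and the labels above):

```python
# Sides of a right triangle: legs a and b, hypotenuse c (3-4-5 here).
a, b, c = 3, 4, 5

# First earring: inner square of area c^2 plus four blue triangles
# of area (1/2)ab each...
inner = c**2 + 4 * (a * b / 2)
# ...should match the big square of side a + b.
outer = (a + b) ** 2

assert inner == outer  # 25 + 24 == 49
print(inner, outer)
```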
The orange square has a for all of its sides, and the green one has b, so we know their areas: $a^2$ and $b^2$. We also still know the area of the pink triangles. If, like that calculus student, you happened to forget the formula for the area of a triangle, you can see it here: two triangles together form a rectangle with base a and height b, so the area of the rectangle is ab. As the rectangle was made up of two identical triangles, you can see the area of each individual triangle is $\frac{1}{2}ab$.

Again, the length of the side of the larger square is a+b, so the larger square’s area is $(a+b)^2$. Summing up the orange square, green square, and four pink triangles gives another expression for the area of the larger square: $a^2+b^2+2ab=(a+b)^2$.

So the earrings are both pretty cool separately- we got to prove that binomial expansion works in the second earring, and we got to play around with this technique with the first earring. What if we put them together? Both of the earrings have the same large area, which we decided was $(a+b)^2$. So we can set the other sides of the equations equal to each other: $c^2+2ab=a^2+b^2+2ab$, subtract 2ab from both sides, and…

WHAT IT’S THE PYTHAGOREAN THEOREM WHERE DID THAT COME FROM THIS IS SO COOL!

Just look at how happy the squares are. That’s how happy I felt when I saw Shira’s earrings, and also how happy I hope you feel after seeing them too.

Side note, I need to bake something. Unfortunately those lime pies from a few months ago were so delicious that every time I feel like baking, I make those pies. Another dear friend who does not do math asked me to try making pretzel salad, so that’s on the agenda, but my in-laws just visited and left a box of graham crackers. Yes, we could feed them to the baby, but they’re just sugar and honey (which you shouldn’t feed to babies), so I’ll clearly make a graham cracker crust and might accidentally make another lime pie unless inspiration strikes. We’ll see!

What is the fundamental group?

29 May

I’m at Yale for my fifth reunion, and it’s my birthday! Happy birthday to me!
I’m a little overwhelmed by seeing all the old friends/catching up/showing off pictures of the baby, so I’m hiding in my suite and pumping milk and writing a quick post about the fundamental group.

We’ve talked before about what a group is– a set of elements with some operation that takes two elements to another one (like addition with the group of integers takes 5+3 to 8, or multiplication takes 1*9 to 9) which satisfies some group axioms. Given a geometric or topological object, we can associate a group with it by defining these elements and an operation, and making sure that they satisfy the axioms.

First we fix a basepoint of our space, which means that you pick a point and say that’s the one, that’s the special one I want. Then our group elements will be homotopy classes of loops (this means you can wiggle loops to be the same, as in the red ones below) that go through the basepoint. The group operation is concatenation– first you do one loop starting and ending at the basepoint, then you do the next loop starting and ending at the basepoint. You can homotope away the middle connection to more clearly see the resulting loop. Here’s the example:

Red and green make… more red. I didn’t want to make a brown curve

I don’t want to FOMO my reunion (I already ran away to take a nap) so we’ll make this super fast and just look at the fundamental group of the circle and of the torus.

How can I make a loop around a circle? Well, there’s one obvious way- make one full circle and end up where you started. You can homotope to something that backtracks for a bit and then comes forward again, and you could go around in two different directions (counterclockwise vs. clockwise). So let’s call these +1 and -1.

Red goes around once counterclockwise, even though it backtracks a bit, and orange goes around once clockwise. Imagine the colors down on the black circle.
If you put the red and orange curves together, concatenating like we did above, you’d fully backtrack over yourself, which means you could homotope to just a point. Let’s call that 0 (can you guess where we’re going here?) You can go around any integer number of times, but no fractions, because you won’t end up back at the basepoint. So this is a rough schematic of why the fundamental group of the circle is the integers.

I want to go back to my reunion, so I’ll just tell you that the fundamental group of the torus is $\mathbb{Z}^2$, a.k.a. ordered pairs of integers, as I hinted in a previous post using the following picture.

Left: follow the numbers to see the knot. Right: look at the bottom-most green line.

Sorry for the short and delayed post. It’s my birthday, YOLO. (I’m not that sorry about this post being delayed, but I am sorry if it is unclear/too short- please comment/let me know if you need more explanation).

Prees, prees, pretty prees

13 May

Last weekend I had a wonderful time at the Cornell Topology Festival- I went because my internet and now real life friend tweeted about it! Good things can come out of the internet! Another very talented friend made an icosahedron out of balloons (following a template by Vi Hart, so you can do it too!) and now I have a picture of me holding it:

Anyways, apologies for the delay in posting. I’ll try to double up this week to make up for it. There were a bunch of great talks at the conference, and here’s a post about one of them that really intrigued me, “Universal Groups of Prees” by Bob Gilman. I thought “prees” was a typo and meant to say “trees”, but nope, the word is “pree.”

Hopefully you remember or know what a group is. A pree is something on the way to being a group, but not quite: it’s a set with partial multiplication, identity, inverses, and the associative law when defined. So in a group, whenever you have two elements a and b, closure ensures that the product ab exists and is an element of the group.
In a pree, it’s not necessarily the case that ab exists. But if it does, and bc exists and a(bc) exists, then (ab)c must also exist, and equal a(bc). [If it’s hard to think of something non-associative, check out this wikipedia article on the cross product and use your right hand and a friend’s right hand]. We can show this visually:

Start from the top vertex. If you follow the a arrow right and the b arrow up, you’ll end up at the bottom left vertex. This tells us that we should label the edge from the bottom left vertex to the top by ab. This part of the triangle just shows that a*b=ab, that is, the product of a and b is defined in our pree.

Next we’ll add a triangle for b*c:

Notice that the arrow for b is the same direction as in the previous picture; I just took the arrows out of the big triangle for cleanness

Now we add the blue triangle in to the big triangle. There are two different ways to read the last face, and that fact means that those two expressions better be equal. This is associativity.

Using the big triangle labels, we get a(bc). Using the small triangle, we have (ab)c. These are the same edge, so they must be equal

If you’ve done any group theory you might have the same reaction I did: “THIS IS SO WEIRD TELL ME MORE I WANT TO KNOW!” This combinatorial pictorial thing is reminiscent of vector multiplication, but with group elements, and it intrigues me no end. For short, they say this is an axiom of associativity:

The official axiom probably lacks smiley eyes, but that’s clearly an oversight

So you can think of prees as partial multiplication tables, and they determine a graph. But I tagged this post “group theory”, and prees do relate to groups. In fact, it’s a theorem that any finitely presented group (see here for a reminder of the definition) is a universal group of a finite pree. This group is defined as having generators equal to the elements of the pree, and relators equal to the products of the pree. Here’s the first example.
If your pree (which is a set with partial multiplication) consists of two groups K and L which share a subgroup A, then the universal group of that pree is $K \ast_A L$, the free product of K and L amalgamated over A. If you don’t know what that means, don’t worry about it. We’ll do amalgamated products some other time and I’ll add a link here for that. You can also do this with letting your pree be a graph of groups, and get the correct corresponding group.

Something wacky! This isn’t an open problem, but an undecidable problem: whether a finite pree embeds in its universal group (this means that there’s a function sending the elements of the pree into the group which respects the pree multiplication and doesn’t send two different pree elements to the same group element). So even if you might be able to tell, given a specific pree, whether it embeds in its universal group, there’s no algorithm that works for all finite prees.

Here’s one of the main theorems of the talk: if, every time you have a collection of elements that can be put together to form a triangulated rectangle or pentagon, as in the picture below, one of the orange lines exists, then the resulting universal group is biautomatic.

At least one diagonal of the rectangle and at least one chord of the pentagon exist. Colors don’t mean anything

Remember, prees only have partial multiplication. So in the rectangle case, if we have ab and bc along the diagonal, the orange line means that ac also exists. Like you, dear reader, I also don’t know what biautomatic means, and Gilman didn’t explain it during his talk. But he did draw lots of pictures of this sort-of group-like thing.

More information: here are slides from a talk he gave somewhere else. Here is a survey article on prees.

On deck if I get around to it: more blog posts from this conference- talks by Denis Osin and Mladen Bestvina. Also, I really need to bake something new.
I’ve made that super easy lime pie a bunch by now; I even made it at this conference with ingredients from a mini-mart. The limes had no juice, so this was mostly a sweetened condensed milk pie, which was still delicious but too sweet.

Introduction to random groups

26 Apr

Last summer, I participated in an awesome research program which ended up with a good handful of original research projects all having to do with random groups. Random group theory is rather modern- the first definition of a random group came about in the 1980s (just like me!) It’s a very rich field with lots and lots of open problems, since you can dive into most properties of groups and ask if they apply to random groups. [Here’s the post defining groups if you forgot.]

Some brief motivation: sometimes you want to say things about a “typical” group. Like, if you picked a group at random out of all possible groups, it very likely has property A. Example: if you pick an alumna from Mount Holyoke at random, she very likely is/was a woman. If you ask me to name a random private college in the United States, I will very likely say “Mount Holyoke”. Instead of having to say “very likely” all the time, we say “random groups have property P” if the probability that they have property P approaches 1 (this will be more precise below).

So, to build our intuition in the example above: if I pick one person who’s graduated from Mt. Holyoke, maybe he’s transgender and is not a woman. But if I pick, say, 1000 alums, chances are that fewer than 10 identify as men (I do not know any research on the incidence of trans people, but I think 10 is a safe bet here). And if I pick 10000, probably there’s very very few [also maybe there aren’t this many alums in all time?]. So I’ll say a random alum is a woman, even if there are a few edge cases.

Probability of being a man: in first 5: 20%; in first 25: 8%; in first 100: 3%; in first 1000: 0.3%

Of course, the massive issue with this example is that the number of Mt.
Holyoke alums is finite, and the probability of being a woman is a fixed number between 0 and 100%. Hopefully this example illustrated the idea, though- we want to look at the probability of having property A as the number of cases goes to infinity.

Let’s add some more precision to this idea. First, we need a good way to describe groups- I (secretly) claimed that there are infinitely many, but how do we know that? Well, we’ve already seen the group of integers, $\mathbb{Z}$. And you can also have ordered pairs or triples of integers, $\mathbb{Z}^2, \mathbb{Z}^3$. So there are already infinitely many groups: n-tuples of integers, a.k.a. $\mathbb{Z}^n$. But not all groups are copies of the integers (and I didn’t prove that these all aren’t the same group, but I’ll let you convince yourself of that)- for instance, we’ve thought about homeomorphisms of the torus.

We’re going to use group presentations to describe our groups- every element will be written as a word in the alphabet of generators. So we’ll use letters like a,b,c… to represent generators, and we’ll put them together like aba, aa, babb as words to represent group elements.

Let’s do an example that we’ve seen before. In the group of symmetries of a square, we have two generators, which we named r (for the clockwise rotation by 90 degrees) and s (for flipping over the horizontal axis). And we wrote all of our group elements as words in the letters r and s, above. One presentation for this group is $\langle r, s \mid r^4=s^2=1, srs=r^{-1} \rangle$.

So for a group presentation we have < letters of the generating alphabet | relations >. A relation is an equation in the letters which is true in the group. And if you list enough relations, you’ll be able to characterize a finitely presented group. There are some groups which can’t ever be finitely presented: no matter how many relations you write down, there’ll still be more relations which aren’t derived from the ones you wrote down.
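To make the symmetries-of-a-square example concrete, here’s a small sketch of my own in Python: it models r and s as permutations of the square’s corners (the corner labeling and which axis s flips are my assumptions), checks some relations that hold in this group, and counts that words in r and s hit exactly eight elements.

```python
from itertools import product

# Model the symmetries of a square as permutations of its corners 0,1,2,3.
# A permutation p is a tuple: p[i] is where corner i ends up.
e = (0, 1, 2, 3)   # identity
r = (1, 2, 3, 0)   # rotate by 90 degrees (my choice of corner labeling)
s = (1, 0, 3, 2)   # a flip (again, my choice of labeling)

def compose(p, q):
    """First do q, then p."""
    return tuple(p[q[i]] for i in range(4))

def evaluate(word):
    """Multiply out a word like 'rrs' in the generators."""
    result = e
    for letter in word:
        result = compose(result, {'r': r, 's': s}[letter])
    return result

# The generators satisfy relations that are true in this group:
assert evaluate('rrrr') == e                # r^4 = 1
assert evaluate('ss') == e                  # s^2 = 1
assert evaluate('srs') == evaluate('rrr')   # srs = r^{-1} (= r^3)
assert evaluate('srr') == evaluate('rrs')   # s commutes with r^2

# Words in r and s reach exactly 8 distinct elements: the whole group.
elements = {evaluate(w)
            for n in range(5)
            for w in map(''.join, product('rs', repeat=n))}
print(len(elements))  # 8
```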
Wikipedia has a good list of examples of group presentations. Also, there are groups which aren’t even finitely generated: that means the generating alphabet is infinite. If you aren’t finitely generated, you’re definitely not finitely presented. So when we talk about random groups, we’re *only* going to talk about finitely presented groups. This is just something that happens when you try to make things precise: you might cut out other cases that you wanted to talk about. Anyways.

In the few-relators model of random groups, we fix a number K ahead of time, like K=4. And we fix the size of our alphabet ahead of time too, like m=2. So we’re looking at two-generator groups with four relations. Our relations will be in the form word=1. Pick a length l for the words, so the word abbab has length 5, and so does the word $b^{-1}a^{-1}b^2a$. Out of all possible words of length l, choose four at random: in our example, we have 324 possible words of length 5 in the letters a,b and their inverses. [Note that the word $aa^{-1}b = b$, so it has length 1, not 3.]

We say a random group has property P in the few-relators model if the probability that a group with m generators and K relators has property P goes to 1 as l approaches infinity- so as the length of the relators goes to infinity.

How’d I get 324 in the previous paragraph? Well, we have four choices for the first letter: a, b, $a^{-1}, b^{-1}$. The second letter only has three options: it can’t be the inverse of the first letter. And the third letter can’t be the inverse of the second, so it also has three options. Same deal for the fourth and fifth letters. So I have 4*3*3*3*3 options, which is 324.

Instead of picking exactly four of those as relations, I could pick some proportion of them. This is called the density model of random groups: instead of fixing K as the number of relations ahead of time, we fix d, a number between 0 and 1.
Then we say a random group has property P in the density model if the probability that a group with m generators and $2m(2m-1)^{dl}$ relators has property P goes to 1 as l approaches infinity.
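The word counting above can be double-checked with a quick brute-force sketch (my own illustration in plain Python, not from the post): count reduced words by the first-letter/later-letters argument, verify the 324 by enumeration, and plug sample values into the density model’s relator count.

```python
from itertools import product

def reduced_word_count(m, l):
    """Number of reduced words of length l in m generators and their
    inverses: 2m choices for the first letter, then 2m-1 for each later
    letter (anything except the inverse of the previous letter)."""
    return 2 * m * (2 * m - 1) ** (l - 1)

print(reduced_word_count(2, 5))  # 4*3*3*3*3 = 324, as in the post

# Brute-force check: enumerate words over a, A (= a inverse), b, B
# (= b inverse) and throw out any word where a letter sits next to
# its own inverse.
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
words = [w for w in product('aAbB', repeat=5)
         if all(w[i + 1] != inverse[w[i]] for i in range(4))]
assert len(words) == 324

# Density model: with m = 2 generators, density d = 1/2, and length
# l = 10, the relator count 2m(2m-1)^{dl} works out to 4 * 3^5 = 972.
m, d, l = 2, 0.5, 10
print(2 * m * (2 * m - 1) ** (d * l))  # 972.0
```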

The density model is pretty common, and when people talk about random groups they often mean the density model.

Here’s a cartoon I drew because there aren’t enough pictures in this post: