Tag Archives: topology

Subgroup separability problem set session (non-elementary)

22 Jun

Update #1, five hours later: Zach Himes posted a great comment fixing my iffy part, and I also video chatted with Nick Cahill about it and we came up with an alternate solution which is still maybe iffy.  Adding both below.  Nick also added a comment about the second exercise, using group actions.  What do you think?

I’ve read Sageev’s PCMI lecture notes several times by this point; it’s the basis of my and many other people’s work (in particular, my friend Kasia has an impressive number of publications related to this stuff).  And every single time I get stumped on the same exercise near the end, so I thought I’d try to write up a solution, crowd-source it among my blog readers, and figure out something correct.  For reference, these are exercises 4.27 and 4.28 in his notes, but I’ll lay out the problems so you don’t need to look them up if you don’t want to.  Please comment with corrections/ideas!

A thing that mathematicians care about is the structure of a group.  We say that a particular subgroup H<G is separable if for any group element g that’s not in H, we can find a finite index subgroup K that contains H but doesn’t contain g.  Intuitively, we can separate g from H using a finite index subgroup.  Here’s the cartoon:


If H is separable, then given any g not in it, we can find a finite index subgroup that separates H from g.

The first exercise is to show that this definition is implied by another one: that for any group element g that’s not in H, we can find a homomorphism f: G\to F where F is a finite group, so that the image of H under the map doesn’t contain f(g).

So let’s say we start with such a homomorphism, and our goal is to find a finite index subgroup that contains H but not g.  Since we’ve got a homomorphism, let’s use it and try K:=f^{-1}(f(H)).  Since f(g)\not\in f(H), we know this definition of K excludes g, as desired.  Then we need to show that K is finite index in G and we’ll be done.

What about the first isomorphism theorem?  We have a map G\to F, and we know f(H)<F, and it’s a proper subgroup since f(g) isn’t in f(H).  This next bit is iffy and I could use help!

  1. (Original) Then we have a map G\to F/f(H) induced by the map f, and the kernel of this map is K.  By the first isomorphism theorem, the index of K in G is the size of the image of this map.  Since F/f(H) is finite, the image of the map is finite.  So K has finite index in G, as desired.  [What’s iffy here?  You can’t take quotients with random subgroups, just with normal subgroups, and I don’t see why f(H) would be normal in F unless there’s something I don’t know about finite groups.]
  2. (based on Zach Himes’ comment) By the first isomorphism theorem, \ker f has finite index in G [its index is the size of the image of f, which is at most the size of the finite group F].  We know \ker f is contained in K, since 1 is contained in f(H) [since 1 is contained in H, and f(1)=1, where 1 indicates the identity elements of G and F].  It’s a fact that if \ker f \leq K \leq G, then [G: \ker f] = [G:K][K: \ker f].  Since the left hand side is finite, the right hand side is also finite, which means that K has finite index in G, as desired.
  3. (Based on conversation with Nick Cahill) We can think of F/f(H) as a set of cosets which is not necessarily a group, and say that G acts on this set by (g, x) \mapsto f(g)x.  Then K=\text{Stab}(1), where 1 denotes the coset f(H).  By the orbit-stabilizer theorem, [G:K] = |\text{Orb}(1)|.  Since F is finite, the size of the orbit must be finite, so K has finite index in G, as desired.
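If it helps to see the first exercise with actual numbers, here’s a tiny Python sanity check.  The example (G = S_3, f the sign homomorphism, H trivial) is my own choice, not from Sageev’s notes:

```python
from itertools import permutations

# Toy example: G = S_3, f = sign: G -> F = {+1, -1}, H = {identity}.
G = list(permutations(range(3)))

def sign(p):
    # parity of a permutation: +1 if even, -1 if odd
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

f = {p: sign(p) for p in G}          # a homomorphism to a finite group
H = [(0, 1, 2)]                      # the trivial subgroup of G
fH = {f[h] for h in H}               # f(H) = {+1}
K = [p for p in G if f[p] in fH]     # K = f^{-1}(f(H)), here A_3

g = (1, 0, 2)                        # a transposition, so g is not in H
assert f[g] not in fH                # the hypothesis of the exercise
assert g not in K                    # K excludes g, as desired
print(len(G) // len(K))              # [G:K] = 2: finite index
```

Here K comes out to be A_3, which has index 2 in S_3: finite, just like the exercise promises.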

The second exercise has to do with the profinite topology.  Basic open sets in the profinite topology of a group are finite index subgroups and their cosets.  For instance, in the integers, 2\mathbb{Z}, 2\mathbb{Z}+1 are both open sets in the profinite topology.  Being closed in the profinite topology is equivalent to being a separable subgroup (this is the second exercise).

So we have to do both directions.  First, assume we have a separable subgroup H.  We want to show that the complement of H is open in the profinite topology.  Choose g in the complement of H.  By separability, there exists a finite index subgroup K that contains H and not g.  Then the coset gK contains g, and it misses H entirely: since H sits inside K and g isn’t in K, the coset gK doesn’t intersect K at all.  This coset is a basic open set, so g is contained in a basic open set inside the complement of H, and the complement of H is open.

Next, assume that H is closed in the profinite topology, so we want to show that H is separable.  Again, choose some g in the complement of H.  Since the complement of H is open, g is contained in a basic open set inside it: a coset tK of a finite index subgroup K, where tK doesn’t intersect H.  Since g is in tK, we have tK=gK.  Let’s call the finite index of K in G n.  We can form a map f: G\to S_n, to the symmetric group on n letters, which tells us how each group element permutes the n cosets of K.  Now f(g) can’t equal f(h) for any h in H: if it did, then g^{-1}h would be in the kernel of f, which is contained in K, so h would be in gK, and gK misses H.  So we’ve made a homomorphism to a finite group so that f(H) doesn’t contain f(g), which we said implies separability by the previous exercise.
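Here’s the coset-permutation construction as a toy Python sketch, again with a finite example of my own choosing (K a two-element subgroup of S_3, g a 3-cycle).  The point is just that every element of K fixes the coset K itself, while f(g) moves it:

```python
from itertools import permutations

G = list(permutations(range(3)))          # toy G = S_3

def compose(p, q):
    # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

K = [(0, 1, 2), (1, 0, 2)]                # an index-3 subgroup

cosets = []                               # the n = 3 left cosets of K
for p in G:
    pK = frozenset(compose(p, k) for k in K)
    if pK not in cosets:
        cosets.append(pK)

def f(x):
    # the permutation of the cosets induced by left multiplication by x
    return tuple(cosets.index(frozenset(compose(x, p) for p in c))
                 for c in cosets)

g = (1, 2, 0)                             # a 3-cycle: gK is disjoint from K
assert all(f(k)[0] == 0 for k in K)       # K fixes the coset K...
assert f(g)[0] != 0                       # ...but f(g) moves it
print(len(cosets))                        # n = 3 cosets
```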

Okay math friends, help me out so I can help out my summernar!  So far in summernar we’ve read these lectures by Sageev and some parts of the Primer on Mapping Class Groups, and I’m curious what people will bring out next.

Dodecahedral construction of the Poincaré homology sphere, part II

26 Apr

Addendum: I forgot to mention that this post was inspired by this fun New Yorker article, which describes a 120-sided die.  It’s not the 120-cell; as far as I can tell it’s an icosahedron whose faces are subdivided into 6 triangles each.  The video is pretty fun.  Related to last week, Henry Segerman also has a 30-cell puzzle inspired by how the dodecahedra chain together.  In general, his Shapeways site has lots of fun videos and visual things that I recommend.  

Side note: when I told my spouse that there are exactly 5 Platonic solids he reacted with astonishment.  “Only 5?!”  I’d taken this fact for granted for a long time, but it is pretty amazing, right?!

Last week we learned about how to make the Poincaré homology sphere by identifying opposite sides of a dodecahedron through a minimal twist.  I thought I’d go a little further into the proof that S^3/I^*\cong \partial P^4, where the latter is the way that Kirby and Scharlemann denote the Poincaré homology sphere in their classic paper.  This post is a guided meditation through pages 12-16 of that paper, and requires some knowledge of algebraic topology and group actions and complex numbers.

Honestly, I don’t know too much about I^*, but I do know that it’s a double cover of I, which is the group of rotational symmetries of the icosahedron.  For instance, if you pick a vertex, you’ll find five rotations around it, which gives you a group element of order 5 in I.  (The full symmetry group also includes reflections, but I is just the rotations; that’s how it ends up with 60 elements, as we’ll see below.)


Icosahedron from wikipedia, created from Stella, software at this website.

Last time we watched this awesome video to see how you can tessellate the three sphere by 120 dodecahedra, and explained that we can think of the three sphere as {Euclidean three space plus a point at infinity} using stereographic projection.

Hey that’s great!  Because it turns out that there are 60 elements in I, which means that I^* has 120 elements in it.  Let’s try to unpack how I^* acts on the three sphere.

First, we think of how the three sphere acts on the three sphere.  By “three sphere,” I mean all the points equidistant from the origin in four space.  The complex plane is a way to think of complex numbers, and it happens to look exactly like \mathbb{R}^2.  So if I take the product of two copies of the complex plane, I’ll get something that has four real dimensions.  We can think of the three sphere as all points distance 1 from the origin in this \mathbb{C}^2 space.  So a point on the three sphere can be thought of as a pair of points (a,b), where a and b are both complex numbers.  Finally, we identify this point with a matrix \left( \begin{array}{cc} a & b \\ -\bar{b} & \bar{a} \end{array} \right), and then we can see how the point acts on the sphere: by matrix multiplication!  So for instance, the point (a,b) acts on the point (c,d) by \left( c \ d \right) \left( \begin{array}{cc} a & b \\ -\bar{b} & \bar{a} \end{array} \right)= (ac - \bar{b}d, bc + \bar{a}d), where I abused a bit of notation to show it in coordinate form.
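If you want to double-check that this matrix action really keeps points on the three sphere, here’s a quick numerical sketch (numpy, random points; the formula is just the one above):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_point():
    # a random point of the 3-sphere, written as (a, b) in C^2
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    return complex(v[0], v[1]), complex(v[2], v[3])

def to_matrix(a, b):
    # the matrix we identify with the point (a, b)
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

a, b = random_point()     # the acting point...
c, d = random_point()     # ...and the point acted on
image = np.array([c, d]) @ to_matrix(a, b)   # row vector times matrix
norm = abs(image[0]) ** 2 + abs(image[1]) ** 2
print(round(norm, 10))    # still 1.0: the action preserves the 3-sphere
```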

What does this actually do?  It rotates the three sphere in some complicated way.  But we can actually see this rotation somewhat clearly: set b equal to 0, and choose a to be a point in the unit circle of its complex plane.  Because a is a complex unit, this is the same as choosing an angle of rotation θ [a=e^{i\theta}].

Remember how we put two toruses together to make the three-sphere, earlier in the video?  Each of those toruses has a middle circle so that the torus is just a fattening around that middle circle.  Now think about those two circles living in our stereographic projection.  One is just the usual unit circle in the xy plane of \mathbb{R}^3, and the other is the z-axis plus the point at infinity.  So how does (a, 0) act on these circles?  We can choose the basis cleverly so that it rotates the xy unit circle by θ, and ‘rotates’ the z-axis circle also by θ.  That means that it translates things up the z-axis, but by a LOT when they’re far from the origin and by only a little bit when they’re close to it.


We rotate the blue circle by the angle, and also rotate the red circle.  That means the green points move up the z-axis, but closer to the origin they move a little and farther away they move a lot.
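We can check the b = 0 case numerically: in the coordinates above, (a, 0) = (e^{iθ}, 0) rotates the circle {(c, 0)} by θ and the circle {(0, d)} by -θ (the sign just depends on orientation conventions):

```python
import cmath

theta = 0.7
a = cmath.exp(1j * theta)          # the acting point (a, 0), with b = 0

# On the circle {(c, 0)}: (c, d) -> (c*a, d*conj(a)) sends (c, 0) to (c*a, 0),
# a rotation by theta.
c = cmath.exp(0.3j)
assert abs(cmath.phase(c * a) - (0.3 + theta)) < 1e-12

# On the circle {(0, d)}: (0, d) goes to (0, d*conj(a)), a rotation by -theta.
d = cmath.exp(0.5j)
assert abs(cmath.phase(d * a.conjugate()) - (0.5 - theta)) < 1e-12

print("both core circles get rotated")
```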

Side note: this makes it seem like points are moving a lot faster the farther you look from the origin, which is sort of like how the sun seems to set super fast but moves slowly at noon (the origin if we think of the path of the sun as a line in our sky + a point at infinity aka when we can’t see the sun because it’s on the other side of the Earth).

Similarly, if we don’t have b=0, we can do some fancy change of coordinate matrix multiplication and find some set of two circles that our (a,b) rotates in some way.  In either case, once we define how the point acts on these two circles we can define how it acts on the rest of the space.  Think of the space without those two circles: it’s a collection of concentric tori (these ones are hollow) whose center circle is the blue unit circle, and whose holes center on the red axis.  If you have a point on one of those tori, we move it along that torus in a way consistent with how the blue and red circles got rotated.


This is a schematic: the blue and green circles get rotated, so the purple point on the pink torus gets rotated the way the blue circle does, and then up the way the green circle does.

What does this have to do with I?  Fun fact: the symmetries of the icosahedron are the same as the symmetries of the dodecahedron!  (Because they’re duals).  So let’s look back at that tessellation of the 120-cell by dodecahedra, and stereographically project it again so that we have one dodecahedron centered at the origin, with a flat face at (0,0,1) and (0,0,-1), and a tower of ten dodecahedra up and down the z-axis (which is a circle, remember).


The origin dodecahedron.

Now imagine rotating around the green axis by a click (a 2pi/5 rotation).  This is definitely a symmetry of the dodecahedron.  It rotates the blue circle, and by our action as we described earlier, it also rotates the green circle, taking the bottom of our dodecahedron to the top of it (because |e^{-\pi i/5}| = |e^{\pi i/5}| =1).  So this identifies the bottom and top faces with that minimal rotation.  We said earlier that this rotation has order 5 in I, which means that it has some corresponding group element in I^* with order 10.  10 is great, because that’s the number of dodecahedra we have in our tower before we come back to the beginning: so if we keep doing this group element rotation, we end up identifying the top and bottom of every dodecahedron in our z-axis tower of 10.
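That order-10 business is easy to believe if you think of the I^* element as the unit complex number e^{iπ/5}; this is a sketch of the idea, not the full quaternion story:

```python
import cmath

# The twist by 2*pi/10 = pi/5 that glues the bottom face to the top face.
w = cmath.exp(1j * cmath.pi / 5)

# Smallest k with w^k = 1: the order of the element.
order = next(k for k in range(1, 21) if abs(w ** k - 1) < 1e-9)
print(order)   # 10: one element walks the whole tower of 10 dodecahedra
```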


This is definitely a screenshot of that youtube video above, plus a little bit of paint so I could indicate the origin dodecahedron.

Similarly, using change of coordinate matrix basis madness, we can figure out how the rotations around the centers of each of the faces act on the 120-cell (hint: each one will identify all the dodecahedra in a tower just like our first one did).  With 120 elements in I^*, each element ends up identifying one of the dodecahedra in the 120-cell with our origin dodecahedron, including that little twist we had when we defined the space.

So that’s it!  That’s how you go from the tessellation of the 120-cell to the Poincare homology sphere dodecahedral space.  Huzzah!

 

Dodecahedral construction of the Poincaré homology sphere

19 Apr

Update: Thanks as usual to Anschel for catching my typos!

This semester some grad students put together a learning seminar on the Poincaré homology sphere, where each week a different person would present another of the 8 faces from this classic (1979) Kirby-Scharlemann paper.  It was a fantastic seminar that I recommend to any grad students interested in algebraic geometry, topology, geometric group theory, that sort of thing. I did the last description, which is actually description number 5 in the paper.  You can read this post as a definition of the Poincaré homology sphere, without me telling you why mathematicians would care (but it has properties that makes mathematicians care, I promise).

First, start with a dodecahedron: this is one of the five Platonic solids, which are three-dimensional objects that can be created by gluing together regular (all sides are the same, all angles are the same) polygons so that the same number of polygons meet at any corner.  The quickest example of a Platonic solid is a cube (three squares meet at each corner), and a non-Platonic solid is a square pyramid (4 polygons meet at the top, but only three at each corner).  If you glue two square pyramids together, you do get a Platonic solid, the octahedron.


Glue two pyramids together along their square sides, and now four triangular faces meet at each vertex and you have a Platonic solid: the octahedron.

So after all that build up, here’s a dodecahedron: 12 pentagons glued together the only way you can: start with one pentagon, glue five to it (one on each edge), glue those together into a little pentagonal cap with a toothy bottom.  If you make two of these caps, you can glue them together; the teeth fit into each other just right.  This is the first step in this AWESOME VIDEO below (seconds 30-45 or so):

To make the Poincare dodecahedral space, let’s first review the torus.  A long time ago, we learned about how to make a torus: take a square, identify opposite edges while preserving orientation.


First we glue the green arrow edges together and get a cylinder, then the blue arrow edges together…


I’m a torus!

If you only identify one pair of edges and flip the orientation, you get a Mobius strip.  If you then glue the other pair of edges straight across, like we did for the torus, you get a Klein bottle, which you can’t actually make in three dimensions.  (If you flip the orientation on both pairs, you get yet another space, the projective plane.)


Mobius strip picture from wikipedia

 

This torus/Mobius/Klein side note is just to review that we know how to glue edges together.  So look at the dodecahedron.  Each pentagonal face has a pentagonal face exactly opposite it, but twisted by 1/10 of a turn (2pi/10).  So if you identify each face with the opposite one, doing just the minimal turn possible, you get the Poincare homology sphere.  We started with 12 faces in our dodecahedron, so this glued-up space will have 6 faces.  It also has 5 vertices and 10 edges (vs. 20 vertices and 30 edges pre-gluing).  I can’t draw it for you because it’s a 3-manifold.  But here is a funny video of walking through it!
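One quick consistency check on those counts: the gluing matches faces in pairs, edges in threes, and vertices in fours, and the resulting closed 3-manifold should have Euler characteristic zero.  In Python:

```python
# Cell counts for the dodecahedron before gluing and for the quotient,
# as listed in the post.
V, E, F = 20, 30, 12   # vertices, edges, faces before gluing
v, e, f = 5, 10, 6     # after identifying opposite faces

# Vertices glue in fours, edges in threes, faces in pairs:
assert (V, E, F) == (4 * v, 3 * e, 2 * f)

# Euler characteristic of the quotient (counting one 3-cell for the solid
# body); a closed 3-manifold must have Euler characteristic 0:
print(v - e + f - 1)   # 0
```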

If you draw out all the identifications and you know some group theory, you can find the fundamental group of the thing, and you can prove to yourself that it is a 3-manifold and nothing funky happens at edges or vertices.

The dual to the dodecahedron is the icosahedron.  “Dual” means you put a vertex into the middle of each face of the dodecahedron, and connect two vertices of the dual with an edge if the corresponding faces share an edge in the dodecahedron.


Image from plus.maths.org

So you can see that the dual to the cube is the octahedron, and the tetrahedron is its own dual.  That’s all five Platonic solids!


Top row: tetrahedron, cube, octahedron.  Bottom row: dodecahedron, icosahedron.

There’s more to the story than this!  Let’s think about spheres.  The 1-sphere is a circle in the plane, aka 2-space.  Equivalently, the 1-sphere is all points that are equidistant from 0 in 2-space.  Similarly, the 2-sphere is all points equidistant from 0 in 3-space.   This gives you a notion of the 3-sphere.  How can we picture the 3-sphere?  We can use stereographic projection.

Here are the examples of stereographic projection of the circle and the 2-sphere onto the line and 2-space, respectively.  You cut out a single point from the north pole of the sphere, and attach the space to the south pole as a tangent.  Given some point on the sphere, run a line from the north pole through that point: it hits the space at exactly one point, and that’s the stereographic projection of the sphere-point.  Notice that the closer you get to the north pole, the farther out your projection goes.  If we pretend there’s one extra point (infinity) added to the plane, we can identify the n-sphere with n-space plus a point at infinity.  Look at this link and buy things from it if you want!
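Here’s the circle-to-line version as a tiny bit of Python, to see “closer to the north pole projects farther out” concretely (projecting from the north pole (0,1) onto the tangent line at the south pole):

```python
import math

def project(x, y):
    # Stereographic projection of the unit circle (minus the north pole
    # (0, 1)) onto the tangent line y = -1 at the south pole: the line
    # from (0, 1) through (x, y) hits y = -1 at 2x / (1 - y).
    return 2 * x / (1 - y)

for angle_from_pole in (1.0, 0.1, 0.01):
    x, y = math.sin(angle_from_pole), math.cos(angle_from_pole)
    print(round(project(x, y), 1))   # blows up as we approach the pole
```

For small angles the projection is roughly 4 divided by the angle, so halving your distance to the pole doubles how far out you land.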


Projecting from the sphere to the plane: the bottom hemisphere of the red sphere maps to the pink circle in the plane, the top half maps to the rest of the plane.

What do circles that go through the north pole look like?  Just like when we projected the circle to the line, they look like infinite lines.

So we can see the three sphere as 3-space, plus a point at infinity.   Similarly here, circles that go through the north pole look like infinite lines.

Our math claim is that \mathbb S^3/I^* \cong \partial P^4, or that if I act on the 3-sphere by the binary icosahedral group, I get this exact dodecahedral space as the quotient.  The binary icosahedral group is just some extension (a double cover) of the group of symmetries of the icosahedron, which is the same as the group of symmetries of the dodecahedron.  So we want to see a way to see this action.  The awesome video up top shows us how to start.  I’ll describe the contents of the video; you should read the next paragraph and re-watch the video after/while reading it:

Start with one dodecahedron.  Stack another on top of it, lining up the pentagons so you can glue one to another (that means the one on top is a 2pi/10 turn off from the bottom one).  Now make a tower of ten dodecahedra, all glued on top of each other.  Make a second tower of ten dodecahedra, and glue it to the first one (so it’ll twist around a bit).  Glue the top and bottom of the first tower together (they’ll line up because we did a 2pi total turn); this’ll automatically glue the top and bottom of the second tower together.  Nestle six towers like this together, so the toruses created from the towers all nestle together.  Now you have a torus of 60 dodecahedra.  Make a second torus of 60 dodecahedra.  Put the second torus through the hole of the first, so that together they fill everything up.  (Here’s the weird 4-dimensional part!)  That is the 3-sphere!  (The first torus also goes through the hole of the second one).  So now we have tessellated the 3-sphere with dodecahedra; this is called the 120-cell.

I might make a more technical second post on this topic explaining in detail the action, but suffice it to say that we have an action by a group that has 120 elements, so that if we quotient out this 120-cell by the action, we end up with just one dodecahedron with the faces identified the way we want them to (opposite faces identified by a twist).  What is this group of 120 elements?  It’s derived from the symmetries of the icosahedron, which has the same symmetries as the dodecahedron!

Final interesting notes on this: we identified opposite sides by just one turn.  If you do three turns (so a 6pi/10 twist), you get the Seifert-Weber dodecahedral space.  If you do five turns (a pi rotation, gluing each face straight to its opposite), you get real projective space.

More reading:

Jeff Weeks article on shape of space, a.k.a. is the universe a Poincare homology sphere?

Thurston book on geometry and topology

Fun website: Jeff Weeks’ geometry games

What is a manifold? What is not a manifold?

29 Mar

I just went to a talk and there was one tiny example the speaker gave to explain when something is not a manifold.  I liked it so much I thought I’d dedicate an entire blog post to what was half a line on a board.

I defined manifolds a long time ago, but here’s a refresher: an n-manifold is a space that locally looks like \mathbb{R}^n.  By locally I mean if you stand at any point in the manifold and draw a little bubble around yourself, you can look in the bubble and think you’re just in Euclidean space.  Here are examples and nonexamples of 1-manifolds:


Red and orange are manifolds: locally everything looks like a line.  But yellow and green are not.

At any point on the orange circle or red line, if we look locally we just see a line.  But the yellow and green both have bad points: at the yellow bad point there are 2 lines crossing, which doesn’t happen in \mathbb{R}, and in the green bad point there’s a corner.


I messed up a little on the orange one but imagine that that is a smooth little curve, not a kink.

We call 2-manifolds surfaces, and we’ve played with them a bunch (curves on surfaces, curve complex, etc. etc.).  Other manifolds don’t have fun names.  In general, low-dimensional topology is interested in 4 or less; once you get to 5-manifolds somehow everything gets boring/collapses.  It’s sort of like how if you have a circle in the plane, there’s something interesting there (fundamental group), but if you put that circle into 3-space you can shrink it down to a point by climbing a half-sphere to the North Pole.


Empty pink circle in the plane can change size, but not topology (will always have a hole).  In 3-space, it can contract to a point.

The other thing we’ll want to think about is group actions.  Remember, a group acts on a set X if there’s a homomorphism that sends a group element g to a map \phi_g:X\to X such that the identity group element maps to the identity map, and group multiplication leads to composition of functions: gh \mapsto \phi_g \circ \phi_h.  That is, each group element makes something happen on the set.  We defined group actions in this old post.  Here’s an example of the integers acting on the circle:


Each integer rotates the circle by pi/2 times the integer. Looks like circle is getting a little sick of the action…
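The two axioms in that definition are easy to check by computer for this example; here’s a sketch in Python with the circle sitting in the complex plane:

```python
import cmath

def phi(n):
    # The integer n acts on the circle by rotation by n * pi/2.
    w = cmath.exp(1j * n * cmath.pi / 2)
    return lambda z: w * z

z = cmath.exp(0.3j)                                   # a point on the circle
assert abs(phi(0)(z) - z) < 1e-12                     # identity acts as identity
assert abs(phi(5)(z) - phi(2)(phi(3)(z))) < 1e-12     # gh -> phi_g o phi_h
print("action axioms check out")
```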

So far we’ve seen groups and manifolds as two different things: groups are these abstract structures with certain rules, and manifolds are these concrete spaces with certain conditions.  There’s an entire class of things that can be seen as both: Lie Groups.  A Lie group is defined as a group that is also a differentiable manifold.  Yes, I didn’t define differentiable here, and no, I’m not going to.  We’re building intuitions on this blog; we might go into more details on differentiability in a later post.  You can think of it as something smooth-ish without kinks (the actual word mathematicians use is smooth).


Top is smooth and differentiable. Bottom isn’t; there are weird kinks in its frown

So what are examples of Lie groups?  Well, think about the real numbers without zero, and multiplication as the group operation.  This is a manifold: at any point you can make a little interval around yourself, which looks like \mathbb{R}.  How is it a group?  Well, we have an identity element 1, every element x has an inverse 1/x, multiplication is associative, and the nonzero reals are closed under multiplication.

Here’s another one: think about the unit circle lying in the complex plane.  I don’t think we’ve actually talked about complex numbers (numbers in the form x + iy, where i is the imaginary square root of -1) on this blog, so I’ll do another post on them some time.  If you don’t know about them, take it on faith that the unit circle in the complex plane is a Lie group under multiplication.  Multiplying by any number on the unit circle gives you a rotation, which preserves the circle; again 1 is the identity, elements have inverses, and multiplication is associative.  Circles, as we said before, are 1-manifolds.
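If you’d rather poke at that claim numerically, here’s a quick check that products and inverses stay on the unit circle (using the fact that the inverse of a unit complex number is its conjugate):

```python
import cmath

z, w = cmath.exp(0.4j), cmath.exp(1.9j)       # two points on the unit circle
assert abs(abs(z * w) - 1) < 1e-12            # the product stays on the circle
assert abs(z * z.conjugate() - 1) < 1e-12     # conj(z) really is the inverse
print("unit circle is a group under multiplication")
```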


Examples of Lie Groups: the real line minus a point, and the unit circle in the complex plane

If you have a group action on a set, you can form a quotient of the set by identifying two points in the set if any group element identifies them: that is, x and y become one point if there’s a group element g so that g.x=y.  For instance, in the action of the integers on the circle above, every point gets identified with three other points (so 12, 3, 6, and 9 o’clock on a clock all get identified under the quotient).  Your quotient ends up being a circle as well.  We denote the quotient of a group G acting on a set X by X/G.
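Here’s the clock example in Python: reduce every angle mod a quarter turn and the four o’clock positions collapse to a single orbit representative:

```python
import math

def orbit_rep(theta):
    # canonical representative of theta under quarter-turn rotations
    return theta % (math.pi / 2)

clock = {"12": 0.0, "3": -math.pi / 2, "6": math.pi, "9": math.pi / 2}
reps = {orbit_rep(t) for t in clock.values()}
print(len(reps))   # 1: all four o'clock positions are one point in X/G
```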


So here’s a question: when is a quotient a manifold?  If you have a Lie group acting on a manifold, is the resulting quotient always a manifold?  Answer: No!  Here’s the counterexample from the talk:

Consider the real numbers minus zero using multiplication as the group operation (this is the Lie group \mathbb{R}^{\times}) acting on the real line \mathbb{R} (this is a manifold).  What’s the quotient?  For any two non-zero numbers a, b on the real line, multiplication by a/b sends b to a, so we identify them in the quotient.  So every non-zero number gets identified to a single point in the quotient.  What about zero?  Any number times zero is zero, so zero isn’t identified with anything else.  Then the quotient \mathbb{R}/\mathbb{R}^{\times} is two points: one for zero, and one for all other numbers.
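The orbit computation here is almost nothing, but it’s fun to spell out; this is my own framing of the “which orbit are you in” test in Python:

```python
# R^x = nonzero reals acting on R by multiplication.
def same_orbit(x, y):
    # x ~ y iff some nonzero a satisfies a * x == y
    if x == 0 or y == 0:
        return x == y          # 0 is only ever sent to 0
    return True                # otherwise a = y / x does the job

assert same_orbit(2.0, -7.5)        # any two nonzero reals get identified
assert not same_orbit(0.0, 1e-300)  # but 0 stays its own point
print("the quotient R/R^x has exactly two points")
```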

If the two points are “far apart” from each other, this could still be a 0-manifold (locally, everything looks like a point).  But any open set that contains the all-other-numbers point must contain the 0-point, since we can find real numbers that are arbitrarily close to 0.  That is, 0 is in the closure of the non-zero point.  So we have two points such that one is contained in the closure of the other, and we don’t have a manifold.  In fact our space isn’t Hausdorff, a word I mentioned a while back so I should probably define in case we run into it again.  Hausdorff is a serious way of explaining “far apart.”  A space is Hausdorff (adjective) if for any two points in the space, there exist disjoint neighborhoods of the two points.  So the real line is Hausdorff, because even if you take two points that look super close, like 2.000000000001 and 2.000000000002, you can find a number between them, like 2.0000000000015, and use it to split off disjoint intervals around each point.


Any two points on the real line, if you zoom in enough, have space between them.  So the real line is Hausdorff.

If you’re curious as to when the quotient of a smooth manifold by a Lie group is a manifold, you should go take a class to fully appreciate the answer (the Quotient Manifold Theorem).  The phrasing of the Quotient Manifold Theorem below is from a book by John Lee, called Introduction to Smooth Manifolds (the version from wikipedia gets rid of one of the conditions but also gets rid of much of the conclusion).  Briefly: a smooth action means the action map on M is smooth (see the picture above; we didn’t do an in-depth definition of smooth), a free action means no non-identity group element fixes any point, and a proper action has to do with preimages of certain types of sets.

Theorem 21.10. Suppose G is a Lie group acting smoothly, freely, and properly on a smooth manifold M. Then the orbit space M/G is a topological manifold of dimension equal to \dim M - \dim G, and has a unique smooth structure with the property that the quotient map \pi: M \to M/G is a smooth submersion.

Current research: lifting geodesics to embedded loops (and quantification)

19 Nov

Last week we learned about covering spaces, and I made a promise about what we’d talk about in this post.  For those who are more advanced, this all has to do with Scott’s separability criterion, so you can take a look back at that post for a schematic.  I’ll put the picture in right here so this post isn’t all words:

Left side is an infinite cover, the real numbers covering the circle.  Middle is a happy finite cover, three circles triple covering the circle.  Right is a happy finite cover, boundary of the Mobius strip double covering the circle.


In my friend Priyam Patel’s thesis, she has this main theorem:

Theorem (Patel): For any closed geodesic g on a compact hyperbolic surface \Sigma of finite type with no cusps, there exists a cover \tilde{\Sigma}\to\Sigma such that g lifts to a simple closed geodesic, and the degree of this cover is less than C_{\rho}\ell_{\rho}(g), where C_{\rho} is a constant depending on the hyperbolic structure \rho.

We know what geodesics are, and we say they’re closed if the beginning and end are the same point (so it’s some sort of loop, which might intersect itself a bunch).  But wait, Yen, I thought that geodesics were the shortest line between two points!  The shortest path from a point to itself is not leaving that point, so how could you have a closed geodesic?  Nice catch, rhetorical device!  A closed geodesic is still going to be a loop, but it won’t be the shortest path between endpoints because there are no endpoints.  Instead, just think locally: if a closed geodesic has length l, then if you look at any two points x and y less than l/2 apart from each other, the closed geodesic will describe an actual geodesic segment between x and y.  It’s locally geodesic.

What about hyperbolic surfaces of finite type with no cusps?  Well, we say a surface \Sigma is of type (g, b, n) if it has genus g (that’s the number of holes, like a donut), b boundary components, and n punctures or cusps.


Pink: (4,0,0)
Orange: (3,0,2)
Green: (1,2,1)
Ignore the eyes they’re just for decoration

Boundary components are sort of like the horizontal x-axis for the half plane: you’re living your life, totally happy up in your two-dimensional looking space, and then suddenly it stops.  This is also what a boundary of a manifold is: where the manifold locally looks like a half-space instead of all of \mathbb{R}^n.  Surfaces are 2-manifolds.

Finally, I drew punctures or cusps suggestively: these are points where you head toward them but you never get there, no matter how long you walk.  These points are infinitely far from the rest of the surface.

I think we know all the rest of the words in Priyam’s theorem (a hyperbolic structure is a hyperbolic metric).  The important thing to take from it is that she bounds the degree of the cover above by a constant times the length of the curve.  This means that she finds a cover with degree smaller than her bound (you can always take covers with higher degree in which the curve still embeds, but the one she builds has this bound on it).

Just looking at this old picture again so you can have a sort of idea of what we're thinking about


She’s looking for a minimum degree cover and finds an upper bound for it in terms of the length of the curve.  Let’s write that as a function, and say f_{\rho}(L) gives you the minimum degree of a cover in which curves of length L embed (using the hyperbolic structure \rho).  What about a lower bound?

Here’s where a theorem (Theorem C in that paper) by another friend of mine, Neha Gupta, and her advisor comes in:

Theorem (Gupta, Kapovich): If \Sigma is a connected surface of genus at least 2, there exists a c=c(\rho, \Sigma)>0 such that for every L\geq sys(\Sigma), f_{\rho}(L)\geq c(\log(L))^{1/3}.

So they came up with a lower bound, which uses a constant that depends on both the surface and the structure.  But it looks like it only works on curves that are long enough (longer than the systole length, which we’ve seen before in Fanoni and Parlier’s research: the length of the shortest closed geodesic on the surface).  Aha!  If you’re a closed geodesic, you’d better be longer than or equal to the shortest closed geodesic.  So there isn’t really a restriction in this theorem.  Also, that paper is almost exactly 1 year old (put up on arxiv on 11/20/2014).

Now we have c_{\rho,\Sigma}(\log(L))^{1/3}\leq f_{\rho}(L) \leq C_{\rho}L.
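To get a feel for how far apart those two bounds are, here’s a quick numerical sketch.  The constants c = C = 1 are made-up placeholders purely for illustration; the real ones depend on the surface \Sigma and the structure \rho.

```python
import math

# Hypothetical constants, purely for illustration -- the real c and C
# depend on the surface Sigma and the hyperbolic structure rho.
c, C = 1.0, 1.0

def lower_bound(L):
    # Gupta-Kapovich-style lower bound: c * (log L)^(1/3)
    return c * math.log(L) ** (1 / 3)

def upper_bound(L):
    # Patel-style upper bound: C * L
    return C * L

for L in [10, 100, 1000, 10000]:
    print(f"L = {L:>5}: {lower_bound(L):6.3f} <= f(L) <= {upper_bound(L):8.1f}")
```

The gap between the two sides grows enormously with L, which is part of why improving the lower bound was worth doing.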

This is where it gets exciting.  We know from Scott in 1978 that this all can be done, and then Patel kickstarts the conversation in 2012 about quantification, and then two years later Gupta and Kapovich do the other bound, and boom! in January 2015, just three months after Gupta-Kapovich is uploaded to the internet, my buddy Jonah Gaster  improves their bound to get \frac{1}{c}L\leq f_{\rho}(L), where his constant doesn’t even depend on \rho.  He does this in a very short paper, where he uses specific curves that are super hard to lift and says hey, you need at least this much space for them to not run into each other in the cover.

Here’s a schematic of the curves that are hard to lift (which another mathematician used to prove another thing [this whole post should show you that the mathematical community is tight]):


This curve in the surface goes around one part of the surface 4 times, and then heads over to a different part and circles that. This schematic is a flattened pair of pants, which we’ve seen before (so the surface keeps going, attached to this thing at three different boundary components).  I did not make this picture it is clearly from Jonah’s paper, page 4.

So that’s the story… for now!  From Liverpool (Peter Scott) to Rutgers in New Jersey (Priyam) to Urbana/Champaign in Illinois (Gupta and Kapovich) to Boston (Jonah), with some quick nods to a ton of other places (see all of their references in their papers).  And the story keeps going.  For instance, if you have a lower bound in terms of the length of a curve, you automatically get a lower bound in terms of the number of times it intersects itself (K\sqrt{i(g,g)}\leq \ell(g), same mathematician who came up with the curves).  So an open question is: can you get an upper bound in terms of self-intersection number, not length?

What is a covering space?

14 Nov

We’ve briefly covered fundamental groups before, and also I’ve talked about what geometric group theory is (using spaces to explore groups and vice versa). One way to connect a group to a space is to look at a covering space associated to that group. So in this post, we’ll come up with some covering spaces and talk about their properties. This is in preparation for talking about separability (we already have an advanced post about that).

Aside: you might catch me slipping into the royal we during my math posts.  This is standard practice in math papers and posts, even if a paper is written by a single author.  Instead of saying “I will show” and proving stuff to you the reader, we say “we will show” and we go on a journey together.  I’m sure that’s not why mathematicians do this, but I like to think of it that way.

Also, sometimes I say “group” when I’m obviously referring to a space, and then I mean the Cayley graph of that group (which changes depending on generating set, but if it’s a finite generating set then all Cayley graphs are quasi-isometric).

Let’s start with an example, and then we’ll go on to the definition.  Here’s an old picture to get us in the mood:


This blue curve goes around the circle three times.

This picture was from the short fundamental groups post: you’re supposed to see that the blue spiral up above represents a curve going three times around the circle below.  Now consider this next picture:


Blue line covering the happy circle below

Here the blue spiral goes on forever in both directions.  If you unwound it, you’d get a line stretching on forever in both directions, which we’ll call the real line (the same number line you’re used to, with real numbers along it).  This picture sums up the intuition that the real line covers the circle: for any point on the circle, there are a bunch of points on the real line directly above it that project down to that point.  In fact, it does more than that:


Pink parts of blue line cover the pink part of the circle

For any point on the circle, there’s a neighborhood (the pink part) so that up in the real line, there are a bunch of neighborhoods that map down to that pink part.  And those neighborhoods aren’t next to each other nor all up in each other’s business: they’re disjoint.  So here’s the definition:

A covering space X of a space Y is a space with a map p: X\to Y such that any point in Y has a neighborhood N whose preimage p^{-1}(N)\subset X is a collection of disjoint sets, each of which is mapped homeomorphically onto N by p.
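We can sanity-check this definition on our running example, treating the circle as [0,1) and the covering map as x \mapsto x mod 1.  This is just a sketch, with the neighborhood chosen as a little arc:

```python
# The covering map p(x) = x mod 1 from the real line to the circle.
# Take a neighborhood N = (0.2, 0.4) of a point downstairs; its preimage
# is the disjoint union of the intervals (0.2 + n, 0.4 + n), each of
# which p maps homeomorphically onto N.
a, b = 0.2, 0.4
copies = [(a + n, b + n) for n in range(-3, 4)]  # a few of the infinitely many

# Pairwise disjoint, because the copies are spaced 1 apart and b - a < 1:
for i, (lo1, hi1) in enumerate(copies):
    for lo2, hi2 in copies[i + 1:]:
        assert hi1 <= lo2 or hi2 <= lo1

# And p sends each copy back onto N (subtracting n undoes the shift):
assert all(round(lo % 1, 9) == a and round(hi % 1, 9) == round(b, 9)
           for lo, hi in copies)
print("each copy maps onto N, and the copies are disjoint")
```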

So why is this helpful?  Well, in our example we can say that the real line covers the circle, from the pink pictures.  We could also say that the circle wound around itself three times covers the circle, from the first picture in this post:


The three highlighted parts up above are homeomorphic to the pink part on the bottom circle’s chin

The picture I just drew might not convince you, because every point on the bottom space needs to have a neighborhood that lifts up to the top space, and what about the leftmost point of the circle?  Well, up above that neighborhood just winds around between the top and bottom copies:


Still a cover: each of those pink things up above are homeomorphic to the bottom cheek

The fundamental group of the circle is the integers, so maybe using geometric group theory (or algebraic topology, really) we can come up with conclusions about the integers using facts about the circle or the line, and vice versa.  In fact, there’s a correspondence between group structures and covering spaces!  With some conditions, covering spaces correspond to subgroups of the fundamental group.

Let’s see how this correspondence works in our example with the integers.  We know that the even integers are a subgroup of the integers, and so are 3\mathbb{Z}, 4\mathbb{Z}, etc.  In fact, these are all of the subgroups (along with the trivial subgroup, which is just the element {0}).  Above, we drew two covering spaces of the circle: the real line, where each neighborhood of the circle has infinitely many homeomorphic copies hanging out in the real line, and the circle wound around itself three times, where each neighborhood has three copies.  The number of copies is called the degree of the cover, and sometimes one says the cover is an n-fold covering.  You can wind the circle around itself n times for any n, which will correspond to the n\mathbb{Z} subgroup.  How does this correspondence work?  Well, looking at the degree three/3-fold picture again, if you go around the covering circle once, you’ll project down to going around the base circle three times.  So if you go around the covering circle and count, you’ll get 0, 3, 6, 9… In contrast, the real line corresponds to the trivial subgroup (and is an infinite degree cover), and it’s called the universal cover of the circle.  Every space has a unique universal cover, which is a covering space with trivial fundamental group.
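This correspondence is concrete enough to sketch in a few lines of code (the n = 3 case is the picture above):

```python
# The n-fold cover of the circle winds around the base n times, so going
# once around the cover projects to n loops downstairs -- and a loop
# downstairs lifts to a *closed* loop upstairs exactly when its winding
# number is a multiple of n.  That's the subgroup nZ inside Z.
def project(loops_upstairs, n):
    return n * loops_upstairs

def lifts_to_loop(k, n):
    return k % n == 0

n = 3
print([project(j, n) for j in range(4)])  # -> [0, 3, 6, 9]
assert lifts_to_loop(6, n) and not lifts_to_loop(4, n)
```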

Now a preview of why we’ll like this.  Sometimes spaces are tricky and not fun and it’s easier to look upstairs at a cover, and then go back downstairs.  Let’s let the downstairs space be two circles pinched together at a point.

Pink and green above correspond to copies of neighborhoods downstairs

First, you should get convinced that the picture above is a cover; I colored the homeomorphic copies in order to highlight what’s happening.  Also, pretend the branching part goes on forever, a la the Cayley graph of the free group on 2 generators:

from wikipedia

So here’s an example: let’s say we have a path downstairs that goes around the green circle several times.  And maybe we don’t want this path to hit itself over and over again, so we look at a cover upstairs so it turns into a line instead.  So instead of just being immersed (locally injective), the path is embedded (injective) in the cover upstairs.


The orange scribble downstairs goes around the green loop over and over again, hitting itself. Upstairs, it’s a line and doesn’t hit itself

Next time I write about current math research, I’ll be using covering spaces a lot!  In fact, one of the main questions is this: if you have a path downstairs that hits itself (is immersed), what’s the minimum degree cover you need to ensure that the path is embedded in the cover?  This question isn’t explicitly answered yet for loops on surfaces, but the research I’ll blog about gives some bounds on the degree.

The fundamental theorem of geometric group theory, Part II: proof

1 Oct

A refresher from last week: If a group G acts on a proper geodesic metric space X properly discontinuously and cocompactly by isometries, then G is quasi-isometric to X.  Moreover, G is finitely generated.

Yes, I put “proper geodesic metric space” in italics because I forgot it in the statement of the theorem last week.  [Aside: actually, you only need “length space” there, and proper geodesic will follow by Hopf-Rinow.  But let’s skip that and just go with proper geodesic.]  I also added the second sentence (which isn’t really a moreover, it comes for free during the proof).

At the end of last week I defined proper: closed balls are compact. A space is geodesic if there is a shortest path between any two points which realizes the distance between those points.  For instance, the plane is geodesic: you can just draw a line between any two points.  But if you take away the origin from the plane, it’s not geodesic anymore.  The distance between (1,1) and (-1,-1) is 2\sqrt{2}, but the line should go through the origin.  There is no geodesic between those two points in this space.
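You can see numerically that 2\sqrt{2} is still the greatest lower bound of path lengths in the punctured plane, just never achieved.  Here’s a sketch with polygonal paths that dodge the origin:

```python
import math

def length(path):
    """Total length of a polygonal path, given as a list of (x, y) points."""
    return sum(math.dist(p, q) for p, q in zip(path, path[1:]))

straight = length([(1, 1), (-1, -1)])  # 2*sqrt(2), but it passes through the origin

# Detour through (eps, -eps), which misses the origin for any eps > 0.
for eps in [0.5, 0.1, 0.001]:
    detour = length([(1, 1), (eps, -eps), (-1, -1)])
    print(f"eps = {eps}: detour length {detour:.7f} (straight: {straight:.7f})")
# The detours are strictly longer than 2*sqrt(2) but get arbitrarily close,
# so no shortest path exists: the punctured plane isn't geodesic.
```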

Now we have all our words, so let’s prove the theorem!  I’ll italicize everywhere we use one of the conditions of the theorem.

Since our action is cocompact, we have a compact set K so that translates of it tile X.  Pick a point inside K, call it x_0, and a radius R so that K is entirely contained inside a ball of radius R/3 centered at x_0.  For notation, this ball will be labelled B(x_0,R/3).


Schematic: K is the red square, special point is the yellow dot, yellow circle is radius R/3, lighter circle is radius R. Cartoon on right illustrates this relationship

We’ll pick a subset of the group G: Let A =\{ g\in G: g.B(x_0,R)\cap B(x_0,R) \neq \emptyset\}.  X is proper, so closed balls are compact.  Since the action is properly discontinuous, this means that A is finite.  [Reminder: properly discontinuous means that only finitely many group elements translate compact sets to intersect themselves].

Now we’ll show that G is finitely generated, and it’s generated by A.  Choose some group element g in G.  Draw a geodesic in between your special point x_0 and its g-translate g.x_0.  Now we’ll mark some points on that geodesic: mark a point every R/3 away from x_0, all the way to the end of the geodesic.  You’ll have [(length of the segment)/(R/3) rounded up] many points marked.  Let’s call that number n.


There are n blue points, and they’re all R/3 away from each other. Notice the last blue point might be closer to g.x_0, but it’s definitely less than or equal to R/3 away.

Here’s the clever part.  Remember that K tiles X by G-translates (cocompactness), so each of those blue points lives inside a G-translate of K.  Since x_0 lives inside K, that means there’s a nearby translate of x_0 to each blue point.  And since K fits inside a R/3 ball, each translate is less than or equal to R/3 away from its blue point.


The green points are translates of x_0: I also colored x_0 and g.x_0. The yellow circle indicates that the green point is within R/3 of its blue point.

We can bound how far apart the consecutive green points are from each other: each one is within R/3 of its blue point, which are all R/3 apart from their neighbors.  So the green points are at most R/3+R/3+R/3= R from each other.


Middle portion is exactly R/3 long. So by the triangle inequality, the green points are less than or equal to R from each other.

Remember that the green points represent G-translates of x_0.  In the picture above I numbered them g_0.x_0=x_0,g_1.x_0,g_2.x_0,\ldots g_nx_0=g.x_0.  We just said that d(g_1.x_0,g_2.x_0)\leq R.  Since G acts by isometries, this means that d(g_2^{-1}g_1.x_0,x_0)\leq R.  So g_2^{-1}g_1 lives inside our set A that we defined above: it moves x_0 within R of itself.

Here’s a bit of cleverness: we can write g=g_n=g_0^{-1}g_1\cdot g_1^{-1}g_2 \cdots g_{n-1}^{-1}g_n, because all of the middle terms cancel out and we’d be left with g=g_0^{-1}\cdot g_n = 1\cdot g = g (remember g_0 is the identity).  But each of those two-letter terms lives in A, so we just wrote g as a product of elements in A.  That means that A generates G.  We said above that A is finite, so G is finitely generated.
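Here’s the telescoping trick with concrete group elements: permutations of four points, chosen arbitrarily just to watch the cancellation happen.

```python
# Permutations of {0,1,2,3} as tuples: p sends i to p[i].
def compose(p, q):
    """Group product p*q, acting as p after q."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

identity = (0, 1, 2, 3)
g0, g1, g2, g3 = identity, (1, 0, 2, 3), (1, 2, 0, 3), (3, 2, 0, 1)

# g = (g0^{-1} g1)(g1^{-1} g2)(g2^{-1} g3): the middle terms cancel.
steps = [compose(inverse(a), b) for a, b in [(g0, g1), (g1, g2), (g2, g3)]]
product = identity
for step in steps:
    product = compose(product, step)
assert product == g3  # we recovered g from the two-letter pieces
```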

That was the “moreover” part of the theorem.  The main thing is to show that G is quasi-isometric to X.  Let’s try the function g\mapsto g.x_0.

Above, we wrote g as a product of n elements of A, so that means that the length of g is at most n.  In other words, d_G(1,g)\leq n.  Now we’d like to bound it by d_X(x_0,g.x_0).  We found n by dividing the geodesic into pieces, so we have n\leq \frac{d_X(x_0,g.x_0)}{R/3}+1, where we added a 1 for the rounding.  So we have one side of the quasi-isometry: d_G(g_1,g_2)\leq \frac{3}{R}d_X(g_1.x_0,g_2.x_0)+1 (using the action by isometries).

Now we need to bound the other side, which will be like stepping back through our clever argument.  Let M be the maximum distance that an element of A translates x_0.  In symbols, M=\max_{a\in A} d_X(x_0,a.x_0).  Choose some g in G, with length n.  That means we can write g as a product of n elements in A: g=a_1\cdots a_n.  Each a_i moves x_0 at most M.  If we step between the partial products, we have d(a_1\cdots a_i.x_0,a_1\cdots a_{i+1}.x_0)=d(x_0,a_{i+1}.x_0)\leq M (using the action by isometries).  There are n steps from x_0 to g.x_0, and each step contributes at most M to the distance.  So d_X(x_0,g.x_0)\leq M d_G(1,g).
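Both inequalities are easy to sanity-check in the simplest example: G the integers acting on the real line by translation, with x_0 = 0 and, say, R = 3, so that A = {-6, ..., 6} and M = 6.  This is just a sketch of the statement, not of the proof:

```python
import math

R = 3
# A = elements g with g.B(0,R) meeting B(0,R), i.e. |g| <= 2R:
A = [g for g in range(-2 * R, 2 * R + 1)]
M = max(abs(a) for a in A)  # M = 2R = 6

def word_length(g):
    # Word length of g in the generating set A, which here is ceil(|g| / M).
    return math.ceil(abs(g) / M)

for g in range(1, 100):
    d_G = word_length(g)   # d_G(1, g)
    d_X = abs(g)           # d_X(x_0, g.x_0)
    assert d_G <= (3 / R) * d_X + 1   # the first bound from the proof
    assert d_X <= M * d_G             # the second bound
print("both quasi-isometry bounds hold for g = 1, ..., 99")
```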

With bounds on both sides, we can just pick the larger number to get our actual quasi-isometry.  We also need the function to be quasi-onto, but it is because the action is cocompact so there are translates of x_0 all over the place.

Huzzah!

The fundamental theorem of geometric group theory (part I), topology

24 Sep

I love the phrase “THE fundamental theorem of…” It’s so over the top and hyperbolic, which is unlike most mathematical writing you’ll run into.  So you know that it’s important if you run into the fundamental theorem of anything.  By now we all have some background on geometric group theory: you’ll want to know what a group action is and what a quasi-isometry is.  (Refresher: a group G acts on a space X if each group element g gives a homeomorphism of the space X to itself.  A quasi-isometry between two spaces X and Y is a function f so that distances between points get stretched by a controlled scaling amount + an additive error term).  We say a group G is quasi-isometric to a space X if its Cayley graph is quasi-isometric to X.  Remember, a Cayley graph is a picture you can draw from a group if you know its generators.

Still from Wikipedia: a Cayley graph of the symmetries of a square

There are several more terms we’ll want to know to understand the theorem, but I’ll just do one more before we start.  We say a group G acts on a space X by isometries if it acts on X, and each homeomorphism is actually an isometry (it preserves distance).  So for instance, the integers acting on the real line by multiplication isn’t by isometries, because each homeomorphism spreads the line out (so the homeomorphism of the reals to themselves given by 3 is x \mapsto 3x, which stretches out distances).  But if the action is defined by addition, then you’re okay: x\mapsto x+3 preserves distances.
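In code, checking both actions on a few sample points (any pairs of reals would do):

```python
# x -> x + 3 preserves distances; x -> 3x stretches them by a factor of 3.
def add3(x):
    return x + 3

def times3(x):
    return 3 * x

for x, y in [(1.0, 2.0), (-4.5, 0.25), (10.0, 10.5)]:
    assert abs(add3(x) - add3(y)) == abs(x - y)          # an isometry
    assert abs(times3(x) - times3(y)) == 3 * abs(x - y)  # not an isometry
print("addition acts by isometries; multiplication by 3 does not")
```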


Under the red function, f(2)-f(1)=6-3=3, but 2-1=1, so this isn’t an isometry.
Under the green function, f(2)-f(1)=5-4=1, which is equal to 2-1. This is always true, so this is an isometry.

So here’s the fundamental theorem:

If a group G acts properly discontinuously, cocompactly, and by isometries on a proper metric space X, then G is quasi-isometric to X. 

You can parse this so far as “If a group G acts by isometries on a space X with condition condition condition, then G is quasi-isometric to X.”  Putting aside the conditions for now, how would we prove such a theorem?  Well, to show something is quasi-isometric, you need to come up with a function so that the quasi-isometry condition holds: for all x,y in X, we need \frac{1}{K} d_G(f(x),f(y))-C\leq d_X(x,y) \leq K d_G(f(x),f(y))+C.

So let’s deal with those conditions!  An action is cocompact if there’s some compact subset S of X so that G-translates of S cover all of X.  Remember, each element g in G gives an isometry of X, so it’ll send S to some isometric copy of itself somewhere else in X.  In our example above, the integer 3 will send the compact subset [5,7] to the isometric copy [8,10].  In fact, our example action is cocompact: you can keep using [5,7] as your compact set, and notice that any point on the real line will eventually be covered by a translate of [5,7].  For instance, -434.32 is covered by [-435,-433], which is the image of [5,7] under the isometry given by -440.


This action is also cocompact. Here I have the plane, conveniently cut up with an integer lattice. Can you figure out what the action is? Hint: the red square is a unit square, and the pink squares are supposed to be various translates of it.

G acts on X properly discontinuously if for any two points x,y in X, they each have a neighborhood U_x, U_y so that only finitely many g make g.U_x\cap U_y\neq\emptyset.  Let’s look at our example action again.  If I take the points 4365.234 and 564.54 in the real line, I’d like to find neighborhoods around them.  Let’s choose the intervals [4365,4366] and [564,565].  The only integers that make these hit each other are -3802, -3801, and -3800.  In particular, 3 is finite, so this indicates proper discontinuity.  If we actually wanted to prove the action is properly discontinuous, we’d want to show this is possible for all numbers, not just these two specific ones I chose.
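Here’s a brute-force check of this example, using the closed intervals as written:

```python
def overlaps(a, b, c, d):
    """Do the closed intervals [a, b] and [c, d] intersect?"""
    return a <= d and c <= b

U, V = (4365, 4366), (564, 565)
bad = [g for g in range(-10000, 10001)
       if overlaps(U[0] + g, U[1] + g, V[0], V[1])]
print(bad)  # -> [-3802, -3801, -3800]: finitely many, which is the point
```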


Schematic of proper discontinuity: only finitely many g will send the yellow oval to hit the blue blob

Finally, a metric space X is proper if all closed balls are compact.  Balls are intuitively defined: they’re all the points that are at a fixed distance or less from your center.  In the plane, balls are disks centered around points.  And compact- well, aw shucks, I haven’t defined compact and we’ve been using it!  Time for some topology.  We’ll prove this theorem next time around; this post is just definitions and background.  (Sorry for the cliffhanger, but it should be clear what we’re going to do next time: make a function, show it’s a quasi-isometry).

Just like groups are the fundamental object of study in algebra, open sets are the fundamental object of study in topology.  You’re already familiar with one type of open set, the open interval (say, (6,98), which includes all numbers between 6 and 98 but doesn’t include 6 and 98).  I just described another above: balls.  So, open circles in the plane are open sets.  Sets are closed if their complement is open: that is, the rest of the space minus that set is open.  In the real line example, [6,74] is closed because (-\infty,6)\cup(74,\infty) is open (it’s the union of infinitely many open sets, say (74,76) with (75,77) with (76,78) and on and on).

Notice that I haven’t yet defined what an open set is.  That’s because it’s a choice- you can have the same space have different topologies on it if you use different definitions of open sets.  I’ll point you to these wikipedia articles for more examples on that.

A set is compact if every covering of it by open sets has a finite subcover.  That means that any time you write your set S as a union of open sets, you can choose finitely many of those and still be able to hit all the points of S.  From above, the set (74,\infty) is not compact, because you can’t get rid of any of the sets in that infinite covering and still cover the set.  On the real line, a set is compact if it’s closed and bounded (this is the Heine-Borel theorem, a staple of real analysis).

So that’s enough for today.  More next time (like a proof!)  Also, I’m using my husband’s surface to blog this, which means I did all the pictures using my fingers.  It’s like finger painting.  What d’you think?  Better than usual pictures, or worse?

Proof of Scott’s Criterion for separability (hard math) (UPDATED)

13 Sep

UPDATED: Thanks to my dear friend Teddy (who hasn’t updated his website and is at Cornell now, not UCSB), I’ve made the converse direction of the proof more correct.  There’s definitely still a glaring defect, but that’s entirely my fault.  

This is out of character for this blog- it’s not accessible for most people.  If you have taken a course in algebraic topology, you can read this post and I’ll explain everything.  Otherwise, I’m not offering enough background to understand it.  Sorry!  Blame pregnancy!

I’ve been spending the past few months slowly slogging through a big paper that my advisor recently cowrote, on an alternative proof of Wise’s Malnormal Special Quotient Theorem.  In the paper they spend a few paragraphs explaining Scott’s Criterion for separability, from Scott’s 1978 paper (need access for this).  I did not understand it when reading, but after meeting with my advisor and drawing some pictures it made a lot more sense!  So I’m going to draw some pictures for anyone trying to understand this- probably other graduate students.

Here’s the theorem as it appears in the MSQT paper.

Theorem (Scott, 1978) Suppose X is a connected complex and  H \leq \pi_1(X).  Then H is separable in \pi_1(X) if and only if for every finite subcomplex A \subset X^H, there exists an intermediate finite degree cover \hat{X} such that A embeds in \hat{X}.

Okay let’s unpack the theorem.  First we need to say what it means for a subgroup to be separable: H is separable in G if, whenever you pick an element g which is not in H, there exists some finite index subgroup K of G so that H is contained in K and g isn’t contained in K.  Intuitively, you can “separate” g from H via some finite index subgroup.  There are other equivalent definitions, but this is the one we’re going with.

Recall that a finite degree cover is a covering space where each point in the base space has finitely many preimages.


Left side is an infinite cover, the real numbers covering the circle. Middle is a happy finite cover, three circles triple covering the circle. Right is a happy finite cover, boundary of the Mobius strip double covering the circle.

Notation wise, X^H just means the cover of X corresponding to H, so that \pi_1(X^H)=H.  Also, recall that an embedding is a map which is a homeomorphism onto its image.  So, for instance, a circle definitely embeds into the middle cover above, but not into the infinite one.  You can map a circle injectively to a subset of the real line, say to [0,1), but it’s not a homeomorphism where the two ends meet.  And by connected complex let’s say CW-complex.

Okay it’s proof time!  For notation we’re going to let G = \pi_1(X).

Suppose that the condition is met, and we want to show that H is separable.  Take an element g not in H.  Since G is the fundamental group of X, g corresponds to a (class of homotopy equivalent) loop(s) in X.  Since g isn’t in H, its lift isn’t a loop in X^H- let’s say it’s a line segment.  By the condition, since this line segment is a compact subset of X^H, there exists some intermediate finite degree cover \hat{X} so that the line segment embeds into it.  This finite degree cover corresponds to a finite index subgroup K.  Since \hat{X} is intermediate, H is contained in K.  And since the segment embeds in \hat{X}, it still isn’t a loop there, so g isn’t in K.  So H is separable.

Here’s a schematic:


I feel like this picture is self-explanatory (this is a joke)

Okay let’s do it from the other side now.  Suppose H is a separable subgroup of G.  Pick a finite subcomplex A of X^H (the actual criterion just says compact, but we’re sticking with finite).  Look at all the elements g_i of G which have preimages in A- since A is finite, we only have finitely many of these.  For any given g_i, since H is separable we have a finite index subgroup K_i which doesn’t include g_i and which contains H.

I guess we still need to show A embeds- do you believe me that it does?  I’m not sure I believe me.

Pick a compact subcomplex A of X^H.  Since it’s compact, there are finitely many open sets that we need to consider, which cover A.  And since it’s a subcomplex of a CW-complex, this means we’re only looking at finitely many open cells in X^H.  These open cells project down to X, say in a set D.  Look at all the preimages of D up in X^H– there are infinitely many, since we assumed X^H is an infinite cover.  And A is one of the preimages of D by construction.  (Also let’s make D small enough so we have a “stack of pancakes” instead of batter all over the place).  Again, schematic:


I know it looks like three, but there are actually infinitely many preimages of the image of A

Now if we want an intermediate cover into which A embeds, it can’t include any elements g of G that send D to itself- that is, if g.D\cap D \neq \emptyset, we don’t want g in \pi_1(\hat{X}).  Because then A wouldn’t embed.  How many bad g are there?  Well, since deck transformations act properly discontinuously, for any point in X^H there’s an open neighborhood that never gets sent to itself (besides when g is the identity, of course).  And we’re in CW-complexes, so we mean an open cell.  Look at the other cells of this particular component of D.  Again by proper discontinuity, there are only finitely many g that’ll send this to some other copy of D.

Since H is separable, for any one of these bad g we have a finite index subgroup that doesn’t include it and which does contain H.  Take the intersection of all these subgroups– since they all contain H, this intersection (call it K) does too.  And K doesn’t include any of the bad g.  Back in topology-land, K corresponds to a finite degree cover of X, since the intersection of finitely many finite-index subgroups has finite index.  And this cover is intermediate by construction, and A embeds in it since there aren’t any bad g.

And that’s a proof of Scott’s criterion!  My next blog post will either be baking/cooking or a reasonable math post.

Homeomorphisms of the torus, part IV (topology of the identity component)

30 Dec

See Part I for a definition of homeomorphism and torus and Part II for a bit more linear algebra.  

I still owe a Part III for the explanation of the linear algebraic classification of homeomorphisms.  But let’s take a step away from linear algebra and look at shapes (my favorite!)

We know what homeomorphisms are (continuous functions with continuous inverses), with the famous example in the picture below: a coffee cup turns into a donut and vice-versa.

New meaning to cup of (j)Oe. From wikipedia

In fact, this picture doesn’t just show us the homeomorphism (which says where each point of the coffee cup gets sent to in the torus and vice-versa).  It also shows us a homotopy (remember the definition from this post)- essentially, because we can see it traveling through time and back, it’s a homotopy.  And in fact, this homotopy is an isotopy– a type of homotopy where at each frozen point in time, the image is homeomorphic to what we started with.  An example of a homotopy which is not an isotopy is the map that ends up sending x\mapsto -x, where x is a real number.  Homotopies take place over time, so I would actually write this map as \mathbb{R}\times [0,1] \to \mathbb{R}; (x,t)\mapsto (1-t)x+t(-x).  So when t=0, we have x mapping to x, and when t=1, x maps to -x.  One reason this isn’t an isotopy is because when t=1/2, all of the real line gets mapped to the point 0.  And mapping everything to 0 isn’t a homeomorphism (what would the continuous inverse be?)
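Here’s that homotopy in code, with the collapse at t = 1/2 made explicit:

```python
# H(x, t) = (1-t)x + t(-x): a homotopy from the identity (t=0) to
# x -> -x (t=1).  The time slice t = 1/2 sends every real number to 0,
# which is nowhere near a homeomorphism -- so this is not an isotopy.
def H(x, t):
    return (1 - t) * x + t * (-x)

assert H(5.0, 0) == 5.0    # the identity at time 0
assert H(5.0, 1) == -5.0   # x -> -x at time 1
assert all(H(x, 0.5) == 0.0 for x in [-2.0, 0.0, 3.7, 100.0])  # total collapse
```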

A big part of geometric group theory is using shapes to come up with algebraic theorems, and using algebra to come up with shapes.  One thing you can do (which we’re doing RIGHT NOW!) is take a shape, do some algebra, and then make a new shape.  To be specific, our first shape is the torus.  Our algebra was figuring out the group of homeomorphisms of the torus, also written as Homeo(T)- T for torus.  Sometimes you’ll see T^2, to specify that we’re talking about the 2-torus rather than a higher dimension (more on higher dimensional tori later.  Isn’t it cool that the plural of torus is tori?  Pronounced tor-eye.)  Now we’re going to make a new shape from this group of homeomorphisms.

We’ll only consider homeomorphisms isotopic to the identity, written as Homeo_0(T^2).  Starting in 1962 and finishing in 1965, badass Mary-Elizabeth Hamstrom proved in a series of papers that Homeo_0(X) is contractible (homotopic to a point) if X is a two-manifold with a short list of exceptions [torus, sphere, plane, disk, annulus, disk with a hole in it, plane with a hole in it.]


Let’s look at our current favorite from this list, Homeo_0(T^2).  If we start from the identity, which homeomorphisms can we isotope to?  Well, I can rotate my torus around its hole-axis, and that ending homeomorphism is definitely isotopic to the identity (the rotation through time is the isotopy; where the points end up is the ending homeomorphism).

Orange dot moves to red dot.

Since I can rotate by all the degrees up to 360, which brings me back to the identity, this means that Homeo_0(T^2) contains a circle- each point on the circle represents rotating the torus by that many degrees.

What else can I do that’ll be isotopic to the identity?  I can rotate the torus around its center circle (running through the middle of the donut), like if I was wringing out a towel.

Again, orange dot to red dot.

Again, I can do this for 360 degrees before coming back to the identity.  So there’s another circle, different from the first one, in Homeo_0(T^2).

I can also do any combination of these two: I can rotate 27 degrees around the hole-axis, and then 78 around the center circle.  This works for any pair of angles between 0 and 360, where 0 and 360 give the same homeomorphism in each direction.  So far, we have the picture on the left.  I colored it to indicate that 0=360 on both axes.  Look familiar?

torus

Notice that all four corners are the same homeomorphism: the identity can be had by rotating by 360 degrees in either direction, or in both directions, or by doing nothing.  So we’ve shown that Homeo_0(T^2) actually contains a torus!  This is cool because all those other Homeo_0(X) were contractible.  In fact, Homeo_0(T^2) is homotopy equivalent to a torus- up to isotopy, we’ve described all the homeomorphisms of the torus which are isotopic to the identity: combinations of rotations around the hole-axis and around the center circle.
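Here’s a minimal sketch of that angle bookkeeping (Python; this representation is my own way of encoding the picture, not anything from the post): each of these homeomorphisms is a pair of angles (a, b) mod 360, and composing two of them just adds angles coordinate-wise, so all four corners of the square really are the identity (0, 0).

```python
def compose(h1, h2):
    """Compose two rotation-homeomorphisms of the torus, each recorded as
    (a, b) in degrees: a = rotation around the hole-axis, b = rotation
    around the center circle.  Composition adds angles mod 360."""
    return ((h1[0] + h2[0]) % 360, (h1[1] + h2[1]) % 360)

identity = (0, 0)

# Rotating a full 360 degrees in either direction, or in both, is the
# identity again -- the four corners of the square all collapse:
print(compose((360, 0), identity))    # (0, 0)
print(compose((360, 360), identity))  # (0, 0)

# The example from the text: 27 degrees around the hole-axis, then 78
# around the center circle, is the single pair (27, 78).
print(compose((27, 0), (0, 78)))      # (27, 78)
```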

Personally I find this much more exciting than classifying the homeomorphisms by trace (yeup that’s happening in Part III, nope Part IV is coming out before Part III and you’re going to like it), probably because it involves shapes rather than numbers.

Update on health: I’m taking antibiotics to help with whooping cough.  So that explains why I’ve been sick for a month.  My boyfriend walked by and asked what I was doing an hour ago, and I told him I was very busy feeling sorry for myself.  Then I wrote this blog post to be less lump-ish.
