
On eponyms

7 Mar

I recently listened to two episodes of a podcast which were entirely dedicated to eponyms.  The first episode of the Allusionist was cute and fun, about how British people call pens “Bics” or “Biros,” while the second episode is a bit darker and has to do with medical terminology.  For instance, by this point “Downs syndrome” has entered popular culture, so even as the medical community starts calling it Trisomy 21 (fun fact: my first prenatal test with this pregnancy came back with a high risk of Trisomy 21, so I took a second genetic test which cleared me), it’s unclear if the name will ever change in our minds.  But why should medical conditions be named after generally egotistical men who “discovered” them?  I think it’s ridiculous that Braxton-Hicks contractions are named after this English dude who “discovered” them in 1872, while women have been having false or practice contractions LITERALLY FOREVER.

This comes up a fair bit in math, as we like to name things after people but then later change the name to make more sense, OR vice versa.  For instance, “Outer Space” is actually written as CV_n, which stands for Culler-Vogtmann space, even though everyone says “outer space” aloud.  Funnily, in the article I just linked Vogtmann writes it as \mathcal{O}_n, but I haven’t seen anyone else write it that way.  Another funny one is right-angled Artin groups, which were originally called “graph groups” but now everyone says “raags.”  Incidentally this is a great introduction to RAAGs (sometimes written raAgs).

Some spaces don’t have any alternative names and should.  The one I’m thinking of now is Teichmüller space: every day dozens of mathematicians and physicists refer to this space and the accompanying theory, which feels like we’re honoring Teichmüller.  This is not a person whom I particularly want to honor every day, but like the Downs syndrome problem, I doubt we’ll be able to change the name to “complex structure space” or “marked surfaces space.”  I didn’t know any of this stuff about Teichmüller until reading a wonderful interview of Autumn Kent by Evelyn Lamb.  Here’s a pull quote; most of it is Autumn, and the Note is by Evelyn.

There is a dangerous amount of tolerance of intolerable people in academia based on the principle that we are all dedicated to the pursuit of knowledge and beauty and that a person’s academic work makes them a person worthy of mutual respect. This principle is wrong.

Bers famously quoted Plutarch in defense of his admiration for Teichmueller’s work: “It does not of necessity follow that, if the work delights you with its grace, the one who wrought it is worthy of your esteem.” This is of course true, but Teichmueller was still a piece of sh*t and if he were alive today I would not be his friend on Facebook. [Note: Oswald Teichmueller (1913-1943) was a German mathematician and literally a card-carrying Nazi. As a student, he organized a boycott of Edmund Landau, a Jewish math professor at the University of Göttingen. He was killed fighting for the Third Reich in World War II.] I would not invite him to an academic conference. The pursuit of knowledge and beauty is admirable, but it should not be undertaken at the expense of the bodies and souls of marginalized people. If my work would result in violence I would abandon it.

There are a LOT of goodies in that interview and I highly, highly recommend it.  In fact I wrote this entire post just to share this interview with you, but I snuck it in via eponyms (and also I’ve been having a lot of practice contractions lately and wanted you to know.  Due date is March 29!)


And now for something completely different: cognitive neuroscience!

10 Jan

I sometimes trawl arxiv.org for short math papers to read, and occasionally I even blog about them (see: curve complex I and II), though generally my math blog posts arise from interesting talks I’ve seen (see: most of the rest of my math posts).  Recently a friend sent me a job listing that would require a Ph.D. in biology or similar, but the real job requirement is an ability to read biology papers.  The only related category on arxiv is “quantitative biology,” so I thought I’d try to bring up a short paper and read it and blog about it to see how I do.  Any cognitive neuroscientists who might read this, let me know if my reading is correct!

This post is based on the paper “Deep driven fMRI decoding of visual categories” by Michele Svanera, Sergio Benini, Gal Raz, Talma Hendler, Rainer Goebel, and Giancarlo Valente.  First, here’s my schematic of the paper:

data.jpg

We’ll read this schematic from top to bottom, left to right.

  1. On top is the experiment: they had a lot of people watch 5-10 minute movies.  The left white arrow indicates that the people were in fMRI machines (I know an fMRI machine does not look like an EEG, but that’s the picture you get), so they have a bunch of data sitting around from that.  The right white arrow indicates that they used a computer algorithm (“math!”) to extract information directly from the movies [this is the fc7 data].  So far they haven’t contributed anything new to the literature; they just used existing techniques to come up with raw data.
  2. The orange diagonal arrows are when things get interesting.  The fMRI data and fc7 data come in giant matrices, and they use another math algorithm to come up with a set of “decoding” matrices.  Not pictured in the schematic: they test these matrices using some of the data.
  3. The goal is indicated by the green arrows: to use the brain data and these matrices they came up with to reconstruct what people are seeing and classify these things (aka are subjects seeing people’s faces on the screen, or entire human figures?)

Now for a few details on each of the steps.

0. The motivation behind the paper seems to be to link the brain imaging community (those who work with fMRI, EEG, etc. data) with the deep neural network community (computer people) to answer questions that involve both.  The main question they have is: how do people associate low-level information like colors, shapes, etc. with semantic concepts like car, person, etc.?  Here’s the picture:

braininfo

Eyes see a vague shape + different colors [low-level information]; brain tells us whether it’s a person or a tree with the sun behind it [semantic concepts]

There’s a lot of work in both communities on answering this question, and this paper uses work from both sides to form a decoder model: with an input of fMRI data, the model spits out predictions about what the subjects are seeing.  Specifically, the model is supposed to tell if subjects were looking at human faces or full human figures.  This is hard!  Those are pretty similar categories.

  1. The data: they grabbed a bunch of existing data from other experiments, where scientists took 5-10 minute clips from five different movies (side note: I would never want to be in these studies, because one of the clips was from The Ring 2) and showed them to subjects (ranging from 27 to 74 participants per movie) and recorded all the fMRI data, which creates a huge three-dimensional dataset every 3 seconds.  Then they threw the movie frames into a computer algorithm (called the faster R-CNN method) which detects objects in the video frames (with varying confidence levels) and spits out a 4096-dimensional vector for each frame.  They averaged these vectors over 15 frames so that the two datasets match up (the movies were shown at 5 frames per second, so 15 frames is exactly the 3 seconds between fMRI scans).  These vectors form the fc7 data.
  2. The math: they use an algorithm called Canonical Correlation Analysis (CCA) to spit out two maximally correlated projections U and V (hence the middle C), via a pair of helper matrices A and B.  Looks like linear algebra with some linear projection!  The schematic is fMRI \cdot A = U and fc7 \cdot B = V.  To do this, they took a subset (about 75%) of the fMRI data and the corresponding fc7 data and plugged it into the math.  The goal of this step (the training step) is actually to get the helper matrices A and B.  To make sure these matrices are a-OK, they used the remaining fMRI data to reconstruct the fc7 data within a reasonable margin of error: fMRI \cdot A = U \rightarrow V \cdot B^{-1} = fc7.  Remember U and V are highly (in fact maximally) correlated, so that middle arrow actually makes sense in this step (the testing step).
  3. The result: For one movie, they did the training math step using different subsets of data (they did it 300 times) to make sure those helper matrices A and B are the best possible ones.  Then, to show that this whole paper does what they want it to do, they do the testing step using the other movies.  [The whole point of a decoding method is to predict what people are seeing.]  They then try to classify whether subjects see faces or bodies using their method (the fancy fc7 method) and another method (some linear thing), and show that their method is way better at this discriminating task than the other method.  Fun caveat they had to think about: it takes people a little while to react to stimuli, so they had to toss in time-shifts for the fMRI data, and also throw in a regularization parameter to normalize the data.
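To make the training and testing steps concrete, here’s a toy version of CCA itself.  This is my own sketch, not the authors’ code: every size, variable name, and data set here is made up, and real fMRI pipelines would regularize the covariance estimates rather than invert them naively.

```python
# Toy train/test pipeline in the spirit of the schematic above.
# All data here is synthetic -- a shared "stimulus" signal z drives
# both fake fMRI and fake fc7 features.
import numpy as np

def cca(X, Y, k):
    """CCA via whitening + SVD: returns helper matrices A, B such that
    the columns of X @ A and Y @ B are pairwise maximally correlated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxx, Syy = X.T @ X / len(X), Y.T @ Y / len(Y)
    Sxy = X.T @ Y / len(X)

    def inv_sqrt(S):  # S^(-1/2) for a symmetric positive-definite S
        w, Q = np.linalg.eigh(S)
        return Q @ np.diag(1 / np.sqrt(w)) @ Q.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, _, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k]

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))                 # shared "stimulus" signal
fmri = z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
fc7 = z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(200, 8))

# Training step: fit the helper matrices A and B on ~75% of the data.
A, B = cca(fmri[:150], fc7[:150], k=2)

# Testing step: project the held-out data; U and V should still be
# highly correlated, since both are driven by the shared signal z.
U, V = fmri[150:] @ A, fc7[150:] @ B
corr = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
print(corr)
```

On this synthetic data the held-out correlation comes out close to 1, which is the whole point of the testing step: the matrices learned on training data still align the two views on unseen data.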

Conclusion: their method works on this preliminary result (faces versus bodies)!  They want to expand to other movies and other semantic concepts in the future.

General keywords: machine learning, fMRI, linear algebra.  Also CCA, faster R-CNN, fc7, but those are keywords for specialists.

My conclusion: this was cool and fun!  I like reading new things and learning.  I hope you do too!

The (2,3,7) Triangle Group

8 Nov

Hi!  Today we’re going to use some stuff we learned about a long time ago (non-Euclidean geometry, manifolds, and groups) and put it together to explore a particular group.  This is based on a talk given by my dear friend and co-organizer Michelle Chu.  “Co-organizer?  Of what?” you ask.  Well, thanks for asking!  Last weekend we held the first Texas Women in Mathematics Symposium – we had over 60 people come, lots of talks, lots of networking, and lots of food.  By the end of the day I got to add “annual” to that description, and it seems like a lot of schools were interested in hosting it in future years.  Maybe some time I’ll write a post about how to found an annual conference (this is my second!).

Anyways, let’s first talk about tilings by triangles.  We first choose some integers p, q, r and set the three angles of a base triangle equal to \frac{\pi}{p}, \frac{\pi}{q}, \frac{\pi}{r}.  Now reflect over each of the three sides to start tiling your space.  Turns out this tiling will lead to a group.  Here’s an example with p=q=4 and r=2 (so we have a right isosceles triangle):

firstimgs

Start with the pink triangle, and reflect it over each of the three sides to make the colored triangles as shown.

secondimgs

Now do the reflections again.  I kept the pink base triangle and grayed out the first images.  Note that I colored the bottom left image yellow, for reflecting over the vertical side of the bottom orange triangle, but I also could color it orange, for reflecting over the horizontal side of the left yellow triangle.  This means that yellow+orange = orange+yellow in the group.

thirdimgs

A third iteration of the same process; there are more relations here (that I didn’t color)

I picked a particularly good example, so that my triangles could tile the Euclidean plane.  We learned some time ago about non-Euclidean geometries: the space is spherical if the sum of the triangle’s angles is more than \pi, and hyperbolic if triangles are thin and their angle sum is less than \pi.  So based on how I choose my p, q, and r, I’ll find myself tiling different spaces.  Here’s an example of one iteration on a sphere for p=q=2 and r=5:


It’s pretty easy to find the integer solutions for p, q, r to tile each space.  The only triangles that tile the flat plane are when (p,q,r) = (2,3,6), (2,4,4), and (3,3,3).  We already saw (2,4,4), and I’ll just tell you that (3,3,3) is when you use equilateral triangles (so there are 6 around each vertex), and (2,3,6) are those 30-60-90 triangles we learned about back in trigonometry class: here’s the picture from wikipedia:

Similarly, there are only a few (relatively) that tile the sphere: (2,3,3), (2,3,4), (2,3,5), and (2,2,n), where n is some integer bigger than 1.  Of course this forms an infinite family of tilings, since you can choose n.  In the example above I chose n=5, and if n is bigger the base pink triangle just gets skinnier.

But I say there’s only a few that tile the sphere because everything else tiles the hyperbolic plane.  That’s a lot of choices!  It might also make you think, “hm, maybe the hyperbolic plane is interesting.”
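If you want to check which geometry a given (p,q,r) lands in, the test is just the angle-sum comparison from above.  Here’s a quick sketch (mine, not from Michelle’s talk): comparing \frac{\pi}{p}+\frac{\pi}{q}+\frac{\pi}{r} to \pi is the same as comparing \frac{1}{p}+\frac{1}{q}+\frac{1}{r} to 1.

```python
# Classify the geometry tiled by the (p, q, r) triangle by comparing
# 1/p + 1/q + 1/r to 1 (equivalently, the angle sum to pi).
from fractions import Fraction

def geometry(p, q, r):
    s = Fraction(1, p) + Fraction(1, q) + Fraction(1, r)
    if s > 1:
        return "spherical"
    if s == 1:
        return "Euclidean"
    return "hyperbolic"

print(geometry(2, 4, 4))   # the right isosceles example: "Euclidean"
print(geometry(2, 2, 5))   # the sphere example: "spherical"
print(geometry(2, 3, 7))   # "hyperbolic"
```

Using exact fractions (instead of floats) means the borderline Euclidean case is detected exactly.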

Let’s bring this back to groups.  How does a tiling of a space lead to a group?  Well, let the reflections over the (extended) sides of the base triangle be the generators of our group.  If I name these a, b, and c, I immediately get the relators a^2=b^2=c^2=1.  Next we have to figure out the rest of the relators.  I hinted at them in the pictures above.  They are (ab)^p=(bc)^r=(ca)^q=1.  Now we have a group presentation [link for definition] \triangle(p,q,r)\cong \langle a, b, c \ | \ a^2=b^2=c^2=(ab)^p=(bc)^r=(ca)^q=1\rangle.

Also, fun coincidence: if you create the dual graph to the tiling by putting a vertex inside each triangle and connecting two vertices by an edge if one triangle is the image of the other under one of the reflections, you get something that looks a lot like the Cayley graph of the reflection triangle group.  The only difference is that each edge needs to be two edges (like a little loop) to reflect that each generator has order 2.

So what’s special about the (2,3,7) triangle group?  We know from above that it tiles the hyperbolic plane.  Check out this great picture from wikipedia of the tiling:

h2checkers_237

Maybe we’ll take a second here to point out that you can see the p, q, r values in the tilings, both in this picture and the other wikipedia picture above for (2,3,6): pick your favorite triangle and look at its three vertices.  Each vertex is surrounded by some number n of triangles, and since there are 2\pi radians of angle around any vertex, the corresponding value of p, q, or r is just \frac{n}{2}.

Also, of all the integers you can pick for p, q, r, it turns out that 2, 3, and 7 maximize the sum \frac{\pi}{p}+\frac{\pi}{q}+\frac{\pi}{r} while staying less than \pi.  [It ends up giving you \frac{41\pi}{42} for those of you keeping track at home.]
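This claim is easy to brute-force.  Here’s a sketch of mine: the bound of 50 below is an arbitrary assumption, but a safe one, since making p, q, or r larger only shrinks the sum further.

```python
# Brute-force check that (2, 3, 7) maximizes pi/p + pi/q + pi/r among
# triples whose angle sum stays strictly under pi.
from fractions import Fraction

best_sum, best_triple = max(
    (Fraction(1, p) + Fraction(1, q) + Fraction(1, r), (p, q, r))
    for p in range(2, 50)
    for q in range(p, 50)
    for r in range(q, 50)
    if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) < 1
)
print(best_triple, best_sum)   # (2, 3, 7) 41/42
```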

So we maximize something with the numbers 2, 3, 7.  Well, it turns out we do more than that: we also minimize the volume of the resulting quotient (we touched on and defined quotients in this post about manifolds).  And this quotient is unique (up to conjugation/fiddling), making it the smallest possible quotient.  Huzzah!

On a personal note, I’ve had a demonic cold keeping me in bed for the past two weeks, so forgive me if I messed up (pretty sure I did on the reflections; I’ll try to fix those soon).  Also, hope you voted today!  I voted a week and a half ago.

Carnival of Mathematics 138

19 Sep

I am super pumped to be hosting the 138th Carnival of Mathematics, a monthly round-up of great stuff on the internet that has to do with math.  Last month it was hosted over on the AMS Blog on Math Blogs by Anna Haensch (who also has a lovely podcast called The Other Half) and Evelyn Lamb (my fairy blogmother, who also writes for Scientific American, Slate, etc.).  Curiously, the first thing that came up when I looked up “138” was a 1978 song by the horror punk band The Misfits: https://www.youtube.com/watch?v=SOqVs-K1eEo

A little bit catchy, but not quite my cup of tea.  What is my cup of tea?  An integer sequence that does not appear in the amazing OEIS!  138 is the smallest product of three primes such that the third is the concatenation of the first two: so 138 is 2\cdot 3\cdot 23, 777 is 3\cdot 7 \cdot 37, and 4642 and 10263 are the next in this sequence (I applied for an account to OEIS to submit this, so if you find a smaller one or the next several in the sequence let me know).  So off we go, to venture into the great unknown (aka the internet)!  This month we have a fun mix of grammar, history/sociology, math, and games!
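Before the carnival proper, here’s a quick script of mine to check that prime-concatenation claim, taking the primes in increasing order p \leq q:

```python
# Products p*q*r of primes with p <= q, where r is the decimal
# concatenation of p and q and is itself prime.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

primes = [m for m in range(2, 100) if is_prime(m)]
products = sorted(
    p * q * int(str(p) + str(q))
    for p in primes
    for q in primes
    if p <= q and is_prime(int(str(p) + str(q)))
)
print(products[:5])   # [138, 777, 4642, 10258, 10263]
```

Funnily, if I’ve set the rule up correctly, this search also turns up 10258 = 2\cdot 23\cdot 223 sneaking in just before 10263, so there may be one more term than I thought (I did ask you to let me know!).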

  • First, grammar: Manan Shah over at Math Misery [which hosted CoM #136] posted Does 11 = 8 + 3? to chat about linguistics and mathematics.  If you love those memes about the Oxford comma, this post might be right up your alley.  My favorite sentence: “I bought orange juice and dishwasher detergent and drank it for breakfast.”  He argues we should think more about linguistics when teaching math, and about the implicit biases that language might have on students as they learn mathematical reasoning.
  • Also in grammar, Thomas Oléron Evans of Mathistophiles posted Understanding an Abused Unit: The Kilometre-Hour, which delves deeper into a specific example of how common language can hurt mathematical understanding, and he uses a DELIGHTFUL example of turtles.  Favorite sentence here: “Unless they sidestep across the beach, of course, like some sort of synchronised reptilian dressage.”  Here’s an amazing graph from his example:
turtle_time_space_diagram

A delightful graph from Mathistophiles

  • Keeping up with his fun theme, Thomas submitted a SUPER FUN game by David Peter called cube composer, which is all about functions.  Please do go play it!  It’s so fun!  It’s an intuitive individual puzzle game, and you can fly through the levels.  It also remembers your progress, but you can skip levels if you want to.  Here’s a screenshot of me picking random functions and not solving the puzzle:
cubecomposer

A screenshot of me poorly playing cube composer

  • A high school math teacher friend texted me about another game called Square It on a website for math educators supported by the University of Cambridge.  We proceeded to lose, lose, lose, and continue to lose against the computer.  I played against my spouse a few times, and he won his first time.  Those darn programmers and their algorithm-minds!  For the record I did eventually win.  This one is faster and lighter than cube composer, and it offers different mathematical questions to think about in different size grids.
squareit

Here I am losing in Square It to a computer

Square It! is also great to play with kids.  Two more submissions are also kid-friendly.  

  • Brent Yorgey of Math Less Traveled submitted his post about Making Tesselations, which delves into the math behind the delightful new children’s book Tessalation! that I will definitely be reading to my toddler.  In his post he also talks about ants on a doughnut, and y’all know how we mathematicians love our analogies to lean toward the absurd.  Here’s a wonderful joke Brent nonchalantly sneaks into his post:  “The ant is so small that it can’t tell that the surface it lives on is curved. To the ant, it just looks flat. (You may know some tiny creatures in a similar situation who live on a sphere.)”  So fun.
  • Matthew Scroggs submitted a post dear to my heart, written by Belgin Seymenoglu over at Chalkdust Magazine: an ode to the delightful half-hour film Donald in Mathmagic Land.  The post includes a link to the 1959 cartoon, which I’ve watched many times since I first saw it when I was 15.  I guess this is a fun month, because I really, really recommend watching this cute and funny cartoon that includes actual math in it.

Now for a sort of random assortment of history/sociology/whatever posts that have to do with math:

  • Paul Plowman wrote a 2 minute 20 second guitar song using the number e, and posted about it over on his blog as Transcendental Music.  Sort of a silly but fun little exercise like memorizing digits of \pi, and the riff sounds nice.
  •  Poppy Brady submitted a story she wrote for The Voice about Nira Chamberlain, titled Teachers said I could be a boxer, but Never a Mathematician, one of those feel-good stories about mathematicians.  One fun quote by Chamberlain: “I also like what British mathematician Sir John Kingman once said that ‘mathematicians are better if they stay a bit childish and play the game as a game.’”  I think that keeps in line with our theme this month!
  • Over in Nature, Davide Castelvecchi wrote a news story about how the Majority of mathematicians hail from just 24 scientific ‘families’, a result by Floriana Gargiulo who analyzed the Mathematics Genealogy Project.  Every grad student has used the MGP to stalk their advisor/potential advisor to see their lineage, and you can print out a tree and give it to your advisor as a defense present!  So it was fun to read about someone actually analyzing this trove of data.
  • Finally, saving the best for last, the brilliant Evelyn Lamb, explainer extraordinaire, wrote a post on Roots of Unity at Scientific American about How Converting Between Addition and Multiplication makes Math Easier.  If you don’t follow/read everything that Evelyn writes, you really should.  So approachable and so lucid.  She also wrote a fun piece about using different bases for your birthday candles, so you should read these two articles, follow her on Twitter, and tell her happy birthday!

If you run into anything interesting on the math-internet from now until October 15th, submit them to the Carnival of Mathematics site; the Carnival will be hosted next month at the online magazine Gonit Sora.  Hope you had as much fun as I did with the submissions this month!

Phylogenetic trees

2 Aug

I just listened to a two-hour talk on phylogenetic trees, and they seem fun enough that I thought I’d share them with you!  Sorry I literally forgot to post last week, and then I realized I did not take notes on the stuff I wanted to post about (more pictures by Aaron Fenyes).  Here’s a photo that I took and wanted to explain:

20160719_114650

Something twisted representation something skein algebra something unit tangent bundle

Instead, you’ll read about the basics behind the research of my friend Gillian (no website), which is being supported somehow by people who want to know about cancer.  First, some background: the study of phylogenetic trees is inspired by and informs applications in evolutionary biology.  All of the life-forms today (or at some slice of time) trace back to a root life-form at time 0.  Each current life-form follows a series of branchings away from the root to get to where it is now.

Then we can represent this evolution via a labeled rooted binary tree: the root represents the root life-form at time 0, and each branching of the tree represents a different evolution.  The labels mark which life-form is which.  Of course this model isn’t perfect (I can’t find the word for it, but it’s a thing where two different species evolve separately from the same ancestor, then meet again and make one species; if we were to represent this information in a graph, it’d make a cycle and not be a tree), but it’s been fruitful.

Spindle_diagram

The rooted binary tree of the wikipedia picture: node 0 is the root life-form, then 1-7 are the life-forms at our current time.

Now let’s mathify this.  We’d like to encode the evolutionary information into our tree.  We’ve already decided that all life-forms will end at the same time (now), so if we just assign lengths to each of the non-leaf edges this will automatically determine the lengths of the leaf edges.  A leaf in a tree is a vertex with only one neighbor, and we call the edge leading to that vertex a leaf-edge.  Let’s call the non-leaf edges interior edges.  In the picture above, we have 5 non-leaf edges, which determine a tree with 7 leaves.  Using this exact configuration of labels and edges, we have five degrees of freedom: we can make those interior edges whatever lengths we want, as long as they are positive numbers.  So in math-terms, the set of phylogenetic trees (aka rooted, binary, labeled trees) in this configuration forms a positive orthant of \mathbb{R}^5.  You can smoothly change any one of the edges to a slightly longer or shorter length, and still have a phylogenetic tree with the same combinatorial data.

octant

This is from the paper I’m writing, but it does show that in 3D, there are 8 orthants cut by the three axes (red dot is the origin).  The pink box represents a single orthant.

What about phylogenetic trees with different combinatorial data?  Say, with different labels or different branching, but the same number of leaves and the same number of interior edges?  First we need to figure out what we mean by ‘different’.  For instance, the following picture from the seminal paper in this field shows three trees that don’t immediately look the same, but that we don’t count as different:

Why aren’t they different?  Because they encode the same data for each life-form: reading from node 0 we see that first 1 branches off, then 2, then 3 and 4 in all three cases.  There’s some combinatorics here with partitions that you can do (one can label a tree with a set of partitions).  However, changing the labels so that first 2 branches off, then 1, then 3 and 4 will be a different phylogenetic tree.  In fact I can smoothly go from one to the other in the space that we’re creating: first I shrink the length of the green edge below to zero, which takes us to the middle tree (not binary!), and then extend the blue edge.

axis

Shrink the colored edges to get the same tree in the middle (not a binary tree)

We’re going to add these non-binary trees, with one fewer edge length, to our space.  Remember the tree on the left has an entire positive orthant, and the tree on the right has an entire positive orthant.  Shrinking the green length to zero means that we’re moving to one side of the left orthant: so we add this axis to our space (we take \{(x,y)\in \mathbb{R}^2: \ x,y\geq 0\} instead of strictly greater than 0).  We can glue the green and blue orthants together along this axis.  Here’s a picture from the paper:

spacetrees

Notice that they also have the origin filled in, with a tree with no interior edges.  This is the cone point of this space.  Now we’re finally ready to describe the space of phylogenetic trees: within each combinatorial structure/labeling, we have a Euclidean orthant in (n-2) dimensions.  Then these orthants are glued together along their axes in a specific way, and all of them are connected to the cone point.  This is called BHV(n), short for Billera-Holmes-Vogtmann space (in the paper they call it T(n) but that’s confusing to everyone else).  Here’s the picture of T(4):

t4

Each triangle represents an infinite orthant

There are 15 different orthants glued together in this picture, because the number of labeled rooted binary trees on n leaves is (2n-3)!!.  The double !! means you only multiply the odd numbers, a.k.a. (2n-3)(2n-5)(2n-7)\cdots 3\cdot 1.  This is also known as Schroeder’s fourth problem, which as far as I can tell was open for 100 years.  Pretty cool!
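Here’s a quick sketch (mine) of that orthant count:

```python
# The number of labeled rooted binary trees with n leaves, i.e. the
# number of orthants in BHV(n): (2n-3)!! = (2n-3)(2n-5)...(3)(1).
def num_trees(n):
    count = 1
    for k in range(2 * n - 3, 1, -2):
        count *= k
    return count

print([num_trees(n) for n in range(3, 8)])
# [3, 15, 105, 945, 10395] -- n = 4 gives the 15 orthants in the picture
```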

If you truncate BHV(n) so it’s not infinite (just pick some compact bound), then it forms a nonpositively curved cube complex, and we love those!  CAT(0) cube complexes are great.  I haven’t blogged too much about them (first terrible post and then those truncated Haglund notes) but they are the basis of all that I do and the number one thing I talk about when I give math talks.  Whoops!  The gist is that you glue cubes together in not-terrible ways, and then the resulting complex has great and fun properties (like you can cut it in half the way you want to).

That’s about all I have to say about this!  Gillian is working on some stuff about putting a probability measure on BHV(n) [you can’t do it with certain conditions], embedding it into a small enough Euclidean space that still preserves some of its features, and finding an isometrically embedded copy of the phylogenetic tree inside BHV(n) instead of just the coordinate point.  Also, fun fact to prove to yourself (actually please don’t scoop my friend), find the automorphism group of BHV(n)!  It’s just the symmetric group on some number that has to do with n (n+1 or something like that; I can’t remember and didn’t take notes).

Again, the main reference for this is the seminal paper that should also be accessible as it’s meant for biologists and statisticians.

Minimum rank of graphs with loops

28 Jun

A few months ago I was one of two Distinguished Graduate Speakers at this awesome conference, Underrepresented Students in Topology and Algebra Research Symposium (USTARS).  The conference started in 2011, when a bunch of advanced grad students and postdocs (I don’t think anyone on the committee was a professor yet) decided that there needed to be a place for underrepresented students to see other people like them doing the math they did.  And now they’re all professors and still organizing this traveling conference!  So many people of color!  So many women!  So many homosexual people!  (Not so many trans people…) So many first-generation college students!  So great!  Too many slides!  (I pretty actively hate slide talks unless there are pictures that are impossible to make on a chalk board.)

YenTalk

Credit: Erik Insko for this picture of me giving a talk at USTARS 2016!

Anyway, I wanted to blog about some math I saw there, based on a talk by Chassidy Bozeman, a grad student at Iowa State and a fellow EDGE-r.  The talk is from a paper that resulted from her graduate research course, which Iowa offers to intermediate grad students and gets them started on research (hence the eight authors on that paper).  I thought it was fun and approachable!  And I have one crumpled page of two-month old notes, so we’ll see how this goes.

First, let’s remember what a graph is: a collection of vertices (dots) and edges (lines between dots).  A graph is simple if there’s at most one edge between any two vertices, and if no vertex has an edge to itself (aka a loop).  A loop graph allows loops, but still no multiple edges.

graphs

Left is a simple graph, right is a loop graph

You can associate an infinite family of symmetric matrices to a loop graph.  These square matrices have one row and one column for each vertex of the graph, and the entry a_{ij} will be 0 if and only if there is no edge between vertices i and j in the graph.  The rest of the entries can be any nonzero real numbers.  This infinite family is useful for defining the minimum rank of a loop graph: it’s the minimum rank over all matrices in this infinite family.  The rank of a matrix is a measure of how much independent information it carries (the dimension of the span of its columns).  For definition by example, \left( \begin{array}{cc} 1 & 0 \\ 1 & 0 \end{array} \right) has rank 1, and \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) has rank 2.
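Here’s a tiny numpy check of those ranks, plus a toy minimum-rank example.  This is my own sketch, not anything from the paper:

```python
# The two example matrices above, checked with numpy.
import numpy as np

r1 = np.linalg.matrix_rank(np.array([[1, 0], [1, 0]]))
r2 = np.linalg.matrix_rank(np.array([[1, 0], [0, 1]]))
print(r1, r2)   # 1 2

# A toy minimum-rank computation: the loop graph on two vertices with
# both loops and the connecting edge allows any symmetric matrix whose
# entries are all nonzero; the rank-1 choice below witnesses that this
# graph's minimum rank is 1 (less than its size, 2).
r3 = np.linalg.matrix_rank(np.array([[1, 1], [1, 1]]))
print(r3)   # 1
```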

So the goal of Chassidy’s talk was to characterize all loop graphs whose minimum rank is the full size of the graph.  A couple more definitions before the theorem: a generalized cycle is a subgraph whose connected components are either a loop, a single edge, or a cycle.  It’s spanning if it touches every vertex in the graph.  Spanning generalized cycles don’t always exist!

generalizecycle

Components of a generalized cycle: loop, edge, cycle

nonexist

No spanning generalized cycle, womp womp

Theorem: A loop graph has minimum rank equal to its size if and only if there exists a unique spanning generalized cycle.

Quick nonexample: here’s a picture of a loop graph that has more than one spanning generalized cycle, so from the theorem we know it doesn’t have minimum rank four.

nonunique

This is theorem 3.1 in their paper.  It’s also comically the end of my notes on this talk.  Here are a few facts they use to prove the theorem:

  • If H is an induced subgraph of G (that means the vertices of H are a subset of the vertices of G, and its edges are all the edges of G within that subset), then its minimum rank is bounded above by the minimum rank of G.  Succinctly, mr(H)\leq mr(G).
  • If G is a disjoint union of a bunch of connected components G_i, then its minimum rank is the sum of the minimum ranks of those components: mr(G)=\sum mr(G_i).

Here’s a picture of my notes!  Read the actual paper if you want to know more!


Subgroup separability problem set session (non-elementary)

22 Jun

Update #1, five hours later: Zach Himes posted a great comment fixing my iffy part, and I also video chatted with Nick Cahill about it and we came up with an alternate solution which is still maybe iffy.  Adding both below.  Nick also added a comment about the second exercise, using group actions.  What do you think?

I’ve read Sageev’s PCMI lecture notes several times by this point; it’s the basis of my and many other people’s work (in particular, my friend Kasia has an impressive number of publications related to this stuff).  And every single time I get stumped on the same exercise near the end, so I thought I’d try to write up a solution, crowd-source it among my blog readers, and figure out something correct.  For reference, these are exercise 4.27 and 4.28 in his notes, but I’ll lay out the problems so you don’t need to look them up if you don’t want to.  Please comment with corrections/ideas!

A thing that mathematicians care about is the structure of a group.  We say that a particular subgroup H<G is separable if for any group element g that’s not in H, we can find a finite index subgroup K that contains H but doesn’t contain g.  Intuitively, we can separate g from H using a finite index subgroup.  Here’s the cartoon:


If H is separable, then given any g not in it, we can find a finite index subgroup that separates H from g.

The first exercise is to show that this definition is implied by another one: that for any group element g that’s not in H, we can find a homomorphism f: G\to F where F is a finite group, so that the image of H under the map doesn’t contain f(g).
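As a hedged toy case (the specific choices G = \mathbb{Z}, H = 4\mathbb{Z}, and g = 7 are mine, purely for illustration): reduction mod 4 is a homomorphism onto the finite group \mathbb{Z}/4\mathbb{Z}, and the image of H misses the image of g.

```python
# Toy example: G = Z, H = 4Z, g = 7 (illustrative choices).
def f(x, n=4):
    """The quotient homomorphism Z -> Z/nZ, written as reduction mod n."""
    return x % n

f_of_H = {f(4 * k) for k in range(-10, 10)}  # f(H) = {0}
g = 7
assert f(g) not in f_of_H                    # f(g) = 3 misses f(H)
```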

So let’s say we start with such a homomorphism, and our goal is to find a finite index subgroup that contains H but not g.  Since we’ve got a homomorphism, let’s use it and try K:=f^{-1}(f(H)).  Since f(g)\not\in f(H), we know this definition of K excludes g, as desired.  Then we need to show that K is finite index in G and we’ll be done.

What about the first isomorphism theorem?  We have a map G\to F, and we know f(H)<F, and it is a proper subgroup since f(g) isn’t in f(H).  This next bit is iffy and I could use help!

  1. (Original) Then we have a map G\to F/f(H) induced by the map f, and the kernel of this map is K.  By the first isomorphism theorem, the index of K in G is the size of the image of this map.  Since F/f(H) is finite, the image of the map is finite.  So K has finite index in G, as desired.  [What’s iffy here?  You can’t take quotients by arbitrary subgroups, only by normal subgroups, and I don’t see why f(H) would be normal in F unless there’s something I don’t know about finite groups.]
  2. (based on Zach Himes’ comment) By the first isomorphism theorem, \ker f has finite index in G [since G/\ker f is isomorphic to the image of f, which sits inside the finite group F].  We know \ker f is contained in K, since 1 is contained in f(H) [since 1 is contained in H, and f(1)=1, where 1 indicates the identity elements of G and F].  It’s a fact that if \ker f \leq K \leq G, then [G: \ker f] = [G:K][K: \ker f].  Since the left hand side is finite, the right hand side is also finite, which means that K has finite index in G, as desired.
  3. (Based on conversation with Nick Cahill) We can think of the coset space F/f(H) as a set which is not necessarily a group, and say that G acts on this set by (g, x) \mapsto f(g)x.  Then K=Stab(f(H)), the stabilizer of the identity coset.  By the orbit-stabilizer theorem, [G:K] = |Orb(f(H))|.  Since F is finite, the size of the orbit must be finite, so K has finite index in G, as desired.
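Here’s a hedged miniature of argument 3 (the choices G = \mathbb{Z}, F = \mathbb{Z}/6\mathbb{Z}, f(x) = x mod 6, and H = 3\mathbb{Z} are mine): the orbit of the coset f(H) has size 3, matching the index of K = f^{-1}(f(H)) = 3\mathbb{Z}.

```python
# Toy setup: G = Z, F = Z/6Z, f(x) = x mod 6, H = 3Z, so f(H) = {0, 3}.
fH = frozenset({0, 3})

def act(g, coset):
    """G = Z acting on cosets of f(H) in F by (g, x) -> f(g) + x."""
    return frozenset((g + c) % 6 for c in coset)

orbit = {act(g, fH) for g in range(6)}            # orbit of the coset f(H)
stab = [g for g in range(6) if act(g, fH) == fH]  # K, seen mod 6

assert len(orbit) == 3   # matches [G : K] = [Z : 3Z] = 3
assert stab == [0, 3]    # K = f^{-1}(f(H)) = 3Z, through representatives mod 6
```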

The second exercise has to do with the profinite topology.  Basic open sets in the profinite topology of a group are finite index subgroups and their cosets.  For instance, in the integers, 2\mathbb{Z}, 2\mathbb{Z}+1 are both open sets in the profinite topology.  Being closed in the profinite topology is equivalent to being a separable subgroup (this is the second exercise).
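A minimal sketch of that example (truncating \mathbb{Z} to a finite window for display is my own hack): the two cosets of 2\mathbb{Z} are disjoint and together cover the integers, so each is the complement of the other, and both are basic open sets.

```python
def coset(n, r, window=range(-20, 20)):
    """The coset nZ + r, truncated to a finite window of Z for display."""
    return {x for x in window if x % n == r % n}

evens, odds = coset(2, 0), coset(2, 1)
assert evens | odds == set(range(-20, 20))  # together they cover Z
assert evens & odds == set()                # and they are disjoint
```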

So we have to do both directions.  First, assume we have a separable subgroup H.  We want to show that the complement of H is open in the profinite topology.  Choose a g in the complement of H.  By separability, there exists a finite index subgroup K that contains H and not g.  Then there’s a coset tK of K which contains g.  Since g isn’t in K, this coset is disjoint from K, and hence from H.  This coset is a basic open set, so g is contained in a basic open set that misses H, and the complement of H is open.

Next, assume that H is closed in the profinite topology, so we want to show that H is separable.  Again, choose some g in the complement of H. Since the complement of H is open, g is contained in a coset of a finite index subgroup, and this coset misses H.  Let’s call the subgroup K, and call its finite index n.  We can form a map f: G\to S_n, to the symmetric group on n letters, which records how each group element permutes the cosets of K.  Then f(g) sends the coset K to the coset gK, while for any h in H, f(h) sends K to hK, which is a different coset since h is not in gK.  So f(g) is not in f(H): we’ve made a homomorphism to a finite group so that g and H have disjoint images, which we said implies separability by the previous exercise.
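A hedged sketch of that construction in the simplest case (G = \mathbb{Z} and the subgroup 3\mathbb{Z} of index n = 3 are my illustrative choices): translation permutes the three cosets, giving a homomorphism to S_3 that kills exactly the multiples of 3.

```python
def to_sym(x, n=3):
    """The permutation of coset labels 0..n-1 that translation by x
    induces on the cosets of nZ in Z; this is a homomorphism Z -> S_n."""
    return tuple((i + x) % n for i in range(n))

identity = to_sym(0)
assert to_sym(6) == identity   # 6 is in 3Z, so it acts trivially
assert to_sym(4) != identity   # 4 is not in 3Z
# homomorphism check on a sample: translating by 5 then by 2 is translating by 7
assert to_sym(7) == tuple(to_sym(2)[j] for j in to_sym(5))
```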

Okay math friends, help me out so I can help out my summernar!  So far in summernar we’ve read these lectures by Sageev and some parts of the Primer on Mapping Class Groups, and I’m curious what people will bring out next.
