
## The AAAS Mass Media Fellowship might change your life

9 Aug
I’m going to edit this thank you note that I sent to the AMS into an article to run in the AMS Notices, but thought I’d put the unedited version here for my readers to know just how much this fellowship has meant to me.  Sorry that I have been neglecting the blog over the summer as I wrote full time, and hopefully I’ll be able to keep putting my thoughts to paper (actually, screen), and maybe bake some yummy things.  Enjoy!
From the bottom of my heart, I want to thank the American Mathematical Society so much for enabling me to write for the Raleigh News & Observer this summer.  As I wrap up this amazing summer with just a week left, I want to reflect a bit on what I’ve learned and accomplished during the past nine weeks.
I’ve accepted a part-time job (hopefully full time next year) with the award-winning nonprofit news organization North Carolina Health News and plan on filling the rest of my time with freelance science journalism, connecting with the Science Communicators of North Carolina group for freelance leads.  None of this would have happened without the support of the AMS.  I cannot imagine what my life would have looked like without this fellowship - it is the jumping-off point for the rest of my career doing what I love.
Of the 20 stories I’ve written so far, eight have ended up on the front page of the N&O.  The summer has convinced me that there’s a real thirst for science stories among the public - people want to know what’s happening in science research that can affect their lives.  One of my favorite stories of the summer was a dive into peanut allergies and upcoming treatments for them, for which I interviewed biopharmaceutical companies, medical researchers, parents, and a six-year-old kid.  My story on using polio to treat brain cancer was also a big hit, and I covered a few other medical stories too.
A really fun reporting experience I had was covering a new kinkajou at the Carolina Tiger Rescue- I’m pretty sure I will never have another opportunity to pet a kinkajou!  So cute, so soft, so dangerous.
I told a high school girl this morning, who is thinking of majoring in math, that once you do math, you can do anything.  I’ve read many abstruse, dense research papers over this summer, and they were a breeze compared to the papers I read for my thesis work.  Math doesn’t exist outside of the communication of it, and I think my math background really prepared me for adapting arguments and creating interesting analogies and ways to explain different ideas to different audiences.  I look forward to continuing to be involved in the math community, maybe as one of those people they trot out on ‘alternative careers’ panels (which I am very excited about).  Please let me know if there’s ever anything I can do for the AMS.
One surprising aspect of this summer has been the fellowship component of the fellowship.  Though we only met for three days during orientation, the 2018 fellows have kept in close contact online through the summer, supporting and tweeting each other’s clips, reading cover letters, offering a space to vent about science misunderstandings and edited-out science details, and exploring our own trepidation and excitement about this sometimes overwhelming plunge into a new field.  I am so privileged to be part of this network of comrades who I am certain will support me for the rest of my career.
It’s been so fun to spend a few hours learning all about fields I know nothing about - big dives into paleontology, genetics, and climatology, just to name a few.  I feel so lucky that I’ve had my mathematical experiences to ground me and give me confidence in my ability to learn anything.
On a more personal note, I love that this fellowship supports women, and I’m so grateful that I could find a site near my home so I could go home on weekends and see my baby and toddler.  Incidentally, my husband devoured the women’s history month issue of the AMS Notices.
I talked with Evelyn Lamb, who was also an AMS-sponsored Mass Media fellow, some time ago about the guilt of not being an exemplar of a woman mathematician by exiting academia, and she pointed out that she might be doing more good for the world of mathematics by spreading knowledge and awareness of it through her stories than she was as a postdoc.  I’m so grateful to the AMS for giving me this choice and this opportunity to do the same - math will always be part of me and I will always spread my love of it, and thanks to the AMS, I can now do that in a way that better matches my strengths and vision of what I want my life to look like.
With so much gratitude,
Yen Duong

## And now for something completely different: cognitive neuroscience!

10 Jan

I sometimes trawl arxiv.org for short math papers to read, and occasionally I even blog about them (see: curve complex I and II), though generally my math blog posts arise from interesting talks I’ve seen (see: most of the rest of my math posts).  Recently a friend sent me a job listing that would require a Ph.D. in biology or similar, but the real job requirement is an ability to read biology papers.  The only related category on arxiv is “quantitative biology,” so I thought I’d pull up a short paper, read it, and blog about it to see how I do.  Any cognitive neuroscientists who might read this, let me know if my reading is correct!

This post is based on the paper “Deep driven fMRI decoding of visual categories” by Michele Svanera, Sergio Benini, Gal Raz, Talma Hendler, Rainer Goebel, and Giancarlo Valente.  First, here’s my schematic of the paper:

We’ll read this schematic from top to bottom, left to right.

1. On top is the experiment: they had a lot of people watch 5-10 minute movies.  The left white arrow indicates that the people were in fMRI machines (I know an fMRI machine does not look like an EEG but that’s the picture you get) and so they have a bunch of data sitting around from that.  The right white arrow indicates that they used a computer algorithm (“math!”) to extract information directly from the movies [this is the fc7 data].  So far they haven’t contributed anything new to the literature; they have just used existing techniques to come up with raw data.
2. The orange diagonal arrows are when things get interesting.  The fMRI data and the fc7 data come in giant matrices, and they use another math algorithm to come up with a set of “decoding” matrices.  Not pictured in the schematic: they test these matrices using some of the data.
3. The goal is indicated by the green arrows: to use the brain data and the decoding matrices they came up with to reconstruct what people are seeing and classify those things (i.e., are subjects seeing people’s faces on the screen, or entire human figures?)

Now for a few details on each of the steps.

0. The motivation behind the paper seems to be to link the brain imaging community (those who work with the fMRI, EEG, etc. data) with the deep neural network community (computer people) to answer questions that involve both.  The main question they have is: how do people associate low-level information like colors, shapes, etc. with semantic concepts like car, person, etc.?  Here’s the picture:

Eyes see a vague shape + different colors [low-level information]; brain tells us whether it’s a person or a tree with the sun behind it [semantic concepts]

There’s a lot of work in both communities on answering this question, and this paper uses work from both sides to form a decoder model: with an input of fMRI data, the model spits out predictions about what the subjects are seeing.  Specifically, the model is supposed to tell if subjects were looking at human faces or full human figures.  This is hard!  Those are pretty similar categories.

1. The data: they grabbed a bunch of existing data from other experiments, where scientists took 5-10 minute clips from five different movies (side note: I would never want to be in these studies because one of the clips was from The Ring 2) and showed them to subjects (ranging from 27 to 74 participants per movie) and recorded all the fMRI data, which creates a huge three-dimensional dataset every 3 seconds.  Then they threw the movie frames into a computer algorithm (called the faster R-CNN method) which detects objects in the video frames (with varying confidence levels) and spits out a 4096-dimensional vector for each frame.  They averaged these vectors over 15 frames so that the two datasets could match up (the movies were shown at 5 frames per second, so 15 frames span one 3-second fMRI scan).  These vectors form the fc7 data.
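The frame-averaging step above is easy to sketch in code.  Here's a toy version with made-up dimensions (the 4096-dimensional vectors and the 15-frames-per-scan ratio come from the paper; the number of frames and the random data are stand-ins):

```python
import numpy as np

# Hypothetical setup: one 4096-dim fc7 vector per movie frame, movies at
# 5 frames/sec, one fMRI scan every 3 seconds -> average blocks of 15 frames.
frames_per_scan = 15
n_frames = 1500  # e.g. a 5-minute clip at 5 fps (stand-in number)
fc7_per_frame = np.random.rand(n_frames, 4096)

# Trim to a whole number of scans, then reshape to (scans, 15, 4096)
# and average over the 15-frame axis to align fc7 with the fMRI timebase.
n_scans = n_frames // frames_per_scan
fc7_aligned = fc7_per_frame[: n_scans * frames_per_scan]
fc7_aligned = fc7_aligned.reshape(n_scans, frames_per_scan, 4096).mean(axis=1)
print(fc7_aligned.shape)  # (100, 4096)
```

After this step, every row of the fc7 matrix lines up in time with one fMRI volume, which is what makes the matrix algebra in step 2 possible.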
2. The math: they use an algorithm called Canonical Correlation Analysis (CCA) to spit out two matrices $A$ and $B$ whose projections $U$ and $V$ are highly correlated (hence the middle C).  Looks like linear algebra with some linear projection!  The schematic is $fMRI \cdot A = U$ and $fc7 \cdot B = V$.  To do this, they took a subset (about 75%) of the fMRI data and the corresponding fc7 data and plugged it into the math.  The goal of this step (the training step) is actually to get the helper matrices $A$ and $B$.  To make sure these matrices are a-OK, they used the remaining fMRI data to reconstruct the fc7 data within a reasonable margin of error: $fMRI \cdot A = U \approx V$, then $V \cdot B^{-1} = fc7$.  Remember $U$ and $V$ are highly (in fact maximally) correlated, so that middle step actually makes sense in this step (the testing step).
3. The result: For one movie, they did the training math step using different subsets of data (they did it 300 times) to make sure those helper matrices $A$ and $B$ are the best possible ones.  Then, to show that this whole paper does what they want it to do, they do the testing step using the other movies.  [The whole point of a decoding method is to predict what people are seeing.]  They then try to classify whether subjects see faces or bodies using their method (the fancy fc7 method) and another method (some linear thing) and show that their method is way better at this discrimination task than the other method.  Fun caveat they had to think about: it takes people a little while to react to stimuli, so they had to toss in time-shifts for the fMRI data, and also throw in a regularization parameter to normalize the data.
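The final discrimination step boils down to a binary classifier on the decoded features.  The paper doesn't hand me its exact classifier, so here's a generic stand-in (logistic regression on hypothetical CCA components, with made-up labels) just to show the shape of the task:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical stand-in: 5 CCA components per fMRI time point, labeled by
# whether a face (0) or a full human figure (1) was on screen at that moment.
U_train = rng.standard_normal((120, 5))
y_train = np.arange(120) % 2  # fake alternating face/body labels
U_test = rng.standard_normal((40, 5))

# Any linear classifier works here; logistic regression is a common choice.
clf = LogisticRegression().fit(U_train, y_train)
predictions = clf.predict(U_test)
print(predictions.shape)  # (40,)
```

With real data you would also shift the fMRI labels in time to account for the hemodynamic lag the authors mention, before fitting anything.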

Conclusion: their method works on this preliminary result (faces versus bodies)!  They want to expand to other movies and other semantic concepts in the future.

General keywords: machine learning, fMRI, linear algebra.  Also CCA, faster R-CNN, fc7, but those are keywords for specialists.

My conclusion: this was cool and fun!  I like reading new things and learning.  I hope you do too!