Today is Ada Lovelace Day, when writers across the blogosphere celebrate women in STEM fields. For the uninitiated, Ada Lovelace was the world's first computer programmer: she wrote the first algorithm intended to be carried out by Charles Babbage's Analytical Engine.
On Ada Lovelace Day, bloggers generally write about a woman in the field of science, technology, engineering, or mathematics. It's relevant because even today, in 2012, we have far, far too few female STEM professionals. I don't know how much of that gap comes from people assuming women can't succeed in these fields, but if that's part of it, it makes sense to celebrate women in science on at least one day each year. And today is that day.
However, I wanted to do something a little different this year. Instead of telling the story of someone significant in one of these fields today, I'd like to share the results of a study that was published only a few short months ago: Science faculty’s subtle gender biases favor male students (published in the Proceedings of the National Academy of Sciences by Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman).
This study is perhaps the most depressing study on sexism in science academia I have ever read. It is so distressing that when I first encountered it, I assumed it had to be wrong, and that a quick look through the methodology would reveal the flaw. But after careful consideration, I have to admit that this study is completely legit. I couldn't find a single flaw in their approach.
As Scientific American reports, this study demonstrates that significant gender bias exists in science academia. The authors used a double-blind randomized controlled experiment -- and when I say controlled, I mean controlled. They even made sure that the names used (John & Jennifer) were pretested as equivalent in likeability and recognizability. They covered every conceivable base. And the results are horrifying.
They created a single fake resume/application that was good enough to warrant a hire, but not so good as to necessitate one (as established in a prestudy). They then sent this application to 127 science faculty as though it were real. (After the study was done, they went back and asked these people whether they had suspected it was fake; none had.) The 127 faculty were chosen so that their demographics matched both the averages for the selected departments and faculty at United States research-intensive institutions generally, meeting the criteria for generalizability even from nonrandom samples. Not only was the sample representative of the underlying population, but they specifically chose 127 as the optimal sample size for detecting effects without biasing results toward obtaining significance.
These 127 faculty rated the applicant on competence, hireability, and how much mentoring they would offer. The male applicant was rated significantly higher than the female applicant on every measure. The faculty were also asked to estimate what salary would be appropriate for the applicant; the male applicant was offered a far higher salary.
The sexism in today's science academia is real. This doesn't mean that science faculty are overtly, or even consciously, sexist, but there is a distinct, significant advantage for male newcomers to science academia.
So today, on Ada Lovelace Day, when you read stories of women's successes in science across the web, realize just how hard it was for those standouts to achieve what they did. Even in today's world, being female in science is tough.
EDIT: After this article was posted, commenters pointed out problems with the graphs used. In particular, satt pointed out that the dynamite plots here are possibly misleading and, at best, unnecessarily obscure the actual data points.
Unfortunately, the charts were taken directly from the original paper, so I do not have access to the underlying data needed to create better plots for this review. I have e-mailed the lead author, Corinne A. Moss-Racusin, to ask whether they have any violin plots with better axes that I could use here to alleviate these concerns.
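In the meantime, here is a rough sketch of the kind of plot satt suggests, done in matplotlib. The salary values below are randomly generated stand-ins (I don't have the paper's raw data, and the group sizes are arbitrary), so only the style of presentation matters here, not the numbers: a violin plot with the individual points overlaid doesn't hide the spread the way a dynamite plot does.

```python
# Sketch of a violin plot with raw points overlaid, as an alternative
# to a dynamite plot. Salary values are randomly generated stand-ins,
# NOT the study's data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
salaries = {
    "Jennifer": rng.normal(26500, 3500, 60),  # hypothetical mean/spread
    "John": rng.normal(30000, 3500, 60),      # hypothetical mean/spread
}

fig, ax = plt.subplots(figsize=(4, 4))
ax.violinplot(list(salaries.values()), showmeans=True)

# Jitter and overlay the individual (fake) points so the spread is visible.
for i, vals in enumerate(salaries.values(), start=1):
    jitter = rng.uniform(-0.06, 0.06, vals.size)
    ax.scatter(np.full(vals.size, i) + jitter, vals, s=10, alpha=0.4, color="k")

ax.set_xticks([1, 2])
ax.set_xticklabels(list(salaries.keys()))
ax.set_ylabel("Recommended starting salary ($)")
ax.set_ylim(15000, 50000)  # the range the paper's caption describes
plt.tight_layout()
plt.show()
```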
Please fix your chart. The origin of the y axis is at 25000 rather than at zero, which makes a 15% difference appear as a 200% difference visually. When comparing two values, proportion is as vital as magnitude.
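For what it's worth, the distortion is easy to see with a quick matplotlib sketch. The two salary values below are stand-ins in the rough range the figure implies, not the paper's reported means; the point is only how the truncated axis changes the visual impression.

```python
# Side-by-side illustration of how truncating the y-axis exaggerates a gap.
# The two salary values are hypothetical stand-ins, not the paper's numbers.
import matplotlib.pyplot as plt

female, male = 26500, 30000  # stand-in means

fig, axes = plt.subplots(1, 2, figsize=(7, 3))
for ax, bottom, title in [(axes[0], 0, "y-axis from $0"),
                          (axes[1], 25000, "y-axis from $25,000")]:
    ax.bar(["Female", "Male"], [female, male])
    ax.set_ylim(bottom, 32000)
    ax.set_title(title)
    ax.set_ylabel("Salary ($)")

plt.tight_layout()
plt.show()

# With these numbers: the real gap is (30000 - 26500) / 26500, about 13%,
# but the visible bar heights on the truncated axis are 1500 vs 5000,
# which reads as a difference of more than 200%.
```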
Normally I'm not so stuck on having 0 be the bottom of a graph, but this is a case where there's no reason for anything else. You're comparing only two things, so you aren't zooming in to help the reader pick out fine gradations of detail.
Why should the origin of the y-axis be 0 rather than 15000, or wherever the average minimum wage falls, or at the average 5th-percentile lab manager wage? When comparing two values, deciding which proportion to report can determine which values are actually being compared.
At the very least, the y-axis should match the caption, which says "The scale ranges from $15000 to $50000".
Several times I've seen recommendations to start graphs' y axes at zero by default, but it's a tip that's starting to grate on me for several reasons.
1: Usually, when I look at a graph, the y values' variation is at least as relevant as the values themselves. I want that variation to be clear & obvious; if someone's going to represent it on a graph, I want it spread across the available space. Cramming it into a small range near the top is a waste.
2: Visually compressing variation can be just as misleading as visually expanding it. Which is more misleading is case-dependent.
3: Sometimes I want to read numbers off a graph as accurately as I can. If the plotter stretches the y axis because they think I'm too dumb to read labels, that makes my task harder.
4: If the y axis is on a log scale, you can't make it go to zero without some distracting gimmick like making the axis discontinuous.
5: People can't decide whether this rule applies to bar charts specifically or graphs in general.
For me, points 1 & 2 apply here. (Although, as it happens, I don't like that figure 2. It's too close to a dynamite plot for comfort, and it's a space-hungry way to show me two averages & two standard errors. You could communicate the same information with a small table, or even a line of prose. And Kindly's right about the caption. But starting the y axis at $25k is the least of that chart's problems.)
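To illustrate the "small table" point: with placeholder numbers (not the paper's reported values), everything the figure conveys reduces to something like this.

```python
# The dynamite plot boils down to two means and two standard errors,
# which fit in a two-row table. These numbers are placeholders, not
# the paper's reported values.
rows = [("Female applicant", 26500, 1200),
        ("Male applicant",   30000, 1200)]

print(f"{'Condition':<18}{'Mean salary':>14}{'SE':>10}")
for label, mean, se in rows:
    print(f"{label:<18}{'$' + format(mean, ','):>14}{'$' + format(se, ','):>10}")
```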
These are excellent points. Unfortunately, I'm a bit hampered by the fact that I stole the chart in question from the original study (pdf), and they used only "dynamite plots" in their paper. After reading your links on the topic, I can definitely see why this is bad. I'm appending a short note to this effect as an edit to my original article.
Thank you for bringing this stuff to my attention.