
A recent study has been all over the media, and it claims to show both that women in open source are more competent than men and that women face gender-based bias in the open source community. Considering that the study is still awaiting peer review, it might be a bit premature to jump to conclusions based on it, but for some reason numerous media outlets, both major and minor, have written about it. Perhaps it is worth taking a closer look at this study as an antidote to the uncritical, and frankly irresponsible, coverage it has received so far. By the time we get to the end of this article you'll see what I mean by irresponsible coverage, but first let's get a basic overview of the study.

The study is based on data from GitHub, where the researchers looked at millions of pull requests to determine whether there was a difference in acceptance between pull requests made by men and those made by women. Some GitHub accounts can be identified as male or female based on the name or photo, while others have no obvious signs of gender. For some of the accounts with an unknown gender, it was possible to link the account to an email address and look at an associated Google+ account to determine the user's gender. While the study admits there may be privacy concerns about this method of linking a GitHub account to a gender, the authors believe it is acceptable because the data will not be publicly released.

The study determined that pull requests made by women were more likely to be accepted than those made by men, with an acceptance rate of 78.6% for women and 74.6% for men. However, the rate of acceptance is far lower among women whose gender is known than among women with gender-neutral accounts. This drop is attributed to gender bias against women. What the authors of the paper gloss over is that men also see a drop in acceptance when their gender is known, compared to gender-neutral accounts, although it does appear to be a smaller drop than the one faced by women. The fact that men also have a drop in acceptance was considered so unimportant by reporters that most sites didn't even bother to mention it. The BBC, to its credit, was one of the few outlets that actually did.

In addition to claims of gender bias, the paper also concludes that women in open source are, on average, more competent than men because they have a higher percentage of accepted pull requests. The paper does not state that women are inherently superior to men at coding, but offers some other explanations as to why women would be more competent than men. For example, it cites statistics that women in open source are on average more educated than men, being more likely to hold a Master’s or PhD. At no point does the paper ever suggest that just being born a woman would make you naturally better suited to programming than being born a man.

Before raising my own concerns about the study, let's head to the threats section of the paper and see what the authors consider to be potential problems. The major issue raised is the possibility that accounts in this study may have had their gender misidentified because the users deliberately misrepresented their gender. It's really easy to lie about who you are online, and I know plenty of people who have done so. As far as the investigation into bias goes, this is not too big of a deal in my opinion. Someone might discriminate against a man with a girly username, for example, and even if it's due to a mistaken belief, it would still be bias based on assumed gender. However, when we get claims that women are more competent than men, it becomes a bit more important to know the actual gender of the people involved. Now, if the rate at which people misrepresented their gender were known, you could take it into account to get an idea of how much it's screwing up your data. Unfortunately, the authors have no idea how often people misrepresent their gender.
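Just to make that concrete, here's a minimal sketch of the kind of adjustment you could run if you actually knew the misidentification rate. The 78.6% and 74.6% figures are the overall rates from the paper; the error rates below are numbers I made up purely for illustration, since the paper gives no such figure.

```python
# Illustrative sketch only: the misclassification rates e_f and e_m are
# made-up assumptions, not figures from the paper.
import numpy as np

# Observed overall acceptance rates reported by the study (women, men).
observed = np.array([0.786, 0.746])

# Hypothetical error rates: e_f = share of "women" accounts actually belonging
# to men, e_m = share of "men" accounts actually belonging to women.
e_f, e_m = 0.10, 0.05

# Observed rates are a mixture of the true rates:
#   obs_women = (1 - e_f) * true_women + e_f * true_men
#   obs_men   = e_m * true_women + (1 - e_m) * true_men
mixing = np.array([[1 - e_f, e_f],
                   [e_m,     1 - e_m]])
true_women, true_men = np.linalg.solve(mixing, observed)

print(f"Observed gap: {observed[0] - observed[1]:.3f}")
print(f"Adjusted gap: {true_women - true_men:.3f}")
```

With these invented error rates the gap actually widens a little, but the point is simply that without knowing the rate at all, there is no way to run this kind of correction.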

For this next bit of criticism I will focus on a section of the paper that divides contributors into insiders and outsiders, and compares the acceptance rates of those groups. If a person makes a pull request and they are listed as an owner or member of that project, they are an insider. Anyone else making a pull request is considered an outsider. The data is summarized in the chart below, with insiders on the left and outsiders on the right.

One thing of note is that the chart starts at 60% rather than 0%, which may be a trick to exaggerate small differences. This concern has already been raised in the feedback section discussing the paper. Additionally, the exact figures for most of the bars in the graph are not given anywhere in the paper, although a few of them are. These two issues together raise the suspicion that the data is being manipulated to prove a point.
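To see why the 60% baseline matters, here's a quick sketch that plots the same two numbers twice, once with the axis starting at zero and once starting at 60%. I'm using the paper's overall acceptance rates as stand-in bar heights, since the chart's exact figures aren't given.

```python
# Plot the same pair of rates with a full axis and a truncated axis to show
# how a roughly 4-point gap can be made to look dramatic. Bar heights are the
# overall rates quoted in the paper, standing in for the chart's exact values.
import matplotlib.pyplot as plt

labels = ["Women", "Men"]
rates = [78.6, 74.6]

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, floor, title in [(axes[0], 0, "Axis starts at 0%"),
                         (axes[1], 60, "Axis starts at 60%")]:
    ax.bar(labels, rates)
    ax.set_ylim(floor, 100)
    ax.set_ylabel("Acceptance rate (%)")
    ax.set_title(title)

plt.tight_layout()
plt.show()
```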

Among insiders whose gender is not known, the acceptance rates for men and women are just about even. However, when the gender is known, the rate remains about the same for women, while men see a bit of a drop in acceptance. The study makes no mention of this drop for men, and simply states: “For insiders, we observe little evidence of bias when we compare women with gender-neutral profiles and women with gendered profiles, since both have about equivalent acceptance rates.”

Among outsiders the situation is a bit different. When genders are not known, women have a higher acceptance rate. When genders are known, both men and women see a drop in acceptance, and the drop is larger among women. The researchers conclude that this is evidence of bias against women. They do mention the fact that men also see a decline in acceptance when their gender is known, but state that it is not as strong. No attempt is made to explain why men might see a decline; there is just a single sentence mentioning it, and that's all they have to say about it.

This is a good time to link to Scott Alexander’s blog post at Slate Star Codex, which is also critical of this study. I encourage you to read it, as I don’t want to reiterate every single point he makes, but I will quote one point I think is important:

7. The study has no hypothesis for why both sexes have fewer requests approved when their gender is known, without which it seems kind of hard to speculate about the significance of the phenomenon for one gender in particular. For example, suppose that the reason revealing gender decreases acceptance rates is because corporate contributors tend to use their (gendered) real names and non-corporate contributors tend to use handles like 133T_HAXX0R. And suppose that the best people of all genders go to work at corporations, but a bigger percent of men go there than women. Then being non-gendered would be a higher sign of quality in a man than in a woman. This is obviously a silly just-so story, but my point is that without knowing why all genders show a decline after unblinding, it’s premature to speculate about why their declines are of different magnitudes – and it doesn’t take much to get so small a difference.

Even though he just made up a half-assed explanation on the spot to make a point, it's still arguably a better explanation than the one presented in the paper, because it attempts to take all the data into account. When drawing their conclusions, the authors of the paper have simply ignored the fact that men also see a decrease in acceptance when their gender is known.
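To put that point in more concrete terms, here's a toy simulation I put together; nothing in it comes from the paper or from Scott Alexander's post, and every number is invented. It just shows that if acceptance depends on some hidden factor, say how experienced a contributor is, and gendered profiles are more common among newcomers, to a slightly different degree for men and women, then both groups show a drop when their gender is visible, and the drops have different sizes, even though nobody in the simulation is biased at all.

```python
# Toy simulation with invented numbers: acceptance depends only on a hidden
# "experience" factor, never on gender, yet gendered profiles still show a
# drop in acceptance, and the drop is larger for women than for men.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated pull requests per gender

# Assumed acceptance rates by experience level (same for both genders).
p_accept = {"experienced": 0.85, "newcomer": 0.65}

# Assumed chance of using a gendered (real-name) profile: newcomers use them
# more often, slightly more so among women in this made-up setup.
p_gendered = {("men", "experienced"): 0.30, ("men", "newcomer"): 0.55,
              ("women", "experienced"): 0.30, ("women", "newcomer"): 0.70}

for gender in ["men", "women"]:
    experienced = rng.random(n) < 0.5
    gendered = rng.random(n) < np.where(experienced,
                                        p_gendered[(gender, "experienced")],
                                        p_gendered[(gender, "newcomer")])
    accepted = rng.random(n) < np.where(experienced,
                                        p_accept["experienced"],
                                        p_accept["newcomer"])
    for profile, mask in [("neutral", ~gendered), ("gendered", gendered)]:
        print(f"{gender:>5} / {profile:>8}: {accepted[mask].mean():.3f}")
```

None of the acceptance probabilities in that sketch depend on gender, yet the output shows roughly a five-point drop for gendered men and an eight-point drop for gendered women. Different-sized drops are not, on their own, evidence of bias against one group.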

Now as you are reading about this study, or maybe actually looking at the study itself, you will see much talk about the gender of the people making pull requests, and none about the gender of the people accepting or rejecting them. It turns out someone actually wanted to know about the gender of the users accepting requests to see if it was just men being biased against women, or if women were also biased. The answer from one of the researchers is interesting:

Our analysis (not in this paper — we’ve cut a lot out to keep it crisp) shows that women are harder on other women than they are on men. Men are harder on other men than they are on women.

The fact that a lot has been cut to “keep it crisp” is perhaps a big part of the problem with this paper. Maybe some of the issues raised by me and others would be explained by what has been cut, but we can't know for sure. The researcher later states that the part that was cut was a series of four related investigations, three of which were unsuitable for release at the time, and the last couldn't “in good conscience” be released without the others. However, the response does suggest those four investigations will eventually be released. Who knows, maybe there will be some interesting information in there.

Given the serious claims of sexism being thrown around, I think it's important to reiterate that, according to the researchers who conducted the study, “men are harder on other men than they are on women.” Keep that statement in mind as you read this excerpt from Vice's coverage of the study:

As this GitHub data shows, whether or not bros think that they view women as equals, women’s work is not being judged impartially. On the web, a vile male hive mind is running an assault mission against women in tech.

I mean, sure, you can blame the researchers for this. Maybe they should have waited until all their investigations were ready before submitting a paper for peer review. But I think more of the blame falls on the media for making a big deal out of a non-peer-reviewed, and apparently incomplete, study. And beyond that, reporters have drawn completely unwarranted conclusions from it. This is absolutely disgraceful “journalism.”

Special thanks to fellow TechRaptor writer Kindra Pring, who shared some comments about the study as I was writing this article.

Max Michael

Senior Writer

I’m a technology reporter located near the Innovation District of Kitchener-Waterloo, Ontario.