A man was “harassed” off Twitter last week. That isn’t a story, because there are plenty of examples of people, men and women, getting “harassed” off Twitter, and their oft-spectacular exits don’t warrant entire articles about them.

A member of propaganda website Polygon had his knowledge of the game industry called into question over a dubious article he wrote. Gamers and game developers started asking pointed questions, and, to be fair, some random people on Twitter started to mock and deride the author. That isn’t a story, either. Deep-space scientists, data scientists, knighted biochemists, and university students (some real, others imaginary) have all been gleefully character-assassinated on social media, or in the regular media, without so much as a raised pulse from the usual outrage peddlers.

No, the story here is deeper; the story is about (whitespace included) 235 lines of Perl code; the story is this: Why didn’t ggautoblocker save Tauriq Moosa?

According to Deepfreeze, Moosa was a ggautoblocker user. In theory, autoblocker was supposed to prevent this sort of thing from happening; however, Moosa was so “harassed” by Twitter users that he left Twitter and deleted his account. Check out the following paragraph from the GitHub site of the OAPI (the Online Abuse Prevention Initiative, not the African Intellectual Property Organization, the other one):

Good Game Auto Blocker compares the follower lists for a given set of Twitter accounts. If anyone is found to be following more than one of these accounts, they are added to a list and blocked.

Most discussions of ggautoblocker are referencing the GamerGate-specific block list. The GamerGate block list filters the majority of Twitter interactions by GamerGate supporters. This list is maintained and shared by the author, Randi Harper, as well as a number of volunteers. OAPI does not maintain that block list.

At this point, I’m confused. What amounts to the design paragraph for autoblocker says it is specifically designed to filter “the majority of Twitter interactions by GamerGate supporters.” If the software was developed in accordance with its design and requirement(s), and Tauriq Moosa was indeed using ggautoblocker, then it should have been impossible for there to be, as The Mary Sue puts it, “a sustained harassment campaign led by…” that could chase Moosa off Twitter.

Two Possibilities

I want to address the simple possibility first. There’s every possibility Deepfreeze is wrong, and Moosa was not a user of ggautoblocker. If that’s the case, then Deepfreeze should be corrected. I call on those who know Moosa to find out for sure whether he was (or is) a ggautoblocker user, and for Deepfreeze to be corrected if necessary.

The other possibility is Moosa was a ggautoblocker user, and ggautoblocker was unable to save him from criticism of his utter lack of knowledge of videogames or their developers. The previous sentence is the “what,” but it’s far more interesting to examine “why” ggautoblocker couldn’t save Tauriq Moosa.

In spite of the noble goal in its design paragraph, ggautoblocker is, in fact, designed to be a guilt-by-association blacklist for “ideologically pure” members of the videogame or technology industry to use. A seed list of seven Twitter users (kept in a file named blacklist.txt until November 2014) is the source for assembling a second list of Twitter users to automatically block. Included among the seven names are Breitbart commentator and journalist Milo Yiannopoulos and attorney Mike Cernovich.

Any Twitter user who follows two or more of the blacklist members is added to a list to be blocked. At best, ggautoblocker is a broad-stroke blacklist with thousands of false positives. At worst, it is perhaps the worst-designed and worst-implemented piece of software ever. To determine which, let’s allow metrics to decide.
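The blocking rule just described is simple enough to sketch. Below is a minimal Python illustration (the actual tool is written in Perl; the seed names and follower sets here are hypothetical placeholders): anyone found in the follower lists of two or more seed accounts lands on the block list, regardless of anything they have actually said.

```python
# Sketch of the guilt-by-association rule: block any account that
# follows two or more of the seed ("blacklist") accounts.
# All account names below are hypothetical placeholders.

def build_block_list(followers_by_seed):
    """followers_by_seed maps each seed account to the set of accounts following it."""
    counts = {}
    for followers in followers_by_seed.values():
        for account in followers:
            counts[account] = counts.get(account, 0) + 1
    # Anyone following two or more seed accounts gets blocked.
    return {account for account, n in counts.items() if n >= 2}

followers_by_seed = {
    "seed_a": {"alice", "bob", "carol"},
    "seed_b": {"bob", "dave"},
    "seed_c": {"carol", "dave", "erin"},
}
print(sorted(build_block_list(followers_by_seed)))  # ['bob', 'carol', 'dave']
```

Note that nothing in this logic inspects the content of anyone’s tweets; membership on the list is determined purely by who a user follows.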

Ggautoblocker’s Tale of the Tape

To start, we’ll go back to data in a report from Women, Action, & the Media about Twitter harassment. As the report states, only 65 of the nearly 10,000 accounts on the ggautoblocker list were characterized as having harassed anyone over the three-week period under study. Therefore, the success rate of ggautoblocker for that three-week period is roughly 0.65%. By any standard, that’s a failing grade.
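For transparency, here is the arithmetic behind that figure, using the article’s rounded numbers (65 reported harassers out of roughly 10,000 blocked accounts):

```python
# Success rate of ggautoblocker per the WAM report numbers cited above.
# "Nearly 10,000" is rounded to 10,000 for illustration.
harassing_accounts = 65
blocked_accounts = 10_000
success_rate = harassing_accounts / blocked_accounts * 100  # percent
failure_rate = 100 - success_rate
print(f"success: {success_rate:.2f}%, failure: {failure_rate:.2f}%")  # success: 0.65%, failure: 99.35%
```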

This leads into our second metric: requirements success rate. Based on the design paragraph, it would seem the lone requirement for ggautoblocker would read something like this:

Ggautoblocker shall prevent harassing tweets from appearing in a user’s mentions.

Seems like a simple enough requirement to write code to. To decompose this requirement a little, some of the functions of the code might be the following: Grab tweets in a user’s mentions, mine them for harassing language, remove the tweets, and block the offending account.
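As a sketch of that decomposition (not the actual ggautoblocker code, which does nothing of the sort): the tweet structure and keyword list below are hypothetical, and real harassment detection is of course far harder than keyword matching, but the shape of a requirement-satisfying implementation would look something like this.

```python
# Sketch of the decomposed requirement: scan a user's mentions for
# harassing language, drop the offending tweets, and collect the
# offending accounts to block. Terms and tweets are placeholders.

HARASSING_TERMS = {"insult_a", "insult_b"}  # hypothetical keyword list

def filter_mentions(mentions):
    """mentions: list of (author, text). Returns (kept_tweets, accounts_to_block)."""
    kept, to_block = [], set()
    for author, text in mentions:
        words = set(text.lower().split())
        if words & HARASSING_TERMS:
            to_block.add(author)         # block the offending account
        else:
            kept.append((author, text))  # leave benign mentions alone
    return kept, to_block

mentions = [("user1", "great article"), ("user2", "you insult_a")]
kept, to_block = filter_mentions(mentions)
print(to_block)  # {'user2'}
```

The point of the sketch is the contrast: a tool satisfying the requirement would have to look at tweet *content*, whereas ggautoblocker only ever looks at follower lists.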

Perl, as it turns out, is pretty good at this sort of thing. I used a Perl script in 2013 to mine hundreds of MB of log files for specific strings of text, to compare results against known values, and to report values and success or failures in comparisons to a display for the user.

However, the implementation of the ggautoblocker code, as demonstrated above, doesn’t satisfy the requirement: 99.35% of the accounts it flagged as harassers did not harass anyone during the three-week period of the WAM study.

Also, ggautoblocker didn’t save Tauriq Moosa from getting “harassed” off Twitter; further, the people who allegedly “harassed” Moosa off Twitter were supposed to be exactly the people blocked by ggautoblocker. Doesn’t that mean autoblocker just publicly and catastrophically failed its only requirement?

I’d say so, which makes ggautoblocker, according to metrics, the worst piece of software ever written for a professional environment. Ggautoblocker passes 0 of its 1 requirement, a requirement pass rate of 0%.

Even at its worst, software written in the real world obtains requirements pass rates in the 70-90+% range; in general, software that cannot achieve this level of requirements compliance is not allowed to proceed to testing, let alone get delivered. Indeed, in 2014 and early 2015, software I tested was allowed to go into formal test with an estimated requirements pass rate in the low 90% range only after a briefing to our customer.

Professional Incompetence

If the story of ggautoblocker ended at how woefully inadequate it is as an anti-harassment tool, there wouldn’t be much of a story, either. Sure, a character assassination of the tool’s developer(s) could be written on the basis that a person lost his presence on social media because of autoblocker’s ineffectiveness. Everyone who read it could have a laugh, or solemnly nod their heads in acknowledgement of the damage the academic elite has already done to STEM.

The story is that IGDA, a professional organization, gave the green light to list autoblocker as an anti-harassment resource in spite of its design as an obvious industry blacklist and the fact that autoblocker doesn’t do what its design paragraph says it’s supposed to do. One question remains: Did IGDA list autoblocker without reviewing it, demonstrating gross incompetence in leadership, or did IGDA review autoblocker and support blacklisting its own members?

The story is that anyone considering membership in IGDA must question the software development competence of not only the Executive Director, but of the entire Board of Directors. Reading fewer than 250 lines of code, noticing the totally unprofessional variable and array names, evaluating the effectiveness of the code, and rejecting ggautoblocker as an ineffective, inflammatory, and potentially defamatory piece of software seems like the kind of thing the leadership of a professional software development organization would have an obligation to do. So why didn’t they?

The story is the strong circumstantial evidence that the videogame academic elites at DiGRA are using ggautoblocker. Again, one must call into question their competence in software development to passively approve the use of an industry blacklist for their Twitter account. Further, one has to ask why an organization advertised as dedicated to the study of videogames would utilize a tool capable of blocking important voices in the industry.

Think about this: If John Carmack leaned libertarian politically, he could very easily be on autoblocker’s list. How could DiGRA possibly fulfill its role academically without John Carmack’s voice as a commenter? The same is true for Sid Meier or Miyamoto or Kojima or Williams and so on. No amount of “research” into videogames would be complete without at least offering the research to industry veterans for comment. I’ll go so far as to say any research about videogames that doesn’t have comments from at least one notable industry member is fraudulent.

Of course, this is DiGRA we’re talking about, an organization that made it abundantly clear what it thinks about creative freedom and the will of the consumer. Maybe it’s no surprise that an organization with a culture of incestuous peer review and disdain for the very industry it is supposed to study is using the worst piece of software ever written.

A man was “harassed” off the internet, and the worst piece of software ever written was incapable of saving him. There is a story in there, but it isn’t character assassinations or concern trolling. The story is exposing a terrible piece of Perl code for what it really is, and the story is exposing the professional and research organizations who aren’t competent enough coders to realize the worst piece of code ever written is just a blacklist that can’t satisfy its requirement.

UPDATE: Apparently, Moosa is back on Twitter, so it seems the whole thing might have just been an attention grab.

Todd Wohling

A long time ago on an Intellivision far, far away my gaming journey started with Lock n' Chase, Advanced Dungeons & Dragons The Cloudy Mountain, and Night Stalker. I earned both a BS-Physics and a BS-Mathematics from the University of Wisconsin-Eau Claire. Today I spend most of my time on PC. I left a career of 14 years in aerospace in Colorado, so I could immigrate to Norway.