I’m going to get a little personal with you guys for a second, so bear with me. The last couple of months I’ve been going through a hard time, for a variety of personal reasons. I won’t go into the gritty details, but I am a fairly closed-off person. I don’t like talking to people I know about my issues. The nice thing about the Internet is that it offers a lot of places where you can go and vent, or get advice, or just feel like people are listening to you, anonymously or pseudonymously. One of these places is Reddit. While most people have a poor impression of Reddit, there are plenty of communities where people can seek support for any number of reasons, from general depression to breakups to job problems. I figured Reddit might be a decent place to go, and surely I’d find somewhere I could let off steam. I ran into a snag, though, because I also have a posting history on KotakuinAction. Nothing I’ve ever posted there is bad; they’re just my opinions. But I found out there are a number of subreddits, including ones designed specifically to support people, that automatically ban users who have posted in KotakuinAction. That includes /r/OffMyChest, one of the largest subreddits devoted to venting and emotional support.

Fortunately, I was able to find subreddits that don’t engage in this, but it got me thinking about just how dangerous a prospect this is. In the ever-growing quest to create “safe spaces,” it seems communities are actively shutting out the very people who may truly need the support. The same thing happened with GaymerX, the convention designed specifically to show support for LGBT folks in the gaming community. I loved the idea when it first popped up, and even thought about volunteering. Then they began using The Block Bot on Twitter, a tool through which a select few individuals can add anyone they find offensive to a shared blocklist. My personal account was added after I asked people not to misgender someone and wrote an article daring to call out people who claim the title of progressive for the wrong reasons. So my personal Twitter account was officially cut off, along with those of hundreds of queer and transgender gamers.

On Reddit, mods at KotakuinAction assert that, along with /r/OffMyChest, those who post in KiA may also be banned from /r/Rape, a subreddit specifically for survivors of sexual abuse and assault. The problem is widespread enough that KiA’s moderators have placed a prominent warning above every comment box, telling new users that posting there could get them banned elsewhere. It speaks to the integrity of KiA’s moderation to be open about this and warn people ahead of time, but it shouldn’t have to be an issue at all. When you get right down to it, this desperation to be “safe” is endangering people. My situation was not so dire, but what if it were a person who frequents one of these subreddits, particularly a survivor of sexual assault? What if they already had a support network there and suddenly lost it just because they posted something, likely inoffensive, on KiA?

[Image: the warning now posted on KiA.]

This isn’t limited to Reddit. Twitter has already been going through this problem, with the widespread use of indiscriminate blocking tools and allegations of blacklisting and shadowbanning. Such lists wouldn’t be a problem if they targeted only obvious trolls and spammers, but they are used against regular users, and more to push out conflicting opinions than to protect people.

And as described above, this has the potential to cut people off from communities they may need. Can you honestly say you’re protecting people when you’re alienating some just for thinking differently? Everyone thinks a little differently, and it shows: block bots are popping up that hide the very people their creators once supported. Eventually, there will be a whole segment of users who see no one. Or worse, a whole segment who go unseen. Only for speaking their mind; not necessarily for saying anything offensive, but simply for going against the grain.

This is the Reddit Problem—the hugbox problem, the censorship problem, the “safe space” problem. Because you can never block out everything that might be unpleasant, and even if you could, you would do it at the expense of others. Especially with places like /r/OffMyChest: if I need to go somewhere to talk about a problem, or vent, I’m not going to feel safe doing it in a place that paints people with such a broad brush.

Once upon a time, there was a true purpose to the “safe space.” The term started in the LGBT community and was used to mark places where youth could go to talk about the problems they were having, aimed specifically at queer youth but open to anyone who might be struggling. It was meant to be a judgement-free area.

Internet “safe spaces” are the exact opposite. They exist only to judge, and those who don’t fit the criteria are cast out, as if their problems no longer matter. What’s worse, many of these places collaborate. I found a place to post about my problems, but it happily links to /r/OffMyChest. What if it implemented the same policy? To be fair, I reached out to the mods of /r/OffMyChest and /r/Rape (I decided to forgo others that aren’t specifically related to emotional and survivor support). I asked why they chose to implement the ban and what led them to that decision.

Moderators from /r/OffMyChest didn’t respond; however, one moderator from /r/Rape agreed to talk to me a bit. I suppose I should emphasize: I don’t think these are bad people. I don’t personally believe they’re totalitarians, or that they’re intentionally silencing people. This is an overreaction by people looking to protect a community they care about, one they recognize as being particularly vulnerable. It’s not out of malice, and to be honest, few actions like these are. This moderator certainly solidified that; they were very genuine and sincere (I’m keeping their name hidden for privacy). They explained the use of SaferBot, an auto-banning tool for subreddits that prevents anyone who posts in a listed subreddit from posting in the subreddit that runs it. The impression given was that SaferBot is not meant to keep average users out, but that trolls frequent those listed subreddits often enough that the bot seems necessary. They claimed brigades have come from KiA, but stated “we know this behavior doesn’t represent the entire KiA community.” They also didn’t believe the ban to be a barrier, saying they usually unban people who come to them and ask politely.
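To make the mechanism concrete, here is a rough sketch of how an auto-banning tool like SaferBot could work. I haven’t seen SaferBot’s actual source, so the credentials, subreddit names, and details below are purely illustrative assumptions, written against the public PRAW library rather than SaferBot’s real code.

```python
# Hypothetical sketch of a SaferBot-style auto-ban bot, using the PRAW library.
# All credentials and subreddit names are placeholders, not SaferBot's own.
import praw

reddit = praw.Reddit(
    client_id="CLIENT_ID",            # placeholder credentials
    client_secret="CLIENT_SECRET",
    username="autoban_bot",
    password="PASSWORD",
    user_agent="saferbot-style sketch",
)

LISTED = ["SubredditA", "SubredditB"]   # subreddits whose posters get banned
home = reddit.subreddit("SupportSub")   # the subreddit enforcing the bans
already_banned = set()

# Watch new comments in the listed subreddits; anyone who posts there is
# preemptively banned from the home subreddit, regardless of what they said.
for comment in reddit.subreddit("+".join(LISTED)).stream.comments(skip_existing=True):
    author = comment.author
    if author is None or author.name in already_banned:
        continue
    home.banned.add(
        author.name,
        ban_reason="Posted in a listed subreddit",
        note="Automated preemptive ban",
    )
    already_banned.add(author.name)
```

The point of the sketch is how blunt the rule is: a bot like this never looks at what was said, only where it was said, which is exactly the broad brush the rest of this article is about.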

The comment about brigading seemed especially strange, since realistically, KiA is a very strict subreddit. Any links to other subreddits, regardless of context or reason, are removed automatically, and the moderators have zero-tolerance policies regarding slurs and hateful speech; in some regards, this has earned KiA a lot of vitriol from trolls. (This mod also pointed out that there is another subreddit for rape survivors that does not use the bot, and that /r/Rape uses it because that particular subreddit has been targeted more often in the past.) Regardless, it’s hard to find proof that these brigades actually came from KiA, since there were no threads about these subreddits before the bans went out, and since any links to other subreddits are removed automatically by bots.

Technicalities aside, the issues still persist. They say no one has claimed to feel unable to approach them out of fear of judgement, but that’s exactly the problem: if someone fears being judged, they likely won’t tell the person they think is judging them. It could be very few people, or it could be hundreds.

And the thing about trolls: measures like this won’t stop them. If their intent is to troll, they will continue. To them, the fact that innocent people are shut out pushes them further, because it means they can hurt people not only directly but indirectly. If the intent is harm, no amount of block bots will stop them; it will feed them instead. So innocent people are left out and feel they can’t approach, while trolls simply make new accounts, or use side accounts, and carry on. The moderator also mentioned some requests to be unbanned that sounded bitter and angry, but that often comes with the territory. If someone was already upset, then made more upset upon finding they couldn’t vent in a place designed for venting, they’re likely to be angry.

While whitelisting certain individuals is a kind gesture, it doesn’t change the fact that this is an unusual step, and one which, in the end, falls on the subjective judgement of someone who may have an agenda. What counts as “posting awfully”? Is that just obvious trolls, or does it include writing good-faith perspectives on the world? Is it just people who clearly have nothing in their posting history but vitriol, or anyone whose opinions differ from the moderators’? Does a woman who doesn’t believe STEM has a “misogyny” problem suddenly not deserve help, because simply believing the world isn’t as horrible as others say somehow makes her “dangerous”? Does a person suffering from depression not deserve support? Where is the line, or the criteria? What is keeping the moderators accountable? I was told there was no judgement, yet in my first message to /r/Rape, I was initially dismissed by a moderator who assumed I was just a mouthpiece for KiA because I post there. My experiences, and my true intentions, didn’t matter. Fortunately, they decided otherwise, but what if I had been someone asking for an unban? Would I have gotten a similar response?

It would be better if there were some direction, so people weren’t suddenly caught off guard: if, instead of being walled off, banned users were immediately pointed to somewhere safe they could go. “But won’t the trolls go there?” Maybe. There is no absolute solution to trolls and hateful people, technical or otherwise. At that point, a moderated forum is almost better, because then you are keeping out the trolls without blocking out innocent people. These block bots work by assuming intent before a person has done anything, which is neither fair nor supportive. What would be fairer is restricting posting to approved users and requiring people to apply (possible by creating a private subreddit, as /r/rapecounseling does). But, again, where are the criteria, and how do we know the moderators are trustworthy, when despite knowing the entire KiA community is not all bad, they still happily apply a broad brush?

I contacted the moderators of KotakuinAction as well for their input. They’re aware of the problem, and possibly even of the person leading the push for block bots on Reddit, but again, that isn’t the purpose of this article. There is a clear frustration, though, especially with subreddits that exist to support people in general and aren’t hobby or community boards. Currently, there is nothing on KotakuinAction’s front page that could be considered “trolling,” only alternative opinions. There is one thread promoting a panel on which two of the panelists are women: one from the Society of Professional Journalists, the other a Latina woman with a background in STEM and advocacy for sex workers and veterans. There are several threads about the auto-bans themselves, pointing out the lack of any evidence that KiA warrants such a ban, and the rest are a mix of celebrations of others’ accomplishments, including several women’s, and fair critiques of journalists and websites.

To some extent, one could understand wanting to prevent trolling before it happens. Obviously, people who post in these areas are vulnerable, and of course there are actual bad people on the Internet. It would be understandable for a black subreddit to ban, say, a KKK subreddit; there is no way an interaction between those two would result in anything but trouble. But this shouldn’t extend to assumptions. It shouldn’t include massive communities like KotakuinAction, based solely on a bad media perception or on trolls using the board for whatever reason.

And it shouldn’t apply to support networks. Imagine if such vetting went into live group therapy sessions—if someone could just tell the leader, “This person said something, on their own time, that I don’t like. I don’t want them here,” or “I saw this person hanging out with these people, but one of those people is a bad person. I don’t trust them.” The basic rule still applies: if you wouldn’t do it in real life, you shouldn’t do it on the Internet. And that includes shutting people who may be hurting out of places where they may need support. It’s bad for them, but it’s also bad for the community. Who wants a judgmental support group? Isn’t that the polar opposite of what a support group is meant to be?

The same goes for Twitter. Twitter was designed as an open medium; by design, it’s for sharing ideas with massive groups of people quickly and efficiently. Doesn’t allowing a block bot negate that? If you want a perfectly safe haven, then make your own website or blog, somewhere that can be completely yours. But abusing the systems social media sites put in place to keep out spam and actual harm, in order to block out opinions, does nothing but harm. It isn’t safe in the slightest. Developments like these break my heart, and not for my own sake. I can usually figure something out; I’ve got thick skin. But I know others who are not so lucky, and I fear for the world my younger family members will grow up in, where the promise of support comes with so much red tape.

I wouldn’t feel this article is complete without this: if you or someone you know needs support, there are still places you can go. If your situation is urgent, this webpage includes suicide hotlines for most areas in the United States, along with some specialized ones. If you just need someone to talk to for support, there are websites like 7cupsoftea (which is what I use), where you can find group support or talk to a specialized listener. And as always, support each other.

Kindra Pring

Staff Writer

Teacher's aide by day. Gamer by night. And by day, because I play my DS on my lunch break. Ask me about how bad my aim is.