Twitter has begun testing a feature that warns users about “potentially sensitive content,” as reported by TechCrunch.
The feature requires that users click through a disclaimer that states the following:
Caution: This profile may include potentially sensitive content
You’re seeing this warning because they Tweet potentially sensitive images or language. Do you still want to view it?
Clicking “Yes, view profile” dismisses the message and lets you see that person’s profile. The message appears as a pop-up if you click on someone to view their profile, or directly on the profile page if you reach it via a link.
Deployment of this feature was first noticed over at Mashable, where a reporter discovered the message when clicking on the profile of technology analyst Justin Warren. Mr. Warren is Chief Analyst and Managing Director of PivotNine, a research, advisory, and consulting firm centered around IT and based in Melbourne, Australia. Mr. Warren was unaware that his profile had been flagged as such. Twitter explicitly describes “pornographic or excessively violent content” as being verboten, but lists no clear rules for profanity or offensive speech. As a result, users such as Mr. Warren have been baffled by the flag, since what qualifies as “potentially sensitive content” seems to be vaguely defined.
The warning is based on the individual settings of users who are viewing tweets as well as those who are posting the tweets themselves. However, Twitter’s Media Policy states that it may override a poster’s settings if they post content considered “sensitive” without labeling it as such. Repeated violations of the media policy can result in the removal of the ability to turn off this filter altogether. Although suspensions of your Twitter account for issues such as having a pornographic profile, header, or background image can be appealed, there is currently no system for resolving the permanent “sensitive content” flag.
The phrase “potentially sensitive content” appeared in a Twitter blog post from February 7, 2017, titled “An Update on Safety,” which discussed forthcoming features to combat abuse and harassment.
The notion of online “safety” being put forth by Twitter is patently ridiculous and infantilizing. No one’s safety is harmed by seeing nudity, violence, or offensive speech. No actual harm is coming to them unless you equate hurt feelings with bodily harm – which is what they seem to be doing here. Provide users with tools to filter out words, lewd media, and violence. Give people the ability to block or mute people, and punish people who evade those blocks. Beyond that, anything else is just coddling. No wonder their platform is becoming increasingly irrelevant by the day.
What do you think of Twitter warning users about potentially sensitive content? What do you think their internal guidelines are for defining what falls under this banner of content? Let us know in the comments below!