Reuters reports that Google and Facebook have quietly begun using automated systems to take down extremist videos. The systems are similar to Google’s Content ID system for taking down copyright-infringing videos. A library is built up of videos considered to be extremist in nature, and if an identical or very similar video is posted, the system will automatically take it down. Such a system can only remove reposts of content that has already been identified as extremist; it cannot identify new content as extremist.
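The mechanics described above can be illustrated with a toy sketch. The real systems use proprietary perceptual fingerprints, so everything below is a hypothetical stand-in: a video is reduced to a set of frame hashes, and an upload is removed if its fingerprint overlaps strongly enough with any entry in a library of already-flagged videos. Note the limitation the article points out falls straight out of the design: content with no match in the library is never caught.

```python
# Hypothetical sketch of library-based content matching, in the spirit of
# Content ID. Real systems use robust perceptual fingerprints, not raw
# frame hashes; this is only meant to show the matching logic.

def fingerprint(frames):
    """Reduce a video (a list of frame byte strings) to a set of frame hashes."""
    return {hash(f) for f in frames}

class TakedownFilter:
    """Matches uploads against fingerprints of previously flagged videos."""

    def __init__(self, threshold=0.8):
        self.library = []          # fingerprints of known flagged videos
        self.threshold = threshold # minimum similarity to trigger removal

    def add_flagged(self, frames):
        """Add an already-identified video to the library."""
        self.library.append(fingerprint(frames))

    def should_remove(self, frames):
        """True if the upload is identical or very similar to a flagged video."""
        fp = fingerprint(frames)
        for known in self.library:
            # Jaccard similarity: shared frames over total distinct frames.
            similarity = len(fp & known) / max(len(fp | known), 1)
            if similarity >= self.threshold:
                return True
        return False  # novel content is never flagged by this design
```

A repost, or a lightly edited re-upload, matches an existing fingerprint and is removed; genuinely new footage shares no frames with the library and passes through untouched, which is exactly why such a filter cannot do the initial identification itself.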

In April of this year, representatives from YouTube, Twitter, Facebook and CloudFlare held a call to discuss how they could deal with extremist content on their platforms. This call was brought about after politicians in the US and Europe put pressure on technology and social media companies to do something about terrorists making use of their platforms. One of the main topics of discussion was a content blocking system proposed by the Counter Extremism Project, a non-profit organization which seeks to combat terrorist speech on social media.

None of the companies on the call fully embraced the proposal by the Counter Extremism Project; all were concerned about outside interference in their platforms. Another idea discussed was the creation of an industry-run non-profit that would develop a content blocking system. Some time after the call concluded, at least two of the companies began implementing automated systems to remove extremist content. Sources at Facebook and Google stated that both companies had quietly begun using such systems. The sources did not tell Reuters how videos in the libraries were initially identified as extremist.

The sources stated that these systems were implemented quietly out of concern that terrorists might figure out how to manipulate them if they were widely known. They also expressed concern that repressive regimes may demand that such systems be used to silence opponents if the systems were not kept secret. However, CloudFlare CEO Matthew Prince has another take on why the companies are not talking about this. “There’s no upside in these companies talking about it,” he said, “Why would they brag about censorship?”

Google has declined to comment either on the April call or on its use of an automated system to remove extremist content. Facebook declined to confirm or deny its use of such a system, but did state that the company was “exploring with others in industry ways we can collaboratively work to remove content that violates our policies against terrorism.” Twitter’s only comment on the matter was that it was still evaluating the proposal by the Counter Extremism Project and that the company had “not yet taken a position.”

Are automated systems to remove extremist content a good idea? Leave your comments below.

Max Michael

Senior Writer

I’m a technology reporter located near the Innovation District of Kitchener-Waterloo, Ontario.

  • Keirnoth

    “Why would they brag about censorship?”

    No you cod, the correct question is “Why do we have to censor?” This is beyond scary man…

  • Hawk Hopper

    Here is the video from the Counter Extremism Project. It seems like with a bit of creative editing, you could turn this video into an online extremist recruiting tool.

  • Riosine

    And how about parodies of extremist content? You know, the kind of videos atheists usually make to mock the silly judeo-christian-muslim religions and their adherents. This system could potentially backfire on those.

  • BurntToShreds

    This is some grim stuff. I hate videos of terrorist acts like beheadings, bombings and bile-filled propaganda as much as the next civilized human being, but creating an automated system to take them down is just asking for repressive governments to take the tech and reverse-engineer it for their own nefarious ends.

    The sheer amount of content uploaded to these sites every day may be overwhelming, but acts of extremism and terrorism need to be treated differently than people mass-uploading rips of movies and albums. No automated filters; respond to user reports and delete the content under standard guidelines. Letting the concept of a ContentID for extremism continue to spread unabated would be flat-out irresponsible.