Reuters reports that Google and Facebook have quietly begun using automated systems to take down extremist videos. The systems are similar to Google's Content ID system for removing copyright-infringing videos: a library is built up of videos considered extremist in nature, and if an identical or very similar video is posted, the system automatically takes it down. Such a system can only remove reposts of content that has already been identified as extremist; it cannot identify new content as extremist.
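The library-matching approach described above can be sketched in a few lines. This is a minimal illustration, not the companies' actual implementation: all names here are hypothetical, and the fingerprint is a plain cryptographic hash, which only matches byte-identical reposts. Production systems use perceptual fingerprints that also catch re-encoded or lightly edited ("very similar") copies.

```python
# Sketch of a Content ID-style takedown matcher (illustrative only).
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    # Stand-in fingerprint: a cryptographic hash matches only
    # byte-identical uploads. Real systems compute perceptual
    # fingerprints from decoded frames and audio.
    return hashlib.sha256(video_bytes).hexdigest()

class TakedownMatcher:
    def __init__(self) -> None:
        # Library of fingerprints from videos already judged extremist.
        self.library: set[str] = set()

    def add_known(self, video_bytes: bytes) -> None:
        self.library.add(fingerprint(video_bytes))

    def should_remove(self, video_bytes: bytes) -> bool:
        # Matches only previously identified content; a brand-new video
        # can never be flagged -- exactly the limitation noted above.
        return fingerprint(video_bytes) in self.library

matcher = TakedownMatcher()
matcher.add_known(b"previously identified clip")
print(matcher.should_remove(b"previously identified clip"))  # True
print(matcher.should_remove(b"never-seen-before clip"))      # False
```

Note that the hard problem is not the lookup but building the library in the first place, which is why the question of how videos are initially classified as extremist matters.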
In April of this year, representatives from YouTube, Twitter, Facebook and CloudFlare held a call to discuss how they could deal with extremist content on their platforms. The call came after politicians in the US and Europe pressured technology and social media companies to do something about terrorists making use of their platforms. One of the main topics of discussion was a content blocking system proposed by the Counter Extremism Project, a non-profit organization which seeks to combat terrorist speech on social media.
None of the companies on the call fully embraced the Counter Extremism Project's proposal; all were concerned about outside interference in their platforms. Another idea discussed was the creation of an industry-run non-profit which would develop a content blocking system. Some time after the call concluded, at least two of the companies began implementing automated systems to remove extremist content. Sources at Facebook and Google stated that both companies had quietly begun using such systems. The sources did not tell Reuters how videos in the libraries were initially identified as extremist.
The sources stated that these systems were implemented quietly out of concern that terrorists might figure out how to manipulate them if they were widely known. They also expressed concern that, if the systems were not kept secret, repressive regimes might demand they be used to silence opponents. However, CloudFlare CEO Matthew Prince has another take on why the companies are not talking about this. "There's no upside in these companies talking about it," he said. "Why would they brag about censorship?"
Google declined to comment on either the April call or its use of an automated system to remove extremist content. Facebook declined to confirm or deny its use of such a system, but did state that the company was "exploring with others in industry ways we can collaboratively work to remove content that violates our policies against terrorism." Twitter's only comment was that it was still evaluating the Counter Extremism Project's proposal and that the company had "not yet taken a position."
Are automated systems to remove extremist content a good idea? Leave your comments below.