On Tuesday, Twitter announced two new policy changes aimed at combating abuse, alongside an opinion piece in The Washington Post by Twitter executive Vijaya Gadde explaining the company's goal of balancing a free-speech environment with making all users feel safe. Many users have told Twitter that they don't feel they have a voice on the platform, or that they are afraid to join conversations for fear of violent replies. As a result, Twitter has often been criticized in the media for "sucking at dealing with abuse." In the press release announcing the changes, Twitter said it hopes the updates "help [them] in continuing to develop a platform on which users can safely engage with the world at large."
The first change broadens the language of Twitter's violent threats policy: it no longer applies only to direct and specific threats, but also more broadly to "threats of violence against others or promot[ing] violence against others." This means a post does not have to be directed at a specific user to be considered abusive or violent. To help enforce the policy, Twitter's support team will now be able to lock accounts for a pre-determined amount of time. As under the previous policy, the user will still have to delete the offending tweet and verify their phone number.
Twitter is also testing a new product feature intended to limit the reach of suspected abusive accounts. The feature looks at several signals typically associated with abuse, such as the age of the account and whether its content resembles material that has been flagged as abusive in the past. The feature is aimed purely at combating abuse: it will not affect the content users are searching for, nor will it seek out controversial or unpopular posts. Together, these changes should help Twitter toward its goal of creating a "safe place for the widest possible range of perspectives."