In recent years, Twitter has been making efforts to clean up harmful and abusive content on its platform. So far, the company has relied on internal software and on users flagging rule-breaking tweets.
Twitter has announced a tool that prompts users to revise replies containing what it describes as “harmful” language before they are published.
The company said in a tweet from its support account on Tuesday that the new feature would first be rolled out as a “limited” experiment. After hitting send, users will be alerted if their message contains words similar to those in posts that have previously been reported, and given the option to revise the message before it is published.
While the test is part of a broader attempt by Twitter to combat hateful posts on the platform, some users did not take well to the announcement, going as far as to describe it as “thought policing”.
Gab, a self-professed “free speech” alternative to Twitter, used the opportunity to advertise its own platform.
Others called instead for the introduction of an edit button. Currently, to edit a tweet, users must delete it and post it again.
There was some support for the measure, however, as well as calls for the site to counter “fake news”.
In an interview with Reuters, a Twitter representative said the feature is designed to get users to “rethink” comments before posting, to ensure they are in line with existing guidelines.
Twitter’s policies prohibit slurs, racist or sexist tropes, and degrading content, but until now enforcement has relied on netizens reporting offensive posts, as well as on the company’s own screening technology.