Twitter wants to limit the sharing of harmful content on its platform, and to that end it is testing a new feature that prompts users to self-edit before they post tweets containing harmful language.
This feature is currently available to a limited set of iOS users, who can revise the content of their tweet before making it public. “When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” Twitter said.
Twitter points out that the dialog box will pop up on tweets that contain unpleasant material. Using its AI/ML tools, Twitter’s back-end systems will detect such words in a tweet and flag them to the user, giving them the chance to edit the content before posting.
This seems to be Twitter’s way of letting people take a second look at what they’ve written and change their words, so they don’t face consequences for violating Twitter’s policies, such as a strike against their account or an outright suspension.
Many users have long been asking for an edit button on Twitter, something that even Twitter CEO Jack Dorsey acknowledged during his visit to India last year. But while this test feature may sound like the real deal, the micro-blogging platform has been quick to temper expectations: according to Twitter, it is still not an edit button for users, but a self-edit tool meant to tackle rampant harassment on the platform.
During the pandemic, people have blatantly tried to use Twitter to spread COVID-19-related misinformation, and Twitter claims to have worked out multiple ways to limit that from happening. Since the test feature is only available to iOS users right now, we’ll have to wait and see whether the option is rolled out to Android users as well in the coming months.