The social media company recently announced a test of a new feature called "Safety Mode," which aims to keep users from being overwhelmed by harmful tweets and unwanted replies and mentions. The feature temporarily blocks accounts that have sent a user harmful language or repeated, uninvited replies or mentions from interacting with that user.
"We want you to enjoy healthy conversations, so this test is one way we're limiting overwhelming and unwelcome interactions that can interrupt those conversations," Twitter (TWTR) said in a statement. "Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks."
When a user turns on Safety Mode in settings, Twitter's systems will assess incoming tweets' "content and the relationship between the tweet author and replier." If Twitter's automated system finds that an account has engaged in repeated, harmful interactions with the user, it will block that account for seven days from following the user's account, viewing their tweets, or sending them direct messages.
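Twitter has not published how the detection actually works, but the flow the company describes can be sketched roughly. Everything in the sketch below, including the Account structure, the looks_harmful check, the interaction threshold, and the two-strike rule, is an illustrative assumption, not Twitter's real system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

AUTO_BLOCK_DAYS = 7  # block duration Twitter describes for Safety Mode

# Placeholder for a real toxicity model; these terms are invented.
HARMFUL_TERMS = {"insult", "slur"}

@dataclass
class Account:
    id: str
    interactions: dict = field(default_factory=dict)  # other account id -> count
    flags: dict = field(default_factory=dict)         # sender id -> harmful-reply count
    auto_blocks: dict = field(default_factory=dict)   # sender id -> block expiry

def looks_harmful(text: str) -> bool:
    """Stand-in content check; Twitter would use a trained classifier."""
    return any(term in text.lower() for term in HARMFUL_TERMS)

def evaluate_reply(recipient: Account, sender: Account, text: str) -> None:
    """Assess content and relationship, then auto-block on repeated harm."""
    # Accounts the user frequently interacts with are exempt from auto-blocking.
    frequent_contact = recipient.interactions.get(sender.id, 0) > 10
    if looks_harmful(text) and not frequent_contact:
        strikes = recipient.flags.get(sender.id, 0) + 1
        recipient.flags[sender.id] = strikes
        if strikes >= 2:  # "repeated" engagement, per Twitter's description
            expiry = datetime.now(timezone.utc) + timedelta(days=AUTO_BLOCK_DAYS)
            recipient.auto_blocks[sender.id] = expiry

def is_auto_blocked(recipient: Account, sender: Account) -> bool:
    """While blocked, the sender can't follow, view tweets, or DM the user."""
    expiry = recipient.auto_blocks.get(sender.id)
    return expiry is not None and datetime.now(timezone.utc) < expiry
```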
Twitter spokesperson Tatiana Britt said the platform does not proactively send notifications letting people know they've been blocked. However, if the violator navigates to the user's page, they'll see that "Twitter auto blocked them" and that the user is in Safety Mode, she said.
The company says its technology takes existing relationships into account to avoid blocking accounts a user frequently interacts with, and that users can review and change blocking decisions at any time.
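Continuing the hypothetical sketch above, that review step would amount to letting the user list active auto-blocks and lift any of them manually:

```python
def review_auto_blocks(recipient: Account) -> list:
    """List accounts currently auto-blocked, so the user can review them."""
    now = datetime.now(timezone.utc)
    return [sid for sid, expiry in recipient.auto_blocks.items() if expiry > now]

def lift_auto_block(recipient: Account, sender_id: str) -> None:
    """User override: undo an auto-block the system applied."""
    recipient.auto_blocks.pop(sender_id, None)
    recipient.flags.pop(sender_id, None)
```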
For now, Safety Mode is just a limited test, rolling out Wednesday to "a small feedback group" of English-language users on iOS, Android, and Twitter.com, including "people from marginalized communities and female journalists."