Instagram has introduced new options to limit direct messages and comments from strangers who engage in abusive behavior toward users.
Instagram is rolling out new anti-abuse features to limit or prevent “unwanted interactions.” The Facebook subsidiary is introducing tools to keep abusive accounts away from its users. There are three new safety features: “Limits,” which restricts contact from strangers; a tool, still under development, that proactively detects abusive language before it is delivered through the platform; and an automatic filter that relegates DMs containing offensive words, phrases, and emoji to a hidden folder.
New Instagram Direct Message Limit Option Prevents Strangers from Harassing Users
Instagram’s new “Limits” tool does precisely what one would imagine: it lets users block comments and direct messages from accounts they aren’t connected to. It’s already possible to turn off all comments or DMs, but Limits provides more granular control. The other two features are automatic, either blocking or filtering out abusive language. Instagram explained in a press release:
“We hope these new features will better protect people from seeing abusive content, whether it’s racist, sexist, homophobic or any other type of abuse. We know there’s more to do, including improving our systems to find and remove abusive content more quickly, and holding those who post it accountable.”