Audience moderators interviewed by IPI highlight the following tools for moderating abusive messages on Twitter:

  • Muting: When online abuse violates both the media outlet’s own standards and Twitter’s community standards, moderators tend to mute rather than block accounts. Muting dilutes the direct impact of the abuse, since the target no longer receives notifications from the muted account. It also prevents a possible angry backlash, because the muted user is given no indication of the muting. Finally, moderators can still see content produced by muted accounts and can therefore remain vigilant to any potential credible threats against the media outlet or a journalist (see the sketch after this list for how muting can also be applied programmatically).
  • Blocking: Moderators tend to block accounts, whether bots or organic users, that persistently spam or push scams. Otherwise they generally treat blocking as a last resort, for two reasons: blocked users can see that they have been blocked when they visit the account’s profile, which invites a backlash, and because blocking hides the blocked account’s content from the moderator, it becomes harder to monitor for imminent threats.
  • Reporting: Moderators generally report to Twitter tweets or accounts that disseminate potentially credible and imminent threats or that contain violent imagery.
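
For outlets that moderate at scale, muting can also be done programmatically. The snippet below is a minimal sketch, assuming access to the Twitter API v2 muting endpoint through the tweepy Python library and a developer app authorized with the outlet’s account credentials; the handle abusive_example and all credential values are hypothetical placeholders, and API access tiers and endpoint availability change frequently.

```python
# Minimal sketch: muting an account via the Twitter API v2 using tweepy.
# Assumes user-context (OAuth 1.0a) credentials for the media outlet's
# account; credential values and the target handle are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_API_KEY",
    consumer_secret="YOUR_API_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# Look up the numeric user ID for the account to be muted.
target = client.get_user(username="abusive_example", user_auth=True)

# Mute the account: its tweets and notifications stop reaching the
# outlet's account, the muted user is not told, and their public posts
# stay visible to moderators who check the profile directly.
client.mute(target_user_id=target.data.id)
```

Reporting, by contrast, generally has to go through Twitter’s in-app reporting flow; to our knowledge there is no equivalent v2 endpoint for filing reports.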