Introduction

Media organizations use social media platforms to reach a wider audience, generate public debate around certain issues and, ultimately, create a community. Media outlets tend to apply the same community standards on their official social media channels as they do in their own discussion forums, where moderation teams engage with the audience and create an ecosystem for healthy public debate with and among users.

However, when newsrooms try to mirror these forum practices on their social media channels, they must also contend with the policies established by the tech companies behind these platforms and the tools the latter provide to moderate debate. This section examines precisely this intersection of newsroom policies and platform policies, and presents mechanisms with which moderators can steer abuse-free public discussion on social media platforms.

The following measures are the result of a series of in-depth conversations with audience moderation experts at five media organizations in four European countries: Finland, Germany, Spain and the United Kingdom. While newsrooms nowadays include a diverse range of channels – e.g., Instagram, Telegram and WhatsApp – in their communication strategies, significant resources remain dedicated to moderating Facebook and Twitter. This section, therefore, focuses exclusively on these two platforms.

While they necessarily look for ways to limit online attacks against journalists, the moderators with whom we spoke said they avoid blocking or banning users whenever possible, for three reasons: firstly, to preserve these channels as an open arena for debate and criticism; secondly, to protect media organizations from accusations of censorship; and thirdly, out of concern that blocking or banning may trigger even worse abuse.

Note that this project focuses only on tools to moderate conversations on media outlets' official channels, not on journalists' personal pages. Click here for safety tips regarding personal social media accounts.

Blocking words and setting the strength of the profanity filter: Facebook blocks certain content that it regards as a breach of its community standards. In addition to Facebook’s own blocking process, moderators can also define a list of keywords so that any comment containing one of these words is automatically blocked.
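
Facebook’s built-in blocklist and profanity filter are configured through the Page settings, but teams that moderate at scale can approximate the same behavior programmatically. Below is a minimal sketch using the Graph API’s comments edge and hide action; the API version, the Page access token and the keyword list are placeholders, and the token is assumed to carry the Page’s moderation permissions.

```python
import requests

GRAPH = "https://graph.facebook.com/v19.0"  # API version is an assumption
PAGE_TOKEN = "PAGE_ACCESS_TOKEN"            # placeholder: Page token with moderation permissions
BLOCKED_WORDS = {"keyword1", "keyword2"}    # placeholder keyword list

def hide_matching_comments(post_id: str) -> None:
    """Scan a post's comments and hide any that contain a blocked keyword."""
    resp = requests.get(
        f"{GRAPH}/{post_id}/comments",
        params={"fields": "id,message", "access_token": PAGE_TOKEN},
    )
    resp.raise_for_status()
    for comment in resp.json().get("data", []):
        text = comment.get("message", "").lower()
        if any(word in text for word in BLOCKED_WORDS):
            # Hiding keeps the comment visible only to its author and their friends.
            requests.post(
                f"{GRAPH}/{comment['id']}",
                params={"is_hidden": "true", "access_token": PAGE_TOKEN},
            ).raise_for_status()
```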

Moderators report posts, comments or accounts that breach both the media outlet’s community standards and the social media platforms’ policies on online abuse. It is therefore very important that moderators know these policies and stay up to date as they change.

Deleting a comment: This feature allows the moderator to permanently remove a comment from a post, meaning that it will no longer be visible to anybody, including the user who posted it. Click here to learn how to delete a comment from a post.

Hide a comment: This feature allows the moderator to hide a comment on a post so that it remains visible only to the user who posted it and their friends.
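
For teams that script their moderation, both actions correspond to a single Graph API call on the comment object. A minimal sketch, assuming a valid Page access token; the API version and token are placeholders.

```python
import requests

GRAPH = "https://graph.facebook.com/v19.0"  # API version is an assumption
PAGE_TOKEN = "PAGE_ACCESS_TOKEN"            # placeholder

def delete_comment(comment_id: str) -> None:
    """Permanently remove a comment; it disappears for everyone, including its author."""
    requests.delete(
        f"{GRAPH}/{comment_id}", params={"access_token": PAGE_TOKEN}
    ).raise_for_status()

def hide_comment(comment_id: str) -> None:
    """Hide a comment; it stays visible only to its author and their friends."""
    requests.post(
        f"{GRAPH}/{comment_id}",
        params={"is_hidden": "true", "access_token": PAGE_TOKEN},
    ).raise_for_status()
```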

Ban a user or Page: This feature allows the moderator to ban a user or another Facebook Page. The banned user will still be able to view and share content from the media outlet’s Facebook Page but will be unable to engage with it – e.g., by commenting on its posts, messaging the Page or liking it.
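
Bans can likewise be applied through the Page’s blocked edge in the Graph API. A hedged sketch; the Page ID, access token and API version are all placeholders.

```python
import requests

GRAPH = "https://graph.facebook.com/v19.0"  # API version is an assumption
PAGE_ID = "PAGE_ID"                         # placeholder
PAGE_TOKEN = "PAGE_ACCESS_TOKEN"            # placeholder

def ban_user(user_id: str) -> None:
    """Ban a user from the Page; they can still view and share its content but can no longer engage."""
    requests.post(
        f"{GRAPH}/{PAGE_ID}/blocked",
        params={"user": user_id, "access_token": PAGE_TOKEN},
    ).raise_for_status()

def unban_user(user_id: str) -> None:
    """Lift an existing ban."""
    requests.delete(
        f"{GRAPH}/{PAGE_ID}/blocked",
        params={"user": user_id, "access_token": PAGE_TOKEN},
    ).raise_for_status()
```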

Audience moderators interviewed by IPI highlight that the following tools can be considered when moderating on Facebook: deleting a comment when it contains aggressive or threatening content, derogatory words or insults, in order to promote a healthy public discussion.
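
As an illustration of this practice, a hypothetical triage helper might encode the rule of thumb above: delete clearly abusive comments, hide borderline ones and leave the rest alone. The term lists are placeholders, not a recommended lexicon.

```python
THREATS = {"threat1", "slur1"}    # placeholder: clearly abusive terms
BORDERLINE = {"rude1", "rude2"}   # placeholder: borderline terms

def triage(message: str) -> str:
    """Return a moderation decision for a comment: delete, hide or keep."""
    text = message.lower()
    if any(term in text for term in THREATS):
        return "delete"   # aggressive or threatening content, derogatory words, insults
    if any(term in text for term in BORDERLINE):
        return "hide"     # questionable, but not a clear breach of the standards
    return "keep"
```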

Twitter offers moderators the possibility to report tweets (including multiple tweets at the same time), direct messages, targeted tweets within a conversation or entire conversations (threads, replies below a tweet), Moments, and accounts engaging in abusive behavior. Please click here to learn how to report a tweet.
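
Most reporting happens through Twitter’s own interface and has no public API equivalent. The one programmatic option, at the time of writing, is the legacy v1.1 spam-report endpoint, sketched below; all credentials are placeholders, and reports of abuse or harassment specifically still have to go through the UI flow linked above.

```python
import requests
from requests_oauthlib import OAuth1

# OAuth 1.0a user-context credentials for the reporting account (placeholders).
auth = OAuth1("API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

def report_spam(screen_name: str, block: bool = False) -> None:
    """Report an account for spam via the legacy v1.1 endpoint, optionally blocking it."""
    resp = requests.post(
        "https://api.twitter.com/1.1/users/report_spam.json",
        params={"screen_name": screen_name, "perform_block": str(block).lower()},
        auth=auth,
    )
    resp.raise_for_status()
```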

Mute an account: This feature allows the user to remove an account’s tweets from their timeline without unfollowing or blocking it. Unlike those actions, the muted account is not notified that it has been muted. Muted accounts can still see your content on Twitter, but you will not be notified when they mention you.
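
Muting is also exposed through the Twitter API, which lets moderation teams script it. A minimal sketch against the v2 muting endpoint, assuming OAuth 1.0a user-context credentials for the account doing the muting; all IDs and keys are placeholders.

```python
import requests
from requests_oauthlib import OAuth1

# OAuth 1.0a user-context credentials (placeholders).
auth = OAuth1("API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")
MY_USER_ID = "AUTHENTICATED_USER_ID"  # numeric ID of the account doing the muting

def mute(target_user_id: str) -> None:
    """Mute an account; the muted account is not notified."""
    resp = requests.post(
        f"https://api.twitter.com/2/users/{MY_USER_ID}/muting",
        json={"target_user_id": target_user_id},
        auth=auth,
    )
    resp.raise_for_status()
```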

Audience moderators interviewed by IPI highlight that the following tools can be considered when moderating abusive messages on Twitter: Muting: When it comes to online abuse in violation of both the media outlet’s own and Twitter’s community standards, moderators tend to mute rather than block accounts. This option dilutes the direct impact of the abuse without notifying the abusive account, reducing the risk of escalation.
