Introduction: Strengthening moderation to prevent attacks
We have explained in other sections why comment moderation is vital: it allows users to have healthy conversations on your comment pages and helps ensure that attacks targeting journalists either do not appear there or, if they do, are removed as quickly as possible and that aggressors are discouraged. To accomplish these aims, you should aim to read, or at least glance through, all comments posted by users on your comment pages. While reading all comments requires significant resources that not all news organizations will have, your strategy should at least have this goal in mind. You should also consider employing AI software to help highlight potentially problematic posts. While such software is not a substitute for human moderation, and while some aggressors have reportedly learned to circumvent it, it can be useful in helping prevent certain problematic posts from appearing on your site without your consent.
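To make the division of labour concrete, automated screening of this kind can be sketched as a simple filter that flags comments for human review rather than deciding on its own. The keyword list and matching logic below are illustrative assumptions only, not a real moderation model; production systems would rely on a trained classifier, and aggressors can evade simple word lists.

```python
# Minimal sketch of AI-assisted pre-screening that supports, but does not
# replace, human moderators. FLAG_TERMS is a placeholder list invented for
# illustration; a real system would use a trained toxicity classifier.

FLAG_TERMS = {"kill", "attack", "threat"}

def needs_human_review(comment: str) -> bool:
    """Flag a comment for moderator attention if it contains a flagged term."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & FLAG_TERMS)

incoming = ["Great reporting!", "I will attack you for this article"]
# Only flagged comments go to the human review queue; the rest publish normally.
review_queue = [c for c in incoming if needs_human_review(c)]
```

Note that the filter only routes comments to moderators; the decision to remove a comment, and any risk assessment, remains a human task, as the section above stresses.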
Keep in mind that removing attacks, threats and insults targeting journalists does not remove the risk of physical violence emanating from the aggressor. Moderators who see aggressive messages aimed at a journalist, particularly those that contain a threat, should not only remove those messages but also bring them to the attention of relevant persons in the news organization. Online attacks have the potential to turn into physical attacks and news organizations must have a system in place to assess the risk level and take the necessary steps to protect targeted journalists.
This section aims to define the role of moderators and highlight some useful tools and strategies moderators can adopt to prevent aggressive messages from appearing on your comment pages.
A primary goal of moderation is making users aware that comments posted in the comment section need to remain within clearly defined boundaries and follow a set of transparent rules that the news organization enforces. Showing users right from the start that moderators are actively participating in the conversation and monitoring it serves as a […]
Tools and Resources
Your Community Guidelines should warn users that their accounts will be suspended if they breach the Community Guidelines twice within a short period of time. Accounts can be blocked for a few hours or a day.
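The two-breaches rule above can be sketched as a small strike tracker. The 24-hour strike window and one-day block are illustrative assumptions; the actual thresholds should come from your own Community Guidelines.

```python
from datetime import datetime, timedelta

# Sketch of a "two breaches within a short period -> temporary suspension"
# rule. STRIKE_WINDOW and BLOCK_DURATION are assumed values for illustration.
STRIKE_WINDOW = timedelta(hours=24)
BLOCK_DURATION = timedelta(days=1)

class Account:
    def __init__(self):
        self.strikes = []          # timestamps of guideline breaches
        self.blocked_until = None  # end of any active suspension

    def record_breach(self, now: datetime) -> None:
        # Keep only breaches inside the sliding window, then add the new one.
        self.strikes = [t for t in self.strikes if now - t <= STRIKE_WINDOW]
        self.strikes.append(now)
        if len(self.strikes) >= 2:
            self.blocked_until = now + BLOCK_DURATION

    def is_blocked(self, now: datetime) -> bool:
        return self.blocked_until is not None and now < self.blocked_until
```

A sliding window (rather than a lifetime strike count) matches the guideline's "within a short period of time" wording: isolated breaches far apart do not trigger a block.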
Comments that include a threat, insult or other attack against a journalist need to be analysed closely by moderators, who must determine whether the comment in question falls within the boundaries of legitimate criticism or is in breach of the Community Guidelines and therefore needs to be removed. Any decision about removing a […]
While you should aim to read all comments on your pages, doing so may take significant time and mistakes can happen. Ask your users to support you in the effort to keep the conversation in line with the principles set out by the Community Guidelines, in particular as far as attacks against journalists are concerned.
News organizations can choose to pre-moderate comments, meaning that comments are vetted before being posted, or to post-moderate, meaning that comments are reviewed shortly after being posted, with attacks on journalists and any other content in breach of the Community Guidelines removed immediately. A key factor in deciding whether to implement pre- or […]
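The difference between the two workflows can be sketched as two small pipelines: pre-moderation holds every comment until a moderator approves it, while post-moderation publishes immediately and removes breaches on review. The function and queue names below are illustrative, not taken from any specific commenting platform.

```python
from collections import deque

# Pre-moderation: nothing is visible until a moderator approves it.
def pre_moderate(comment: str, review_queue: deque, published: list) -> None:
    review_queue.append(comment)

def approve_next(review_queue: deque, published: list) -> None:
    if review_queue:
        published.append(review_queue.popleft())

# Post-moderation: the comment is visible immediately but is still
# reviewed shortly afterwards and removed if it breaches the guidelines.
def post_moderate(comment: str, published: list, review_queue: deque) -> None:
    published.append(comment)
    review_queue.append(comment)

def remove_if_breach(comment: str, published: list, breaches_guidelines) -> None:
    if breaches_guidelines(comment) and comment in published:
        published.remove(comment)
```

The trade-off the paragraph alludes to is visible in the code: pre-moderation delays every comment but keeps attacks off the page entirely, while post-moderation keeps the conversation flowing at the cost of attacks being briefly visible.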
Moderators play a key role in ensuring your community of users can be part of a valuable exchange of information and ideas and that your journalists and your media outlet are shielded from attacks on your comment pages. The following elements should be kept in mind to ensure your moderators can carry out their work […]