Whether a newsroom can develop and implement a structured system of protection and prevention against online harassment of its journalists often depends not only on strategic decision-making by the news organization but also on the resources available to put such a system in place.

In the course of this research, we reviewed and compared protection strategies adopted by news organizations that have invested in their online presence and pursued community building and comment moderation as an integral part of their journalistic activity. This study looks in particular at the experience of private news organizations such as The Guardian (UK), Spiegel Online (Germany) and Cadena SER (Spain), as well as the public service broadcasters in Germany, Finland and the UK.

Looking at the approaches adopted by the news organizations mentioned above, some commonalities can be identified:

  • A declared acknowledgment that online harassment directed at journalists represents an attack on the entire newsroom.
  • A recognition that women, members of minority groups, and journalists covering gender or minority issues are targeted more often and in more brutal ways.
  • The treatment of online attacks against journalists as an element of the broader phenomenon of hate speech and dissemination of disinformation, all of which seek to undermine the very foundations of journalism and democratic exchange of ideas.
  • The development of preventive measures, including regular workshops on issues such as online security, emerging topics that tend to attract abuse, or how to cope with the emotional distress that might result from online violence.
  • A commitment by newsroom managers to improving newsroom culture so that journalists feel comfortable coming forward with their experiences of harassment. This commitment includes offering journalists several points of entry to support channels, making it easier for them to seek help: dedicated email addresses, mobile chat groups, direct contact with line managers and heads of audience, and a peer support network made up of colleagues trained to assess risk and respond to trauma.
  • A trend toward holding regular meetings with community managers, digital editors and different news teams for a “health check” on their social media work. These checks give the teams an opportunity to raise issues they face in their everyday work and signal that support mechanisms are in place whenever needed. In times of crisis, such meetings can be held more often.
  • The development of a set of guidelines and protocols to prevent and counter online attacks, setting out clearly which content will be removed immediately and which alternative strategies will be adopted for content that cannot or should not be removed. All newsrooms that participated in this study agreed that rapid changes in technology, in social media tools and in the political landscape behind online attacks require constant assessment and updating of the guidelines adopted. In some cases, newsrooms have chosen to transmit the protocols exclusively verbally, through frequent roundtables and workshops, to ensure they remain up to date.
  • A general agreement that although existing judicial procedures dealing with online harassment are not as effective as they should be, it is important to report cases of threats, sexual harassment and insults to the police in order to increase the understanding of the phenomenon of online harassment among law enforcement and the judiciary. For those newsrooms that have access to legal defence, pursuing criminal charges or lawsuits against online aggressors has also proven effective in pre-empting further attacks.

The experience of newsrooms in dealing with social media platforms to request the removal of content varies greatly. Newsrooms in the UK and Germany, where the platforms maintain local offices and staff who speak the local language, report a markedly different experience from their colleagues in Poland and Finland. Community managers and editors who participated in this study expressed the desire to develop better communication channels with social media platforms, in order to obtain the swift removal of highly problematic content, such as threats and doxing.

Practices also differ in the moderation model chosen for a news organization’s own platforms, a choice partly shaped by the environment in which journalists and news organizations operate. Pre-moderation, meaning that users’ comments are published only after being reviewed by an editor, as well as real-name registration requirements, have been adopted successfully by some newsrooms, such as Helsingin Sanomat in Finland, diari ARA in Spain and Gazeta Wyborcza in Poland, to limit the number of abusive comments on their sites. A minimal sketch of the pre-moderation workflow follows below.
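
The sketch below, in Python, is purely illustrative: it models pre-moderation as a hold-and-review queue in which no comment is published without an editor’s decision, with a real-name field standing in for the registration requirement. All names (Comment, ModerationQueue) are hypothetical; none of the newsrooms cited has published its actual tooling.

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        PENDING = "pending"    # held until an editor reviews it
        APPROVED = "approved"  # visible on the site
        REJECTED = "rejected"  # never published

    @dataclass
    class Comment:
        author_real_name: str   # real-name registration requirement
        text: str
        status: Status = Status.PENDING

    @dataclass
    class ModerationQueue:
        """Hold-and-review queue: nothing goes live without editor sign-off."""
        comments: list = field(default_factory=list)

        def submit(self, comment: Comment) -> None:
            # New comments enter as PENDING and are not yet visible.
            self.comments.append(comment)

        def review(self, comment: Comment, approve: bool) -> None:
            # An editor's explicit decision is the only path to publication.
            comment.status = Status.APPROVED if approve else Status.REJECTED

        def published(self) -> list:
            # Only comments an editor approved are shown on the site.
            return [c for c in self.comments if c.status is Status.APPROVED]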

Only a few newsrooms so far, generally the better resourced ones, report employing artificial intelligence-based software to bring potentially problematic posts to the attention of newsroom and community managers as fast as possible. While AI software is perceived as a very useful tool, community managers feel it does not substitute for human moderation; its role is to prioritise, not to decide, as illustrated in the sketch below. Furthermore, while AI software has improved at identifying potentially problematic comments, aggressors (both humans and bots) have been equally successful at developing language and tools that evade AI detection.
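
As a rough illustration of this human-in-the-loop design, the Python sketch below routes comments whose score crosses a threshold to a human review queue instead of removing them automatically. The scoring function is a toy keyword heuristic standing in for a real classifier, and the marker list and threshold are invented for the example; any production system would use a trained model.

    # Illustrative triage: automated scoring surfaces likely problems fast,
    # but a community manager always makes the final call.

    ABUSIVE_MARKERS = {"threat", "doxx", "kill"}  # toy placeholder list

    def toxicity_score(text: str) -> float:
        """Toy stand-in for an ML classifier: fraction of flagged words."""
        words = text.lower().split()
        if not words:
            return 0.0
        return sum(1 for w in words if w in ABUSIVE_MARKERS) / len(words)

    def triage(comments, threshold=0.1):
        """Split comments into (needs_human_review, publish_normally)."""
        flagged, passed = [], []
        for text in comments:
            # Crossing the threshold escalates the comment to a human;
            # the software prioritises, the moderator decides.
            (flagged if toxicity_score(text) >= threshold else passed).append(text)
        return flagged, passed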

Our interviews with journalists, editors and news managers revealed a growing awareness of the need to develop strategies to tackle a problem that is only likely to grow. At the same time, the clear evidence that women and members of minority groups are particular targets of online aggression has reinforced the belief among observers that gender balance within the newsroom, as well as a gender-sensitive approach to the content disseminated, is also key to ensuring that women who are attacked online feel fully supported and empowered in their working environment.