Moderators report posts, comments, or accounts that breach both the media outlet’s community standards and the social media platforms’ policies on online abuse. It is therefore essential that moderators know these policies and keep up to date as they change.
Twitter has developed a series of policies under its Twitter Rules defining what it considers online abuse. The following categories are relevant in the context of this project:
- Violent content, such as threats of death or physical harm, or content that glorifies violence.
- Terrorism or promoting terrorism.
- Targeted harassment, such as using aggressive comments to intimidate someone, inciting others to do so, or wishing or hoping for physical harm against someone.
- Hateful conduct promoting violence against other users on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
- Sensitive media including graphic violence and adult content.
- Impersonation of individuals or organisations with intent to mislead others.
See the full text of Twitter’s Rules here.
Moderators can report to Facebook any post or message that they believe violates the platform’s community standards. Facebook states that it will not inform the subject of the complaint and that it will review the reported content and remove it if its assessment finds a violation of the platform’s community standards.
Click here to learn how to report a post or comment on your page.
Facebook’s community standards state that the platform does not tolerate:
- Bullying and harassment, defined as posts in the form of writing or images targeting a person “with the intention of degrading or shaming them”. This category also includes online stalking;
- Direct threats, including “serious threats of harm to public or personal safety”;
- Sexual violence, including not only posts containing direct threats of sexual assault (rape or similar) but also those inciting such attacks, as well as threats to share intimate images. Click here to access Facebook’s form for reporting blackmail or threats to share intimate images.
- Hate speech, which Facebook defines as direct attacks on, or promotion of violence against, people based on characteristics such as race, ethnicity, national origin, religion, sexual orientation, or gender identity, among others.