What Have We Learned From Facebook’s Leaked Moderator Guidelines?

Does Facebook even have a clue how to moderate its content?

Facebook's Leaked Internal Moderation Guidelines

by Alyss Bowen

The Guardian has obtained more than 100 internal manuals from the social media giant Facebook, used to advise moderators on what should and shouldn't be posted on the platform. The manuals detail the criteria used to decide whether posts are too sexual, violent, hateful or racist – and it would appear that even Facebook itself doesn't know how to moderate the content on its own social network.

Dubbed Facebook's 'internal rulebook,' the guidelines dictate what its 2 billion plus users can and cannot post, in a series of manuals, spreadsheets and flowcharts. Topics range as widely as cannibalism. But what do these files tell us? Let's break it down.

  • Videos of violent deaths are marked as ‘disturbing,’ but might not always be deleted as they can help to create awareness of mental illness.

  • Remarks like 'Someone shoot Trump' must be deleted, as he is a head of state and falls into a protected category. Yet statements like 'To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat' or 'fuck off and die' will not be removed, as they are not seen as 'credible threats'.

  • Photographic proof of physical abuse or the bullying of children does not have to be deleted unless there is a ‘sadistic or celebratory element’.

  • Handmade art showing nudity and sexual activity is allowed, but digitally made art showing sexual activity is not.

The list goes on, and even goes as far as to say that Facebook will allow people to live stream attempts to self-harm because it doesn't want to 'censor or punish people in distress.' What's so confusing about these leaked documents is that Facebook itself appears to have no clue how to go about moderating its content. How is it acceptable that 'Let's beat up fat kids' or 'kick a person with red hair' has been categorised under the credible violence section, when those statements are both offensive and imply an act of violence towards another individual?

Facebook appears to believe that language like this is acceptable because it doesn't give 'reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design.' What it isn't taking into consideration is that the statements mentioned could cause extreme distress to an individual who happens to see this language on their feed. Yes, deciding whether to tick yes or no when moderating is a complex decision, but the guidelines seem confused.

Since the introduction of Facebook Live, live streams of abuse, death and suicide have appeared on the platform on multiple occasions – all of which the social media giant has faced extreme criticism over. In light of this, Mark Zuckerberg announced that over the next year Facebook would be adding 3,000 staff to its expanding moderation team. On top of this growing team, it's widely known that Facebook uses AI algorithms to moderate its content – yet it's unclear whether this system follows the same guidelines as those leaked to The Guardian.

Facebook's head of global policy management, Monika Bickert, said Facebook 'has almost 2 billion users, and that it's difficult to reach a consensus on what to allow.' Moderators are said to often have 'just 10 seconds' to decide what, and what not, to approve.

According to Bickert, they 'work hard to make Facebook as safe as possible, while enabling free speech. This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take seriously.' Yet making an extremely important, potentially life-threatening decision in just 10 seconds completely contradicts her statement that making Facebook as safe as possible requires 'a lot of thought into detailed and often difficult questions.'

A source even told The Guardian that 'Facebook cannot keep control of its content. It has grown too big, too quickly,' and many moderators are said to have serious concerns about the 'inconsistency and peculiar nature of some of the policies.' Facebook has such a wide and diverse community, and it's natural that everyone on the platform will have conflicting views about what's acceptable to share – but Facebook has a responsibility to keep its users safe, and these guidelines in no way reflect that. Bickert goes on to say that 'videos of violent deaths are disturbing but can help create awareness. For videos, we think minors need protection and adults need a choice. We mark as "disturbing" videos of the violent deaths of humans.' This in itself isn't a solution, however – find us a minor who would turn off a video that opens with a 'disturbing' warning. Curiosity would get the better of them, and this could, in turn, affect them physically or mentally.

It's hard to say what should be done, but it's plain to see that Facebook needs to take more responsibility here. There's a very thin line between spreading awareness and spreading unsolicited images, videos and information that could be harmful to those who view them. Facebook is yet to make a statement about the leaked documents, but we will update this article as and when it does.



Follow Alyss on Instagram @alyssbowen

This article originally appeared on The Debrief.
