
Content Moderation

July 2020 | Article

By Quantanite


How should your company approach content moderation?

There are a lot of keyboard warriors out there.

Disinformation and misinformation, hate speech, and violent, disturbing and extremist content are being posted on social media platforms right now, as you’re reading this.

All this material must be monitored against the platform’s community guidelines. If it is not swiftly removed, it will not only wreck the reputation of the host platform but may also seriously traumatize its users.

In this article we explore how companies should approach content moderation. But first, here are some interesting stats.

Some interesting stats

According to social media intelligence firm Brandwatch…

If just 2% of this content turns out to be problematic, that adds up to a daily total of 64 million social media posts violating a site’s community guidelines.
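Working backwards from those two figures gives a sense of the scale involved (a back-of-the-envelope derivation from the numbers above, not a figure quoted by Brandwatch):

\[
\frac{64{,}000{,}000}{0.02} = 3{,}200{,}000{,}000 \text{ posts per day}
\]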

In short, there are an awful lot of posts to monitor – and your company might not have the time or resources to filter out the content which will harm your brand. Now, you may be thinking: surely that’s what AI is for.

Can we leave content moderation to the robots?

Not entirely, although automation is a huge help. It acts as a triage system, weeding out obviously problematic content, and pushing suspect material towards humans for checking.


Lacking an understanding of the nuances and subtleties of language, meaning and context, robots need human help to read between the lines.


So, where AI stops, human intelligence takes over. The trouble is, the world of content moderation is a tough and brutal place for humans.
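The split between automated triage and human review can be pictured as a simple routing rule. The sketch below is illustrative only: the thresholds and the violation score (assumed to come from an upstream classifier) are hypothetical, not any platform’s actual system.

```python
from enum import Enum


class Decision(Enum):
    REMOVE = "remove"              # clearly violates guidelines
    HUMAN_REVIEW = "human_review"  # ambiguous; needs a moderator
    APPROVE = "approve"            # clearly fine


def triage(violation_score: float,
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> Decision:
    """Route a post using the violation score (0-1) from an upstream classifier.

    Obvious violations are removed automatically; anything in the uncertain
    middle band is queued for a human moderator, who can weigh nuance,
    meaning and context in a way the model cannot.
    """
    if violation_score >= remove_threshold:
        return Decision.REMOVE
    if violation_score >= review_threshold:
        return Decision.HUMAN_REVIEW
    return Decision.APPROVE


# Example: a borderline post goes to the human queue rather than being
# auto-removed or auto-approved.
print(triage(0.72))  # Decision.HUMAN_REVIEW
```

The design point is the middle band: automation only makes the easy calls, and everything ambiguous lands in a human review queue.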

Paying the price of content moderation

In May of 2020, a large social media provider agreed to a settlement of $52 million for its content moderators.

The workers said they were left unprotected from the trauma caused by what their lawyer described as “every horror that the depraved mind of man can imagine”.

Social media companies are now making ongoing improvements to working conditions, such as better mental health support and more stable work environments for moderators.


How to mitigate the risks

While awareness has increased over the last few years, there is almost no research into the most effective protection for content moderators.

The following is Quantanite’s best practice, based on our experience of employing content moderation teams and asking them what they need to stay safe:

Some personalities are suited to the job, others are not. It is not enough to simply assess language ability, listening and observation skills, and then leave an employee to sink or swim.

As well as precise, level-headed analysis, this job requires a tough stomach. Content moderators sift through some of the worst that humanity has to offer, and taking that home with them is not a viable option.

Careful oversight is essential. Managers must be trained to recognize individual danger signals before they become an issue. Traumatic triggers will vary from person to person. Some may experience no trauma at all, while others may appear to be fine – right up to the point at which they break.

The cultural fit

The decision on where the work takes place is important because geography will dictate how guidelines are interpreted by the moderator. Content that’s perfectly acceptable in one country may be deeply offensive in another.

These are often fine-tuned contextual judgments. To make the call between right and wrong, content moderation teams must understand the cultural environment of those who post content, and the context of their posts.

For example, the US presidential election is guaranteed to create an avalanche of false information. Moderators will need a thorough awareness of the American political landscape, its election cycles and political influences.

The shifting landscape

The way we define and classify problematic content is constantly changing. Moderators perform a delicate balancing act on shifting ground. They must protect free speech while following the law, and keep users safe while ensuring the product continues to thrive.


All social media platforms want to be a trusted source of news. They should never be in a position where, due to content moderation errors, they become the news.


It’s a huge responsibility, and hiring the right teams in the right locations is crucial for any platform that hosts user-generated content.


To find out how Quantanite can protect your brand reputation, get in touch.
