Double standards in hate speech policies on social media

The discretion currently afforded to social media companies in their moderation of hate speech risks cementing discrimination online, perpetuating the very inequalities that both hate speech legislation and contractual hate speech policies aim to combat.

The definition and implementation of private hate speech policies on social media is currently unregulated within the EU. Instead, moderation of hate speech is left to self-regulation through soft law measures and social media companies’ own contractual terms. Disturbing double standards have emerged in the vacuum of transparency and accountability that has followed this laissez-faire approach. In online discussions, legitimate criticism of discriminatory structures is often removed as hate speech, whilst genuinely hateful and discriminatory statements are allowed to remain online.

To take one example, in April 2021 the Swedish activist Amanda Ålenius posted a text on Instagram on the topic of men’s sexual violence against women.[1] The post was quickly removed for violating Instagram and Facebook’s shared policy on hate speech, because the statement was considered to be a generalisation of men as perpetrators of criminal behaviour. Yet men are, as a matter of fact, overwhelmingly overrepresented as perpetrators of sexual violence.[2] This not only legitimises, but also necessitates, a discussion of the statistical phenomenon of men’s sexual violence against women: a discussion which is apparently not permitted on two of the world’s largest online platforms.

Meanwhile, human rights organisations have criticised inadequate responses by social media companies to online misogyny. In a report from 2018, Amnesty International criticises Twitter for failing to investigate and respond to violence and abuse directed against women on its platform.[3] Similar discriminatory practices have been identified regarding other marginalised groups, such as people of colour and the LGBT+ community, with particularly concerning reports of flagged “hate speech” against white people and men being prioritised over statements targeting actually marginalised groups.[4]

In a world where a considerable part of our communication occurs on social media, such arbitrary moderation of hate speech poses great risks to marginalised groups’ ability to exercise their freedom of expression online. The seemingly public forum that social media offers is, however, in fact a private one, and it thus falls outside the sphere of protection afforded by constitutional rights to freedom of speech. Human rights cannot typically be invoked against a private actor, only against public authorities.[5] Whilst a state can be guilty of censorship, a private company cannot: Facebook, Instagram and Twitter are free to decide for themselves what they do and do not want displayed on their services. Human rights frameworks are therefore limited in their ability to regulate the relationship between social media companies and their users.

An alternative way to address the risk of arbitrary “censorship” online is instead to target, through EU legislation, the lack of accountability and transparency from which this risk stems. Currently, there is no legal framework at EU level addressing the risk of arbitrary content moderation. Only one soft law measure has been adopted: the non-binding Code of Conduct on Countering Illegal Hate Speech Online (henceforth the Code of Conduct). The Code of Conduct was introduced by the EU Commission in 2016 and has been signed by most major online platforms. Through the Code of Conduct, these companies are encouraged to remove illegal hate speech within 24 hours, but the agreement contains no safeguards to ensure that these removal decisions are accurate. Therefore, whilst yearly monitoring reports show that removal rates have increased, it is impossible to know whether this reflects an increased removal of genuine hate speech. Nor does the Code of Conduct contain any procedural rights for users, such as the right to be informed of a content moderation decision (e.g. when a post is removed), the right to receive a justification for such a decision and the right to challenge it. These user rights are essential in order to rectify inaccurate removal decisions such as that in the case of Amanda Ålenius.

EU law therefore does not currently guarantee users any rights against social media companies when it comes to moderation of hate speech. In the absence of a legal framework, the relationship between social media companies and users is regulated by the contractual terms of the social media platform. For users of Twitter, Facebook and Instagram, the prospects are bleak: none of the companies states any concrete right for users to be informed of content moderation decisions, to receive a justification for such decisions, or to challenge them.[6] In practice, this means that a user does not have a right to appeal a decision made by a social media company. It also means that, even if the user is given the possibility to challenge a decision, they will not necessarily be told why the post was deemed to violate platform policy. This makes it difficult for users to explain why their post should remain online and to successfully challenge a decision. The situation is even more difficult when a user is subject to restrictions that limit their content’s visibility without being informed, such as variants of so-called shadowbanning. How are users to know when to challenge a decision they are unaware of?

In December 2020, the EU Commission proposed a new regulation aiming to target this lack of transparency and accountability in online content moderation. The proposal for the Digital Services Act (henceforth the DSA) suggests a number of new obligations for very large social media companies. Notably, the DSA introduces compulsory transparency reports[7] and several user rights, effectively establishing the right to be informed, the right to a justification and the right to an appeal as a minimum standard in the companies’ contractual terms.[8]

Whilst the DSA proposes welcome improvements in user rights, these new obligations for online platforms achieve only limited transparency and accountability. The proposed transparency reports essentially replicate the reports that social media companies already produce and therefore offer little novelty. Furthermore, the user rights extend only to certain categories of users and certain kinds of content moderation decisions. Social media users can be divided into two types: the content owner, and the user who reports hate speech on the platform. The content owner is affected by content moderation measures if their content is removed or otherwise restricted. Conversely, the notifying user is affected if no measures are taken against content they have flagged. However, the DSA proposes all three user rights only for the content owner. The notifier merely has the right to be informed of a decision, but not to receive a justification for it, nor to challenge it.[9]

In practice, this means that a user who reports something as hate speech has no possibility of receiving a justification for why the content was allowed to remain online, nor any means of appealing the decision. This allocation of user rights therefore fails to bring transparency or accountability to situations such as those illustrated by Amnesty International, where misogynistic content is not removed despite being reported to the company. Moreover, the proposed user rights for content owners apply only to removal or suspension decisions. Under the DSA proposal, a social media company would still not be required to inform users of, justify, or provide an appeal against any other form of restriction of their content.[10] Stealthier moderation techniques, such as reducing content’s visibility, could therefore still be used without informing the content owner.

These limitations in the DSA proposal mean that the EU legal regime will continue to offer little protection against arbitrary and discriminatory moderation of hate speech online. Consequently, it does not sufficiently address the double standards that have been observed in the enforcement of private hate speech policies on social media. Besides being a serious issue of discrimination, this also threatens the democratic values that the EU rests on. Voices that challenge the status quo and its structural inequalities are those that pluralistic, democratic societies need the most. A continued lack of transparency and accountability for social media companies’ moderation of hate speech therefore undermines the very foundations of the EU.

Yvette du Plessis-Sjöblom, LL.M.


[1] Amanda Ålenius subsequently posted an almost identical version in which the original word “men” was replaced with “chickens”, resulting in a rather entertaining text on “chickens’ violence against women”. As the original post has been removed, only the edited (chicken) version is accessible at https://www.instagram.com/p/CN8Oh-IF_By/, accessed 2021-12-19.

[2] BRÅ (the Swedish National Council for Crime Prevention), Våldtäkt och sexualbrott [Rape and sexual offences] 2020, https://bra.se/statistik/statistik-utifran-brottstyper/valdtakt-och-sexualbrott.html, accessed 2021-12-18.

[3] Amnesty International, Toxic Twitter, https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1/, accessed 2021-12-17.

[4] The Washington Post, https://www.washingtonpost.com/technology/2021/11/21/facebook-algorithm-biased-race/, accessed 2021-12-17.

[5] See e.g. article 10 of the European Convention on Human Rights which refers to freedom to express oneself “without interference by public authority”.

[6] See Instagram, Terms of Use, https://help.instagram.com/581066165581870/?helpref=hc_fnav, accessed 2021-12-17; Facebook, Terms of Service, sections 3(2) and 4(2), https://www.facebook.com/terms, accessed 2021-12-17; Twitter, Terms of Use, https://twitter.com/en/tos#intlTerms, accessed 2021-12-17; Twitter, Our range of enforcement options, https://help.twitter.com/en/rules-and-policies/enforcement-options, accessed 2021-12-17.

[7] Articles 13 and 23 of the DSA proposal.

[8] Articles 15(1) and 17(1) of the DSA proposal.

[9] Articles 14(5) and 17(1) of the DSA proposal.

[10] Articles 15(1) and 17(1) of the DSA proposal.
