The Path to Automatic Moderation

The freedom provided by Section 230 of the Communications Decency Act has opened up huge business opportunities, but it has also created several problems. One flashpoint around which US debates have flared up again and again, for example, is terrorism-related content, or more precisely, content published by terrorist organizations.

The situation is complicated by the fact that, despite these recurring debates, the US has long regulated the area only indirectly. Regulation has taken the form of public-private partnerships, awareness campaigns, memoranda of understanding, transfers of expertise, and the delineation of responsibilities between the US government and large social media companies.

It is also worth noting that all the large social media platforms are publicly traded companies. Negative news about a company is in many cases directly reflected in a falling share price, which exerts strong indirect pressure on the companies concerned through their investors. Social norms play a role as well: in many cases, the community using the service wants to see its own values reflected in the content on online platforms.

Social media providers are, of course, also responsible for restricting illegal content, for example in cases covered by the Anti-Terrorism Act or where content infringes copyright. Partly for this reason, and partly to cover cases that are not regulated by law but may offend society’s values, most platforms maintain Terms of Service (TOS) setting out exactly what content they will delete or sanction the poster for.

In most cases, this sanction means removing the content concerned, which service providers, under ongoing social pressure, aim to do efficiently and, above all, quickly.

It should also be remembered that although most of these service providers are registered in the US, their activities span the globe, so they are also subject to legislation elsewhere, most notably in the EU. Perhaps the most important example is the EU’s Digital Services Act (DSA), which became fully applicable on 17 February 2024. The DSA places new obligations on online platforms that have users in the EU, with the aim of protecting users and their rights more effectively. Among other new elements, the regulation:

  • requires platforms to provide tools for users to flag illegal content, goods, services, and related activities;
  • prohibits the targeting of minors with advertising based on profiling or personal data;
  • requires that users be informed about the advertisements they see, for example, why they see them and who is funding them; and
  • prohibits entirely ads that target groups of users based on sensitive data, such as political or religious beliefs, sexual orientation, etc.

In addition, with the DSA’s full application, the rules that have already applied to very large online platforms and search engines since 2023 now extend to all platforms and hosting services. It is therefore not surprising that complying with and enforcing these rules quickly and effectively is particularly important for social media providers, which derive a significant part of their profits from data trading and advertising sales.

To filter out offending content quickly and effectively, platforms have traditionally employed moderators: trained people whose job is to review the content posted on their sites, process the reports received, and impose sanctions where necessary. Algorithmic solutions already existed at that time, of course, but the dominant paradigm was human-in-the-loop, i.e., in most cases decisions were made only after at least minimal human review.
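
To make the human-in-the-loop idea concrete, the sketch below shows one plausible shape such a triage pipeline could take: an automated classifier scores each post, clear-cut cases are handled automatically, and everything ambiguous is queued for a human moderator. All names, thresholds, and the toy scoring rule are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"


@dataclass
class Post:
    post_id: str
    text: str


def classifier_score(post: Post) -> float:
    """Stand-in for an ML model returning a policy-violation probability
    in [0, 1]. A real system would call a trained classifier here."""
    flagged_terms = {"attack", "bomb"}  # illustrative only
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)


def triage(post: Post, low: float = 0.2, high: float = 0.9) -> Decision:
    """Auto-allow clearly benign posts, auto-remove near-certain violations,
    and queue everything ambiguous for a human moderator."""
    score = classifier_score(post)
    if score < low:
        return Decision.ALLOW
    if score > high:
        return Decision.REMOVE
    return Decision.HUMAN_REVIEW


if __name__ == "__main__":
    print(triage(Post("1", "Lovely weather today")))  # Decision.ALLOW
    print(triage(Post("2", "Review of the film's bomb attack scene")))  # Decision.HUMAN_REVIEW
```

The defining design choice is the middle band: automation disposes only of the cases where the model is confident, while borderline content is always escalated to a person.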

The moderation process was, of course, already criticized at the time. A June 2020 report by the NYU Stern Center for Business and Human Rights, for example, criticized Facebook, the largest social media platform, for outsourcing content moderation and for employing too few moderators.

With COVID-19, the situation changed dramatically. During the pandemic, social media platforms relied heavily on algorithmic content moderation, as lockdowns limited the work of human moderators. The shift to automated systems produced mixed results: while they were effective at handling large volumes of content, they struggled to understand context and to moderate across different languages. In the fight against disinformation, platforms were often confronted with the complexity of political and scientific debates, which influenced moderation decisions. The pandemic thus highlighted both the limitations of algorithmic systems and the importance of human oversight for accurate and fair content moderation.
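
The context and language problems are easy to reproduce even with a toy filter. In the hypothetical sketch below (the blocklist and example sentences are invented purely for illustration), a naive keyword rule flags a news report and an idiom just as readily as a genuine threat, while missing the same threat written in another language:

```python
# A deliberately naive keyword filter, illustrating why context matters.
BLOCKLIST = {"attack"}  # illustrative, English-only rule


def naive_flag(text: str) -> bool:
    """Flag any text containing a blocked term, with no notion of context."""
    return any(term in text.lower() for term in BLOCKLIST)


print(naive_flag("Reporters covered yesterday's attack."))  # True: a news report is flagged
print(naive_flag("Let's attack the problem head-on."))      # True: an idiom is flagged
print(naive_flag("Támadást tervezünk."))                    # False: the same threat in Hungarian slips through
```

Production classifiers are far more sophisticated than this, but the underlying failure modes, over-blocking legitimate speech and under-blocking content in other languages, are exactly the ones described above.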

Some authors define content moderation as a collective term for the governance mechanisms that structure participation in a community in order to facilitate collaboration and prevent abuse. In some form, the activity has been with us since the beginning of group-based online communication. Importantly, in this interpretation moderation includes not only administrators or moderators empowered to remove content or exclude users, but also the design decisions that organize how community members interact with each other. This is what makes the tendency towards the almost complete automation of these processes, and the quasi-dehumanization it entails, so interesting.


István ÜVEGES is a researcher in Computational Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies, and the Plain Language Movement.
