Content Moderation and Meta’s New Strategy in the United States – Part I.

Some people have been checking Facebook every day since 2011, while others have not logged in for a long time or have deactivated their accounts altogether. Although many users may feel that the social networking site is “dead”, the reality is that around two billion people still use it every day and three billion log in at least once a month. It is often said that “only older people are on Facebook anymore”, yet the scale of the platform remains enormous, and it is also the space where the company introduces its latest technology and its newest content management ideas. Mark Zuckerberg announced in a short video on 7 January 2025, and subsequently on Meta’s own website, that in the US the company is experimenting with ending its reliance on external fact-checkers, simplifying its moderation rules, relocating some moderation teams and, overall, giving the user community and algorithms a greater role in evaluating content.

Meta has worked with several IFCN (International Fact-Checking Network)-accredited fact-checking partners, who could not remove posts but could flag content that appeared to be false or misleading. The rationale was that Meta, by its own admission, cannot decide what is true and what is false. This is why it has in the past collaborated with independent fact-checkers around the world to review potential misinformation and intervene in its spread. Based on warnings from these external partners, the company has often restricted the visibility of controversial content, labelled it with a warning and generally tried to slow its spread, whether openly or behind the scenes.

However, it is worth placing this in the context of wider, long-standing debates, as modern social media platforms have become not just technology companies but a fundamental infrastructure for social communication. This role means that these platforms, including Meta, can influence and regulate user interactions, discourse and, in some cases, political participation or even social mobilization. With a huge amount of new content appearing every day, human moderators have long been unable to review each post individually. Automated and algorithmic solutions have come to the fore, but they raise serious questions about freedom of expression, legal obligations and social impact.

Social platforms have been given considerable power to decide what can and cannot be shown to users, how posts are ranked, and how far their algorithms highlight or bury them. As a result, their decisions are not only of technical or commercial importance but also of political significance. They can regulate and influence public debates, facilitate or hinder the organization of social movements, and in extreme cases even influence elections by getting some information to more people and other information to fewer. One of the main issues under discussion is how far platforms may go in moderating or ranking content, and to what extent they should be subject to strict national or international regulation. Here, the relevant pieces of EU legislation, the Digital Markets Act (DMA) and the Digital Services Act (DSA), are particularly important. The DMA is primarily concerned with regulating competition and the influence of large companies acting as “gatekeepers”, while the DSA focuses on the operation of services, user rights and the security of the digital space. As EU regulation tightens, there is a risk that tech companies will become even more protective of their own interests in order to avoid liability, including through stricter moderation policies, which, if applied algorithmically and at scale, could also place greater restrictions on free speech.

Related to this are the false positives and false negatives that result from the operation of these algorithms. Machine-learning-based systems can only make decisions based on the patterns in the data they were trained on. They may remove harmless content that does not in fact violate the rules (false positives), or they may fail to filter out genuinely harmful or illegal posts (false negatives). Both errors can have serious social consequences: the first can lead to excessive restrictions on freedom of expression, while the second can facilitate the spread of harmful disinformation or extremist content. In addition, content display and ranking, such as the system of “recommended for you” posts, can contribute to the creation of so-called filter bubbles, in which users only encounter content that reinforces their own point of view, while opposing or critical views remain invisible to them. This phenomenon is directly linked to deepening social divisions and political polarization.
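To make the trade-off concrete, the short Python sketch below shows how a single confidence threshold shifts errors between the two categories. It is a toy illustration only: the scores, labels and threshold values are invented for the example and do not come from Meta’s systems or any real moderation classifier.

```python
# Toy illustration of the false positive / false negative trade-off in
# automated moderation. All scores and labels are invented example data.

posts = [
    # (classifier "violation score", does the post actually violate the rules?)
    (0.95, True),   # clear violation, scored with high confidence
    (0.80, True),   # violation, fairly confident
    (0.75, False),  # sarcasm or a news report misread as violating
    (0.60, False),  # heated but legitimate political speech
    (0.40, True),   # harmful post phrased subtly, scored low
    (0.10, False),  # harmless post
]

def evaluate(threshold):
    """Count errors if every post scoring at or above `threshold` is removed."""
    false_positives = sum(1 for score, bad in posts if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in posts if score < threshold and bad)
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.7, 0.9):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold:.1f}  wrongly removed={fp}  wrongly kept={fn}")
```

Lowering the threshold removes more genuinely harmful posts but also more legitimate speech; raising it does the opposite. Any system operating at platform scale has to pick a point on this curve, which is one reason the “grey zone” discussed below cannot be eliminated by engineering alone.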

While Meta’s new measures aim to reduce the number of innocuous posts being deleted and to give the community itself a greater role in judging content, there is a risk that toxic discourse and misleading information will also become more prevalent. The company nevertheless stresses that the most serious content (violence, fraud, abuse) will continue to be removed, while a “softer” approach will be introduced for the “grey zone”. This is in line with the long-standing debate on how to preserve freedom of expression while ensuring the safe and democratic functioning of social media.

This is also linked to the announcement that some of the moderation and security teams will move to a new location, which the company believes could make the handling of problematic posts more transparent and efficient. Meta has also been developing Artificial Intelligence tools for years that can automatically detect highly damaging content appearing at massive scale, such as hate speech, child abuse or terrorism.

Given Facebook’s large and demographically diverse audience, it is particularly striking how the social networking site tries to keep the news feed “infinite”. In the past, once you had scrolled through all the new posts from your friends, you reached a point where you could stop scrolling. Today, the platform interleaves the genuinely relevant posts with “recommended for you” content, posts from pages the user does not follow, and ads, so you never reach the end. This, of course, means that users spend more time on Facebook on average, and the company earns more advertising revenue.
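The mechanism can be illustrated in the simplest possible terms with the sketch below. It is a toy example, not Facebook’s actual ranking system: the post sources, their ordering and the one-ad-per-four-items ratio are all invented for the illustration.

```python
import itertools

# Toy data: a finite list of friends' posts and endless streams of
# recommended items and ads. Names and ratios are invented examples.
friend_posts = [f"friend post #{i}" for i in range(1, 4)]
recommended = (f"recommended post #{i}" for i in itertools.count(1))
ads = (f"ad #{i}" for i in itertools.count(1))

def infinite_feed():
    """Yield friends' posts first, then keep the feed going forever by
    falling back to recommended content, with an ad after every few items."""
    friends = iter(friend_posts)
    shown = 0
    while True:
        item = next(friends, None)
        if item is None:              # friends' posts ran out long ago...
            item = next(recommended)  # ...so the feed simply never ends
        yield item
        shown += 1
        if shown % 4 == 0:
            yield next(ads)           # ad break after every four organic items

feed = infinite_feed()
for _ in range(12):
    print(next(feed))
```

Because the recommended stream is unbounded, there is no natural stopping point; the “end of the feed” disappears by construction rather than by accident.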


István ÜVEGES, PhD, is a computational linguist, researcher and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impact of Artificial Intelligence (Machine Learning), the nature of legal language (legalese), the Plain Language Movement, and sentiment and emotion analysis.