The AI Quandary: Unmasking the Imperfections in Social Media

Artificial Intelligence (AI) is a multifaceted field of computer science and technology that has garnered immense attention and relevance in recent decades. It encompasses the development of intelligent systems and machines capable of simulating human-like cognitive functions such as learning, problem-solving, reasoning, and decision-making. AI is driven by the fundamental idea of creating systems that can process information, adapt to changing conditions, and perform tasks that typically require human intelligence.

One of the key factors propelling AI forward is its capacity to process and analyze enormous amounts of data at speeds unattainable by humans. Machine learning techniques, a subset of AI, enable systems to recognize patterns, make predictions, and improve their performance through training examples; the system then generalizes to new cases it has never seen in the training data. These systems have found applications in diverse domains, from healthcare and finance to autonomous vehicles and virtual assistants.
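To make this train-then-generalize loop concrete, here is a minimal sketch of a toy text classifier, assuming the scikit-learn library and invented example data; it is an illustration of the technique, not a description of any platform's actual moderation model.

```python
# A minimal sketch of training on labeled examples and generalizing to
# unseen text. The data below is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training examples: texts paired with labels (1 = flag, 0 = allow).
train_texts = ["I will hurt you", "have a great day", "you are awful", "lovely photo"]
train_labels = [1, 0, 1, 0]

# The pipeline learns word-weight patterns from the training examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# It then generalizes to new cases never seen during training. Note the
# failure mode: "awful" was flagged in training, so this harmless sentence
# may be flagged too, since the model has no grasp of context.
print(model.predict(["what an awful day"]))
```

This context blindness is exactly the limitation the following paragraphs examine.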

AI research and development continue to shape our modern world, with AI technologies influencing aspects of daily life, from personalized online recommendations to autonomous drones. AI also plays a pivotal role in addressing complex challenges, such as disease diagnosis, climate modeling, and cybersecurity. But does AI commit errors, and is eliminating those errors feasible?

First, let us assess what shortcomings AI has in the field of social media and whether these shortcomings can be eliminated.

The use of AI in areas involving human communication is an extremely complex issue. Nothing is more indicative of this than the fact that an entire research area, Natural Language Processing (NLP), is dedicated to it; NLP is also one of the most influential trends in AI research today. While technology and AI continue to evolve, we are still dealing with systems that may not adequately process human nuances, intuitions, or underlying meanings within context. Consequently, their ability to effectively analyze individual motivations within a post is limited.

The first imperfection is AI's limited capability to recognize dialects and slang. For example, within LGBTQ groups there are words and expressions that, without knowledge of the context, the characteristics of the group, and the individual's habits, could potentially fall under the concept of hate speech. Facebook's algorithm has already misjudged exactly such a situation: it banned a transgender woman for uploading a picture of her new hairstyle and describing herself as “tranny”, a term that is fully accepted within the LGBTQ community and carries no hate speech content.

What further underscores the shortcomings of AI in this area is another study, which analyzed tweets and concluded that content posted by an African American is twice as likely to be classified as offensive by AI. This holds even though such language differences are rooted in the diverse linguistic traditions of social groups.
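The kind of disparity the study reports can be checked with a simple audit over a classifier's outputs. The sketch below uses invented data and a hypothetical two-group split; it does not reproduce the study's dataset or method.

```python
# Hypothetical audit: compare a moderation classifier's flag rate per
# speaker group. The posts below are invented illustrative data.
from collections import defaultdict

posts = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": True},
    {"group": "A", "flagged": False}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": False},
    {"group": "B", "flagged": False}, {"group": "B", "flagged": False},
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged count, total count]
for p in posts:
    counts[p["group"]][0] += p["flagged"]
    counts[p["group"]][1] += 1

for group, (flagged, total) in counts.items():
    print(f"group {group}: flag rate {flagged / total:.0%}")
# A disparity like 50% vs. 25% mirrors the "twice as likely" finding above.
```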

In May 2021, Facebook mistakenly blocked, and later reinstated, millions of Palestinian accounts and posts related to the ongoing conflict. Facebook’s explanation was that its hate speech detection software had incorrectly categorized a key hashtag as related to a terrorist group. The exact capabilities of Facebook’s filtering tools are unknown (such solutions are often part of a company’s know-how), and this opacity is itself part of the problem.

However, researchers argue that linguistic data always contains pre-existing biases. Despite attempts to adjust the algorithm, a 6% bias persisted, further perpetuating the misjudgment of words. While 6% might not sound significant, among internet users it still affects a great many people. It should not matter that the infringement affects only a small percentage of individuals: as long as the possibility of infringement exists, I believe the system can never be truly adequate.

Several solutions have been introduced to remedy these imperfections and prevent legal violations. One of them is the Facebook Oversight Board, which Facebook established to evaluate content that is removed from or remains on the platform. The Board currently consists of 20 members (its rules require a minimum of 20 and allow a maximum of 40), a multinational, multidisciplinary group that is completely independent of Facebook. Anyone who wishes to appeal a mistaken content removal faces a complex five-step procedure. This already cumbersome process is further complicated by the fact that the Board selects, at its discretion, only cases of significant importance to the critical questions of content moderation. This limits the number of decisions the Board issues, making the process less effective, even though the Board draws on several sources to understand the context of the speech and to ensure that only genuine hate speech is removed.

Another solution worth considering is paraphrasing. This technology lets users decide what content they want to access: they can customize what they see and thus determine for themselves what they consider hateful. A related option is a smart filter that edits such content. Social media platforms have not yet implemented this solution, even though it would offer personalization options. So-called paraphrasing increases users’ contribution to legal certainty while reducing the influence of AI-driven moderation mechanisms. However, the approach has downsides. First, users whose content is altered may not be aware of the changes. Second, the transformation can place the text in a completely different context, giving it a different meaning, which could potentially violate the rights of the original author.
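Purely as an illustration, a user-side filter of this kind might look like the following minimal sketch; the term list, the replacement strategy, and the function name are all invented assumptions, not any platform's implementation.

```python
# A minimal sketch of the user-side "smart filter" idea: each user keeps
# a personal blocklist, and the filter rewrites matching terms before
# display. All terms and replacements here are illustrative.
import re

def smart_filter(text: str, user_blocklist: dict[str, str]) -> str:
    """Replace terms this user considers hateful with their chosen substitutes."""
    for term, replacement in user_blocklist.items():
        # Whole-word, case-insensitive match so substrings stay untouched.
        text = re.sub(rf"\b{re.escape(term)}\b", replacement, text,
                      flags=re.IGNORECASE)
    return text

# Each user decides for themselves what counts as hateful.
my_prefs = {"awful": "[filtered]"}
print(smart_filter("What an awful take", my_prefs))  # -> What an [filtered] take
```

Note how even this toy rewrite changes the wording the reader sees without notifying the original author, which is precisely the authorship concern raised above.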

When examining the issue at the European Union level, the Digital Services Act (DSA) emerges as a significant EU measure. The DSA was adopted by the European Parliament on July 5, 2022, and has applied since February 17, 2024. As hard law, it plays a significant role in reshaping the regulation of social media platforms by introducing new transparency obligations. Its main goal is to establish a secure and trustworthy system in which users can have confidence.

The DSA is a response to concerns about and shortcomings in the content moderation systems currently employed by major social media platforms, and it aims to reduce the unjust blocking of content. It establishes mandatory safeguards for the removal of user content, including providing users with appropriate information, complaint-handling mechanisms, and external out-of-court dispute resolution. The system builds, to some extent, upon the Code of Conduct, and the practices established there are applied not only within EU countries but also in other parts of the world. This is an example of the “Brussels effect”: platforms globally shape their practices to comply with the strictest legal regulations.

The system also creates a defense mechanism for users of online media and attempts to strike a balance between protecting freedom of speech and restricting hate speech. The DSA imposes an obligation to implement a notice and action mechanism, which alerts platforms to content the notifier deems illegal, and it enhances transparency by imposing reporting obligations on platforms. In addition, every platform must clearly and explicitly inform users about how it collects user data and what it uses the data for. Platforms must provide clear-cut notice and takedown mechanisms, as well as detailed reports on how user content was illegal or violated the platform’s terms of service. Furthermore, a platform exceeding 45 million users, equal to 10% of the Union’s population, must meet the highest standards of due diligence. If a platform fails to comply with these rules, a Member State may impose a fine of up to 6% of the provider’s annual worldwide turnover. These are the latest steps taken by the EU to address the situation; however, their effectiveness is questionable, and in my opinion the DSA will not eliminate the shortcomings of AI algorithms. New approaches must therefore be sought in the future.

Another EU measure is the AI Act, a legislative proposal developed by the European Commission and unveiled in April 2021. The proposed regulation addresses the use of AI systems within the European Union. Its primary aim is to establish a framework regulating the security, ethical considerations, and legal requirements of various AI applications in the EU. The AI Act covers a wide range of AI applications, including autonomous vehicles, healthcare AI systems, educational AI, and other AI technologies.

The AI Act categorizes AI systems into different risk levels and prescribes varying degrees of regulation, along with mandatory certificates and labeling, depending on the risk classification. The proposal also addresses data privacy concerns, transparency, and the rights of individuals when AI systems are involved in decision-making. The goal is to promote safer and more responsible use of AI technologies while supporting innovation and competitiveness.

After unmasking the imperfections of AI in social media, it is safe to say that policymakers strive to keep pace with a rapidly changing world and its emerging challenges. However, despite continuous regulatory efforts, no effective set of laws has yet been established. Several factors, both independent of and related to the law, hinder this progress. In the future, it is justified to explore new approaches to address and remedy the situation effectively.

Dalma Medvegy is a fifth-year undergraduate law student at the University of Szeged, Hungary, Faculty of Law and Political Sciences, holding a talent scholarship from the Aurum Foundation. She is Director for Board Management at ELSA Szeged for the academic year 2023/2024 and a member of the International and EU Environmental Law Research Group at the University of Szeged, Faculty of Law and Political Sciences. Her research focuses on the revision of the EU treaties, competence issues, and the future of Europe.
