Unattainable: AI setting standards for humans

In youth, there is heightened pressure to fit in and to live up to the expectations set for us, and this desire for perfection may follow us into adulthood. Some years ago, public debate centered on plastic surgery and its effects on mental health, both for recipients and for those who try to emulate another person’s looks. The age of influencers photoshopped and surgically sculpted to perfection, however, may be coming to an end. Just as workers fear that AI will take their everyday jobs, human influencers are now being displaced by artificial intelligence-based ones. It seems the human race is not going down without a fight on this front either. But how can we tackle the ethical problems that come from creating and employing AI ’models’, in both senses of the word?

Recently, a scandal erupted over SheerLuxe’s new editor, Reem Bot. In her introduction, Reem walks readers through her morning and night-time routines, apparently eats food, visits restaurants, and has a summer holiday bucket list. She has an Instagram account, and although she is labelled “AI-enhanced”, she clearly sets an unattainable standard of beauty and luxury living for readers. Why does this matter?

Research suggests that even early exposure to dolls epitomizing an unrealistically thin body ideal can damage girls’ body image, contributing later to an increased risk of disordered eating and weight cycling. Today, however, this standard is no longer passed on to the younger generation by dolls alone. Through television, the internet, and other channels, exposure to AI influencers is coming. Since constant exposure to unattainable beauty standards from a young age has been shown to lead to negative self-relevant emotions and behaviors, including an increased risk of eating disorders, anxiety, depression, and social isolation, it is easy to see why stripping any remaining humanity from our screens is dangerous. Exposure to unattainable standards of beauty, living, and accomplishment may even contribute to a rise in suicide attempts among teenagers. Of 1,000 girls aged 10-17 surveyed, 90% said they follow at least one social media account that makes them feel less beautiful. Some 16% of the U.S. population aged 10 or older experiences body dissatisfaction, and adults are not faring better either: beauty standards cost Americans over $300 billion in 2019. Nor do unrealistic beauty standards and their negative effects influence young girls alone. Boys are also confronted with an ideal body that is difficult to achieve, and experience low self-worth because of it.

So we can conclude that there are serious concerns about allowing businesses to use artificial intelligence to produce AI influencers. Not to mention that CGI diversity is mistaken for real diversity, ultimately leading to fewer opportunities for human members of marginalized groups, as big brands would rather employ AI systems that merely look like people of diverse backgrounds. Generative AI is used to produce personas such as Shudu, the world’s first digital supermodel; Lil Miquela, who has appeared in ads for fashion houses like Chanel and Givenchy; and Lu of Magalu, who has partnered with brands like Adidas and Samsung. In 2020, Lil Miquela signed with CAA and was projected to earn over $10M, just as other “influencers” are being endorsed instead of actual humans. It has even been claimed that virtual influencers command three times the engagement of their human counterparts.

To combat unattainable beauty standards, the negative mental health impact on young people, and the displacement of humans in the influencer world, discussion of the ethical issues of AI is vital. There needs to be more emphasis on teaching internet literacy to young people, and a push for brands to use human ambassadors instead of cheaper, more engaging robots.

What does legislation say about these issues? Sadly, next to nothing. The European Union’s long-awaited AI Act cannot, on its own, tackle every social issue that the rapid spread of AI brings. The current wording of the Act leaves it unclear whether, and to what extent, online social media practices such as dark patterns fall within the scope of prohibited practices. Artificial influencers may breach several laws already in place, but proving anything is rather difficult.

First of all, AI systems are trained on datasets that are not always obtained legally or with the consent of the relevant parties. These influencers were most likely created using images of real women and men as training data, much as OpenAI faces lawsuits over data protection concerns. Moreover, AI is likely to infringe intellectual property rights, including copyright, as seen in Andersen v. Stability AI et al., filed in 2023, in which three artists formed a class to sue multiple generative AI platforms for using their original works without a license to train their systems.

Currently, only transparency obligations apply to the media, as this area is not categorized as high-risk. Despite the lack of more specific obligations, the industry should explore how to bring transparency into practice. A crucial step is reiterating that AI-generated influencers are not real people, and that their looks, personalities, and work are all the product of several humans’ effort and talent. The creators behind AI influencers should be compensated. I am not blind to the reality that AI influencers will most likely become more widespread each year; even so, it is no coincidence that SheerLuxe’s Reem drew significant backlash.

Creating ethical codes for AI may well prove vital for social media sites and companies going forward. These should include disclaimers about the nature of the AI influencer, a requirement that a certain percentage of workers be human, and human oversight of the systems’ activities. Rather than publishing an article in which an AI describes what it likes to eat in order to promote a brand, such promotional material could be reframed around a more honest angle: yes, AI systems are here to stay and will add value to our lives, but – at least so far – they cannot have human experiences. AI systems can “hallucinate”, can lead to tragic outcomes such as suicide when seriously flawed systems are allowed to converse with humans, and are generally not yet at a level where they can be deemed safe without oversight. AI systems still require more work before they are reliable.

This is why the AI Act takes a risk-based approach, aiming to put human needs first. Its risk-classification system is meant to encourage developers to consider human values, such as the issues I have outlined. However, it is difficult to make businesses comply when significant profit is at stake. Voluntary cooperation should therefore be encouraged through social pressure.

The current state of AI governance requires ethical considerations to weigh more heavily than profit if present and future generations are to be kept as safe as possible from negative mental health effects. AI is also projected to deepen human alienation: AI friends and romantic partners are already available and will most likely be developed further. Changes to our society are bound to happen, with AI systems becoming part of our lives to an ever greater degree in the coming decades. The permanent place they will inhabit in our society is precisely why we must discuss the human side of AI, learn basic digital skills, acquire an adequate level of literacy, and encourage all stakeholders – in their rush to develop and deploy the best and latest AI systems – not to forget about their fellow humans.


Mónika Mercz, JD, is a visiting fellow at the George Washington Competition & Innovation Lab, in Washington DC. She obtained her law degree and specialization in English legal translation at the University of Miskolc, and has an AI and Law degree from the University of Lisbon. Mónika’s past and present research focuses on constitutional identity in EU member states, data protection aspects of DNA testing, children’s rights, and artificial intelligence. She is a founding editor of Constitutional Discourse, leading the Privacy & Data Protection column.
