
István ÜVEGES: Why (Not) Anthropomorphize Generative Artificial Intelligence? PART II.

As explained in the previous post, there is currently a great deal of uncertainty in the artificial intelligence industry, and many details of how the technology works, as well as its real limitations, remain unclear to the public. In such an environment, it is legitimate to ask: why exactly is it unrealistic to attribute human traits even to the most cutting-edge AIs of our time?

There are basically two paths to reproducing human intelligence artificially. The first is to understand, and then mirror, exactly what makes us human.

Several theories have attempted this in the past. The sharpest difference between them lies in how they seek to access the reasons behind the otherwise rather vague concept of “intelligence”. Cognitive approaches, for example, are mostly based on the assumption that intelligence involves mental representations of information (e.g., propositions or images) and processes operating on such representations. Reductionism, which seeks to understand intelligence at the biological level, holds that a true understanding of intelligence is only possible by identifying its biological basis. Some go so far as to say that there is no alternative to reductionism if the aim is really to explain intelligent behavior rather than merely to describe it. What all these theories and methods of investigation have in common, however, is that none of them has so far succeeded in providing a complete and accurate explanation of what makes us different from all other living beings on the planet.

This is significant for artificial intelligence research because, by creating artificial neural networks (which underlie even today’s most advanced solutions), researchers are in effect trying to mimic the human brain. The structure of the neurons, the way they are connected, and the way they store information map the physical structure of real-world neurons into a software environment. The way these artificial structures store information is a kind of model, or rather an approximation, of the learning process observed in intelligent beings. However, we can only model something as accurately as we understand the original. Therefore, until we know with complete precision the exact biological, physiological, and cognitive explanation of the set of abilities referred to as “intelligence”, we cannot deliberately reproduce them either.
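To make the analogy concrete, here is a minimal sketch of a single artificial neuron in Python. All the numbers and names are invented for illustration; the point is only how loosely the software abstraction maps onto its biological counterpart: a weighted sum of inputs followed by a nonlinearity, with “learning” amounting to nothing more than adjusting the weights.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a crude software analogue of a biological one."""
    # Weighted sum of incoming signals, loosely analogous to
    # signals arriving at a biological neuron's dendrites.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid nonlinearity squashes the result into (0, 1),
    # a coarse abstraction of the neuron "firing" or staying silent.
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative values only; in a real network, training would tune the weights.
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))
```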

It is important to note that even today’s most advanced generative systems are statistics-based. This is easy to lose sight of, because such systems have an incomprehensible amount of data at their disposal, which allows them to imitate, for example, human language use with often astonishing accuracy. According to today’s consensus, however, this does not reflect the working mechanism of the human brain, or at least not completely. This is also why such systems can make trivial mistakes that no average person would make, yet perform exceptionally well on tasks that most people could not do.
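As a minimal illustration of what “statistics-based” means here, the toy sketch below picks each next word purely from counted frequencies in a tiny corpus. The corpus and every value in it are invented for illustration, and real large language models learn vastly richer neural probability distributions rather than raw counts; but the underlying principle, predicting a likely continuation rather than understanding, is the same.

```python
import random
from collections import Counter, defaultdict

# A toy "corpus"; invented purely for illustration.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed
    `word` in the corpus: pure statistics, no understanding."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the".
word, text = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    text.append(word)
print(" ".join(text))
```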

The other way to create real human intelligence might be to stumble upon it accidentally in the course of experimentation.

But even if this happens in some unforeseen way, open questions remain. Humans are characterized by many other qualities, such as self-awareness, empathy, creativity, and imagination. It is far from certain that the achievement of machine intelligence would be accompanied by the emergence of all of these together. Moreover, each of these is a concept whose scientific basis is far more tenuous than that of intelligence alone. One need only think of the recent case of Integrated Information Theory (IIT), one of the most influential theories in the study of consciousness. According to an opinion endorsed by more than 100 scientists, the theory is more pseudoscience than a genuine scientific position.

Regardless of whether we agree with the theory’s supporters or its doubters, the example of the study of self-consciousness clearly shows how much the scientific investigation of such human characteristics is still in its infancy.

Another aspect, already mentioned in the introduction, is how and why manufacturers (developers) are trying to make artificial intelligence-based assistants, fine-tuned language models such as chatbots, and humanoid robots ever more human-like.

As in human-human interaction, we form first impressions of non-human agents. Part of people’s attitudes towards artificial agents may stem from personal preference, but general influencing factors can also be observed. The topic is most discussed in the context of humanoid robots, where one of the most important of these influences is appearance. At least as important in the case of non-physical artificial intelligences, such as chatbots, is human-likeness, which in their case is conveyed mainly through the (correct) use of language.

Trust plays a key role in the adoption and use of AI. In human-machine interaction, trust is nothing more than the willingness of humans to accept information produced by an AI, to follow its suggestions, to share tasks with it, to communicate information to it, and to support it. It is of course true that the more we understand a technological process, such as the way artificial intelligence works, the greater the degree of trust. However, the degree of anthropomorphic features of a given AI is just as important. Several studies have found that the more human-like an AI solution is, the more trust users place in the decisions it makes. In addition to people’s natural tendency towards social bonding, the fact that human-likeness is in many cases treated as synonymous with progress and sophistication can also play a role, not only in the artificial intelligence industry but also in public thinking. One need only recall that the main objective of artificial intelligence research has always been to reproduce human capabilities artificially.

This kind of anthropomorphization can therefore be both a demonstration of technological development and a natural consequence of the ever more advanced imitation of human capabilities.

However, the spread of increasingly human-like AIs may also bring unforeseen risks. According to several studies, people with social phobia prefer interactions with chatbots to interactions with real people. And according to a report published by the U.S. Department of Health and Human Services, feelings of loneliness and isolation are on the rise in US society.

If we set this against the AI-based services that are increasingly present in the online space, the social risks are easy to see. The shift of human relationships into the digital space (even via virtual assistants) can have a negative effect on real social networks, drawing attention away from the development of meaningful relationships. New questions arise, such as how people can build real friendships and relationships instead of treating robots as friends. We will almost certainly need to reevaluate our efforts to anthropomorphize our technical devices soon. As a first step, we need to provide much clearer signals about when someone is interacting with a human and when they are not.

The increasingly anthropomorphic appearance of AI currently seems to be an unstoppable trend, both online and in the physical world. However, it is important to remember that no matter how human-like today’s state-of-the-art solutions appear, the gap between human thinking and its machine modeling is still enormous. It is equally important to keep in mind that deliberately not anthropomorphizing technology does not mean dehumanizing something that otherwise has essential human qualities. Although an Artificial General Intelligence that may appear in the future would raise drastically different questions, the AI that exists today (even the generative kind) is still only an extremely sophisticated tool which, as a kind of side effect, is becoming more and more human-like.


István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
