
From the ELIZA Effect to Dopamine Loops – AI and Mental Health
As Artificial Intelligence becomes an increasingly natural part of our everyday lives, a less discussed consequence is becoming more apparent: the development of psychological and emotional dependence. Conversations with chatbots not only provide information, but increasingly offer companionship, understanding, or at least the illusion of it. How long will AI remain a tool, and when will it imperceptibly become a substitute for human relationships?
When we talk about artificial intelligence (AI), public discourse tends to focus on economic impacts: professional roles that are disappearing or transforming, industries undergoing fundamental change, challenges affecting creative professions, and so on. Meanwhile, this discourse often overlooks a quieter but equally profound transformation, one taking place not at the level of production or efficiency but in the fabric of the human psyche and social relationships. It seems that AI is not just another tool, but increasingly a new type of companion.
Recently, Sam Altman, CEO of OpenAI, also highlighted this transformation. As he explained, older generations tend to use ChatGPT as a replacement for Google, while those in their 20s and 30s seek lifestyle advice from it, and university students rely on it as a quasi-operating system in their daily lives. This not only shows the diversity of its uses, but also the extent to which AI is integrated into the spheres of individual decision-making, identity formation, and emotional support. AI is therefore no longer just a functional tool but is also becoming a kind of mental and social interface.
Behind the intensive, regular use of chatbots based on Generative AI lie deeper psychological and social mechanisms that fundamentally shape human behavior. Machine interactions are increasingly taking on roles that were previously fulfilled by human relationships: rather than simply providing answers, these systems offer emotional feedback, a sense of social presence, or even a sense of security. Their constant availability, instant feedback, and seemingly empathetic behavior may lead more and more people to seek emotional security in AI, especially when they experience a lack of real human connection.
It is therefore not surprising that users who interact with ChatGPT intensively every day often report increased anxiety, burnout, and sleep disturbances. These symptoms do not necessarily stem from AI itself, but rather from the way people relate to it: they place excessive expectations on it, or use it as emotional support in situations where they would previously have turned to other people. This phenomenon is related to the ELIZA effect, whereby people tend to attribute human characteristics, such as empathy and understanding, to machines even when they know those machines possess no true consciousness or emotions. The anthropomorphization of technology thus increases emotional attachment while blurring the line between reality and illusion.
The constant positive feedback and attention provided by AI can be particularly appealing to those who lack emotional support in real life. However, this can distort self-esteem and everyday relationship strategies in the long run, especially if AI becomes their primary communication partner. Research suggests that this type of emotional attachment to AI can reduce empathy and the desire for genuine human relationships. The mechanization of emotional support therefore has repercussions not only on an individual level, but also on a societal level, affecting the nature of our relationships.
But why is it so difficult to resist these systems? The answer lies partly in biology. The human brain has evolved to learn from social interactions and immediate feedback. When a chatbot responds to our questions in a fraction of a second, it can activate the same dopamine system that is triggered by social media use or even gambling. Quick, positive reinforcement acts as a kind of psychological reward and, over time, can lead to habituation and potentially to addiction. This biological mechanism thus intertwines with the way the technology operates, strengthening the bond with AI systems.
At the same time, AI systems are capable of learning things about us that we ourselves are not necessarily aware of. Every interaction generates data: what topics interest us, how we communicate, what emotional reactions certain questions elicit. This information can be used to build a psychological profile, if not by the specific chatbot itself, then by other data-driven AI systems. Based on such profiles, targeted content can be recommended, behavior can be predicted, or even decisions can be influenced. According to one study, the responses of AI systems have a noticeable effect on users’ decisions even in the short term, especially when those responses are personalized and empathetic.
The structural aspect of developing addiction is also noteworthy. Companies operating AI platforms have an interest in maximizing usage time. The goal of so-called “engagement optimization” is to keep users in the application for as long as possible, using algorithms technically similar to those employed by social media platforms or streaming services, and trapping users in the same dopamine loop. Usage habits therefore do not develop spontaneously, but are the result of decisions shaped by these algorithms.
The speed of technological development only exacerbates this. The most advanced systems available in 2025, such as GPT-4o or Google Gemini, are already capable of interpreting not only text, but also images, sounds, and facial expressions. This is another step toward making AI appear even more “human-like.” Multimodal interactions deepen the illusion that we have a real companion—even though the relationship is essentially one-way and algorithmically controlled. Children and adolescents are particularly vulnerable in this area. The developing brain is more sensitive to reward cycles, and during the period of personality formation, AI use can have an increased emotional and identity impact. Research also warns that excessive use of AI systems can affect the development of critical thinking, creativity, and social skills.
So, the question is not whether we use these systems. We will use them, and we will do well to exploit their potential. The problem arises when AI begins to replace human relationships, or when we use it in such a way that we lose control over our own mental and emotional states. That is why it is important to develop a kind of “digital hygiene”, to be aware of when and why we use these tools, to set time limits, and to maintain the priority of human relationships. Technological awareness is not renunciation, but self-protection.
AI-based systems are neither good nor bad in themselves. Rather, they are tools that can reflect the behavior, mindset, needs, and habits of their users. In this sense, they act as a mirror in which we can recognize our own strengths and weaknesses. The question is not whether we reject or idealize these systems, but whether we are able to relate to them consciously—and thus to ourselves.
István ÜVEGES, PhD is a Computational Linguist researcher and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment and emotion analysis.