From ChatGPT to AI agents: how will artificial intelligence technologies evolve?

It seemed simple at the beginning: when we first encountered ChatGPT and similar generative AI models, what we mainly noticed was the qualitative leap they represented in artificial text generation. It was immediately obvious that these systems, unlike their predecessors, accurately mimic human language patterns, can produce (seemingly) convincing content on any topic, and give (mostly) logical answers to our questions. It was the kind of leap we had been waiting for since the beginning of AI research in the 1950s.

Of course, the range of uses has also been expanding constantly, whether it is drafting emails or writing program code. And while this already seemed like a revolutionary development in the history of artificial intelligence, in reality it promises to be just the beginning.

Generative models are particularly good at answering questions and following instructions. This is not surprising, since their training relies precisely on human evaluation: people rate the models’ responses to instructions, and that feedback is what shapes the “perfect” answer. There is no doubt, however, that these tools lack a real sense of purpose.

Hence the intriguing (and very far-reaching) idea: why not extend these generative capabilities by training AI to track context over time, to plan, and to make autonomous decisions? ChatGPT is a good example of an AI system that responds to questions, but it has no real purpose or intention of its own, nor does it try to solve complex problems without human intervention. A task sent in a message will be carried out, but as soon as the writing or coding is done, it stops. In contrast, the new trend called “agentic AI” sets itself the goal not only of providing answers but of performing tasks autonomously. Such an AI “agent” can recognize on its own when it needs to turn to a new tool, gather additional information, or even redesign its own strategy.
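To make the contrast concrete, here is a minimal sketch of such an agentic loop in Python. Everything in it is an illustrative assumption rather than any real framework’s API: `plan_next_step` stands in for a call to a large language model, and `web_search` for a real tool. The point is only the shape of the loop: the model is consulted repeatedly and decides at each step whether to call a tool or to finish.

```python
# Minimal illustrative sketch of an agentic loop (hypothetical names,
# not a real framework's API). A single-shot chatbot would call the
# model once and stop; an agent keeps deciding what to do next.

def web_search(query: str) -> str:
    """Stand-in for a real search tool."""
    return f"(search results for: {query})"

TOOLS = {"web_search": web_search}

def plan_next_step(goal: str, history: list) -> dict:
    """Stand-in for a language-model call that returns either
    a tool invocation or a final answer."""
    if not history:  # toy logic: first gather information
        return {"action": "web_search", "input": goal}
    return {"action": "finish", "output": f"Answer to '{goal}' using {history}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):  # bounded loop instead of open-ended autonomy
        step = plan_next_step(goal, history)
        if step["action"] == "finish":
            return step["output"]
        result = TOOLS[step["action"]](step["input"])
        history.append((step["action"], result))
    return "Stopped: step limit reached."

print(run_agent("current SEO best practices"))
```

Even in this toy version, the bounded `max_steps` loop hints at a design concern that returns later: an autonomous loop needs an explicit way to stop.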

To understand the basis of this development, it is worth going back to the concept of the “agent” used in classical AI research. In the older literature, an agent was a software or robotic entity that senses its environment through various sensors and responds through actuators or output channels, all in pursuit of some goal or set of rules. This classical view requires the agent to have some internal state, a plan, perhaps a memory, with which it maps the environment and decides on its next steps. Although the definition has been around since the early days of artificial intelligence research, for a long time it was not revolutionary, as agents either operated in very limited environments or could only make decisions based on strictly predefined rules. Agentic AI, however, takes this classical agent model to a whole new level by combining the creativity and pattern recognition capabilities of large language models with the ability to plan and decide autonomously.
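The classical model is often summarized as a sense–think–act loop. A minimal sketch, with an entirely invented thermostat scenario, shows how rigid such an agent is: it has internal state and a goal, but its decisions never go beyond strictly predefined rules.

```python
# Classical rule-based agent: perceive -> update internal state -> act.
# The thermostat scenario and thresholds are invented for illustration;
# the structure is what matters.

class ThermostatAgent:
    def __init__(self, target: float):
        self.target = target          # goal
        self.last_reading = None      # internal state / memory

    def perceive(self, temperature: float) -> None:
        self.last_reading = temperature

    def act(self) -> str:
        # Strictly predefined rules: no learning, no planning.
        if self.last_reading is None:
            return "wait"
        if self.last_reading < self.target - 0.5:
            return "heat"
        if self.last_reading > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21.0)
for reading in (18.2, 20.8, 23.5):
    agent.perceive(reading)
    print(reading, "->", agent.act())
```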

This is where the real leap lies: imagine the freedom an AI would have if it did not merely wait passively for requests but could see a complex task through, even taking the first steps toward a solution on its own. We most often use ChatGPT by asking it to draft a formal letter, write an essay, or suggest a code snippet. An agentic system, on the other hand, would proactively decide to gather more information from the internet, search expert databases, or “consult” other specialized AI modules as soon as it learns our goal. Moreover, if it finds out that the user intends the content for, say, marketing purposes, it will incorporate keyword research results and SEO rules, combine all of this, and finally formulate the final version according to the pre-agreed preferences.

In practice, this might mean, for example, that an agentic AI keeps track of incoming emails, identifies the most important messages, and drafts responses, while always making sure they match the style of the user’s previous conversations and the information already exchanged. This is, after all, a time-consuming chore that most of us would gladly leave to a reliable, autonomous system. And it is not just about the system knowing what to respond to, but also about its ability to learn from previous interactions and feed that learning into subsequent decisions and actions. This kind of adaptation is radically different from the simple, rule-based chatbot experience, where the tool responds in essentially the same way no matter how many questions it has been asked before.
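As a toy illustration of this kind of adaptation, here is a hypothetical sketch in which an email agent records a style preference observed in earlier messages and reuses it when drafting replies. The class and its “learning” rule are invented for illustration; a real system would rely on a language model and a far richer user profile.

```python
# Hypothetical sketch: an email agent that adapts via a simple memory of
# past interactions. Here the "learning" is just recording an observed
# preference; real systems would model the user far more richly.

class EmailAgent:
    def __init__(self):
        self.style_memory = {"greeting": "Hello", "sign_off": "Best regards"}

    def learn_from(self, sent_email: str) -> None:
        # Toy adaptation: pick up the user's habitual greeting.
        if sent_email.startswith("Hi"):
            self.style_memory["greeting"] = "Hi"

    def draft_reply(self, sender: str, topic: str) -> str:
        g = self.style_memory["greeting"]
        s = self.style_memory["sign_off"]
        return f"{g} {sender},\n\nThanks for your message about {topic}.\n\n{s}"

agent = EmailAgent()
agent.learn_from("Hi Anna, see attached.")  # earlier interaction shapes style
print(agent.draft_reply("Anna", "the quarterly report deadline"))
```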

But the breakthrough also comes with challenges. In the classical agent model, it already mattered that the agent make the right decision from the available data, but with today’s agentic AI the complexity and the range of possibilities are far greater. The way large language models work is itself a “black box”: it is difficult to unravel exactly what logic leads to a given response. Add the ability to make autonomous decisions and to invoke a sequence of tools running in the background, and understanding the system’s behavior becomes even harder. An agentic AI, however clever, carries the potential for error or, in extreme cases, for bypassing the safety constraints its developers built in. Researchers and developers therefore stress the importance of appropriate security, ethical, and monitoring solutions. It is a basic rule, for example, that the system should always have a human “supervisor,” or at least an interruption facility to stop unwanted processes.
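One way to picture such an “interruption facility” is a checkpoint through which every side-effecting step must pass. The sketch below is a deliberately minimal assumption of how that could look (all names are invented); production systems would layer far more safeguards on top.

```python
# Minimal sketch of a human "supervisor" checkpoint: before any
# side-effecting step runs, a human can approve or abort. Purely
# illustrative; not any real library's mechanism.

class HumanAbort(Exception):
    pass

def with_approval(action_name: str, action, *args):
    answer = input(f"Agent wants to run '{action_name}' with {args}. Allow? [y/N] ")
    if answer.strip().lower() != "y":
        raise HumanAbort(f"Human stopped '{action_name}'.")
    return action(*args)

def send_email(recipient: str) -> str:
    return f"email sent to {recipient}"

try:
    print(with_approval("send_email", send_email, "boss@example.com"))
except HumanAbort as stop:
    print(stop)
```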

Despite all these dangers, most agree that agentic AI is the next big milestone in the evolution of artificial intelligence: it returns to the classical notion of agents but combines it with the knowledge and flexibility of today’s large language models. It is no coincidence that the major AI research companies, including OpenAI, are working on pilot projects in which AI can already control computers, run applications, and even play an autonomous role in entire workflows. In a few years’ time, an “agent” running on a personal computer or smartphone could keep all kinds of processes – from code development to day-to-day logistics – running smoothly. ChatGPT has given only a taste of what a generative model can do; it will be agentic AI that extends this to real, proactive action, closely tied to the key ideas of the classic AI agent.

Ultimately, ChatGPT and its peers have shown how exciting AI can be when creativity and language skills are at its heart. In the next step, however, it will not be enough for a model to be fast and human-like; it will also need to support the entire problem-solving and decision-making process. This could give birth to a new generation of AI agents that, if developed and used with the right responsibility and care, will open unprecedented horizons in automation, productivity, and innovation. In the future, ChatGPT will probably live in our memories as the first step of a much bigger adventure: the journey toward true autonomy and the “agentification” of AI.


István ÜVEGES, PhD is a Computational Linguist researcher and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment- and emotion analysis.