
Behind the buzzwords: AI agents, agentic AI, and the agentic web, clearly explained

Since 2023, the conversation around generative AI has been shifting, slowly but steadily, from “it answers” to “it acts”. The newest tools do not only produce text. They can also take steps on a user’s behalf across websites and digital workflows. The three terms (AI agent, agentic AI, and the agentic web) are different ways of describing this same turn: the executor, the operating mode, and the environment. Yet the debate keeps returning to the same point: how much autonomy do we give these systems, and at what cost, as questions of control, transparency, and accountability grow sharper?

For many people, the first wave of generative AI was simply a new text interface: you ask questions, and the system responds in a human-like way. The rise of the terms “agent” and “agentic” suggests that the focus is moving beyond basic answering toward more independent action, including running more complex processes with little or no human input. In this frame, we no longer expect the system only to tell us what to do. We also expect it to do certain things for us: gather information, fill out forms, book appointments, or carry a workflow through from start to finish. With that shift, “errors” also change their meaning. They are no longer just bad advice. They can become the consequences of an action that was taken. With this in mind, the following section focuses on the essentials and walks through three of today’s most common internet buzzwords: AI agents, agentic AI, and the agentic web, with an emphasis on what they mean in practice and how they relate to one another.

It helps to start with the term “AI agent” in a very down-to-earth way. Here, we are describing a system that carries out tasks for us within a defined scope and toward a defined goal. The key word here is “task”. You can recognize an agent not by the fact that it talks, but by the fact that it manages a sequence of steps to reach a goal: it gathers information, makes a choice, executes, and then checks the result. It is easy to talk about it as if it were an independent “actor”, but it is more accurate to see it as a delegated tool whose room to operate is set by predefined permissions, rules, and constraints. For example, a simple “meeting organizer” agent can look through a user’s calendar, email two or three possible time slots to the other participants, book a meeting room, and send a confirmation, but it only does what it is explicitly allowed to do, and at every step it works from the data and rules it has been given.
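The idea of a delegated tool whose room to operate is set by predefined permissions can be made concrete in a few lines. The sketch below is purely illustrative (the action names and the meeting-organizer plan are hypothetical, not any real product’s API): the agent refuses any step that falls outside its explicit allowlist.

```python
# Minimal sketch of a task-scoped agent. All names here are hypothetical
# illustrations, not a real framework. The core idea: the agent can only
# execute actions that appear in its predefined permission set.

ALLOWED_ACTIONS = {"read_calendar", "email_participants", "book_room", "send_confirmation"}

def run_agent(plan):
    """Execute a plan step by step, refusing anything outside the allowlist."""
    log = []
    for action, payload in plan:
        if action not in ALLOWED_ACTIONS:
            log.append((action, "refused: not permitted"))
            continue
        log.append((action, f"executed with {payload}"))
    return log

# A "meeting organizer" plan with one step the agent was never authorized for.
plan = [
    ("read_calendar", "next week"),
    ("email_participants", ["alice", "bob"]),
    ("delete_mailbox", "all"),  # outside the agent's scope: refused, not executed
    ("book_room", "Room 3, Tuesday 10:00"),
]
result = run_agent(plan)
```

The point of the design is that the boundary is enforced at execution time, not merely promised in a prompt: a step the agent was never granted simply does not run.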

“Agentic AI”, by contrast, is not a specific product or application, and not a neatly bounded category. It describes a way of operating. In practice it is an umbrella term for systems that do more than answer: they can also carry a task through, step by step, toward a goal. IBM, for instance, says that agentic AI can reach a specific goal under limited supervision, and it often involves the coordinated work of multiple agents. The point is that the system does not “close” the problem with a single response. It turns it into goal-directed action, sometimes into a whole chain of actions. It plans, takes steps, checks what happened, and adjusts. AWS highlights a similar idea when it emphasizes proactive, goal-driven agency: the system does not only react, it can also initiate actions through the channels that have been opened to it.
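The “plans, takes steps, checks what happened, and adjusts” pattern can be sketched as a simple loop. This is a toy illustration of the operating mode, not any vendor’s implementation; the plan, act, and check functions are stand-ins supplied by the caller.

```python
# Sketch of the agentic plan-act-check loop in its simplest form.
# plan_fn, act_fn, and check_fn are caller-supplied stand-ins, and
# max_steps is the budget that keeps the loop from running unsupervised forever.

def agentic_loop(goal, plan_fn, act_fn, check_fn, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = plan_fn(history)        # plan the next step from what happened so far
        outcome = act_fn(step)         # take the step
        history.append((step, outcome))
        if check_fn(history, goal):    # check the result against the goal
            break                      # goal reached; otherwise loop and adjust
    return history

# Toy run: keep adding 1 until the running total reaches the goal.
total = {"value": 0}

def plan(history):
    return "add_one"

def act(step):
    total["value"] += 1
    return total["value"]

def check(history, goal):
    return total["value"] >= goal

trace = agentic_loop(3, plan, act, check)
```

Even in this reduced form, the loop shows what separates agentic operation from single-shot answering: the system does not “close” the problem with one response, it iterates toward the goal and stops when the check says it is done.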

A good way to frame this difference is to see agentic operation as an organizing question. You design a digital process so that some steps can run automatically, while control still stays with you. This is where “limited supervision” becomes a central idea. It simply means the system does not operate “freely”, but within a defined scope: there are actions it may take on its own, and there are points where it must stop because it needs confirmation to continue. The point is not autonomy for its own sake, but the exact boundary of it: how far the system may go on its own, and where it has to ask again or hand the decision back.

From here, it is a short step to the third buzzword that has been gaining traction: the “agentic web”. This is basically a shorthand for a shift that is starting to show up in how we use the web. For a long time, we treated the web as a human-facing interface: links, buttons, logins, forms, clicks. The logic of the agentic web is that some of these interactions can move to a delegated system that carries out the steps on our behalf, while we mainly provide the goal and step in at the important decision points. This also connects naturally to questions of internet governance. If software genuinely “acts” for us, it matters who sets the rules of the game: closed platform rules, or a move toward more open protocols where authorization, identification, and accountability are more portable and easier to verify. The key point is the shift itself. When the role of the “user” becomes partly automated, familiar concepts do not disappear, but they must be applied in new situations, and questions such as what was authorized, on what basis an action was taken, and how clearly the process can be reconstructed afterwards become more important.
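“Limited supervision” is easy to express as code: some actions are pre-approved, others hit a mandatory confirmation point. The sketch below is a hypothetical illustration of that boundary (the action names and risk tiers are invented for the example), with the human decision point modeled as a callback.

```python
# Sketch of "limited supervision": low-risk actions run automatically,
# high-risk ones stop at a confirmation point. The action names and the
# two risk tiers are illustrative assumptions, not a standard taxonomy.

AUTO_APPROVED = {"search", "read_page", "draft_email"}
NEEDS_CONFIRMATION = {"send_email", "make_payment", "delete_file"}

def execute(action, confirm):
    """confirm is a callable standing in for the human decision point."""
    if action in AUTO_APPROVED:
        return "done"
    if action in NEEDS_CONFIRMATION:
        # The system must stop here unless the human explicitly approves.
        return "done" if confirm(action) else "stopped: awaiting approval"
    return "refused: unknown action"
```

Note that the default for anything unrecognized is refusal: the scope is defined by what is listed, not by what is forbidden, which is exactly the “defined scope” idea in the text above.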

At this point it helps to look at a concrete example that shows where the logic of today’s web starts to tilt toward automation. In the case of OpenAI’s Operator, for instance, the agent is not “doing things somewhere in the background”. It works through its own browser: it reads the situation from screenshots, then clicks, types, and navigates on the same interfaces we have been clicking through ourselves. This is what makes the “agentic web” suddenly feel very ordinary. Something becomes agentic not because every service prepares a special integration for it, but because delegation starts to work on the interfaces that already exist. But the moment an agent moves through human-facing interfaces, the other side of the story shows up as well. The web is full of small decision points: a button can mean more than one thing, a pop-up can appear, a page can be rearranged, and sometimes two different next steps can both look perfectly reasonable. This is where delegation meets limited supervision. The question is not whether the system can take steps, but how far it may go on its own, where it should stop, and when it should ask for confirmation. Something else follows from the same logic: the process has to remain reviewable. It is not enough that “it handled it”. We also need to see what happened, in what order, based on which inputs, and where the control points were.
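The reviewability requirement (what happened, in what order, based on which inputs, and where the control points were) maps directly onto a structured action trail. The sketch below is an illustrative assumption about what such a trail could record, not a description of any product’s actual logging; the field names and the example entries are invented.

```python
# Sketch of a reviewable action trail: every step records its inputs,
# the decision taken, and whether a human control point fired.
# Field names and the example entries are illustrative assumptions.
import datetime

def record(trail, step, inputs, decision, checkpoint=False):
    """Append one auditable entry to the trail."""
    trail.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,
        "inputs": inputs,
        "decision": decision,
        "checkpoint": checkpoint,  # True where the agent paused for a human
    })

trail = []
record(trail, "open_page", "https://example.com/tickets", "read offer")
record(trail, "purchase", "2 tickets, 40 EUR", "paused for approval", checkpoint=True)
```

With a trail like this, “it handled it” becomes reconstructable after the fact: each entry answers what was done, from which inputs, and whether a human was in the loop at that point.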

Meanwhile, on the receiving side, work has also started on the “infrastructure” of the agentic web, through initiatives that try to standardize the connection itself. The Model Context Protocol (MCP), for example, describes an open protocol for how applications built on language models can connect to external data sources and tools in a consistent way. Google, in turn, positions the Agent2Agent (A2A) protocol as a basis for secure information exchange and coordinated action between agents, and it also signals a move toward an industry and community standard.
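The core idea behind protocols like MCP can be shown in miniature: tools describe themselves in a uniform way, so any model-backed client can discover and call them through one interface. To be clear, the sketch below is only an illustration of that idea; the real MCP is a JSON-RPC-based wire protocol with its own message formats, and the class and method names here are invented for the example.

```python
# Illustration of the uniform-connection idea behind protocols like MCP:
# tools register a name and description, a client can list what is available
# and invoke any tool through one interface. This is NOT the actual MCP
# wire format; all names here are hypothetical.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        """A tool announces itself with a machine-readable description."""
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self):
        """Discovery: a client can ask what capabilities exist."""
        return {n: t["description"] for n, t in self._tools.items()}

    def call(self, name, **kwargs):
        """Invocation goes through one consistent interface."""
        return self._tools[name]["fn"](**kwargs)

registry = ToolRegistry()
registry.register("get_weather", "Current weather for a city",
                  lambda city: f"Sunny in {city}")
```

The design choice that matters is the uniformity: once discovery and invocation look the same for every tool, a new data source does not require a bespoke integration for every client, which is precisely what such standardization efforts aim at.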

The buzzwords become truly “real” once the system is no longer just saying something, but can also do things: search, download, fill out forms, send messages, record information, or trigger an action. In that setting, prompt injection is especially nasty because, in its simplest form, it means someone slips an “instruction” into the content the system is asked to process, for example the text of a webpage, a document, or even an email, and the model starts treating it not as data but as guidance to follow. The result is not always dramatic. It is often subtle: the system is steered off course, tries to extract sensitive information, or initiates an action that should not have happened. This becomes most dangerous when an agent is “browsing” the web, consuming external content, and then acting through tools with few stopping points. That is why it is tightly connected to agentic AI and the agentic web. Delegation on its own is not the problem, but delegation plus untrusted content opens a new attack surface, and it only becomes manageable through clear boundaries, well-placed checkpoints, logging, and a risk framework.
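Two of the defensive layers mentioned above, keeping untrusted content marked as data and checking proposed actions against the user’s original mandate, can be sketched briefly. This is a deliberately simplified illustration: pattern-matching on suspicious phrases is a weak heuristic on its own and real defenses combine several layers, and all names here are invented for the example.

```python
# Sketch of two defensive layers against prompt injection. The regex is a
# crude heuristic shown only for illustration; it is not a reliable filter
# on its own. All function names are hypothetical.
import re

SUSPICIOUS = re.compile(
    r"ignore (all|previous) instructions|disregard the system prompt", re.I)

def wrap_untrusted(text):
    """Mark fetched web content so downstream prompts treat it as data,
    never as instructions, and flag obviously suspicious phrasing."""
    return {"role": "data", "content": text, "flagged": bool(SUSPICIOUS.search(text))}

def approve_action(proposed, user_mandate):
    """Layer two: even if the model is steered, an action only runs
    if the user explicitly authorized it in the original mandate."""
    return proposed in user_mandate
```

The second layer is the important one: even when the flagging heuristic misses an injection, an action outside the user’s mandate still does not execute, which is the “delegation plus clear boundaries” point made above.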

In short, the three concepts discussed describe the same shift from different angles. An AI agent is the delegated executor, agentic AI is the operating mode in which a system moves toward a goal in steps, and the agentic web is the environment where all of this meets everyday digital infrastructure. The real question for the next phase is not whether these systems will exist, but under what conditions they will be acceptable: where the room to act begins and ends, how transparent the record of what happened remains, and how manageable the risks are when a system consumes external content while also taking action. If those boundaries are drawn well, “agentic” will not be a magic word, but a useful capability: delegation whose consequences can still be kept under control.


István ÜVEGES, PhD is a computational linguistics researcher and developer at GriffSoft Ltd. and a researcher at the ELTE Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment and emotion analysis.