Hype or Value? The Real Measure of Generative AI – Part II.

Generative AI has attracted a great deal of attention in recent years, yet in corporate applications a wide gap remains between promises and reality. Most projects do not deliver lasting and measurable benefits, although under certain conditions the technology can already create real value. This duality makes it urgent to understand what separates failures from successes.

As we have seen, most companies have so far failed to achieve measurable business gains from implementing generative AI. This underscores that the technology alone is not enough; it delivers results only when applied in the right context and with a clear purpose. Yet not every attempt ends in failure: a small group of companies has managed to cross the “gap,” turning AI from a promise into a working reality. These examples show that success depends less on the technology itself and more on how it is integrated into corporate operations.

One of the hallmarks of successful projects is close business integration. Companies that clearly defined the problem they wanted to solve and aligned AI to that goal achieved much better results. The change did not come from simply adopting AI but from the organization’s ability to reshape its processes where necessary and develop new ways of working.

Specialized solutions tend to focus on smaller, well-defined tasks, which makes them capable of delivering tangible results. In document processing, for example, some systems have significantly reduced the time required for the manual handling, review, and assessment of individual documents, while also improving the accuracy of the information extracted from them. Other studies report that ninety-two percent of organizations that adopted AI early experienced measurable financial benefits, with an average return of forty-one percent. AI tools integrated into audit processes have improved quality and consistency while allowing employees to dedicate more time to strategic tasks. In the consulting sector, certain teams saved hours each week on administrative work, while in financial services, AI adoption cut research and preparation processes from weeks to just hours. These experiences suggest that success is tied more closely to focused applications than to the pursuit of broad, all-encompassing solutions.

Another key to success seems to be data management. Successful companies have a clear data strategy, working with clean, structured, and easily accessible data. This enables AI to function not as an isolated experiment but as part of everyday decision-making.

From the leadership perspective, six factors consistently emerge as requirements: the tool must inspire trust, integrate well with existing systems, be capable of learning, deliver stable performance, ensure data security, and demonstrate clear returns. In practice, it is not the “most advanced” model or the one performing best on artificial benchmarks that prevails, but the one that can reliably meet these needs.

The third key factor is flexibility and the ability to learn. Where AI systems can adapt continuously and where companies are open to feedback, deployed solutions are more likely to endure. This is what distinguishes short-lived pilots from systems that deliver lasting value.

Beyond the results achieved in document processing and other administrative tasks, another emerging direction is worth mentioning: the so-called Agentic RAG approach, which illustrates some of the potential in generative AI development. Traditional RAG (Retrieval-Augmented Generation) retrieves information from external sources and has the language model generate an answer grounded in that material, producing results that are more accurate and reliable than relying on training data alone. The agentic variant takes this a step further: it not only retrieves and summarizes information but can also make decisions, use tools, run iterative queries, and correct its own errors. It is important to note, however, that the concept is currently more of a software engineering pattern than a genuine AI breakthrough. Even so, it suggests that generative AI may gradually shift from a purely supportive role toward more adaptive and active operation, where it not only assists but participates directly in everyday processes.
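To make the distinction concrete, the sketch below contrasts the two control flows in a few lines of Python. It is a deliberately simplified illustration: the document corpus, the retrieve() function, and the llm() call are toy stand-ins invented for this example, not any particular framework or vendor API, and the “agentic” behavior is reduced to a naive reformulate-and-retry loop.

```python
# Minimal sketch contrasting plain RAG with an agentic RAG loop.
# The corpus, retrieve(), and llm() below are toy stand-ins for illustration,
# not any real retriever, index, or vendor API.

DOCUMENTS = {
    "invoice_policy": "Invoices above 10,000 EUR require two approvals.",
    "travel_policy": "Travel expenses must be filed within 30 days.",
}

def retrieve(query: str) -> list[str]:
    """Toy keyword retriever over an in-memory corpus."""
    words = query.lower().split()
    return [text for key, text in DOCUMENTS.items() if any(w in key for w in words)]

def llm(prompt: str) -> str:
    """Stand-in for a language model call; echoes what it was grounded in."""
    return f"[answer grounded in]: {prompt}"

def plain_rag(question: str) -> str:
    """Single retrieve-then-generate pass."""
    context = retrieve(question)
    return llm(f"{question} | context: {context}")

def agentic_rag(question: str, max_steps: int = 3) -> str:
    """Iterative loop: re-retrieve with a reformulated query until evidence is found."""
    query, evidence = question, []
    for _ in range(max_steps):
        hits = retrieve(query)
        evidence.extend(hits)
        if evidence:                      # naive "good enough" check
            break
        query = f"{query} policy"         # naive query reformulation
    return llm(f"{question} | evidence: {evidence}")

if __name__ == "__main__":
    print(plain_rag("late expenses"))     # one pass, finds no context
    print(agentic_rag("late expenses"))   # reformulates, then retrieves policy documents
```

In real systems the loop is typically driven by the model itself, which decides which tool to call next and whether the gathered evidence is sufficient; the point here is only the difference between a single retrieve-then-generate pass and an iterative, self-correcting one.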

At the same time, the overall situation of the technology is not entirely positive. Other studies point out that employees often feel overwhelmed by new tools, and many experience learning AI skills as if it were a second job. The slow pace of adaptation is already reducing corporate returns, and several executives have indicated that soon AI literacy may be considered in performance reviews or even as a hiring requirement. This highlights that the human factor is just as critical as the technology itself: if the burden of learning cannot be kept manageable, AI adoption can easily trigger resistance and lead to effects opposite to those expected.

The report ultimately concludes that so-called “agentic AI” systems may bring about the real breakthrough long anticipated from generative AI. At the same time, it is worth maintaining a critical perspective: MIT NANDA is itself actively involved in developing agentic systems, so it is natural that the concept receives particular emphasis. Even so, independent sources also support the idea that memory, learning, and autonomous operation could become key factors in the near future. If developments in this area live up to expectations, such systems will be able to use memory, accumulate experience, and make decisions independently within certain limits. MIT’s proposed “Agentic Web” concept, for example, builds on protocols such as the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication, which would enable cooperation between agentic systems. Should these mature and become widely available, the GenAI gap could narrow, as AI would no longer be merely a static tool but an active participant in corporate operations.
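To illustrate what such cooperation might look like at the level of plumbing, the sketch below passes a task between two toy agents over a shared message structure. Everything in it (the Message class, the agent roles, the routing loop) is a hypothetical simplification for illustration only; it does not implement the actual MCP or A2A specifications.

```python
# Illustrative sketch of agent-to-agent cooperation over a shared message format.
# The Message structure, agent roles, and routing are hypothetical simplifications;
# they do not implement the actual MCP or A2A specifications.

from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    recipient: str
    task: str
    payload: dict = field(default_factory=dict)

class ResearchAgent:
    """Gathers material for a task and hands it off to the writer agent."""
    name = "research"

    def handle(self, msg: Message) -> Message:
        notes = f"collected notes on '{msg.task}'"   # stand-in for retrieval / tool use
        return Message(self.name, "writer", task="summarize", payload={"notes": notes})

class WriterAgent:
    """Turns the research agent's notes into a draft."""
    name = "writer"

    def handle(self, msg: Message) -> Message:
        draft = f"draft report based on: {msg.payload['notes']}"
        return Message(self.name, msg.sender, task="done", payload={"draft": draft})

if __name__ == "__main__":
    agents = {"research": ResearchAgent(), "writer": WriterAgent()}
    msg = Message(sender="user", recipient="research", task="quarterly audit summary")
    # Route messages between agents until one reports the task as done.
    while msg.task != "done":
        msg = agents[msg.recipient].handle(msg)
    print(msg.payload["draft"])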

Experience so far makes it clear that the introduction of generative AI is still more about experimentation than true transformation. Most trials do not deliver tangible results, but where organizations manage to use the technology in a focused, data-driven, and integrated way, measurable benefits are already visible. The key to the future is therefore twofold: technological limitations and the human factor must be addressed together. If these can be aligned, the GenAI gap may indeed become bridgeable.


István ÜVEGES, PhD is a Computational Linguist, researcher, and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment and emotion analysis.