When scaling is no longer enough: the future of AI through critical eyes

Artificial intelligence is often seen as a technology on an unstoppable path toward human-level intelligence. Yet recent developments suggest that current models may be approaching their limits, raising doubts about the promise of endless progress. This perspective invites a more realistic view of AI: not as an autonomous mind, but as a powerful tool that works best when supporting human judgment.

In recent years, the field has developed at a striking pace, taking a central role in both professional and public discourse. Large language models such as ChatGPT, Bing AI, and Claude have acquired increasingly advanced communication and content-generation abilities, leading many to see them not only as tools but also as a kind of universal knowledge repository. Enthusiasm for this technology is often accompanied by the assumption that AI’s trajectory will inevitably lead toward human-level artificial general intelligence (AGI). This belief usually rests on the implicit premise that AGI can be achieved simply by further scaling current, predominantly transformer-based architectures. On this view, future AI systems will be capable of taking on full intellectual roles, redefining the very concepts of work, knowledge, and interpretation.

This vision, however, is not without its critics. An increasing number of technology experts and analysts are raising the question: what if the current level of AI performance is not a springboard but a plateau? What if today’s architectures have already reached the limits of their capabilities? This is not a speculative question but one of strategic importance, as it fundamentally shapes how we think about the future of AI, its social integration, and its economic role.

Several recent interpretations suggest that the current pace of AI development should be seen not as a temporary stage but as a relative endpoint. These perspectives do not predict collapse or crisis; they simply highlight the possibility that systems built on language models may already be very close to the technological ceiling of their performance. On some readings, the slowdown of progress may signal the beginning of a more restrained period, reminiscent of what was once described as an “AI winter”. The central question is not whether AI is useful, but whether we can still expect dramatic qualitative leaps within the current methodological framework.

The AI revolution of recent years was built on a remarkably simple observation: larger language models perform better. According to OpenAI’s 2020 study of scaling laws, model performance is closely tied to size: as the number of parameters, the amount of training data, and the compute budget increase, so does model accuracy. This observation formed the foundation for the development of GPT-3 and later GPT-4. The idea of “scaling laws” effectively brought the notion that “bigger model = better model” into the mainstream.
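To make the claim concrete, the study in question (Kaplan et al., “Scaling Laws for Neural Language Models”, 2020) reports that a model’s test loss falls as an approximate power law in its size. In simplified form,

L(N) ≈ (N_c / N)^α,

where N is the number of model parameters (excluding embeddings) and the fitted constants are roughly α ≈ 0.076 and N_c ≈ 8.8 × 10¹³. It is worth noting what the law actually promises: steadily lower prediction error, not any particular qualitative capability.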

Recent developments, however, suggest that this relationship is becoming less reliable. GPT-5 and other new models have not delivered the dramatic leap in quality that many had anticipated. Progress continues, but it is slower and less perceptible. Several technical analyses argue that scaling alone cannot lead to true intelligence, because current models lack a world model: a deeper, causal representation of reality.

Research suggests that scaling has clear limits, and once those are reached, further performance gains are not guaranteed. Model performance does not grow indefinitely with more data and parameters; rather, it tends to converge toward a level where additional resource investment yields only marginal or even negative returns. The quality of data is at least as important as its quantity, especially when datasets contain redundancy, noise, or distorted patterns. Moreover, mounting evidence supports the so-called “inverse scaling” phenomenon, in which larger models perform worse than smaller or specifically optimized counterparts on certain linguistic, logical, or pragmatic tasks. All this indicates that progress may come not primarily from scaling but from rethinking model design.
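The flattening is easy to see with a back-of-the-envelope calculation. The short Python sketch below simply evaluates the power-law fit quoted earlier at growing model sizes; extrapolating the published constants this far is itself an assumption, which is precisely the point under debate:

# Illustrative only: evaluate the Kaplan et al. (2020) power-law fit
# L(N) = (N_c / N)**alpha to show diminishing returns from scale.
# The constants are the published fits; extrapolating them is an assumption.

ALPHA = 0.076   # fitted scaling exponent for model size
N_C = 8.8e13    # fitted constant (non-embedding parameters)

def predicted_loss(n_params: float) -> float:
    """Cross-entropy loss the fit predicts for a model with n_params."""
    return (N_C / n_params) ** ALPHA

previous = None
for n in (1e9, 1e10, 1e11, 1e12):
    loss = predicted_loss(n)
    gain = "" if previous is None else f"  (improvement: {previous - loss:.3f})"
    print(f"{n:.0e} params -> predicted loss {loss:.3f}{gain}")
    previous = loss

Each tenfold increase in parameters buys a smaller absolute improvement than the last: the curve flattens rather than leaps.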

The current direction of development relies more on refining existing models than on introducing radically new architectures. After pretraining, AI models typically undergo various post-training procedures, such as reinforcement learning from human feedback (RLHF) or fine-tuning on domain-specific data, which adjusts the model’s weights for particular application areas. These methods are undoubtedly useful, making models more “polite,” “helpful,” or “less biased.” Still, they do not represent fundamental breakthroughs in deep learning but rather aesthetic, behavioral, or goal-oriented adjustments. What will truly transform the quality of AI is not these refinements but a fundamentally new approach or architecture.
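For a sense of how routine such refinement has become, here is a minimal supervised fine-tuning sketch built on the open-source Hugging Face transformers library. The model choice, the data file, and every hyperparameter are illustrative placeholders rather than any vendor’s actual pipeline, and the full RLHF loop (reward model plus reinforcement-learning updates) is deliberately omitted:

# A minimal, illustrative post-training (supervised fine-tuning) sketch.
# "gpt2" stands in for a production LLM; "instructions.txt" is a
# hypothetical file of instruction-style training text.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "instructions.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sft-demo",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The point is that all of this adjusts an already-trained network: nothing in the procedure changes the underlying architecture.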

Some argue that it is time to move away from a view that mystifies AI and ascribes to it general abilities beyond its current technological foundations. Language models are undoubtedly useful for drafting, summarizing texts, generating ideas, or automating certain routine tasks, but in their present form, they do not understand the world and lack intentional or logical reasoning capacities. Machine learning and large language models are therefore better viewed as powerful supporting tools rather than autonomous cognitive actors. They do not replace the work of doctors, teachers, lawyers, or writers; they complement it within well-defined boundaries.

Excessive expectations surrounding AI development carry not only technological but also economic and social risks. Many policymakers, investors, and regulators view AI as a technology with limitless growth potential, capable of transforming the economy and society within a short time. Yet if technological progress slows down or stalls, these assumptions could quickly become unfounded.

Current systems face serious theoretical and practical limitations: they do not understand logical relations, lack a world model, and often fail at solving even simple problems. According to a growing number of professional interpretations, these models represent only a technological transition rather than a final or universal solution. While future breakthroughs remain possible, they are by no means guaranteed and cannot be assumed to happen automatically.

Exaggerated beliefs and narratives, especially those that frame AI in religious or apocalyptic language, can obscure a clear assessment of its actual capabilities. This is particularly dangerous in systems such as education, healthcare, or justice, where deploying AI as a tool may displace sensitive human competencies without providing genuine understanding.

In its current form, AI is not an autonomous thinker but an effective supporting tool. Instead of treating it as a universal solution, we should integrate it into everyday practice in proportion to its real capabilities. AI delivers the greatest benefit when used not to replace human work but to complement it. The focus should be not on machine power but on supporting human decision-making. Seen this way, a slowdown in technological progress signals not a crisis but an opportunity to develop a new, more mature relationship between AI and society.


István ÜVEGES, PhD is a computational linguist working as a researcher and developer at MONTANA Knowledge Management Ltd. and as a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of legal language (legalese), the Plain Language Movement, and sentiment and emotion analysis.