More Than Half the Internet Is Machines – What Automated Traffic Means for Credibility and Public Discourse (Part I)
When a majority of online activity is produced by automated systems, the question is no longer whether bots exist, but what kind of internet we are actually using. The “dead internet” idea captures this unease, even if its more extreme claims fall short. What matters is not apocalypse, but erosion: of trust, of signals, and of the sense that real people are still shaping the space.
The world's population passed 8 billion in November 2022, and a large share of those people use the internet daily. At the same time, several recent analyses suggest that by 2025 more than half of all web traffic came from automated, non-human sources. In other words, a significant share of the requests and interactions moving across the network are generated by machines. This is a striking figure, especially considering the millions of videos uploaded every day, the hundreds of millions of images, and the sheer volume of comments. At this scale, the issue is no longer just a technical detail. It becomes a question of how we interpret online presence itself. How "alive" can a space be if part of the visible activity is not human to begin with?
From this perspective, the idea of the "dead internet" is better understood as a question than as a statement, and certainly not as a ready-made explanation of what is happening online. It is closer to a bundle of thoughts, shaped by different observations and experiences. Even early interpretations, long before the spread of generative language models, already emphasized that a growing share of online traffic comes from automated systems rather than people, and that this can distort what we perceive as activity, interest, or community life. The narrative has a more extreme version that leans toward conspiracy thinking, as well as a much more down-to-earth reading. The latter focuses on incentives, business logic, and technological possibilities. What follows adopts this more conservative interpretation and explores its consequences, since it remains meaningful and relevant even if we set aside the all-explaining, apocalyptic claims. The internet may not be dead, but it is becoming less and less obvious how the reality we see online is put together.
Many people trace the early origins of this discussion to around 2021, when the term "dead internet" appeared in a relatively small forum environment and then slowly began to spread. What matters, however, is not exactly where or when it first surfaced, but the kind of experience that gave rise to it. The idea did not grow out of statistics, but out of a hard-to-define impression: a sense that certain online spaces which had once felt lively and familiar had gradually emptied out, or at least changed, as if the voices, faces, and recurring names had faded away, replaced by something more impersonal. This experience is not groundless. Over the past few years, a growing number of measurements have suggested that automated presence is not a marginal phenomenon, but a structural part of how the internet now operates. According to industry reports on web traffic, around 2023–2024 nearly half of global internet traffic already came from non-human sources. Some measurements suggest that by 2024 this share had crossed the fifty percent mark, meaning that most requests and interactions moving across the network are generated by machines.
It is important to note that this does not refer only to “bad” bots. It also includes search engines, technical monitoring systems, and other legitimate forms of automation. At the same time, during this same period the share of clearly abusive and malicious bot traffic reached historic highs. This suggests that automation is no longer just background infrastructure but has become an active presence in the online space.
These figures do not, by themselves, prove the claim that "the internet is dead," but they do make the uncertainty from which this idea emerged easier to understand. When a significant share of visible activity is not human to begin with, the mood, rhythm, and feedback logic of online spaces inevitably change. From this point on, and increasingly so in the years ahead, the question is no longer whether bots exist, but to what extent they reshape what we perceive as community presence, opinion, or a majority voice.
Today, this line of thought has become difficult to dismiss for a much wider audience, as the technological environment in which online interactions take place has itself changed. What only a few years earlier seemed like an exception or a technical curiosity has become part of everyday use. In many situations, machine-generated contributions no longer stand out immediately, do not disrupt the flow of conversations, and do not necessarily raise suspicion at first glance. What matters here is not which model appeared when, but the fact that user perception has become less certain.
This uncertainty gradually shifts the focus. The key issue is no longer whether automation is present, but how we relate to a space in which it is not always clear who, or what, we are interacting with. If we cannot reliably tell what stands behind a comment, a reaction, or a piece of feedback, the weight of these signals diminishes. Conversations carry less risk, feedback matters less, and some interactions lose the sense of stakes they once had. When this is no longer an isolated experience but a repeated one, it also shapes how we use the internet. We become more cautious, or sometimes more indifferent. We invest less energy in responses, take feedback less seriously, and move on more quickly. Not because “everything is fake,” but because it has become increasingly difficult to tell what truly carries weight from what is present only as noise.
At this point, numbers and ratios usually enter the discussion: how large the share of bots might be, and where the line can be drawn between human and non-human activity. These claims, however, need to be handled with care. Measurement methods differ, many figures are closer to estimates than hard facts, and even more rigorous analyses usually capture only a slice of the picture, such as a single platform or a specific type of traffic. For this reason, it is important to separate two different statements. One is that a significant amount of automated activity is present in online spaces, something supported by a wide range of data. The other, much stronger claim would be that “there are hardly any people left,” which does not follow automatically from the first.
This is also where the strength of the "dead internet" idea comes from. Not from offering clear-cut answers, but from the fact that many people sense similar shifts. There is no single decisive piece of evidence, but rather a collection of small signals that become striking when seen together. These shifts can take many forms: suspiciously inflated view counts on streaming platforms, the spread of interactions that feel empty, unrealistically high numbers of reactions on social media posts, or feedback that largely consists of generic, quickly produced content. Taken on their own, none of these mean that "everything is fake." They point instead to a growing level of noise, and to increasing difficulty in telling what counts as a genuine response. The emphasis is not on people having disappeared, but on the balance between human and artificial presence gradually shifting. As a result, the burden of judgment increasingly falls on the user, while the reliable reference points that once guided online interaction become less and less effective. [1]
This uncertainty has not remained limited to personal experience or opinion writing. Over the past few years, it has gradually appeared in regulatory thinking as well, albeit in a much drier and more technocratic language. There, the question is not framed as whether “the internet is dead,” but as what happens when credibility in the public online space can no longer be taken for granted, and when automated, coordinated, or inauthentic behaviors are able to distort visible reality at scale. In this sense, regulation neither refutes nor confirms the idea of a dead internet. Instead, it responds to the same underlying issue: that manipulation is not an isolated abuse, but a systemic risk.
István ÜVEGES, PhD is a computational linguist, researcher, and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment and emotion analysis.
[1] https://www.galaxy.com/insights/perspectives/dead-internet-theory-collapse-online-truth