Artificial General Intelligence “by Accident”: Emergent Behavior and Chaos Theory—Part I.

Can Artificial General Intelligence come into being by chance? We tend to assume that every major technological breakthrough is the result of deliberate design, yet large, complex systems often undergo spontaneous changes that cast this question in a new light. Let’s dive into it from the perspective of emergent behavior and chaos theory.

In recent years, the development of Artificial Intelligence (AI) has advanced at an impressive pace, revolutionizing many industries and changing the way we live our daily lives. However, the possibility of achieving Artificial General Intelligence (AGI), capable of performing all human intellectual tasks, remains one of the most controversial and interesting issues in the technology community. It is also a deeply divisive one: while Elon Musk believes that AGI could be a reality within a year or two, Bill Gates considers this prediction overly optimistic, arguing that it neglects the many unresolved technological difficulties and open questions surrounding AGI.

AGI is (so far purely theoretical) software that, if created, would be able to reproduce general human cognitive abilities and find solutions even to unfamiliar tasks. This fundamentally distinguishes it from today’s task-dependent AI, such as the recently popular Large Language Models (LLMs). A common feature of all task-dependent AI systems is that they can only solve tasks for which they have seen large amounts of training data during pre-training. An example is predicting the next, most likely word based on the statistical patterns of a text, which is exactly what LLMs do every time we ask them a question.
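To make the idea of next-word prediction concrete, here is a minimal sketch in Python. It is of course a toy: a simple bigram frequency table stands in for a neural network with billions of parameters, and the corpus and function names are invented for illustration. Still, the core operation is the same one described above: pick the statistically most likely continuation.

```python
# A toy illustration of next-word prediction from word-pair statistics.
from collections import Counter

def train_bigrams(text: str) -> dict[str, Counter]:
    """Count, for every word, which words followed it and how often."""
    words = text.lower().split()
    model: dict[str, Counter] = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, Counter())[nxt] += 1
    return model

def predict_next(model: dict[str, Counter], word: str) -> str | None:
    """Return the most frequently observed next word, if any was seen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat saw the cat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> "cat" ("cat" followed "the" 3 times)
```

A real LLM replaces the frequency table with a learned probability distribution over its entire vocabulary, but the prediction step, choosing a likely next token given the preceding text, is conceptually the same.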

There are several theories on how we can move from the task-dependent (also known as weak or narrow) AI we have today to task-independent (also known as strong or general) AI. Without going into too much detail, it is worth mentioning two main approaches that could serve as paths to achieving AGI.

The first of these can be called the incremental path. It is the one that first comes to mind when we think of technological development, and its effects are perhaps the most visible to the public. Incremental development is a step-by-step approach to AGI: it focuses on gradually extending and refining current AI systems, with each development step adding new features and capabilities. Many examples of this can already be found today. The release of GPT-3 in 2020, for instance, was a huge leap forward in language modelling, yet it was primarily the result of incremental improvements over earlier models such as GPT-1 and GPT-2. Each new version had more parameters and more sophisticated linguistic capabilities, partly due to the growing amount and quality of training data. The difference is well illustrated by the fact that GPT-3 had 175 billion parameters, while its predecessor, GPT-2, had only 1.5 billion. Thanks to this incremental improvement, GPT-3 was able to perform more complex language tasks such as “creative” writing, machine translation, and code generation.

The above example illustrates a trend in which each generation of models grows in size and is trained on ever more data. As far as we know, the basic (transformer) architecture of GPT models changes little between increments, and even when the architecture does change, those changes are themselves incremental improvements. To summarize, incremental development is a step-by-step process in which every innovation and improvement is made along predefined objectives. The development from GPT-1 to GPT-3 described above is a good example: with each iteration, the number of parameters and the size of the training dataset were increased, gradually improving the model’s capabilities. The guiding principles of this kind of evolution are therefore predictability and planning.
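To put the scaling trend in numbers, here is a tiny illustrative calculation. The GPT-2 and GPT-3 figures are the ones cited above; the roughly 117 million parameter count for GPT-1 comes from its original publication.

```python
# Rough published parameter counts for the GPT series, used here only
# to show the scale of each incremental step.
param_counts = {
    "GPT-1": 117e6,   # ~117 million parameters
    "GPT-2": 1.5e9,   # ~1.5 billion parameters
    "GPT-3": 175e9,   # 175 billion parameters
}

versions = list(param_counts)
for prev, curr in zip(versions, versions[1:]):
    growth = param_counts[curr] / param_counts[prev]
    print(f"{prev} -> {curr}: ~{growth:.0f}x more parameters")
# GPT-1 -> GPT-2: ~13x more parameters
# GPT-2 -> GPT-3: ~117x more parameters
```

Each step is large, but each is a planned continuation of the previous design, which is precisely what distinguishes the incremental path from the emergent phenomena discussed next.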

It may seem an odd idea at first, but with complex systems it sometimes happens that a new system exhibits features its developers never consciously intended. Such features seem random at first glance, and sometimes they cannot be fully explained by the development steps that were taken. This is called emergent behavior, or the appearance of emergent properties. The concept is widespread and used in several scientific fields, which is why its meaning differs slightly from one field to another. What is common, however, is that emergent behavior arises from interactions between the elements of a system, giving rise to new, complex forms of behavior that were neither programmed nor planned. But what does this mean from the perspective of each discipline?

In physical systems and systems theory, emergent behavior describes properties that appear only in the system as a whole, not in the simple sum of its components. Emergent properties are classified into two main categories: weak and strong emergence. Weakly emergent behavior can be simulated and analyzed, while strongly emergent behavior is irreducible and cannot be simulated. In the latter case, the components that make up the system do not uniquely determine the behavior that results from their combined presence. This is perhaps the most interesting case from the point of view of AI, since such behavior is by its nature extremely unpredictable and often appears without warning.
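Weak emergence is the kind that can be reproduced in code. A classic illustration, offered here as our own example rather than one from the text above, is Conway’s Game of Life: every cell follows one trivially simple local rule, yet patterns such as “gliders” emerge that travel across the grid even though nothing in the rule mentions movement.

```python
# Conway's Game of Life: a standard demonstration of weak emergence.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance one generation: a cell is alive next turn if it has exactly
    3 live neighbours, or if it is alive now and has exactly 2."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five live cells that, under the rule above, travel diagonally.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
print(sorted(state))  # the same glider shape, shifted one step diagonally
```

The moving glider is a system-level property: it can be fully simulated from the local rule (hence *weak* emergence), but it is stated nowhere in that rule. Strong emergence, by contrast, would resist exactly this kind of reconstruction.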

In biology, examples of emergent behavior include the functioning of ecosystems and the development of symbiotic relationships between organisms. In these cases, the system as a whole exhibits properties and behaviors that cannot be attributed to the simple functioning of its individual components. A classic example is the ant colony: individual ants follow simple rules, yet together they form a complex, coordinated whole.

In the field of AI, emergent behavior refers to complex, unpredictable properties that arise from the interaction of simple algorithms. For example, a neural network trained for a particular task may turn out to solve other, unrelated tasks efficiently. This is why emergent behavior can play a key role in how AI systems develop new capabilities and adapt.


István ÜVEGES is a researcher in Computational Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
