The EU’s strategic autonomy depends on technology, but most technology giants are not European-owned. This is a serious handicap that the EU is seeking to overcome by strengthening digital sovereignty. Democratizing AI and creating a regulatory environment that supports competitiveness are important steps in this direction. If the EU does not want to be left behind in the global race for AI, it is time to act now.
The emergence of Foundational Models and their subset, Large Language Models (LLMs), has revolutionized the world of Artificial Intelligence, along with the research projects and industrial applications that rely on AI, over the last few years. During this period, the development of neural network-based solutions has progressed at a pace that is, in many respects, exponential. This phenomenon is testing both the adaptability of the users of such technology and the readiness of policymakers to deal with the legal and regulatory challenges posed by these new methods. From the EU perspective, the fact that the development of the latest solutions largely takes place outside the EU is a major strategic disadvantage. It significantly increases the EU’s exposure to market players operating outside its jurisdiction. In this post, we therefore briefly review:
- the latest developments in Generative AI (GAI), the trend that is most dominating the development of AI today,
- its relation to the EU’s objectives of digital sovereignty,
- along with the role that the democratization of AI can play in this.
In Pierre Bellanger’s original definition, the concept of digital sovereignty refers to the ability of an actor to freely control its own digital data. This includes control over its entire digital environment: the software and data it uses as well as the hardware used for operational tasks. The concept is generally divided into two connected smaller units: ‘data sovereignty’ and ‘technological sovereignty’. For the former, the critical points are the location of data storage and processing, the range of people who have access to the data, and the laws governing the storage and use of the data. For the latter, the place where the technology is deployed, the identity of its creator and operator, and the lawful uses of, or prohibitions on, the technology are the important considerations.
These components are also particularly problematic for the EU because, based on current trends, a significant part of the modern Western world’s data is stored in the US. Additionally, the vast majority of AI innovation is also born there. The EU’s concept of digital sovereignty aims to create a viable and sustainable alternative to this kind of inequality. One means of doing this can be the strengthening of regulatory autonomy (one of the most iconic examples of which is the GDPR, in force since 2018). The EU wants to move towards digital sovereignty mainly by keeping data generated in Europe within the continent. This will be facilitated by the development of a single EU regulation and the promotion of the idea that data should be stored and processed primarily through European IT companies.
The primary objective of achieving the above is to overcome the strategic, geopolitical, and cybersecurity risks and disadvantages caused by significant technological dependence on non-EU actors. At a time when the rapid spread of AI-based applications is also a key to competitiveness, such a unilateral dependency could easily put the EU on the defensive, reducing its role from that of an initiator to that of a merely passive player. Such a shift could also have a negative impact on its global influence and decision-making autonomy. In addition, lagging behind in the development of Artificial Intelligence (even within the established Atlanticist perspective) represents a serious vulnerability and exposure.
There are several ways to prevent such disadvantages. One is a significant change of attitude towards the creation of an ‘entrepreneurial state’. Another is the democratization of AI. Both could benefit EU market players, even at the level of small and medium-sized enterprises, in the short term.
The concept of the ‘entrepreneurial state’ first came to prominence with the publication of Mariana Mazzucato’s The Entrepreneurial State: Debunking Public vs. Private Sector Myths in 2013. The main claim of the book is that the traditional view that the private sector is usually the driving force behind innovation and thus the most important source of experimental investment in successful economies is fundamentally wrong, or at least outdated.
The author argues, for instance, that the most important factor behind the success of the US economy has been public and state-funded investment in innovation and technology. This contrasts with the view that the basis of a successful economy is minimal state involvement and the enhancement of the free market. This, of course, requires a move away from the perception of the state as a mere bureaucratic machine. Simultaneously, it necessitates a shift towards a policy where the state takes on a leading role as a risk-taker in investing in innovation. As a key conclusion, the author outlines a trend. This trend indicates that industrial actors have, in many cases, become involved in the development of a technology only after the state has started investing in it. An iconic example is the development of Google’s search algorithm, initially funded by the US National Science Foundation.
A similar change of perspective at the EU level could also act as an incentive. With the right planning, it could spark innovative technological and other initiatives at the EU or Member State level. Given that one of the EU’s current priorities is to create the most comprehensive regulatory framework possible for the development and use of AI-related technologies, the impact of regulations on the actors operating in the EU will be a crucial issue for the future. More specifically, the key question may be whether this essentially forward-looking initiative will degenerate into a kind of opaque, over-regulated market environment, or whether it will become a breeding ground for a whole range of ‘entrepreneurial’ EU states. It may give cause for concern that, in the context of digital sovereignty, the related organizations (e.g., the European AI Alliance) regularly identify as priorities the provision of adequate funding and significant hardware resources for AI development, as well as the attraction and retention of the necessary expertise within the EU.
The concept of digital sovereignty can also draw on another growing trend that has gained increasing international emphasis in recent years, namely the democratization of AI. In a previous post, we briefly discussed the areas where the ‘black box’ phenomenon poses a risk, for example in the case of Language Models. There we also noted that the three main components of virtually any machine learning project are the algorithm, the data used for training, and the resulting model. If any of these is not public, i.e., not freely available to anyone, then the solution should be considered (at least partially) a black box.
The counterpoint to this latter phenomenon is the initiative known as the democratization of AI, whose main goal is to make all the components mentioned above fully public and open-source. The advantage of open-source solutions is that they allow companies with limited expertise and resources to develop their own AI solutions; without open source, these companies would not be able to innovate in the same way.
A prime example is the development of Foundational Models, which currently requires the kind of hardware resources that only a few technology giants in the world possess (Meta, Alphabet, OpenAI, etc.). The strength of these models is that they can be used for very general and diverse tasks out of the box, thanks to the huge amount of training data and the algorithms used in the training process. However, for such a general model to be applicable to a more specialized domain (e.g., as a legal chatbot or for extracting specific content, such as a summary, from legal texts), the model must be shown additional, domain-specific examples. The number of these specialized examples is significantly smaller than the amount of data needed for pre-training. In many cases, even a few thousand hand-crafted training examples may be sufficient to fine-tune the base model for a more specific task. Thanks to this significantly reduced data requirement, and the correspondingly smaller hardware requirement, such specialized models can be created by small companies. This, however, is only possible if open-source Foundational Models are available in the first place.
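The pre-train-then-fine-tune workflow described above can be illustrated with a deliberately simplified sketch. The toy ‘model’ below is just a word-frequency counter, not a neural network, and the class name, corpora, and `weight` parameter are all invented for illustration; the point is only the two-phase pattern: a large, general corpus builds the base model, after which a small, hand-crafted domain sample adapts it.

```python
from collections import Counter

class TinyLanguageModel:
    """A toy stand-in for a Foundational Model: real models are large
    neural networks, not word counters. Illustrative sketch only."""

    def __init__(self):
        self.counts = Counter()

    def pretrain(self, corpus: str) -> None:
        # Phase 1: learn general word statistics from a large corpus.
        self.counts.update(corpus.lower().split())

    def fine_tune(self, domain_corpus: str, weight: int = 10) -> None:
        # Phase 2: a much smaller, domain-specific sample shifts the
        # model toward specialized vocabulary. The weight mimics the
        # stronger per-example influence of fine-tuning data.
        for word in domain_corpus.lower().split():
            self.counts[word] += weight

# "Pre-training" on a large amount of general text...
model = TinyLanguageModel()
model.pretrain("the cat sat on the mat " * 100)

# ...then adapting it with only a handful of legal-domain examples.
model.fine_tune("the plaintiff filed the claim claim claim")
```

After fine-tuning, domain terms such as `plaintiff` and `claim` are represented in the model even though they never appeared in the general corpus, which is the essence of why a small company can specialize a base model it could never have pre-trained itself.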
A shift in the AI industry towards community development and open-source models could benefit not just the EU, but all businesses around the world that use or plan to use AI at some point in their business operations. This shift has the potential to significantly improve their competitiveness and productivity.
Against this background, perhaps the most important issue for the EU in the coming years is how to respond to the challenges posed by rapidly evolving technologies. There is a shift in the global market towards transparency and the open-sourcing of AI (a good example is the initiative of Meta, which has recently released several openly available large language models, e.g., OPT-175B). However, such initiatives are meaningless if the regulation in place does not set proportionate limits on the use of new technologies. Tighter rules on the protection of personal data, the prevention of electoral manipulation, and the prohibition of data collection without consent are inevitable and necessary in order to respect the right to privacy (all of which are also highlighted in the AI Act under negotiation). However, it should not be forgotten that new technologies must always have room for development. Companies need the freedom to experiment with them; without such experimentation, competitiveness will be lost, and there will be a risk of marginalization in the global economy.
On the path toward digital sovereignty, democratizing AI emerges as a powerful catalyst for progress. By embracing open-source solutions and striking a balance between regulation and innovation, the EU can foster strategic autonomy and unleash its potential as a leading global player in AI technology. The EU’s pursuit of digital sovereignty gains momentum as it focuses on self-reliance and competitiveness rather than monopolistic ownership. Empowered by its single market, the EU stands poised to become a driving force behind consumer-centric AI systems. The time to act is now: the democratization of AI promises to transform challenges into opportunities, ensuring a brighter and more innovative future for the EU and beyond.
István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.