
István ÜVEGES: The Optimist-Pessimist Dichotomy in the Tech Era. Why Not (!) Jump on the Hype Train?

The burst of Generative AI (GAI) into public consciousness has sparked heated debates around the world. Such debates are typical of any newly emerging technology with the potential to transform everyday life, yet they are perhaps even more pronounced in the case of GAI. Looking carefully at the opinions voiced about the application of AI, a significant share of them still follows either a techno-optimistic or a techno-pessimistic rhetoric. The question is whether, in a discourse fueled by hopes or fears, the practical issues that should keep the developments of the near future on a democratic and considered course are being lost.

In general terms, technological optimism downplays the uncertainty surrounding progress and takes at face value promises that are by no means certain to be realized. Another key element is the willingness to accept potential losses in exchange for promised gains. Technological pessimism, in contrast, rests on an overestimation of the threats and adverse effects of new innovations, overlooking people's ability to react to risks correctly and in time. Its guiding idea is to avoid losses, even at the cost of forgoing benefits.

Of course, the mere fact that someone emphasizes the disadvantages of a technology does not automatically make them techno-pessimistic. There can be many reasons for such a skeptical attitude, or for paying particular attention to the damage these technologies can cause. Human rights organizations, for example, are far more attuned to the potential harms and setbacks that may arise when AI is introduced in a public authority environment.

Amnesty International recently published a statement regarding the upcoming EU regulation, the AI Act. In it, the organization points out that AI-based systems may carry inherent biases that can later contribute to discrimination against, for instance, minority groups. Among other cases, they mention a system introduced by the Dutch authorities in 2019, which used a self-learning algorithm to build risk profiles for detecting childcare benefit fraud and which later became the subject of a major scandal.

Similarly, it caused quite a stir when, in March of this year, several prominent researchers and developers working in the field of AI argued in an open letter that AI development should simply be put on hold. According to perhaps the most quoted part of the letter, “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable…”. It must be acknowledged that the signatories voiced fears that are on the minds of many, and that they are perhaps the people who know the most about the real risks. Since then, we have addressed several of the questions raised in the letter (such as synthetic content flooding the internet, or the growing concerns surrounding the deepfake phenomenon) in previous articles. What really places the manifesto on the techno-pessimistic side of the axis is the solution it outlines. Technological development is not something that can be stopped by force; what we can genuinely influence is its direction. That could mean, for example, creating real and enforceable ethical standards to guide AI development, or researching how AI can model morally correct decisions.

The EU’s forthcoming AI Act also takes a prohibitionist approach to the problem. By sorting applications into risk classes and banning the use of AI in certain areas, it aims above all to minimize, or avoid entirely, the technology's negative impacts on society.

At the other end of the scale stands the recent manifesto of the Information Technology & Innovation Foundation (ITIF). The organization has published a position paper entitled “A Global Declaration on Free and Open AI”, which is in stark contrast to the techno-pessimistic viewpoint described so far. Its focus is on the positive potential of AI and on the idea that democratic ideals and global development can best be advanced through AI-based tools.

The statement draws a parallel with the concept of a “free and open internet” previously put forward in the United States, which likewise aimed to secure positive change and freedom of expression for people worldwide. Perhaps the most important observation of the current declaration is that authoritarian governments and regimes are already trying to regulate or stifle the free communication facilitated by generative artificial intelligence. If this trend were to spread, it would amount to a form of censorship of the free flow of ideas: ideas that challenge the legitimacy of governments, or that are unpalatable to those in power, would simply be removed from the content generated by, for example, large language models (LLMs).

According to the organization, the most important task now is for democratic nations to collectively embrace the vision of free and open artificial intelligence in order to prevent such interference.

The “Global Declaration on Free and Open Artificial Intelligence” underlines the importance of AI tools respecting democratic values and promoting human development. It also calls on governments to protect the free development and use of AI, promote accountability, and build trust in AI systems. In doing so, democratic nations can harness the transformative potential of AI while protecting against authoritarian control and censorship.

Of course, none of the initiatives described above is extremely optimistic or pessimistic in itself, but together they highlight the possibilities that lie ahead for the future of AI. Artificial intelligence is as much a double-edged sword as any great innovation humanity has produced before. It can be a threat to democracy or a facilitator of the free flow of ideas and thoughts. The key is to listen to and prepare for the pessimistic scenarios so that we can prevent them without sacrificing progress. It is equally essential to support and encourage the optimistic ones, without neglecting the risks.


István ÜVEGES is a researcher in Computational Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
