Digital Battlefields and Wars Fought in the Shadows—Part II.
Digital invasions: how are cyber weapons changing conflicts?
In presenting the tools of hybrid warfare, we have seen how military, economic, political, and (dis)information elements are intertwined in modern conflicts. However, one of the most prominent and dynamically evolving areas of this multifaceted strategy is cyber warfare, one of the defining fronts of the digital age. In what follows, we will examine in detail how cyberspace is becoming the new theatre of war and what role it plays in hybrid warfare.
Cyber warfare takes place in digital space and is aimed at attacking or defending the infrastructure of cyberspace. This includes, for instance, obtaining or damaging critical data, or making services unavailable through distributed denial of service (DDoS) attacks. Cyber warfare is one of the more aggressive elements of hybrid warfare and can be a useful tool in conflicts that have already begun: it can cripple the enemy's information infrastructure, keep the population in fear, and disrupt critical infrastructure.
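To make the defensive side concrete, the sketch below shows the simplest signal behind many DDoS mitigations: flagging sources whose request rate exceeds a threshold within a sliding time window. The threshold and window values are hypothetical placeholders, not recommendations.

```python
from collections import defaultdict, deque
import time

# Hypothetical limits: flag any source that sends more than
# MAX_REQUESTS requests within WINDOW_SECONDS.
WINDOW_SECONDS = 10.0
MAX_REQUESTS = 100

request_log = defaultdict(deque)  # source IP -> timestamps of recent requests

def is_flooding(source_ip: str) -> bool:
    """Record one request and report whether the source exceeds the rate limit."""
    now = time.monotonic()
    timestamps = request_log[source_ip]
    timestamps.append(now)
    # Discard requests that have fallen out of the sliding window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > MAX_REQUESTS
```

A real mitigation layer would of course sit at the network edge and combine many such signals; the point here is only that a flood of automated requests is, at bottom, a measurable statistical anomaly.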
This more overt form of hybrid warfare often combines cyber-attacks with conventional military action to increase effectiveness. For example, cyber-attacks accompanying military operations can cripple communications systems or logistical support, thus facilitating the achievement of military objectives. In the “Doppelganger” and “Spamouflage” operations reported by OpenAI, the groups behind them used AI to debug code, analyze social media activity, and generate content in multiple languages. The Spamouflage campaign, for example, operates a network of fake social media accounts that disguise their political messages with spam-like content to evade moderation.
Disinformation is also a key element of hybrid warfare. Cyber warfare is closely intertwined with disinformation, especially on social media, where attackers exploit the platforms’ algorithms and global reach. Generative AI naturally plays a significant role in these campaigns as well, and that role is likely to grow in the near future. Its application enables the rapid and relatively cost-effective creation and targeted distribution of large amounts of automatically generated content. Disinformation often accompanies cyber-attacks, such as phishing attempts or the destruction of digital infrastructure, so that the attack has an impact at the technical, psychological, and social levels at once. In this way, cyber warfare is not only a technological attack but also an effective tool for shaping public opinion.
We should also recognize that AI-based technologies have made hybrid warfare methods increasingly sophisticated. Disinformation campaigns on social media platforms can deliver personalized messages to users, exploiting their emotions and prejudices. These attacks aim to polarize society and undermine democratic processes. The methods used include automated content generation and the creation of so-called “false engagement.”
The latter refers to cases where content creators try to make a piece of content go viral by using artificially generated profiles to produce large numbers of reactions, comments, and shares on the content they intend to spread. The high volume of interaction makes social media algorithms more likely to show the content to additional, this time real, users. Once these users respond to it, the process becomes quasi-self-sustaining: the information now spreads through the personal networks of profiles backed by real people.
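Detecting such networks is an open research problem, but the underlying signals can be surprisingly simple. The sketch below scores an account with two crude heuristics, posting rate relative to account age and the share of near-duplicate comments; every field name and weight is a hypothetical placeholder, not a description of any platform's actual system.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Account:
    # Hypothetical fields; real platforms expose far richer metadata.
    age_days: int
    posts: int
    comments: list[str]

def amplification_score(account: Account) -> float:
    """Crude 0-1 heuristic: young, hyperactive accounts that repeat
    near-identical comments are typical of false-engagement networks."""
    # Posting rate, capped: 50+ posts per day of account age counts as 1.0.
    activity = min(account.posts / max(account.age_days, 1) / 50.0, 1.0)
    # Fraction of comments that duplicate the most common comment.
    duplication = 0.0
    if account.comments:
        duplication = Counter(account.comments).most_common(1)[0][1] / len(account.comments)
    return 0.5 * activity + 0.5 * duplication

suspect = Account(age_days=3, posts=450, comments=["Great point!"] * 20)
print(f"{amplification_score(suspect):.2f}")  # close to 1.0 -> likely fake engagement
```

Real detection systems look at coordination across accounts (shared timing, shared targets) rather than single profiles, but the principle is the same: automation leaves statistical fingerprints.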
Defending against hybrid warfare is made more difficult by the fact that information warfare fought with modern tools is a completely new theatre of war, with constantly evolving and changing tools. Prevention therefore requires strategies as complex as the situation itself, including the protection of the information space, public education, and strengthening international cooperation. Early detection and rapid identification of threats is an essential element of protection. For example, there are already promising developments aiming to bring the detection of AI-generated voices to the masses in the form of a browser plug-in. But of course, this is only the tip of the iceberg.
Artificial intelligence-based systems could also soon enable the early detection and analysis of disinformation campaigns and help counter these threats. AI-based systems may be able to identify fake news, recognize unusual patterns, and trace the sources of manipulative campaigns. It should be noted, however, that these are still intensively researched topics, without a reassuring conclusion so far.
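To illustrate the kind of building block such research prototypes start from, the sketch below trains a minimal text classifier with scikit-learn. The training examples and labels are toy placeholders; a usable system would need a large, carefully curated corpus, and even then surface features like these are far from reliable on their own.

```python
# Minimal sketch of a fake-news-style text classifier (scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Official statement confirms new infrastructure funding plan.",
    "Researchers publish peer-reviewed study on energy storage.",
    "SHOCKING: the secret cure THEY don't want you to know about!!!",
    "Leaked 'proof' that the moon landing was staged, share before deleted!",
]
labels = [0, 0, 1, 1]  # 0 = credible, 1 = suspicious (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["BREAKING: miracle pill hidden by the government!!!"]))
```

Production systems combine such text-level signals with network-level features (who shares what, and when), precisely because wording alone is easy for an adversary to adapt.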
Security mechanisms that make it harder for threat actors to operate play an important role in the design of protective systems. AI models already often refuse to generate the requested content, making it more difficult to run disinformation campaigns. For example, Meta’s Llama models and OpenAI’s publicly available GPT models have built-in content moderation designed to prevent the generation of hateful content. However, this is not 100% protection. Consider the creation of supportive comments under a Facebook post. Many supportive comments give the impression that a particular opinion enjoys broad social consensus, whereas most of the comments may be AI-generated. The creation of positive, supportive text is not prohibited by the terms of use of any model. This type of deception, while seemingly less harmful, is at least as dangerous as using generative AI for direct discrediting purposes.
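The asymmetry is easy to demonstrate. The sketch below gates text through OpenAI's moderation endpoint (assuming the openai Python SDK v1+ and an API key in the environment); an openly hateful message would typically be flagged, while an innocuous supportive comment, the raw material of false-consensus campaigns, passes without objection.

```python
# Sketch of a moderation gate; assumes the openai Python SDK (v1+)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def is_allowed(text: str) -> bool:
    """Return True if the moderation endpoint does not flag the text."""
    result = client.moderations.create(input=text)
    return not result.results[0].flagged

# A supportive comment sails through, even though thousands of such
# AI-generated comments can manufacture a false impression of consensus.
print(is_allowed("Totally agree, this is exactly what our country needs!"))
```

The filter is doing its job; the problem is that manufactured consensus is not, in itself, a prohibited content category.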
Raising public awareness and strengthening critical thinking skills are also key to combating disinformation. Media literacy programs can contribute to people’s ability to distinguish real from fake information and to make informed use of social media. In addition, international cooperation and information sharing are essential, as hybrid threats are often cross-border in nature and effective protection can only be achieved through joint efforts.
Hybrid warfare poses new challenges for modern societies, as its attacks are complex and difficult to detect, combining traditional and digital methods. Proper defense requires technological innovation, international cooperation, and awareness-raising. AI represents both a threat and an opportunity in this process; it is in our common interest to use it effectively to defend against and identify threats. Success against hybrid warfare depends on how quickly we can adapt to ever-changing threats and new technologies.
Increasing the resilience of societies, educating the public, and international cooperation are all important elements in reducing the threats posed by hybrid warfare. The success of future defense strategies will depend on the ability of states and societies to respond flexibly to hybrid threats and to exploit the opportunities that technological progress offers for defense.
István ÜVEGES, PhD is a Computational Linguist researcher and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment and emotion analysis.