The Future of Robotic Warfare: Ethical and Strategic Challenges

In recent years, advances in technology have revolutionized warfare. Robotic systems, drones, and artificial intelligence-guided weapons are becoming an increasingly integral part of military arsenals, as shown by the significant role technology is already playing in the ongoing Russia-Ukraine war. What opportunity does the idea of a battlefield free of people represent, and what are the challenges of today and tomorrow?

In the Russo-Ukrainian war mentioned above, for instance, the Russian army has already deployed small, tracked ground drones equipped with grenade launchers for direct attack amid the ruins of Bakhmut. While these were reportedly unsuccessful, this was largely due to the vulnerability of the robots' radio control links. However, in an age when actual humanoid robots are commercially available for only a few tens of thousands of dollars, it is not hard to imagine how quickly a leap in robotic warfare could occur. As always, there are pros and cons to consider, as well as the fact that in such a nascent field the extent of some risks can only be estimated, not determined, even with the best intentions. In this article, we will briefly look at the main aspects along which the use of robots on the battlefield is usually assessed.

It is worth noting that the range of autonomous robots that can be deployed on the battlefield is extremely wide, from simple four-wheeled reconnaissance vehicles to flying kamikaze drones and the (perhaps humanoid) robots of the future. We do not distinguish between them in the context of this post but rather focus on the opportunities and risks of the spread of robotic warfare in general.

The advantages of robotic warfare

The use of robotic systems in warfare can bring many benefits. The first and perhaps most obvious of these is the possibility of saving human lives. The use of robots on the battlefield allows soldiers to control operations from a safe distance, reducing the need for direct involvement in dangerous situations. Modern drones, such as the Iranian-produced Shahed-136, are effective and inexpensive tools that can deliver precise strikes on the enemy with minimal risk to the operator.

The interesting question is, of course, how much making attacks more effective increases the number of casualties on the other side of the battlefield. Even setting potential civilian casualties aside, the question arises: how much more damage will an army lacking comparable technical sophistication suffer when defending against a weapons system more accurate and effective than a human operator?

Robotic systems can also bring significant cost savings. Training and maintaining a soldier costs tens of thousands of dollars per year, while the one-off cost of a robot or drone is often lower and maintenance costs can be more affordable in the long run. Using such equipment is therefore not only safer but can also be economically beneficial.

Ethical concerns

The biggest ethical question raised by robotic warfare is whether it is humane to use robots on the battlefield. Two main arguments are usually highlighted in this context. One is that the use of robots could reduce the risk of war to the point where it could lead to more widespread and frequent conflicts. This is essentially the idea that the number of human casualties puts political pressure on governments to seek peace, but that robots can reduce this pressure. So, paradoxically, the reduction in the cost of robotic warfare turns the benefits of sparing human lives into the risk of increasing conflict.

The other argument is that robots are unable to distinguish between combatants and civilians, which can lead to higher civilian casualties. Although the use of robots could theoretically reduce human casualties, in practice, the machines may make wrong decisions, with potentially serious consequences.

There is also, of course, the significant risk of moral disengagement in the use of autonomous systems. If decisions with lethal consequences are made by algorithms instead of humans, both human responsibility for war and the sense of responsibility for the losses it causes will be diminished.

Strategic challenges

The integration of robotic systems into warfare also poses several strategic challenges. One such challenge is the recruitment of soldiers. Many armies around the world are currently struggling to recruit enough soldiers. The use of robots could reduce the need for human resources, thus alleviating recruitment difficulties. In addition, the use of robots could improve the technological capabilities of the military, keeping it competitive on the global stage.

However, the use of robots also brings new types of threats. Cybersecurity risks increase as robotic systems may be vulnerable to attacks by hackers. Another challenge is the issue of liability, which is often raised in relation to other AI-based technologies. In short, who is liable if an autonomous robot makes a mistake? Such and similar ethical and legal issues make the future of robotic warfare extremely complex.

Challenges and opportunities for the future

To reap the benefits and minimize the risks of robotic warfare, comprehensive regulation and international cooperation are needed. We are already seeing efforts in this direction in the field of artificial intelligence, just think of the AI Act that will soon enter into force in the EU. The United States has already developed some ethical guidelines for the military use of robotic systems, but these or similar ones should be applied globally. These guidelines include, for example, the requirement that a human must remain part of the decision-making process when lethal force is involved, and that robotic systems must be developed and deployed in a transparent and verifiable manner.

The international community should work together to develop common rules for robotic warfare. Adherence to these rules can help minimize ethical and security risks and ensure that robotic systems are used in a humane and responsible manner. Without such regulation, there is a risk that robotic warfare will remain chaotic and unregulated, with potentially serious consequences.

The extent of the destruction that advanced AI-based systems can cause in the theatre of war is yet undetermined. Therefore, in an extreme scenario, it is not inconceivable that fully autonomous robotic warfare could follow in the footsteps of biological weapons, the development, production, acquisition, transfer, and use of which have been banned since 1975.

The future of robotic warfare is therefore still on very shaky ground. Advances in technology could revolutionize warfare, reducing human casualties and improving the technological capabilities of the military. At the same time, without addressing the ethical and strategic challenges, robotic warfare also poses several risks. To maximize its benefits and minimize its risks, comprehensive regulation and international cooperation are needed. The US ethical guidelines are a good starting point, but they should also be applied globally. The international community must work together to ensure the humane and responsible use of robotic warfare and to protect global security and peace. All this before we let the genie out of the bottle.


István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.