
Hold Companies & Nations Responsible For Lethal Autonomous Weapons Systems

“We made too many wrong mistakes.”

Baseball Guru Yogi Berra

Let’s face it:  AI will make mistakes.  But some mistakes are more costly than others, particularly where AI-powered fully autonomous weapons systems are concerned.  Should such weapons be allowed on the battlefield?  Numerous voices, from the United Nations to the European Parliament and even a writer on this blog, have called for a total ban.  But is this practical?  Numerous fully autonomous weapons already exist.  Often called lethal autonomous weapons systems (LAWS), some have been in use for years.  South Korea has deployed its SGR-A1 sentry gun along the Demilitarized Zone (DMZ) with North Korea since at least 2014 (and likely even earlier); the SGR-A1 has a fully autonomous mode.  There’s also the Russian Lancet drone, its latest version being the Lancet-E, which has had a devastating effect against artillery in the current Russo-Ukrainian War: an estimated 50% of all Ukrainian RM-70 rocket launchers lost in the war have been destroyed by this weapon.  Even the US military has flown a fully autonomous unmanned combat air vehicle, the X-47B, a tailless, fighter-jet-sized aircraft, since at least 2013.  Finally, there’s the (at least partially) autonomous fighter jet named Fury, currently in development.

            But the mere fact that a technology has been, or is currently, in use isn’t a good argument that it should be.  Humans have historically used mustard gas, other chemical weapons, and the atomic bomb, but their mere use doesn’t entail that such use was morally justified.  Nor does the fact that a technology is currently in use mean it cannot be banned.  Humans have banned mustard gas and other chemical weapons in warfare with some success.  So why not make laws that ban LAWS? 

            Here I wish to address only one big question that many have raised regarding the ethical use or development of LAWS:  Who would be responsible should an AI-powered fully autonomous weapons system fail, target civilians, or engage in ‘friendly fire’? 

Are LAWS Instruments or Tools?

To answer that question fully, we must first clarify that lethal autonomous weapons, meaning weapons that, once launched, are fully autonomous and need no human input to kill, are mere tools or instruments of human soldiers, commanders, or armies.  Just as a person who turns on a stovetop and uses it to heat soup is responsible for the soup being warmed, the electricity used, and the metal of the pan becoming hot, so too any person who uses a tool is responsible for its effects.  Imagine the person walked away while the stove kept running, the stovetop burned a hole in the pan, and the house burned down.  That person would be morally responsible for that effect, as it was his action (albeit by means of the hot stovetop) that burned the house down.  Granted, the effect was unintentional, but it was one that could easily have been foreseen.  At the very least such a person was morally negligent, and so to some extent responsible for the effect that followed (even more so if it was a gas stovetop, as it’s much easier to see how uncontrollable and dangerous a gas flame can be).

            Similarly, if a man turns on his neighbor’s fully autonomous lawnmower, foreseeing that it will likely mow down her flower garden, he is morally responsible for what follows.  Something like this, I believe, happens in the case of fully autonomous lethal weapons systems.

A Person Who Acts by an Instrument is Responsible for Its Reasonably Foreseeable Effects

All fully autonomous weapons systems need to be activated.  Someone has to press the power button or switch the system into fully autonomous mode.  Whoever does so is morally responsible for any reasonably foreseeable effects that follow (for the sake of this essay I’m not focusing on the legal question, though legal responsibility ought to be modeled on moral responsibility).  If, as a college student, you turn on a fully autonomous flamethrower robot dog as a cruel prank on your friend living down the street, it’s no excuse to say you didn’t know the whole neighborhood would burn.  Any reasonable person would foresee the great risks involved: that many or all of the houses in the area could catch fire. 

            Similarly, any soldier who knows enough about the lethal autonomous weapon he is deploying should be held morally responsible for any reasonably foreseeable effect.  If he deploys it in a densely populated civilian zone, knowing that it hasn’t been programmed to distinguish between enemy combatants and civilians, then he’s morally responsible for any civilians it kills, and he should be held guilty of a war crime (as the legal framework ought to follow the moral framework).  Likewise, the CEO of the company who directed the development of such a product, knowing about its indiscriminate nature and its intended deployment in densely populated civilian zones, should also be held responsible and prosecuted.  So too the project manager, the quality control team, and every programmer who knew enough about the details of the program and the likelihood of its indiscriminate deployment: they are morally responsible, and they can and should be held liable for war crimes (to what extent will depend upon the details of the case). 

            It’s much like a team that developed a bioweapon for use against an enemy population, knowing it would kill enemy soldiers and civilians alike.  Claiming that no one of them could have brought the product into existence alone isn’t enough, as each person is still partly responsible.  Collective responsibility doesn’t mean nobody is responsible; it means the responsibility (and the corresponding punishment) must be shared by all involved.

Prosecute Nations, Programmers, Project Managers, and Soldiers for Indiscriminate Use of LAWS

A difference, of course, arises between the use of traditional weaponry like machine guns, grenades, or artillery and that of lethal autonomous weapons.  When a soldier pulls the trigger in order to kill a civilian, it’s easy to assign moral responsibility: he alone is responsible, ceteris paribus.  But in the case of AI-powered lethal autonomous weapons, it’s likely not to be just one person.  Of course, it depends upon the details.  Perhaps the company designed the product to discriminate with reasonable accuracy, but a soldier hacked the system to make it kill anybody and deployed it with the intent of killing all civilians in a given area.  In that scenario, he’s solely responsible. 

Say, however, the soldier didn’t know any better and was merely deploying the hacked killer robot on the commander’s orders.  The soldier didn’t know the system was hacked and had no good reason to believe the commander had changed the code.  In that case, the commander would be at fault.

Say a company designed a LAWS and, in order to cash in on the current AI bubble, rushed production and testing.  The company could have foreseen that the weapon would make many mistakes and kill civilians around sunset or sunrise (when its AI sensors couldn’t function so well).  The project manager badly wanted a promotion, so even though quality control told him about the defect in the product, he didn’t care: he told them to delete their recent documentation of the defect and pass the product on to the next stage.  Quality control then decided to rubber-stamp the approval (as they were afraid of getting fired).  Say, further, that the company decided to give no warnings, and attach no warning labels, to the military that then deployed the weapon.  Who’s responsible in this case?  Both the project manager and quality control are morally responsible.  Both should be held criminally liable, possibly even for war crimes.

If a dictator, president, prime minister, or other world leader or government proposes, funds, and develops fully autonomous weapons designed to kill indiscriminately once deployed, all involved are morally responsible for what follows, and that moral responsibility can and should entail legal responsibility.  Every single one of them who knew about it should be tried for a war crime.

Now, some may protest that this makes the legal framework all too messy, or that current laws don’t allow for the type of moral responsibility I am proposing.  In that case, the law needs to catch up with ethics.  Law shouldn’t dictate ethics; rather, ethics should dictate law.  If the international laws on war crimes need to be updated to hold the designers, managers, programmers, and others responsible for the reasonably foreseeable unjust use of lethal autonomous weapons, so be it.  More specifically, Article 28 of the Rome Statute, regarding the criminal responsibility of commanders for crimes committed by forces under their command, should be updated to include LAWS that ‘intentionally’ or foreseeably target civilians indiscriminately.  A proposal of the added language appears in bold-italics below within the original statute:

Article 28.  Responsibility of commanders and other superiors.  Under this provision, military commanders are held criminally responsible for crimes committed by armed forces ***(including AI or lethal autonomous weapons systems)*** under their effective command and control, such as intentionally targeting civilians, rape, and any sexual violence used in war.  This applies to instances where the superior knew or should have known about such crimes, or failed to take all necessary and reasonable measures to prevent their commission ***(even those involving LAWS)***.  The crimes committed by the armed forces must have been a result of the commander’s failure to exercise proper control over them.  In addition, there must be evidence beyond any reasonable doubt that the commander is responsible and that the crimes were sufficiently widespread that it is evident they occurred during the ordinary implementation of the military action for which the commander is responsible.  The goal of this provision is to encourage commanders and superiors to effectively prevent the perpetration of crimes by their forces.

Changing this law will encourage more ethical development of LAWS or at least hinder their unethical use.  It’s time for the law to catch up with the murky realm of LAWS.  Doing so will make for a better humanity and a safer world.


John Skalko, PhD, is a philosopher living in Massachusetts with expertise in moral philosophy.  He has written on a wide range of topics, from the ethics of lying and assisted suicide to mechanical ventilation and, more recently, questions surrounding the use of AI.  He has presented at conferences in Missouri, Michigan, Kansas, Louisiana, California, New York, and Massachusetts, as well as in Poland, Spain, and the Czech Republic.