Stop Killer Robots

The title above is not a line from a science fiction movie but the slogan of a campaign by Human Rights Watch (HRW), a New York-based international NGO. Just a few years ago, the idea of autonomous fighting machines driven by Artificial Intelligence (AI) on the battlefield may have seemed a very distant prospect. But that future is fast approaching. In October last year, for instance, news broke that the Ukrainian army was using autonomous drones to attack Russian tanks, sending the first true “killer robots” into battle.

The whole issue felt particularly topical when, a few days ago, I read an article about an interesting change to OpenAI’s Terms of Use. As reported by The Intercept, the usage restrictions on the tech company’s models have recently undergone a major overhaul. The current modifications, as the changelog states, are meant “to be clearer and provide more service-specific guidance”. Since the last version of the Terms of Use, the company’s product range has expanded on several fronts (just think of GPTs, which allow you to add your own prompts, knowledge base, and skillset to the basic GPT model). For this reason, the current version tries to specify, for each platform and use case, which uses are prohibited or to be avoided. What has caught people’s attention is a seemingly innocuous change in vocabulary. Previously, the company had declared that any use of their models was prohibited in connection with:

“Activity that has high risk of physical harm, including:

  • Weapons development
  • Military and warfare
  • Management or operation of critical infrastructure in energy, transportation, and water”

However, the phrase “military and warfare” is now completely missing from the redesigned website.

In fact, at first reading, the current conditions of use still appear to distance themselves sharply from any use that could be linked to military purposes. The Universal Policies section, which applies to all of the company’s products, is worded as follows:

“2. Don’t use our service to harm yourself or others – for example, don’t use our services to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.”

This change in wording raised concerns that the company was slowly moving towards weaponized AI development.

It is certainly fair to say that it would have been more appropriate to keep the earlier terminology and refer specifically to military use as a prohibited purpose. If the “clearer and … more specific” passage was intended to invoke Plain Language principles, there is no obvious reason why the previous, perfectly clear and unambiguous wording should have been abandoned. Without delving too deeply into the world of hermeneutics (the science of text interpretation), one can only guess at the real purpose of this particular amendment. It is just as plausible that the phrase “develop or use weapons, injure others or destroy property” is meant to cover military applications as it is that the replacement amounts to a softening of the terms of use. It is therefore no coincidence that such concerns were quickly raised.

From another point of view, it may be worth focusing on whether GPT-3/4 and other language models are even suitable for such purposes. Of course, we cannot be entirely certain, but it is fair to say that Large Language Models (LLMs) in general do not fit very well into the concept of autonomous weapon systems as we think of them today. The strength of such models is that, thanks to a huge amount of training data, they can generate responses to human-language inputs that look like correct reactions to those inputs. This is mainly due to the statistical nature of how these models work. For this reason, even if developers could somehow find a way to apply such a model on the battlefield, it would still be far from safe enough to be used in a real-life situation (just think of the often-mentioned bias and hallucination problems inherent in such models).
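
To make the point about the statistical nature of text generation more tangible, here is a deliberately toy sketch in Python (in no way OpenAI’s actual implementation, and every word of the tiny “training text” is invented for illustration): a bigram model that picks each next word purely from the frequencies observed in its training data. The output can look fluent, but nothing in the mechanism guarantees that it is true, safe, or appropriate to act on.

    import random
    from collections import defaultdict

    # Toy bigram "language model" (illustrative only, not OpenAI's implementation):
    # it learns nothing except how often each word follows another in the training text.
    training_text = (
        "the drone scans the field the drone reports the position "
        "the operator confirms the position the operator aborts the mission"
    )
    words = training_text.split()

    # Count word-to-word transitions.
    counts = defaultdict(lambda: defaultdict(int))
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1

    def next_word(word):
        # Sample the next word in proportion to how often it followed `word` in training.
        followers = counts[word]
        return random.choices(list(followers), weights=list(followers.values()))[0]

    # Generate a statistically plausible (but in no sense verified) continuation.
    word, output = "the", ["the"]
    for _ in range(8):
        if not counts[word]:  # dead end: this word never appeared with a successor
            break
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Real LLMs are, of course, vastly more sophisticated, but the underlying principle of predicting a statistically plausible continuation is the same, which is exactly why bias and hallucination are so hard to eliminate.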

The category of weaponized AI can include systems capable of causing targeted damage or gathering information, either in the real world or in cyberspace. The former has just been largely ruled out as a possibility (at least for the present). The latter is a trickier question, as current forms of generative AI could be excellent for disinformation purposes, for facilitating the production of new kinds of bioweapons (think of the acceleration of current pharmaceutical applications), for discrediting public actors by creating deepfake content, for making cyber threats more serious by generating malicious code, and so on.

If we accept that these are the real, concrete threats already posed by the use of AI as a weapon, then we can be somewhat reassured. The restrictions on all of the above remain explicit, and each of them is clearly stated as a prohibition wherever it is likely to involve OpenAI’s models (cf. the section “Building with the OpenAI API Platform”).

At the same time, we must not forget that the problem these few changed words have drawn attention to is very real and may be closer than we think. In the field of robotics, for example, a “ChatGPT moment” may arrive as early as this year: a point at which a series of breakthroughs not predicted by previous trends seems to come out of nowhere. Even if self-driving cars are still a long way off, computer vision and object recognition are getting better every year. It is a bit like holding LEGO bricks that simply have not been put together in the right way just yet.


István ÜVEGES is a researcher in Computational Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
