Monday, April 22, 2024

Ethical questions abound as wartime AI ramps up


PARIS (AFP) — Artificial intelligence's move into modern warfare is raising concerns about the risks of escalation and the role of humans in decision-making.

AI has shown itself to be faster but not necessarily safer or more ethical. U.N. Secretary-General António Guterres said Friday that he was "profoundly disturbed" by Israeli media reports that Israel has used AI to identify targets in Gaza, causing many civilian casualties.

Beyond the "Lavender" software in question and Israel's denials, here is a look at the technological developments that are changing the face of war.

Three major uses

As seen with Lavender, AI can be particularly useful for selecting targets, with its high-speed algorithms processing huge amounts of data to identify potential threats. 

But such systems can only produce probabilities, and experts warn that mistakes are inevitable.

AI can also be applied at the tactical level. Swarms of drones, a tactic China seems to be developing rapidly, will eventually be able to communicate with one another and interact according to previously assigned objectives.

At a strategic level, AI will produce models of battlefields and propose how to respond to attacks, maybe even including the use of nuclear weapons.   

Thinking ever faster

"Imagine a full-scale conflict between two countries, and AI coming up with strategies and military plans and responding in real time to real situations," said Alessandro Accorsi at the International Crisis group.

"The reaction time is significantly reduced. What a human can do in one hour, they can do it in a few seconds," he said.

Iron Dome, the Israeli anti-air defense system, can detect an incoming projectile and determine what it is, where it is headed and how much damage it could cause.

"The operator has a minute to decide whether to destroy the rocket or not," said Laure de Roucy-Rochegonde from the French Institute of International Relations.

"Quite often it's a young recruit, who is 20 years old and not very up-to-speed about the laws of war. One can question how significant his control is," she said.


A worrying ethical void

With an arms race under way, and clouded by the usual opacity of war, AI may be moving onto the battlefield with much of the world not yet fully aware of the potential consequences.  

Humans "take a decision which is a recommendation made by the machine, but without knowing the facts the machine used," de Roucy-Rochegonde said.

"Even if it is indeed a human who hits the button, this lack of knowledge, as well as the speed, means that his control over the decision is quite tenuous."

AI "is a black hole. We don't necessarily understand what it knows or thinks, or how it arrives at these results," said Ulrike Franke from the European Council on Foreign relations. 

"Why does AI suggest this or that target? Why does it give me this intelligence or that one? If we allow it to control a weapon, it's a real ethical question," she said.

Ukraine as laboratory

The United States has used algorithms, for example, in recent strikes against Houthi rebels in Yemen. 

But "the real game changer is now — Ukraine has become a laboratory for the military use of AI", Accorsi said.

Since Russia invaded Ukraine in 2022, both sides have begun "developing and fielding AI solutions for tasks like geospatial intelligence, operations with unmanned systems, military training and cyberwarfare," said Vitaliy Goncharuk of the Defense AI Observatory at Hamburg's Helmut Schmidt University.

"Consequently the war in Ukraine has become the first conflict where both parties compete in and with AI, which has become a critical component of success," Goncharuk said.

One-upmanship and nuclear danger  

The "Terminator," a killer robot over which man loses control, is a Hollywood fantasy. Yet the machine's cold calculations do echo a fact of modern AI — they do not incorporate either a survival instinct or doubt.

In January, researchers from four American institutes and universities published a study of five large language models, systems similar to the one behind the generative software ChatGPT, in simulated conflict situations.

The study suggested a tendency "to develop an arms race dynamic, leading to larger conflicts and, in rare cases, to the deployment of nuclear weapons."

But major global powers want to make sure they win the military AI race, complicating efforts to regulate the field. 

U.S. President Joe Biden and China's President Xi Jinping agreed in November to put their experts to work on the subject.

Discussions also began at the United Nations 10 years ago, but they have yet to produce concrete results.

"There are debates about what needs to be done in the civil AI industry," Accorsi said. "But very little when it comes to the defense industry."

By DIDIER LAURAS, Agence France-Presse
