
Friday, April 19, 2024
Courthouse News Service

Countries sign military AI pact at historic summit, but is it enough?

At a two-day, first-of-its-kind conference, representatives from more than 80 countries met in The Hague to discuss the responsible use of artificial intelligence in military applications.

THE HAGUE, Netherlands (CN) — A short walk from the European Union agencies Europol and Eurojust and the Organization for the Prohibition of Chemical Weapons, 60 countries announced on Thursday they are joining a call to action to regulate artificial intelligence. 

At the conclusion of a two-day conference in The Hague, the United States, the Netherlands, China and dozens of other nations signed off on a 25-point call to action, asking countries to ensure the safe and responsible use of a variety of machine intelligence applications. Twenty other nations that attended hadn't signed the pact as of Thursday, for reasons that weren't immediately clear.

The pledge calls for countries “to ensure that humans remain responsible and accountable for decisions when using AI in the military domain” as well as sharing best practices and developing national frameworks. 

“This cutting-edge technology presents new challenges that demand our attention,” said Park Jin, the South Korean minister of foreign affairs. His country co-hosted the Summit on Responsible AI in the Military Domain, or REAIM, with the Netherlands.

But the nonbinding pledge falls short of the restrictions imposed by other weapons treaties. It “lacks any kind of enforcement mechanisms and produces extra language that further obscures a way toward a legally binding instrument,” said Jessica Dorsey, an assistant professor of international law at Utrecht University. 

The summit follows a 2021 letter from the Dutch parliament asking ministries to look into the need for a legal framework for regulating artificial intelligence. 

“The Hague is the historical city of peace and justice,” Dutch Foreign Minister Wopke Hoekstra told reporters at a pre-conference briefing. The coastal city has hosted the signing of a number of peace treaties during its history, including The Hague Convention of 1899, which created the precursor to the International Court of Justice, the highest judicial organ of the United Nations. 

Arms control agreements between nations date back even further, as early as ancient Greece. Charlemagne banned the export of armor from his Frankish territories, and the Second Lateran Council in 1139 forbade the use of crossbows (only against Christians, however). The Strasbourg Agreement of 1675 between France and the Holy Roman Empire banned the use of poison bullets.

Not everyone feels that a treaty banning or restricting the use of military AI is needed.

“Additional export control would be useful,” said Lauren Sanders, a senior research fellow at the University of Queensland whose work focuses on the future of war. But Sanders pointed to the speed of advancement in the field and the difficulty in updating international treaties, suggesting other methods of restriction might be more effective. 

An AI system developed in Japan to sort out pastries was adapted within a few years to identify cancer cells.

“It is really challenging to regulate these dual-use technologies,” Sanders said, referring to inventions that have both military and civilian use. 

Many of the speakers at the conference, meanwhile, stressed the need for human responsibility for AI technologies. Jörg Vollmer, a retired German general who spoke at the opening of the second day, emphasized, “The human always has to make the decision.” But, he noted, “there are exceptions.”

The REAIM Code of Conduct has a similar caveat, calling for “human oversight of the use of AI systems, bearing in mind human limitations due to constraints in time and capacities.” 

Human rights groups and lawyers question where the accountability will lie if a computer program commits a war crime.

“How do you apportion legal accountability to machines? Is that even possible?” asked Dorsey, the Utrecht University professor.

The concern isn’t speculative. Ukraine is currently using a mobile application that uses satellite imagery and other information to help forces on the front line fire at specific Russian targets. 

The U.S. on Thursday released its own framework that it wants nations to agree to, calling for accountable military AI, including ensuring that a human remains in command of these technologies.

“This is the spirit of the summit coming to life,” said Bonnie Jenkins, the U.S. undersecretary of state for arms control and international security, during the closing of the conference. 

Follow @mollyquell
