
Military Unveils Guidelines on Artificial Intelligence Warfare

WASHINGTON (CN) – To keep pace with technological advancements on the battlefield, the Pentagon has announced new ethical principles governing military use and design of artificial intelligence.

"AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior,” Defense Secretary Mark Esper said in a statement Monday. “The adoption of AI ethical principles will enhance the department's commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the U.S. military's strong history of applying rigorous testing and fielding standards for technology innovations."

The recommendations announced in a memo this week came after 15 months of deliberation with industry leaders and experts across several fields, according to the Pentagon.

The new principles will guide both combat and non-combat AI applications, including surveillance and preventing mechanical problems, officials said.

The ideas were presented to the public in October at a Georgetown University forum by members of the Defense Innovation Board, which is led by former Google CEO Eric Schmidt.

Google dropped out of the military-led Project Maven in 2018 following internal protests within the company. The project, one of many recent military efforts involving AI, uses algorithms to interpret aerial images of war zones and conflict areas.

Among the five general principles listed in the new Defense Department ethics guidelines is one that requires AI to be “governable,” meaning the automated technology can be stopped if unintended behavior occurs.

“The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior,” the department said Monday.

The memo also says the Pentagon plans to ensure future AI is carefully developed to be reliable and traceable, with systems that are easily auditable so problems can be identified and fixed.

Equitability and responsibility round out the new list of ethical principles, as the government aims to prevent data bias in developing AI.

The principles align with the Trump administration’s efforts to “advance trustworthy AI technologies,” the Pentagon said, and are based on the military’s current ethical framework derived from the U.S. Constitution.

Schmidt, the former Google CEO, said in a statement that the Defense Department is “committed to ethics, and will play a leadership role in ensuring democracies adopt emerging technology responsibly.”

In related news, the Pentagon recently awarded Microsoft a $10 billion cloud computing contract for a project known as the Joint Enterprise Defense Infrastructure, or JEDI. The system will store large amounts of classified data to allow the government to use AI for military planning.

However, the 10-year project has yet to launch due to a lawsuit brought by Amazon, which claims President Donald Trump’s personal attitude toward The Washington Post, owned by Amazon CEO Jeff Bezos, played a role in the company losing the contract bid.

Follow @ErikaKate5
