
Driverless Cars Could Learn to Make Moral Choices

(CN) - Is a self-driving vehicle capable of making moral decisions? If it is, which moral values should it use to make such choices?

These questions are among the issues society must consider as artificial intelligence, or AI, systems become more common in various industries, according to Gordon Pipa, co-author of a new study that provides a statistical model of human morality.

The research, published Wednesday in the journal Frontiers in Behavioral Neuroscience, is a breakthrough for efforts to equip AI systems with morality - which experts had viewed as context-based and therefore impossible to describe mathematically.

“But we found quite the opposite,” said lead author Leon Sütfeld, a researcher at the University of Osnabrück in Germany. “Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”
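
To make concrete what such a value-of-life model implies, here is a minimal Python sketch; it is an illustration, not the authors' code, and the obstacle categories and numeric values are assumptions chosen for the example. Each category carries a scalar value, and in an unavoidable dilemma the vehicle steers into the lane whose occupants sum to the least value.

```python
# Hypothetical sketch of a value-of-life decision rule (not the study's code).
# Each obstacle category is assigned a scalar value; in an unavoidable dilemma
# the vehicle picks the lane whose occupants have the lowest summed value.

VALUE_OF_LIFE = {  # illustrative numbers only
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.4,
    "trash_can": 0.01,
}

def choose_lane(lanes):
    """Return the index of the lane with the smallest total value of life.

    `lanes` is a list of lists, each naming the obstacles in one lane,
    e.g. [["adult"], ["dog", "trash_can"]].
    """
    costs = [sum(VALUE_OF_LIFE[obstacle] for obstacle in lane) for lane in lanes]
    return costs.index(min(costs))

print(choose_lane([["adult"], ["dog", "trash_can"]]))  # -> 1 (spares the adult)
```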

To examine human behavior in road-traffic scenarios, the team asked study participants to drive a car through a simulated, virtual-reality suburban neighborhood, where they encountered unexpected, unavoidable dilemmas involving animals, inanimate objects and humans - forcing them to prioritize which to save.

The authors then used the results to build statistical models that establish decision rules, each with an associated degree of explanatory power, to account for the observed behavior.
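
A plausible reading of this step, sketched under assumptions below, is a logistic choice model: treat each dilemma as a binary choice between two lanes, model the probability of picking a lane as a logistic function of the difference in summed value-of-life scores, and estimate the per-category scores from the recorded choices. The sketch simulates choice data rather than using the study's, and the paper's actual estimation procedure may differ.

```python
import numpy as np

# Minimal sketch (an assumption, not the paper's code): recover per-category
# value-of-life weights from binary lane choices with a logistic choice model.
# P(choose lane B) = sigmoid(cost(A) - cost(B)), cost = summed weights.

rng = np.random.default_rng(0)
categories = ["adult", "child", "dog", "trash_can"]
true_w = np.array([1.0, 1.2, 0.4, 0.01])   # hidden "ground truth" weights

# Simulate 2,000 dilemmas: each lane's contents as counts over categories.
X_a = rng.integers(0, 2, size=(2000, 4))
X_b = rng.integers(0, 2, size=(2000, 4))
D = (X_a - X_b).astype(float)              # cost-difference features
p_b = 1.0 / (1.0 + np.exp(-D @ true_w))    # cheaper lane B is chosen more often
y = rng.random(2000) < p_b                 # 1 = participant chose lane B

# Fit the weights by gradient ascent on the logistic log-likelihood.
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-D @ w))
    w += 0.5 * D.T @ (y - p) / len(y)      # average-gradient ascent step

print(dict(zip(categories, np.round(w, 2))))  # approximately recovers true_w
```

How well such a fitted model predicts held-out choices is one natural measure of the explanatory power the authors describe.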

The findings come amid growing debate over the behavior of self-driving vehicles and other machines in unavoidable accidents.

Stakeholders and experts have operated under the assumption that human moral behavior could not be modeled, and have focused on outlining critical variables for engineering AI systems. For example, a new initiative from the German Federal Ministry of Transport and Digital Infrastructure, or BMVI, has defined 20 ethical principles for self-driving cars.

Now that applying human morality to machines appears possible, the team argues the debate should shift to how such morals are programmed into, and employed by, AI systems.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” said senior author Peter König, a professor at the University of Osnabrück. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines…act just like humans.”

The team also warns that society is at the beginning of a technological revolution that requires clear rules. Without them, machines could begin making decisions without us.

In conclusion, Pipa wonders: “Should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”
