
Army AI Study Focuses on Moral Dilemma, Decision Making by Autonomous Vehicles; Celso de Melo Quoted

1 min read

U.S. Army researchers worked with Northeastern University and the University of Southern California to further study ethical artificial intelligence by examining moral dilemmas involving automated vehicles and other autonomous systems.

In the study, they found that the perceived risk of injury to drivers and pedestrians moderated the probability that participants would make the utilitarian choice, i.e., the option that minimizes the overall risk of injury to drivers and pedestrians. The research also demonstrated that participants’ moral decisions were influenced by the decisions of other decision makers.
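To make the "utilitarian choice" concrete, here is a minimal sketch, not taken from the study; the maneuver names and risk numbers are hypothetical. It picks whichever maneuver minimizes the combined expected risk of injury to the driver and pedestrians:

```python
# Hypothetical illustration (not from the Army study): a "utilitarian" controller
# selects the maneuver that minimizes total expected injury risk across everyone
# involved, rather than protecting any one party by default.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    driver_injury_risk: float      # assumed probability of injuring the driver
    pedestrian_injury_risk: float  # assumed probability of injuring a pedestrian

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Return the maneuver with the lowest combined expected injury risk."""
    return min(options, key=lambda m: m.driver_injury_risk + m.pedestrian_injury_risk)

if __name__ == "__main__":
    options = [
        Maneuver("swerve into barrier", driver_injury_risk=0.30, pedestrian_injury_risk=0.05),
        Maneuver("brake in lane", driver_injury_risk=0.10, pedestrian_injury_risk=0.40),
    ]
    print(utilitarian_choice(options).name)  # -> "swerve into barrier"
```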

Celso de Melo, a researcher at the DEVCOM Army Research Laboratory (ARL), said the study is relevant to the service’s modernization efforts.

“As these vehicles become increasingly autonomous and operate in complex and dynamic environments, they are bound to face situations where injury to humans is unavoidable,” de Melo said. “This research informs how to navigate these moral dilemmas and make decisions that will be perceived as optimal given the circumstances; for example, minimizing overall risk to human life.”