Self-Driving Cars: A Moral Dilemma

*Note: author affiliations are from 2022.

*Figure 1 is viewable only in the PDF version.


 

Andrew Ahn¹, Eric Guan², Joshua Kaplan³, Lauren Yu⁴, Brooke Ellison⁵

¹Fayetteville-Manlius High School, Manlius NY 13104; ²North Carolina School of Science and Mathematics, Durham NC 27705; ³The Frisch School, Paramus NJ 07652; ⁴Brea Olinda High School, Brea CA 92823; ⁵Center for Compassionate Care, Medical Humanities, and Bioethics, Health Science Center, Stony Brook University, Stony Brook NY 11794

*Editors: Lillian Sun, Junsang Yoon

 

As fully automated cars draw closer to reality, many ethical issues demand discussion. Vehicle automation is classified into six levels, from level 0, entirely human-operated, to level 5, fully autonomous. Consumers can already purchase moderately autonomous level 2 cars; in the second quarter of 2019, approximately 10% of cars sold in the U.S. had level 2 capability.[1] These cars can steer, accelerate, and brake independently, so while the driver remains responsible for safety, the car makes certain decisions on its own. As self-driving cars become increasingly autonomous, the decisions they must make grow more complex and more difficult to program.
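For illustration, the six levels can be summarized in a short Python sketch. The capability descriptions below are paraphrased from the SAE J3016 classification, the likely source of the six levels mentioned above; the exact wording is this example's, not the standard's.

# A minimal sketch of the six vehicle automation levels described above.
SAE_LEVELS = {
    0: "No automation: the human driver performs all driving tasks",
    1: "Driver assistance: the system controls steering or speed, not both",
    2: "Partial automation: the system steers, accelerates, and brakes, but the driver must supervise at all times",
    3: "Conditional automation: the system drives itself under limited conditions; the driver must be ready to take over",
    4: "High automation: no driver attention is needed within a defined operating domain",
    5: "Full automation: the vehicle drives itself everywhere, in all conditions",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")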

 

The ethics of self-driving car accident scenarios can be examined through a variant of a classic ethical thought experiment known as the trolley problem. Consider an autonomous car with failed brakes approaching a fork in the road. On one side, a group of pedestrians is crossing; on the other, a child is playing in the street. The car has three options: hit the group of pedestrians, hit the child, or hit a barrier, putting the car's passenger(s) at risk. The decision becomes even less obvious when there is only one person on each possible road.

 

One approach to this dilemma would be to program cars to protect passengers at all costs. Alternatively, the vehicle may follow utilitarian principles, minimizing the number of casualties that would result from a collision. Other considerations, including laws, may also be factored into the vehicle’s response.
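As a concrete illustration, these two policies can be written as interchangeable decision rules. The sketch below is hypothetical: the Outcome class, the casualty estimates, and the tie-breaking behavior are assumptions made for this example, not any manufacturer's actual algorithm.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_casualties: int   # hypothetical estimate from the car's perception system
    passengers_at_risk: bool

def passenger_first(options):
    """Protect passengers at all costs; among passenger-safe options, minimize casualties."""
    safe = [o for o in options if not o.passengers_at_risk] or options
    return min(safe, key=lambda o: o.expected_casualties)

def utilitarian(options):
    """Minimize total expected casualties, passengers included."""
    return min(options, key=lambda o: o.expected_casualties)

# The fork scenario from the previous paragraph, with assumed numbers:
options = [
    Outcome("hit the group of pedestrians", 4, passengers_at_risk=False),
    Outcome("hit the child", 1, passengers_at_risk=False),
    Outcome("hit the barrier", 1, passengers_at_risk=True),
]

print(passenger_first(options).description)  # "hit the child"
print(utilitarian(options).description)      # "hit the child" (a tie with the barrier, broken by list order)

Under these assumed numbers the two policies happen to agree; with different estimates they diverge, which is exactly the tension the scenario is meant to expose.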

 

Several deadly crashes involving automated cars have occurred in the past few years. After such crashes, public trust is damaged, and people tend to perceive autonomous driving as riskier than human driving (Fig 1).[2] As of early 2021, there had been 13 serious accidents, including 6 deaths, involving self-driving cars.[3] Most of these collisions occurred because drivers assumed their vehicles were fully self-driving when they offered only partial automation. However, as more capable AI software allows for more independent machines, a pressing question arises: who is legally liable for crashes? Unfortunately, there is no clear-cut answer. Many experts believe that once vehicles reach level 5, the companies behind the software and hardware should be at fault. However, this resolution could impose economic burdens on these businesses, resulting in a shortage of providers. A viable solution is to limit manufacturer liability by law. The National Childhood Vaccine Injury Act of 1986 allocates compensation to individuals found to be injured by specific vaccines.[4] A similar program could be implemented for self-driving cars, providing a monetary cushion for manufacturers.

While the manufacturers of automated vehicles will be responsible for creating such decision-making algorithms, it remains an open question who will be consulted, or ultimately tasked, with determining the cars’ priorities. A study conducted by the Open Roboethics Initiative found that 44% of participants would like self-driving cars to let passengers make these decisions, potentially through settings adjusted to reflect the driver’s own principles.[5] Such an option, however, may leave manufacturers liable for giving passengers the ability to make potentially discriminatory decisions, and passengers may be saddled with a heavier moral burden as they are forced to determine their own ethical priorities. Alternatively, lawmakers and public health officials could be consulted for a more thorough consideration of the dilemma. With the arrival of full vehicular automation, such concerns will eventually require discussion and tangible confrontation on the world stage.
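To make the idea of a passenger-adjustable ethics setting concrete, here is a hypothetical sketch. The policy names, the registry, and the utilitarian default are illustrative assumptions; no production vehicle exposes a configuration like this.

from typing import Callable

# A policy scores one candidate maneuver; the car would pick the lowest score.
# Inputs (hypothetical): estimated casualties and whether passengers are at risk.
Policy = Callable[[int, bool], float]

POLICIES: dict[str, Policy] = {
    # Heavily penalize any maneuver that endangers the passengers.
    "passenger_first": lambda casualties, risks_passengers: casualties + (1000 if risks_passengers else 0),
    # Weigh every casualty equally, passengers included.
    "utilitarian": lambda casualties, risks_passengers: casualties,
}

def select_policy(driver_setting: str) -> Policy:
    """Dispatch to the passenger's chosen policy, defaulting to utilitarian."""
    return POLICIES.get(driver_setting, POLICIES["utilitarian"])

score = select_policy("passenger_first")
print(score(1, True))   # 1001: hitting the barrier looks far worse under this setting
print(score(4, False))  # 4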

 

References

[1] Canalys. “Canalys: 10% of New Cars in the US Sold with Level 2 Autonomy Driving Features.” Canalys, 9 Sept. 2019, canalys.com/newsroom/canalys-level-2-autonomy-vehicles-US-Q2-2019.

[2] Bernard, Zoë. “This Graph Shows How the Public Feels About Self-Driving Cars Now.” ScienceAlert, 7 Apr. 2018, www.sciencealert.com/this-graph-shows-how-the-public-feels-about-self-driving-cars-after-a-pedestrian-was-killed.

[3] Martin, Maria. “25+ Intriguing Self-Driving Car Statistics You Should Know.” Carsurance, 24 Feb. 2021, carsurance.net/blog/self-driving-car-statistics/.

[4] HRSA. “About the National Vaccine Injury Compensation Program.” Official Web Site of the U.S. Health Resources & Services Administration, 3 Aug. 2021, www.hrsa.gov/vaccine-compensation/about/index.html.

[5] Open Roboethics Initiative. “If Death by Autonomous Car Is Unavoidable, Who Should Die? Reader Poll Results.” Robohub, 23 June 2014, robohub.org/if-a-death-by-an-autonomous-car-is-unavoidable-who-should-die-results-from-our-reader-poll.
