The automotive industry is working furiously to bring autonomous vehicles (AVs) into widespread use. Research has progressed to the point where the largest remaining obstacles arise from the interplay of ethics, regulation, and technical innovation.
A research team from McMaster University in Canada notes that the challenges of bringing driverless autos to market “eclipse the capacity of regulators to respond.” (1) At current levels of autonomy, on-road accountability still lies with the human in the car. As autonomy shifts to the vehicle, that accountability shifts to manufacturers, vehicle owners, and policy makers.
This shift in responsibility increases the need to create and implement a practical framework of ethics and values that will guide all stakeholders. Manufacturers and AV owners need this framework so they can match capabilities to expected performance. Regulators need it so they can make informed decisions about how AVs will operate in the real world. For example, in a dilemma-inducing situation, where all the alternatives are disagreeable, what course of action should the vehicle be programmed to take?
To study this “disagreeable dilemma” scenario, the McMaster University researchers identified four relevant ethical foundations drawn from previous ethical studies:
- Ethical Egoism: Prioritizing by self-interest
- Utilitarianism: Prioritizing by the best possible (or least bad) outcome
- Virtue Ethics: Prioritizing by deciding “what would a virtuous person do in this situation?”
- Moral Machine: Prioritizing by using data obtained in a survey of more than 30 million people on how to respond to a driving dilemma.
After running the equivalent of 20 million driving dilemma scenarios in simulation, the researchers found that Utilitarianism and the Moral Machine were the ethical guidelines that resulted in the fewest fatalities, while Ethical Egoism resulted in the most. They note that “the automotive industry has tended to design vehicles with an Ethical Egoist perspective.” These findings complicate the regulatory questions surrounding AVs.
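The kind of comparison the researchers describe can be illustrated with a toy Monte Carlo sketch. This is not the McMaster simulation; the scenario model, counts, and policy rules below are invented for illustration. Each dilemma forces a binary choice between protecting the vehicle's occupants (pedestrians are struck) or sacrificing them, and each ethical policy resolves the choice differently:

```python
import random

def run_trials(policy, n=10_000, seed=42):
    """Tally total fatalities over n simplified dilemma scenarios.

    Each dilemma: the vehicle must either protect its occupants
    (the pedestrians die) or sacrifice them (the occupants die).
    `policy` maps (occupants, pedestrians) to the fatalities incurred
    by the choice that policy makes.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        occupants = rng.randint(1, 4)    # people in the vehicle
        pedestrians = rng.randint(1, 6)  # people in the vehicle's path
        total += policy(occupants, pedestrians)
    return total

# Ethical Egoism: always protect the occupants, whatever the cost outside.
egoist = lambda occ, ped: ped
# Utilitarianism: choose whichever outcome kills fewer people.
utilitarian = lambda occ, ped: min(occ, ped)

for name, policy in [("Ethical Egoism", egoist), ("Utilitarianism", utilitarian)]:
    print(f"{name}: {run_trials(policy)} fatalities")
```

Because the utilitarian policy can never do worse than `min(occupants, pedestrians)` in any single scenario, it is guaranteed to total no more fatalities than the egoist policy — a toy analogue of the ordering the McMaster team reports.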
Regulators abhor uncertainty, yet the lifecycle of the autonomous vehicle will be complex. “If there is uncertainty around the vehicle, the related data must be analyzed,” says Leslie Nooteboom, chief product officer at Humanising Autonomy, an AI R&D firm specializing in the interaction of people and machines. “In our models, we don’t simply provide a prediction of what people think is happening around the car. We also provide an accurate prediction of what will happen.”
The AI data model must be able to cope with changes in human behavior. Nooteboom tells the story of a client getting unusual readings that the AI could not interpret, so it rendered them as “flying people without any body movement.” The development team and the vendor determined the data came from people riding electric scooters in traffic, a class of behavior not previously accounted for. “So we created a new class of object, a mobility class that behaves differently from bicycles or walking.”
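The design choice behind that fix — giving scooter riders their own category rather than forcing them into an existing one — can be sketched as follows. The class names, speed thresholds, and rule-based classifier are all hypothetical, chosen only to show why a rider who moves at bicycle speed with no walking gait breaks a pedestrian model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MobilityClass:
    """A category of road user with its own motion assumptions."""
    name: str
    typical_speed_ms: float  # typical travel speed, meters per second
    gait_motion: bool        # does the pose model expect limb movement?

# Existing classes assume either a walking gait or bicycle kinematics.
PEDESTRIAN = MobilityClass("pedestrian", 1.4, gait_motion=True)
CYCLIST = MobilityClass("cyclist", 5.0, gait_motion=False)
# New class: scooter riders travel fast while standing still, which is
# why a pedestrian model reads them as "flying people without any body
# movement" -- they need their own motion assumptions.
E_SCOOTER = MobilityClass("e_scooter", 6.0, gait_motion=False)

def classify(speed_ms: float, has_gait: bool) -> MobilityClass:
    """Crude illustrative rules over the three classes."""
    if has_gait:
        return PEDESTRIAN
    # No gait: distinguish by speed (threshold is arbitrary here).
    return CYCLIST if speed_ms < 5.5 else E_SCOOTER
```

Without the third class, the rigid, fast-moving rider would be forced into whichever existing category fit least badly; adding a distinct class lets the model attach behavior predictions that match how scooters actually move.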
Compounding the complexity is the often challenging relationship between high tech innovation and government. For AVs the stakes are especially high. “The effects of vehicles’ decisions and the actions (or inactions) of manufacturers, regulators, and policy makers will affect reputations, market share, profits, and even future electability,” notes Thomas Winkle of Technical University Munich, another AV researcher. (2)
(1) Mordue, Yeung, and Wu. The Looming Challenges of Regulating High Level Autonomous Vehicles. McMaster University. https://doi.org/10.1016/j.tra.2019.11.007
(2) Winkle, T., 2016. Development and approval of automated vehicles: considerations of technical, legal, and economic risks. In: Maurer, M., Gerdes, J., Lenz, B., Winner, H. (Eds.), Autonomous Driving. Springer, Berlin, Heidelberg, pp. 589–618. https://link.springer.com/chapter/10.1007/978-3-662-48847-8_28