Autonomous vehicles are likely the most important development for highway transportation in the near future. Engineers, manufacturers, technology groups, law firms, and government agencies are all anticipating and researching the emergence of vehicle automation. However, the full implications of vehicle automation have not been thoroughly explored, particularly from an ethical standpoint. Of special concern to engineers is the fact that autonomous vehicles cannot be assumed never to crash. Although most agree that full implementation of autonomous vehicles will result in vast safety improvements, the period of initial deployment and integration will likely produce unique risks that have not been fully explored. Therefore, engineers must prepare for this paradigm shift by discussing the ethical implications of vehicle automation, particularly with regard to safety.
An important question that needs to be answered is how autonomous vehicles will choose to crash when a crash is unavoidable. This is an ethical question, as any crash outcome will potentially affect the health and welfare of the vehicle occupants as well as the occupants of any other vehicles involved. If a major thrust of engineering ethics is to protect the health and welfare of the public, then autonomous vehicles must be programmed to minimize the impacts to public wellbeing.
However, this leads to a second question regarding the metrics of crash decisions. On what basis will autonomous vehicles measure the safety impacts of crashes? Are fatal crashes the most damaging, or should paralysis and other severe injuries also be considered? Should an autonomous vehicle take all potential injuries into account, weighted by their severity? Ultimately, a guiding ethics system that enables the autonomous vehicle to make crash decisions must have some metric or basis for determining how to crash.
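To make the question of a crash-decision metric concrete, the following sketch shows one way such a metric could be structured: a severity-weighted expected-harm function combined with a rule that selects the unavoidable-crash option minimizing that harm. Everything here is an illustrative assumption, not a proposed policy: the severity categories loosely follow the KABCO injury scale used by transportation engineers, but the numeric weights, the outcome names, and the purely utilitarian minimization rule are all hypothetical choices made for discussion.

```python
# Illustrative sketch only: a severity-weighted crash-decision metric.
# The weights below are arbitrary placeholders, not validated values.

# Hypothetical harm weights, with categories loosely modeled on the
# KABCO injury scale (K fatal, A incapacitating, B non-incapacitating,
# C possible injury, O property damage only).
HARM_WEIGHTS = {
    "fatal": 1000.0,
    "incapacitating": 100.0,
    "non_incapacitating": 10.0,
    "possible_injury": 1.0,
    "property_only": 0.1,
}

def expected_harm(outcome):
    """Sum probability-weighted harm over every person affected.

    `outcome` is a list of (severity, probability) pairs, one pair
    per person involved in that crash option.
    """
    return sum(HARM_WEIGHTS[severity] * p for severity, p in outcome)

def choose_crash(options):
    """Select the option with minimum expected harm (a utilitarian rule)."""
    return min(options, key=lambda opt: expected_harm(opt[1]))

# Two hypothetical unavoidable-crash options for a single scenario:
options = [
    ("swerve_left", [("incapacitating", 0.6), ("possible_injury", 0.9)]),
    ("brake_straight", [("fatal", 0.1), ("property_only", 1.0)]),
]
name, outcome = choose_crash(options)
```

Even this minimal sketch exposes the ethical difficulty the text raises: the chosen option depends entirely on the weights, and assigning a finite weight to a fatality relative to a severe injury is itself a contested moral judgment rather than an engineering fact.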
A third question then arises regarding fault. If an autonomous vehicle does crash, who is to blame? Are the auto manufacturers at fault? Should vehicle owners be required to sign an agreement that manufacturers and dealers will not be held legally responsible? More salient to engineers, should the computer scientists and electrical engineers who program the guidance systems for these vehicles be held responsible if a crash occurs? Perhaps even more troubling is that certain ethical systems may cause an autonomous vehicle to choose one fatal crash over another. How does a vehicle decide between lives, and should the owner of the vehicle be responsible if someone is killed in a crash with an autonomous vehicle?
Clearly, the deployment of autonomous vehicles requires significant planning. Carefully detailing guidance programs and implementing sound ethical judgments may allow autonomous vehicles to minimize the societal impacts of crashes. However, the terms and measurements of those impacts must be discussed and weighed first. Autonomous vehicles may very well revolutionize traffic safety, but engineers must have an open discourse with the government and with manufacturers regarding the implications of vehicle automation, particularly the transportation engineers who understand crashes and the computer scientists who will attempt to program ethical guidelines.