Guidelines for self-driving cars: all lives matter
When it comes to regulating self-driving cars, Germany is taking the lead. The German federal government will adopt new guidelines for self-driving cars that prioritize the value and equality of human life over damage to property or animals.
These guidelines were presented on Aug. 23, 2017 by an ethics committee on automated driving. They stress that self-driving cars must do the least amount of harm if put into a situation where hitting a human is unavoidable, and cannot discriminate based on age, gender, race, disability, or any other observable factors. In other words, all self-driving cars must be programmed to understand that human life is equal. This position takes a stand on an ethical dilemma that has no unique solution: from an ethical point of view, one human life may not be traded against another. A long analysis of this problem can be found in a paper by Alexander Hevelke and Julian Nida-Rümelin.
(Alexander Hevelke and Julian Nida-Rümelin: Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Sci Eng Ethics. 2015; 21(3): 619–630.)
The Moral Machine
This question of whom a vehicle should kill when placed in a situation where every outcome ends in death is often called the “trolley problem.” It’s an ethical debate that has gone on for decades: ethicists riddle each other with questions of whether it’s excusable to kill two elderly people to save one child, or to save a pregnant woman while killing a man and a child, and so on. MIT even made a game, the Moral Machine, to test your own ethical predisposition in such situations.
Germany’s rules undercut the myriad arguments possible when weighing the potential of ending one life instead of another based on circumstances of birth. A self-driving car in Germany would have to choose to hit whichever person it determines it would hurt less, regardless of age, race, or gender. How a car would determine the damage it would cause, however, remains uncertain.
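To make the rule concrete, here is a minimal sketch of a harm-minimizing chooser in Python. It is purely illustrative: the Outcome type, the harm scores, and the function names are assumptions, not anything prescribed by the committee, and estimating the harm values is exactly the open question mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible trajectory in an unavoidable-collision scenario.

    `expected_harm` is a hypothetical severity estimate; how a real
    car would compute it remains uncertain.
    """
    label: str
    expected_harm: float
    # Observable attributes of the people involved (age, gender, race,
    # disability, ...) are deliberately NOT fields here: under the
    # German guidelines they must not influence the decision.

def choose_trajectory(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the least expected harm.

    Ties are broken arbitrarily by list order; the guidelines give no
    rule for genuinely equal outcomes.
    """
    return min(outcomes, key=lambda o: o.expected_harm)

# Illustrative use: the controller sees only harm estimates,
# never who the affected people are.
options = [
    Outcome("swerve left", expected_harm=0.8),
    Outcome("brake straight", expected_harm=0.5),
]
print(choose_trajectory(options).label)  # -> "brake straight"
```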
Regardless, the country expects a net benefit from the technology; the ethics committee stated that a robot vehicle system would decrease human-caused accidents country-wide, and is thus ethically necessary.
Implementing these rules before fully autonomous cars are on the road puts Germany ahead of the rest of the world, especially the US. While the US Congress is hopeful that bipartisan guidelines can be achieved in the near future (though without a defined timeline), individual states like California and Nevada have begun drafting their own sets of rules, setting the stage for a confusing patchwork of regulation.
The Ethics Knob
The positions taken above contrast with an article that recently appeared in New Scientist: would you ride in a car that was prepared to kill you? An “ethical knob” could let the owners of self-driving cars choose their car’s ethical setting: you could set the car to sacrifice you for the survival of others, or even to always sacrifice others to save you. The dilemma of how self-driving cars should tackle moral decisions is one of the major problems facing manufacturers. When humans drive cars, instinct governs our reaction to danger, and when fatal crashes occur, it is usually clear who is responsible.
But if cars are to drive themselves, they cannot rely on instinct; they must rely on code. And when the worst happens, will it be the software engineers, the manufacturers, or the car owner who is ultimately responsible?
Would a driver even be in a position to make such a choice in an emergency? That is highly debatable.
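To make the “ethical knob” idea concrete, here is a minimal sketch, assuming a three-position setting that biases a harm cost function toward or away from the occupant. The setting names and weights are invented for illustration; the New Scientist article does not specify an implementation.

```python
from enum import Enum

class KnobSetting(Enum):
    ALTRUISTIC = "altruistic"  # prefer sacrificing the occupant
    IMPARTIAL = "impartial"    # weigh all lives equally
    EGOISTIC = "egoistic"      # prefer protecting the occupant

# Hypothetical weight applied to harm suffered by the occupant,
# relative to harm suffered by others; values are illustrative only.
OCCUPANT_WEIGHT = {
    KnobSetting.ALTRUISTIC: 0.5,  # occupant harm counts less
    KnobSetting.IMPARTIAL: 1.0,   # all harm counts equally
    KnobSetting.EGOISTIC: 2.0,    # occupant harm counts more
}

def weighted_cost(occupant_harm: float, others_harm: float,
                  setting: KnobSetting) -> float:
    """Total cost of an outcome under the chosen knob setting;
    the car would pick the trajectory with the lowest cost."""
    return OCCUPANT_WEIGHT[setting] * occupant_harm + others_harm

# With the knob on EGOISTIC, harming the occupant costs twice as
# much as harming someone else, so the car steers away from the owner.
print(weighted_cost(occupant_harm=1.0, others_harm=0.0,
                    setting=KnobSetting.EGOISTIC))  # 2.0
print(weighted_cost(occupant_harm=0.0, others_harm=1.0,
                    setting=KnobSetting.EGOISTIC))  # 1.0
```

Note that under the German guidelines any setting other than the impartial one would be impermissible, since it trades lives based on the owner’s preference.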