Given the choice, would you buy an autonomous vehicle programmed to minimize overall harm in an emergency, or one programmed to do everything possible to save your life? This is the question many are asking as we grapple with driverless cars entering the roadway in the near future. That transition likely will not happen all at once, leaving a mix of autonomous and human-driven vehicles navigating the roads simultaneously.
Driverless vehicles have been in the news for the past few years, gaining additional coverage and traction as stories hit the mainstream media. These vehicles rely on multiple 360-degree cameras mounted on top of the vehicle that constantly scan the surroundings. The images are routed to an onboard computer that interprets them with artificial intelligence, so the vehicle is aware of pedestrians, other cars, animals, and inanimate objects. Programmers design the algorithms so the system can learn from its interactions and experiences and keep improving. But as artificial intelligence continues to develop and learn at a rapid pace, who decides the moral code programmed into an autonomous vehicle? New studies suggest we can program vehicles to act the way humans do, or the way we expect other humans to act. However, if we program vehicles to make decisions the way a human would, do we lose the main benefit of removing human error from the driving experience?
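To make that camera-to-decision pipeline concrete, here is a minimal Python sketch of the flow described above: a perception step interprets a camera sweep, and a planning step chooses an action based on what was detected. Every name in it (`Detection`, `detect_objects`, `plan_maneuver`, the canned detection values) is a hypothetical simplification for illustration, not the software of any real vehicle.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    category: str      # e.g. "pedestrian", "vehicle", "animal", "obstacle"
    distance_m: float  # estimated distance from the car, in meters
    confidence: float  # model's confidence in the classification, 0.0-1.0

def detect_objects(frame) -> List[Detection]:
    # Stand-in for the trained vision model that interprets one 360-degree camera sweep.
    # A real system would run a neural network here; this canned result is illustrative only.
    return [Detection("pedestrian", 8.5, 0.93), Detection("vehicle", 42.0, 0.88)]

def plan_maneuver(detections: List[Detection]) -> str:
    # Toy planning rule: brake hard for any close, high-confidence detection; otherwise continue.
    for d in detections:
        if d.confidence > 0.8 and d.distance_m < 10.0:
            return "emergency_brake"
    return "maintain_course"

if __name__ == "__main__":
    # One pass of the loop: interpret the latest camera sweep, then choose an action.
    action = plan_maneuver(detect_objects(frame="latest_camera_sweep"))
    print(action)  # -> "emergency_brake", because a pedestrian is detected 8.5 m away
```

The ethical question lives almost entirely in the `plan_maneuver` step: someone has to decide which detections, distances, and outcomes the rule is allowed to trade against one another.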
The advent of driverless cars brings a certain optimism that the technology will reduce human error, increase productivity, and ease traffic congestion. While these are poised to be real benefits to society, the broader ethical dilemma must be considered. Some will be reluctant to change, but today we already trust the calculations of a spreadsheet or calculator over those of an accountant. We have adapted to technological advances before; the difference is that programming a spreadsheet carries no risk of death for ourselves or others.
As a society, we will need to determine whom we trust to program a moral code, and ultimately who is legally responsible for the decisions that vehicle makes. Maybe we prefer to let companies compete and let the market decide whether our vehicles optimize for saving as many lives as possible or for saving ourselves and our families first. Or maybe we entrust our elected officials to create legislation that protects as many consumers as possible. Maybe it comes down to how much we trust each other. This raises further questions: do we need to choose a single ethical system for which to optimize? Would we prefer that the ethical code in our cars be chosen individually? If so, are we legally liable for the death of another person in the event of an accident?
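If the ethical target really were something a manufacturer, regulator, or individual owner could select, it might amount to little more than a configuration value consulted by the planner. The sketch below is purely illustrative: the policy names, the `choose_outcome` function, and the harm scores are hypothetical simplifications, not any manufacturer's implementation.

```python
from enum import Enum

class EthicalPolicy(Enum):
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"  # utilitarian: fewest people harmed overall
    PROTECT_OCCUPANTS = "protect_occupants"      # prioritize the people inside the car

def choose_outcome(candidate_outcomes, policy):
    # Pick among unavoidable-crash outcomes according to the configured policy.
    # Each outcome is a dict like {"occupant_harm": 1, "total_harm": 3}; the scoring
    # is a deliberate oversimplification to make the policy choice visible.
    if policy is EthicalPolicy.MINIMIZE_TOTAL_HARM:
        return min(candidate_outcomes, key=lambda o: o["total_harm"])
    return min(candidate_outcomes, key=lambda o: o["occupant_harm"])

outcomes = [
    {"label": "swerve", "occupant_harm": 2, "total_harm": 2},
    {"label": "brake",  "occupant_harm": 0, "total_harm": 3},
]
print(choose_outcome(outcomes, EthicalPolicy.MINIMIZE_TOTAL_HARM)["label"])  # -> "swerve"
print(choose_outcome(outcomes, EthicalPolicy.PROTECT_OCCUPANTS)["label"])    # -> "brake"
```

Framed this way, the legal question sharpens: whoever sets that one configuration value is, in effect, deciding whose safety the vehicle weighs first.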
If that code is written during the vehicle's development, and the car later makes a decision that results in a person's death, is that death premeditated murder? Who is at fault?