An MIT thought experiment offered humans a chance to weigh in on how a driverless car should behave when a crash is imminent and a moral choice has to be made. The Moral Machine, designed by computer scientist Edmond Awad, asks participants to choose between killing, for instance, an old fat guy or a young athlete, or three cats or two humans. Whose life is more important? The person who takes care of his health, or the guy who has let himself go to pot? Do you save the three (cats) over the two (humans)?
Awad was surprised at how quickly the Moral Machine caught on via social media. And while the experiment actually tells us nothing about how driverless cars might make the kinds of moral choices humans find unbearable, it does, and should, give us pause. How can a car be programmed to make moral decisions that are hard even for humans? How do we decide the relative value of lives?
If humans have trouble playing God, what are the implications of leaving such “death and death” decisions to a driverless car? And what are we getting into with these cars?
The Moral Machine leaves one contemplating some truly awful future headlines:
“Driverless Car Kills Pregnant Woman to Spare a Litter of Kitties”
“Driverless Car Crashes into Train, Kills Hundreds of Passengers, Sparing Family of Five”
“Driverless Car Crashes into Ten Teens Crossing at Red Light, Sparing Senior”