Can the question of morality and self-driving cars be solved through an algorithm?
Utilitarianism and deontology.
Two words whose meanings many don’t know, yet we’ll have to side with one or the other sooner rather than later. And the topic? Self-driving cars. Automated vehicles are the world’s next big phenomenon, with Google touting that its self-driving vehicles, in years of testing, have been involved in only 11 minor incidents, nearly all of them due to human error.
That’s a staggering figure. In six years of testing, only a handful of minor accidents? Imagine all of the lives we could save with this self-driving technology. One thing’s for certain: self-driving software is far more attentive to the road than the human eye will ever be.
The technology is jaw-dropping, and so are the figures. But that’s not the only factor we need to weigh before letting self-driving cars run a large portion of our lives. After all, in the event of an unavoidable crash, a self-driving car would rather drive off a cliff, killing one person, than swerve into oncoming traffic and cause multiple casualties. It would sacrifice one life, the person in the driver’s seat, rather than potentially harm or kill several others.
It’s called the trolley problem. Should you sacrifice one life to save five others, or let five die to spare the one? For most, the solution is obvious: one must die to save five, the most logical choice with the best possible outcome. But it’s not that easy. Change the variables: what if that one person were your child? It gets much more complicated.
“Ultimately, this problem devolves into a choice between utilitarianism and deontology,” said UAB alumnus Ameen Barghi. “Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” he explained. In that case, allowing one person to die to spare five others is the solution. But then again, maybe not.
Deontology, on the other hand, holds that “some values are simply categorically always true,” Barghi said. “For example, murder is always wrong, and we should never do it. Even if shifting the trolley will save five lives, we shouldn’t do it because we would be actively killing one.”
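To see how quickly this becomes a programming question, here is a rough, purely illustrative sketch in Python of what each framework might look like once it is reduced to a decision rule. It is not drawn from the UAB piece or from any real autonomous-driving software; the option names and fields are invented for the example.

```python
# Purely illustrative: two ethical frameworks reduced to naive decision rules.
# Nothing here reflects how real autonomous-vehicle software is written.

def utilitarian_choice(options):
    """Pick whichever option minimizes total expected casualties."""
    return min(options, key=lambda o: o["expected_casualties"])

def deontological_choice(options):
    """Refuse any option that requires actively killing someone,
    even when inaction leads to a worse outcome overall."""
    permissible = [o for o in options if not o["actively_kills"]]
    # If every option involves actively killing, the rule gives no answer at all,
    # which is exactly where the trolley problem bites.
    return permissible[0] if permissible else None

# Barghi's trolley scenario: divert and actively kill one, or do nothing and let five die.
trolley = [
    {"name": "divert the trolley", "expected_casualties": 1, "actively_kills": True},
    {"name": "do nothing", "expected_casualties": 5, "actively_kills": False},
]

print(utilitarian_choice(trolley)["name"])    # -> divert the trolley
print(deontological_choice(trolley)["name"])  # -> do nothing
```

The utilitarian rule diverts the trolley without hesitation; the deontological rule refuses, because diverting means actively killing. Neither line of code tells us which answer is right, and that is precisely the tension here.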
In other words, our self-driving cars should not be programmed to actively decide whether to sacrifice one person for five others, as they are currently designed to do. The problem goes even deeper, though. The car isn’t the one that would take your life in this hypothetical situation. No, the car is amoral, and so is the software. Neither is inherently good or evil. “It just runs programs,” as Ben and Crosby put it in the comedy sci-fi film Short Circuit.
I’m certain we can all agree on that.
Therefore, the death would be on the hands of the person who pushed for that decision logic to be built into self-driving cars. That person certainly wouldn’t be prosecuted, but by these standards, it would weigh on his or her conscience. Still, I can guarantee that that person isn’t out to purposely kill anyone with self-driving technology. No, that person is trying to make our lives better by reducing the number of fatalities from automobile accidents. Not only that, but the same person is also trying to cut down time spent on the road and make driving more efficient.
So the problem doesn’t just rest in that person’s hands; it goes even deeper than that.
The real question is, can morality be solved by a simple math equation or algorithm?
Source: University of Alabama at Birmingham