Moral programming will define the future of autonomous transportation

Our lives are becoming more automated by the minute. Many everyday tasks have been delegated to smart software algorithms without consumers even noticing. For example, a staggering 70 percent of stock trading is now fully automated. This automation has already resulted in job loss, as well as growing resentment toward nascent AI technologies. In some sense, that resentment may be warranted: Elon Musk predicts that driverless cars, a technology he himself is working to advance, will displace 15 percent of the world's workforce.

Musk, as well as other futurists and scientists, often make headlines with warnings about the manifold ethical issues that surround artificial intelligence. And it’s true that there will be social and economic consequences — quite possibly overwhelmingly negative ones — if the development of artificial intelligence goes unchecked.

Relinquishing human control

As AI technology advances, our opinion of it remains largely unchanged. We place trust in the judgment of humans and fear relinquishing control to machines, despite mounting evidence that machines perform most tasks far better than we do. Nearly 1.3 million people die each year in traffic accidents caused by human drivers, while driverless cars have resulted in only a handful of fatalities after billions of miles of testing. Intelligent machines also learn from their mistakes more quickly than we do, since they can revisit and replay a mishap thousands of times in a matter of minutes. Still, public opinion of AI-powered vehicles remains low, whether out of inordinate hubris or prudent caution, and most likely a mixture of both.

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.


Beyond the question of whether we want to hand over the wheel to software, there are deeper, more troubling questions to ask. As we edge closer to completely autonomous roadways, the real issues lie in ethically complex territory, and one very difficult question stands out: should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily accept a death that results from a malfunction of some kind, such as brakes that give out, a failure of the car's visual monitoring system, or a bug in the AI's code. However, defining how and when an AI may inflict harm isn't that simple.

Building ethics into autonomy

Imagine a scenario where a car with a family onboard is about to hit a pedestrian running across the road. In this situation, the car must choose between saving the lives of its passengers and saving the life of the pedestrian. Let's say the car can swerve into a nearby tree to avoid the pedestrian, but this will almost certainly kill its passengers. Or the car can continue down the road, hitting the pedestrian to avoid harming the people inside. Should the number of lives on each side matter? Should age factor into the AI's decision? Would it be immoral not to endow the AI with the ability to make these important ethical decisions? Or should we allow the AI to run without any notion of saving or sacrificing lives?
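
To see how quickly these questions become engineering decisions, consider a deliberately simplified sketch. Everything in it (the Outcome record, the harm function, the passenger weight) is hypothetical and invented for illustration; it is not any manufacturer's real software. The point is simply that every factor debated above, from how many lives are at stake to whether those lives belong to the car's own passengers, must ultimately be reduced to numbers the software can compare.

```python
# Purely illustrative: a toy encoding of the swerve-or-continue dilemma.
# All names and numbers here are hypothetical, not any vehicle's real code.
from dataclasses import dataclass


@dataclass
class Outcome:
    action: str            # e.g. "swerve" or "continue"
    lives_lost: int        # expected fatalities under this action
    passengers_harmed: bool


def harm(outcome: Outcome, passenger_weight: float = 1.0) -> float:
    """Assign a numeric cost to an outcome.

    passenger_weight = 1.0 treats every life equally; a larger value
    biases the car toward protecting its own passengers. Choosing that
    number is precisely the ethical question at issue.
    """
    weight = passenger_weight if outcome.passengers_harmed else 1.0
    return outcome.lives_lost * weight


def choose_action(outcomes: list[Outcome], passenger_weight: float = 1.0) -> Outcome:
    # Pick the outcome with the lowest weighted harm.
    return min(outcomes, key=lambda o: harm(o, passenger_weight))


# One passenger versus two pedestrians: a variation on the scenario above.
dilemma = [
    Outcome("swerve", lives_lost=1, passengers_harmed=True),     # the passenger hits the tree
    Outcome("continue", lives_lost=2, passengers_harmed=False),  # two pedestrians are struck
]

print(choose_action(dilemma).action)                        # "swerve": minimize total deaths
print(choose_action(dilemma, passenger_weight=3.0).action)  # "continue": passengers count triple
```

Changing a single parameter flips the decision, which is exactly why these choices cannot be left implicit in the code.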

“As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent,” asserts Jean-François Bonnefon, a professor of economics, in a discussion with MIT Technology Review.

When asked, many of us agree, at least in theory, that saving multiple lives is more important than saving one. We may even believe that saving the young is more important than saving the old. But there is no unanimity, especially when these moral decisions would result in our own injury or death.

Not surprisingly, when told a driverless AI must sacrifice our life for the lives of others, we find its moral compass lacking. In most cases, self-preservation alters our perception of what is right and wrong.

Should driverless AI be programmed to preserve its passengers, or to prefer reducing the total number of deaths? Should we prioritize self-preservation or the greater good? And are we, as owners of these driverless cars, at least partially responsible for the decisions they make?

Moral programming

At present, there are no definitive answers to these questions. Driverless cars will no doubt encounter lose-lose scenarios and be tasked with making difficult decisions. Some may argue that driverless cars should be unburdened by moral dilemmas and denied the ability to willingly take a human life, even in the name of the greater good. But there will be situations where an autonomous vehicle must weigh two evils and choose the lesser. To do that, it must be endowed with moral programming.
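
What that moral programming might look like in practice is still speculative, but one hedged sketch (hypothetical names and toy numbers throughout) shows how competing rules, protecting passengers first versus minimizing total deaths, could be expressed as interchangeable policies. Whichever rule ends up in a production vehicle, a human will have had to choose it and write it down.

```python
# Purely illustrative: two hypothetical "moral policies" expressed as
# interchangeable rules. Neither reflects any manufacturer's software;
# the point is that whichever rule ships, someone chose it explicitly.
from typing import Callable, List, Tuple

# An outcome is (total expected deaths, deaths among passengers).
Outcome = Tuple[int, int]

# A policy maps an outcome to a cost; the car picks the lowest-cost outcome.
Policy = Callable[[Outcome], float]


def minimize_total_deaths(outcome: Outcome) -> float:
    total_deaths, _ = outcome
    return total_deaths                                # the "greater good" rule


def protect_passengers_first(outcome: Outcome) -> float:
    total_deaths, passenger_deaths = outcome
    return passenger_deaths * 1_000 + total_deaths     # passenger lives dominate


def decide(outcomes: List[Outcome], policy: Policy) -> Outcome:
    return min(outcomes, key=policy)


options = [(1, 1), (2, 0)]  # swerve: one passenger dies; continue: two pedestrians die
print(decide(options, minimize_total_deaths))      # (1, 1): swerve
print(decide(options, protect_passengers_first))   # (2, 0): continue
```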

How we will program these moral algorithms is still largely an open question, and the conclusions we reach will shape the morality of our driverless future. As we continue to advance toward fully autonomous driving, moral programming will play an increasingly critical role.

Josh Althauser is an entrepreneur with a background in design and M&A.