Do you remember the lifeboat exercise back in school? The teacher would set the stage: You’re in a lifeboat on the ocean with several other people. Food and water are running out. There’s no land or rescue in sight. Someone will have to be sacrificed so that the rest have a better chance at survival.

As a Christian, you feel the choice is clear, though not easy: You’ll follow your Lord’s example and sacrifice yourself.

But that’s not allowed. The teacher insists you play God, valuing each life against the others in order to decide who lives and who dies. Perhaps the young and healthy, with the most talent and potential, will be allowed to survive. And those who are old, sick or injured, with little to offer, will be tossed over the side.

It’s problematic enough when this scenario gets dropped into the laps of school-age children. Now automobile manufacturers are looking to create driverless cars with a similar “moral algorithm” programmed into them.

The cars in question are being developed by a variety of companies, including Google, Tesla and Audi. They’re intended to increase traffic efficiency and reduce the number of accidents by taking human error out of the driving process.

But in the real world, accidents will still happen. Pedestrians will still behave in unexpected ways, or find themselves in the wrong place at the wrong time. Consequently, the cars are to be programmed with a set of rules to determine, for example, whether to swerve and crash in order to avoid pedestrians, or to protect the vehicle’s occupants at all costs.
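To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what such a rule set might look like. The Scenario fields and the decide function are hypothetical assumptions made for the sake of the example; no manufacturer’s actual software is being shown.

from dataclasses import dataclass

@dataclass
class Scenario:
    occupants: int         # people inside the vehicle
    pedestrians: int       # people in the vehicle's path
    swerve_is_fatal: bool  # swerving crashes the car, likely killing the occupants

def decide(s: Scenario) -> str:
    # The bare utilitarian rule the surveys asked about:
    # weigh lives by count alone and accept the smaller loss.
    if not s.swerve_is_fatal:
        return "swerve"   # no one need die; simply avoid the pedestrians
    if s.pedestrians > s.occupants:
        return "swerve"   # sacrifice the occupants for the greater number
    return "stay_course"  # protect the occupants

# Example: one passenger, ten pedestrians, swerving is fatal.
print(decide(Scenario(occupants=1, pedestrians=10, swerve_is_fatal=True)))  # swerve

Even this toy version shows the difficulty the rest of this article turns on: every number in it stands for a human life.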

A wide gap between theory and practice

In order to gauge public response to these plans, researchers surveyed hundreds of people, presenting them with a variety of scenarios in which they’d have to choose – much like the lifeboat exercise.

Should a single passenger die to save a large number of pedestrians? What about just one pedestrian, life for life? Or how about if the passenger is riding with a family member, perhaps their own child?

As it turned out, most people were in favour of a car that would sacrifice its passengers, including children, for the greater good – at least in theory. However, they balked at the idea of owning such a vehicle themselves, preferring one that would protect their own lives, no matter what.

This is to be expected, given human nature. What people profess to believe in principle is quite often at odds with how they respond in practice, especially when their self-preservation is at stake.

Christians aren’t immune to this failing, either. One is reminded of the Apostle Peter, claiming to be ready to die with Jesus and then denying Him in front of a servant girl by the fire.

Nevertheless, these results presented a problem for the researchers. Even if legislation were one day to mandate the use of driverless cars, it seems likely that most people would still choose the self-preserving option. In effect, this would defeat the original purpose of the vehicles: to keep everyone safe.

Morality cannot be reduced to numbers

But the researchers recognized an even more serious problem. The greatest good for the greatest number may sound like an easy utilitarian solution to many moral dilemmas. But as a comprehensive basis for ethics, it’s woefully inadequate.

Morality can’t be reduced to a numbers game. Most of the survey respondents were willing to sacrifice themselves (at least in principle) for, say, 10 pedestrians. But if it were just a lone pedestrian, they were far less willing.

So then, at what point between one and 10 does it become a morally acceptable exchange? What if it’s two people in the car versus an elderly couple crossing the street?

Many people might be willing to die (once again, in principle) for a dozen schoolchildren on a crosswalk. But how about if it’s three or four street criminals, stumbling onto the road in the midst of a drug deal?
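The counting rule sketched earlier makes the limitation plain. Fed either of these scenarios, it can only compare totals; nothing in it represents who the people involved actually are. (Again, this is illustration only, reusing the hypothetical Scenario and decide from the sketch above.)

# A dozen schoolchildren on a crosswalk and a few strangers stumbling
# out of a drug deal differ, to the rule, only in number; their
# character never enters the calculation.
print(decide(Scenario(occupants=2, pedestrians=12, swerve_is_fatal=True)))  # swerve
print(decide(Scenario(occupants=2, pedestrians=4, swerve_is_fatal=True)))   # swerve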

None of these scenarios lend themselves to easy answers, and the researchers know it. They realize that the moral algorithms proposed for these driverless cars will need to be far more nuanced if they are ever to be of any practical benefit.

It could hardly be otherwise. The essential property of moral choices is that they are, in fact, choices. They require a quality that cannot be replicated by legislation or software. They require human free will.

Sources and further reading

Bruce Bower, “Moral dilemma could put brakes on driverless cars,” Science News, June 23, 2016.

Michael Cook, “Facing a moral dilemma over driverless cars,” BioEdge, June 25, 2016.

Peter Dizikes, “Driverless cars: Who gets protected?” MIT News, June 23, 2016.

Larry Greenemeier, “Driverless cars will face moral dilemmas,” Scientific American, June 23, 2016.

Keith Kirkpatrick, “The moral challenges of driverless cars,” Communications of the ACM, Vol. 58, No. 8 (August 2015), pages 19-20.

Laura Wright, “Driverless vehicles must have sophisticated moral algorithms, say researchers,” CBC News, June 23, 2016.


Subby Szterszky is the managing editor of Focus on Faith and Culture, an e-newsletter produced by Focus on the Family Canada.

© 2016 Focus on the Family (Canada) Association. All rights reserved.
