The Trolley Problem Revisited: Moral Intuition in the Age of Autonomous Vehicles
As self-driving cars move from thought experiment to city street, the philosophical puzzles of the twentieth century have become urgent engineering questions. What can Philippa Foot and Judith Jarvis Thomson teach Silicon Valley?
When Philippa Foot first introduced the trolley problem in 1967, she could not have anticipated that her hypothetical runaway tram would one day be encoded into the decision-making architecture of vehicles carrying real passengers at seventy miles per hour. Yet here we are.
The classical formulation is familiar: a trolley careers toward five people tied to the tracks. You stand at a lever. Pulling it diverts the trolley onto a side track, where only one person stands. Do you act? Most people say yes. The utilitarian calculus seems clear: five lives outweigh one.
It fell to Judith Jarvis Thomson to complicate this intuition. In her *footbridge* variant, you stand on a bridge above the tracks. The only way to stop the trolley is to push a large man off the bridge, using his body as a brake. The arithmetic is identical, one life to save five, yet almost everyone recoils. Something morally significant distinguishes pulling a lever from pushing a man.
From Thought Experiment to Firmware
Autonomous vehicle manufacturers face precisely this dilemma — not as a seminar exercise, but as a firmware specification. When a vehicle's braking system fails and a collision is unavoidable, the algorithm must adjudicate: swerve toward the kerb (risking the passenger) or continue straight (striking pedestrians). Every outcome assigns moral weight. Every deployment is a policy choice.
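To make the shape of that firmware choice concrete, here is a minimal sketch of a purely utilitarian adjudication policy. Everything in it is hypothetical: the manoeuvre names, the harm numbers, and the decision rule are illustrations, not any manufacturer's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A possible manoeuvre and its predicted cost in expected harm."""
    action: str
    expected_harm: float  # probability of injury times number of people at risk

def adjudicate(outcomes: list[Outcome]) -> Outcome:
    """Pick the manoeuvre with the lowest expected harm.

    This is a bare utilitarian policy: it weighs passengers and
    pedestrians on the same scale, which is precisely the choice
    the text argues is political rather than technical.
    """
    return min(outcomes, key=lambda o: o.expected_harm)

# Hypothetical failure scenario: brakes gone, two options left.
choice = adjudicate([
    Outcome("swerve_to_kerb", expected_harm=0.9 * 1),    # risks 1 passenger
    Outcome("continue_straight", expected_harm=0.7 * 3), # risks 3 pedestrians
])
print(choice.action)  # -> swerve_to_kerb
```

Note that the "policy choice" lives entirely in the numbers fed into `adjudicate`, not in the function itself, which is why it is so easy for the moral decision to hide inside apparently neutral code.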
The MIT Moral Machine experiment, which collected some 40 million decisions from 233 countries and territories, revealed something remarkable: moral intuitions are not universal. Cultures that score high on individualism show stronger preferences for saving passengers. Collectivist societies more readily sacrifice the individual for the group. What Foot presented as a puzzle about human moral psychology turns out to expose the contingency of every moral framework we might consider universal.
The Problem of Consent
A further complication arises from what we might call the *consent asymmetry*. When you purchase a self-driving vehicle, you implicitly consent to whatever ethical parameters the manufacturer has programmed. But the pedestrian crossing the road has not. She has not agreed to be weighed against the passenger in a utilitarian calculus she was never shown.
This asymmetry has no analogue in the original trolley problem. The bystander on the side track did not sign a terms-and-conditions agreement. But in the context of autonomous vehicles, a regulatory framework could, in principle, mandate disclosure. The pedestrian might know, before crossing the road, that vehicles in this jurisdiction are programmed to prioritise the majority. Whether that knowledge constitutes meaningful consent is another question.
Beyond the Binary
It would be a mistake to think the trolley problem maps cleanly onto autonomous vehicle ethics. Real crash scenarios involve not two but dozens of variables — speed, weight, road conditions, the certainty of prediction. The problem is not simply which outcome to select but how to aggregate uncertain probabilities across incommensurable harms.
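The aggregation problem can be sketched in a few lines. In this hypothetical model (the probabilities, the severity weight, and the scenario are all invented for illustration), each manoeuvre collapses into a single expected-harm number, and the ranking of manoeuvres can flip depending on how one class of person is weighted against another:

```python
def expected_harm(p_collision: float, p_fatal: float,
                  n_people: int, weight: float) -> float:
    """Aggregate one manoeuvre's risk into a single number.

    `weight` is a severity weight for the class of person at risk
    (passenger vs pedestrian). There is no principled value for it;
    choosing it is the moral decision the essay describes.
    """
    return p_collision * p_fatal * n_people * weight

# Same physical scenario under two weightings of the passenger's life.
swerve = lambda w: expected_harm(0.9, 0.5, 1, w)      # passenger at risk
straight = lambda w: expected_harm(0.6, 0.4, 2, 1.0)  # pedestrians at risk

print(swerve(1.0) < straight(1.0))  # True: swerving looks better
print(swerve(1.2) < straight(1.0))  # False: a higher weight flips the choice
```

The point of the sketch is that the incommensurability never disappears; it is merely relocated into the `weight` parameter, where a small change of value reverses the verdict.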
Deontological approaches, which prohibit treating persons merely as means, suggest the vehicle should be programmed to prioritise avoiding *action* — a kind of automated doctrine of double effect. But this quickly collapses into absurdity: a vehicle that refuses to steer in any direction that risks harm is simply a vehicle that does nothing.
What the autonomous vehicle problem teaches us, perhaps more than anything, is that ethics cannot be offloaded to engineers. The choices embedded in these systems are not technical problems with correct solutions — they are political choices about who bears risk and who is protected. Foot gave us the tools. It falls to us, as citizens, to decide how to use them.
About the Author
James Harrison
James Harrison is a Year 13 student at St Paul's School, London, studying Philosophy, Mathematics, and Economics at A-level. He is particularly interested in the relationship between moral theory and emerging technology.
