The Trolley Problem does not exist in Self-Driving Cars

Save a CEO or the bus full of nuns?!

You’ve probably heard of the Trolley Problem. It’s that classic ethical dilemma where you have to choose between saving five people by sacrificing one, or doing nothing and letting all five die. It’s a favorite of philosophers, ethicists, and anyone who wants to sound deep at a dinner party. But here’s the thing: when it comes to self-driving cars, the trolley problem is pretty much irrelevant.

Why? Because self-driving cars don’t work the way the trolley problem assumes they do. Let’s break it down.

1. Self-Driving Cars Aren’t Reactive. They’re Proactive

The trolley problem frames ethics as a split-second decision: Do I swerve left and kill one person, or go straight and kill five? But self-driving cars aren’t designed to make those kinds of choices. Instead, they’re built to avoid those situations altogether.

Here’s how it actually works (a simplified sketch follows this list):
- Sensors (cameras, LiDAR, radar) constantly scan the environment, creating a real-time map of the world.
- Software (AI) identifies objects, such as other cars, pedestrians, and traffic signs, and predicts how they’ll move.
- Rules of the road (like maintaining safe distances and obeying speed limits) guide the car’s behavior to minimize risk.
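
To make that concrete, here’s a deliberately toy Python sketch of the sense, predict, plan loop. Everything in it (Detection, plan_speed, the 2-second gap) is invented for illustration; real autonomy software is vastly more complicated:

# A toy sense -> predict -> plan loop. Every name here is hypothetical;
# no real autonomy stack is this simple.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                 # "car", "pedestrian", "sign", ...
    distance_m: float         # how far ahead the object is
    closing_speed_mps: float  # how fast the gap is shrinking

SAFE_TIME_GAP_S = 2.0  # rule of the road: keep roughly 2 seconds of headway

def plan_speed(ego_speed_mps: float, detections: list[Detection]) -> float:
    """Pick a target speed that keeps a safe time gap to everything ahead."""
    target = ego_speed_mps
    for obj in detections:
        # Predict: if the object keeps closing at this rate, when do we meet?
        closing = max(obj.closing_speed_mps, 0.1)
        time_to_contact_s = obj.distance_m / closing
        # Plan: slow down long before the time gap gets uncomfortable.
        if time_to_contact_s < SAFE_TIME_GAP_S:
            target = min(target, ego_speed_mps * time_to_contact_s / SAFE_TIME_GAP_S)
    return max(target, 0.0)

# One tick of the loop: a car 20 m ahead, gap closing at 15 m/s.
print(plan_speed(25.0, [Detection("car", 20.0, 15.0)]))  # ~16.7, slowing well before trouble

The point isn’t the math; it’s the shape of the system: the car is constantly adjusting speed and spacing so that a trolley-style fork never appears.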

The goal isn’t to solve ethical dilemmas; it’s to prevent them from happening in the first place. If a self-driving car ever finds itself in a trolley problem scenario, something has already gone very wrong.

2. There’s No “Ethical Algorithm” Hidden in the Code

A lot of people imagine self-driving cars as cold, calculating machines that weigh the value of human lives in milliseconds. But that’s not how the technology works. There’s no secret line of code that says:

if obstacle_ahead:  
    choose_less_deadly_crash()  

Instead, the car’s decision-making is based on layers of safety protocols:

- Conservative defaults, like generous following distances and speeds that always leave room to stop.
- Redundancy, so a single failed sensor or component doesn’t blind the system.
- Fallback maneuvers, like controlled braking or pulling over, whenever the situation becomes too uncertain.

If the car ever faces a situation where harm is unavoidable, it’s not because the AI is making a moral choice. It’s because the system failed to predict or prevent the scenario.
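
To make those layers concrete, here’s a minimal, hypothetical sketch (every name and threshold is invented): a nominal planner proposes a speed, and each safety layer can only make the plan more conservative. Notice that no branch anywhere weighs lives; the last resort is simply hard braking.

# A toy "layered" controller. Each layer can only lower the target
# speed; none of them knows or cares who is on the road ahead.

HARD_BRAKE_TTC_S = 1.5   # below this time-to-collision, request a full stop
COMFORT_TTC_S = 3.0      # below this, start easing off the speed

def nominal_plan(speed_limit_mps: float) -> float:
    # Layer 1: follow the rules of the road.
    return speed_limit_mps

def comfort_layer(target_mps: float, ttc_s: float) -> float:
    # Layer 2: ease off smoothly as the gap to the nearest obstacle tightens.
    if ttc_s < COMFORT_TTC_S:
        return target_mps * (ttc_s / COMFORT_TTC_S)
    return target_mps

def emergency_layer(target_mps: float, ttc_s: float) -> float:
    # Layer 3: the last resort is always "brake", never "choose a victim".
    return 0.0 if ttc_s < HARD_BRAKE_TTC_S else target_mps

def decide(speed_limit_mps: float, ttc_s: float) -> float:
    target = nominal_plan(speed_limit_mps)
    target = comfort_layer(target, ttc_s)
    return emergency_layer(target, ttc_s)

print(decide(25.0, 4.0))  # clear road: drive the limit (25.0)
print(decide(25.0, 2.0))  # gap tightening: ease off (~16.7)
print(decide(25.0, 1.0))  # imminent: full stop requested (0.0)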

3. The Real Ethical Issues Are Less Dramatic (But More Important)

While the trolley problem gets all the attention, the actual ethical challenges of self-driving cars are far less cinematic, and far more pressing. For example:

- Safety: How safe is safe enough? At what point do we decide the system outperforms a human driver, and who gets to decide?
- Bias: If the training data underrepresents certain pedestrians, roads, or weather conditions, the car will be less safe for some people than for others.
- Transparency: When a crash does happen, can regulators and the public actually see why the system behaved the way it did?

These aren’t hypotheticals. They’re real-world issues that engineers, policymakers, and ethicists are working to address. And they’re a lot more important than debating whether an AI would choose to save a CEO or a bus full of nuns.

4. Why the Trolley Problem Is a Distraction

The trolley problem is a thought experiment, not a technical challenge. By fixating on it, we risk two big problems:

  1. Public mistrust: Framing self-driving cars as cold, calculating decision-makers stokes fear and misunderstanding.
  2. Misplaced priorities: Engineers need to focus on improving safety systems, not solving philosophical puzzles.

The truth is, self-driving cars aren’t about making ethical choices. They’re about safely taking a person from point A to point B. And the best way to do that is through better technology, better regulations, and better public education.


The trolley problem is a fascinating thought experiment, but it’s a terrible way to think about self-driving cars. These systems aren’t designed to make moral decisions; they’re designed to keep people safe.

Instead of worrying about hypothetical dilemmas, let’s focus on the real challenges: improving safety, reducing bias, and ensuring transparency. That’s how we’ll build self-driving cars that actually work, and that people can trust.

So next time someone brings up the trolley problem, feel free to roll your eyes. The future of self-driving cars isn’t about choosing who lives or dies. It’s about making sure everyone gets home safely.

