We are a hair's breadth away from the era of autonomous machines. The possibilities they offer are endless, and their influence on everything from how work gets done to city design and transport points to a future that is barely recognizable. The first autonomous machines that society at large encounters will likely be cars, but before they can become part of our lives, we need to solve the problem of how our software decides who dies when there is no other choice.
The trolley problem is a thought experiment in ethics that deals with situations in which an action, or a failure to act, leads directly to a death. The only choice available is to cause a death by acting, or to do nothing and let a death occur. The problem has been used to study how we weigh lives: relatives vs. strangers, old vs. young, many deaths vs. a few. But the underlying dilemma is one that will be a reality for autonomous machines. Which is a problem, because someone has to tell them how to make that choice.
As an example scenario: a self-driving car senses a pedestrian on the roadway ahead. It calculates that evasive action is possible, but that the available maneuver is likely to result in the death of the driver. What should the car do in this situation? More pertinently, how should it be programmed to respond?
Computers are deterministic systems. Their programming encodes rules that they follow exactly; there is no randomness to their behavior (unless it is caused by bugs or bad input). Mostly, this is a great thing. When it comes to life-and-death decisions, though, it means that someone needs to tell the computer how to make the decision, and the computer will follow that instruction exactly. Someone needs to tell the computer who to kill.
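To make that concrete, here is a minimal sketch of what "telling the computer" would have to look like. Everything in it is hypothetical: the names, the survival estimates, and above all the scoring rule. The point is not that any manufacturer writes code like this, but that some explicit, deterministic rule must exist, and every constant in it is a moral choice someone had to make.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its estimated consequences (hypothetical)."""
    action: str
    pedestrian_survival: float  # estimated probability, 0.0 to 1.0
    occupant_survival: float    # estimated probability, 0.0 to 1.0

def choose_action(outcomes: list[Outcome]) -> Outcome:
    # A fully explicit policy: maximize total expected survivors.
    # Even the 1:1 weighting of pedestrian vs. occupant lives here
    # is itself an ethical decision that someone had to encode.
    return max(outcomes, key=lambda o: o.pedestrian_survival + o.occupant_survival)

brake = Outcome("brake", pedestrian_survival=0.2, occupant_survival=1.0)
swerve = Outcome("swerve", pedestrian_survival=0.95, occupant_survival=0.3)

print(choose_action([brake, swerve]).action)  # prints "swerve" (1.25 > 1.2)
```

Given the same inputs, this function returns the same answer every time. Change the weighting and a different person dies; that is the decision nobody has yet agreed on how to make.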
This means we need to decide ahead of time who will live and who will die. More importantly, we need a societally acceptable way to program something to make that decision, and no such way exists.
The problem is larger than autonomous machines: there is no societally acceptable way to decide who should die. It's why legislation doesn't tell us how much a life is worth. That decision is left to judges and actuaries.
In the context of autonomous vehicles, this means the decision is going to be left to actuaries to game out and programmers to implement. Ultimately, we won't make the decision as a society. We'll let a company make it, we'll let someone die, and then we'll wait years for a judge to decide who was liable and whether the decision was made acceptably, at which point it will become an insurance issue. The alternative is that it becomes a political issue, and ultimately, I think that's worse for everyone.
We don't have legislation that tells us how much a life is worth because no politician wants to touch the issue. It is a no-win scenario. There is no political upside in putting a price on a human life, or in providing a way to decide who should die. Politicians are also action-oriented people: when was the last time a politician appeared on the news saying they weren't going to "do something"? Logically, the only thing a politician can do is impose a ban, or conditions that might as well be one.
This is one reason I think it will be quite a while until we see a totally autonomous private vehicle, at least on a road shared with human drivers. At some point there will be a real-life trolley problem, and someone will die. If we have not decided how to make the decision in a societally acceptable way, we leave it to chance that a political process will make it for us, and we may give up the next major advancement in the quality of our lives.