Should Cars be Programmed to Make Life or Death Decisions?

With self-driving cars in our near future, I’ve seen more and more articles about the moral dilemma of what a car should do when faced with an impossible decision: for example, whether to kill a grandmother or drive into a group of children. In my mind, the pundits are getting it all wrong. The underlying assumption, that humans can abdicate responsibility to machines and that the car’s behavior must therefore be predictable, is plain wrong.

Here is how one pundit explains the problem:

Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

Asking the question is important. Giving a universally binding answer, however, is not. The article above goes on to discuss the situation from an economic perspective (would you buy a car that would kill you rather than someone else?). It thereby falls into the same trap I’ve seen all the other articles fall into: asking for a universal decision, a societal consensus, to be programmed into the cars, so that it becomes predictable whom the car will kill when faced with an impossible situation.

I suggest simply leaving it open. I don’t see how a society’s moral values can make such a highly personal decision for the driver, and I certainly don’t see how a car manufacturer can cast that decision in software. It therefore should not be predictable how a self-driving car whose driver is asleep will behave in these situations.

I haven’t thought much about how to implement this, but perhaps asking the new owner of a car for his or her preference and then introducing some randomness into the car’s behavior might be the right way to go. However the desired behavior is implemented, I think the underlying responsibility cannot be taken away from the individual driver.
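To make the idea concrete, here is a purely hypothetical sketch of what "owner preference plus randomness" could look like. The function name, the single 0.0-to-1.0 self-preservation scale, and the two maneuver labels are my own illustrative assumptions, not any manufacturer's design:

```python
import random

def choose_maneuver(self_preservation: float) -> str:
    """Pick a maneuver in an impossible situation.

    self_preservation is the owner's stated preference, recorded once
    at purchase: 0.0 means always sacrifice the occupant, 1.0 means
    always protect the occupant. A random draw against that preference
    keeps the car's behavior unpredictable in any individual case.
    """
    if not 0.0 <= self_preservation <= 1.0:
        raise ValueError("preference must be between 0.0 and 1.0")
    if random.random() < self_preservation:
        return "protect_occupant"
    return "protect_others"
```

An owner who leans toward protecting others (say, a preference of 0.2) would still, in roughly one case in five, be protected instead, so no observer could predict the outcome of any single encounter.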

5 thoughts on “Should Cars be Programmed to Make Life or Death Decisions?”

  1. I kinda agree with you, but I think we have to abolish the idea of a “driver”. Self-driving cars will have no driver, only passengers. You have no influence on such a decision as a passenger in a bus or taxi, and you won’t in a self-driving car.
    Also, the car will most likely not be your own. It makes no sense to own a self-driving car: having a car sit around parked makes no sense when the car could drive other people in the meantime and still be there when you need it.

    1. Hi Andi, thanks for the comment. However, I really believe there should be no machinery operating without a human being at the end of a chain of responsibility. We may call a car self-driving, but it does so under the authority and responsibility of a human being. If a driver is asleep while the rented car runs over another human being, that sleeping driver is still to blame (possibly along with the car manufacturer, if they made false promises about reliability, etc.).

      1. That makes no sense to me. Would you expect users of a self-driving taxi to need a driver’s license? Could you use a self-driving taxi only when sober? Would you be required to constantly question the decisions of the car?

        1. If you hire a cab, you employ an agent, the cab driver. If you hire a self-driving taxi, you hire the taxi agency operating the taxi. My point is simply that we cannot delegate responsibility over life and death decisions to machines.

          1. And my point is that you have to. In the end, the machine has to make the decision, because there is no one else to make it in that split second. What exactly the decision should be has to be discussed beforehand.

            But as you said, we may not need to cater to every single possibility. A single “save life” directive might be enough. That will kill people sometimes, but cars with human drivers do so all the time.
