I have a strong aversion to letting people shirk responsibility for their actions. I feel particularly strongly about this when work is delegated to machines, which cannot act according to an appropriate moral value system. Starting a car and letting an autonomous driving unit take over is one such example: when faced with an impossible situation (run over an old lady, run over three children, or sacrifice the driver), the decision still has to be the driver’s, not the machine’s.
To that end, I proposed a research project that lets drivers encode their value system in what I called a moral machine, to which they can then delegate decisions, knowing the machine will faithfully reproduce its owner’s behavior. Sadly, the companies I pitched this project to rejected the proposal, arguing that consumers will never buy a car that, in one form or another, asks them whether they value their own life more highly than a grandmother’s or three children’s.
When discussing this problem with a friend, he pointed out that soon it won’t be the driver’s problem, because in the future nobody will own cars anyway. We will just hop into the next free autonomous taxi, which will take us where we want to go, and that’s that. The moral problem of whom to kill then becomes a problem for the taxi service provider, not the user of the service.
This change in responsibility may change everything and give my proposal new thrust. A mobility service provider, when faced with the options of killing the old lady, killing the three children, or killing the passenger, will have to consider the financial consequences of its actions. It will have no contract in place with the old lady or the three children, but it will have a contract with the passenger and will likely use that contract to limit its liability in case of passenger injury or death. The financial risk of killing the passenger then becomes much more manageable than that of killing the old lady or the three children, which creates a perverse incentive to sacrifice the passenger.
Passengers would obviously not accept being put at such a disadvantage; they would demand that the decision remain theirs. For this to be possible, we need a solution in which we can encode our moral values in some device that we then make available to the taxi as we initiate the ride. It should be as easy as tapping our mobile device on a reader; a sketch of what such an encoding could look like follows below. Society may disagree, but that’s a different story.
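To make the handover concrete, here is a minimal sketch in Python of what such an encoded value system might look like. Everything in it is hypothetical: the MoralProfile and Outcome types, the single weight_self parameter, and the cost function are illustrative stand-ins, not an existing API, and a real encoding would need far richer preferences plus authentication and auditing of the transferred profile.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class Outcome:
    """One possible resolution of an unavoidable-harm situation."""
    description: str
    passenger_fatalities: int
    bystander_fatalities: int


@dataclass(frozen=True)
class MoralProfile:
    """A rider's encoded value system, handed to the taxi at ride start.

    weight_self: how much the rider weighs their own life against a
    single bystander's life (1.0 means they count them as equal).
    """
    rider_id: str
    weight_self: float

    def cost(self, outcome: Outcome) -> float:
        """Lower cost means more acceptable under this rider's values."""
        return (outcome.passenger_fatalities * self.weight_self
                + outcome.bystander_fatalities)


def choose_outcome(profile: MoralProfile, options: list[Outcome]) -> Outcome:
    """The decision the taxi executes on the rider's behalf: the option
    with the lowest cost under the rider's own encoded values."""
    return min(options, key=profile.cost)


if __name__ == "__main__":
    options = [
        Outcome("swerve left (hit the old lady)", 0, 1),
        Outcome("swerve right (hit the three children)", 0, 3),
        Outcome("brake into the wall (sacrifice the passenger)", 1, 0),
    ]
    # A rider who values their own life the same as one bystander's:
    rider = MoralProfile(rider_id="rider-42", weight_self=1.0)
    # Costs are 1.0, 3.0, and 1.0; min() returns the first minimum,
    # so this prints "swerve left (hit the old lady)".
    print(choose_outcome(rider, options).description)
```

Reducing a moral value system to one scalar weight is of course a caricature, but it illustrates the core mechanism: the rider’s device, not the service provider, supplies the decision rule the vehicle executes.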
Certainly a fun project. I am not sure it can be done, but I now feel confident that more parties will pull in the direction of not abdicating responsibility to a machine.