Ever since autonomous driving became a hot topic, I’ve tried to sell our automotive industry partners on the idea of a project to build a moral machine for autonomous driving. My definition of a moral machine (there are others) is:
A moral machine (for autonomous driving) is a machine that encodes the driver’s moral value system in such a way that it can make decisions on their behalf that accurately reflect their moral values.
As a research project, this is about as ambitious as it gets, but that is not my point here. My goal in proposing such a project, and in solving the problem, is to not let people get away with blaming the machine:
It wasn’t me who ran over the old lady, it was the autonomous driving unit!
When faced with an impossible situation (run over an old lady, run over three children, or kill yourself), it has to be the driver’s decision, not the machine’s, what to do, even if the driver is sound asleep at the wheel. In that case an agent, here the moral machine, has to make the decision on the driver’s behalf. Hence my call for a machine that can do just that.
I thought the automotive OEMs and their suppliers would love this project idea; after all, it might rid them of a major liability that comes with autonomous driving: What to do about lawsuits brought by those hurt by the autonomous driving unit?
Sadly, every time I tried, my proposal was shot down. The answer was always the same: Consumers will never buy a car that asks them whether to run over an old lady, run over three children, or kill themselves. Consumers want to believe it is the machine’s responsibility and not their own.
I eventually gave up, but yesterday a friend pointed out a change in scenario that might change everything.
Next up: The argument for a moral machine in autonomous driving.