I have a strong aversion to letting people shirk responsibility for their actions. I feel particularly strongly about this when work is delegated to machines, which cannot act according to an appropriate moral value system. Starting a car and letting an autonomous driving unit take over is one such example: When faced with an impossible situation (run over an old lady, run over three children, or commit suicide), it still has to be the driver’s decision and not a machine’s.
Ever since autonomous driving became a hot topic, I’ve tried to sell our automotive industry partners on the idea of a project to build a moral machine for autonomous driving. My definition of a moral machine (there are others) is:
With self-driving cars in our near future, I’ve seen more and more articles about the moral dilemma of what the car should do when faced with an impossible decision, for example, to either kill a grandmother or drive into a group of children. In my mind, the pundits are getting it all wrong; the underlying assumption, that humans can abdicate responsibility to machines and that the car’s behavior must be predictable, is plain wrong.