The Argument For a Moral Machine in Autonomous Driving

I have a strong aversion to letting people shirk responsibility for their actions. I feel particularly strongly about this when work is delegated to machines, which cannot act according to an appropriate moral value system. Starting a car and letting an autonomous driving unit take over is one such example: when faced with an impossible situation (run over an old lady, run over three children, or sacrifice the driver), it still has to be the driver’s decision, not a machine’s.


Should Cars Be Programmed to Make Life or Death Decisions?

With self-driving cars in our near future, I’ve seen more and more articles about the moral dilemma of what a car should do when faced with an impossible decision, for example, whether to kill a grandmother or drive into a group of children. In my mind, the pundits are getting it all wrong: the underlying assumptions, that humans can abdicate responsibility to machines and that the car’s behavior must be predictable, are plainly wrong.

Here is how one pundit explains the problem: