How can we teach machines morality? It is one of the most interesting questions of our time.
The discussion about how autonomous driving can change our society, save us time, and improve road safety is at the heart of current media coverage. But consider a scenario in which your vehicle has no steering wheel and something malfunctions: what does the car do? Does it come to a complete halt? And what if the brakes fail and the car must decide between crashing into a wall, killing the driver, or swerving into the nearest shopping mall in the hope of hitting no one?
I find this new kind of philosophical reasoning fascinating, and how we try to cope with such dilemmas. What do you think? How would you decide in these situations? Never before have we had to articulate the reasoning behind actions taken during a car crash. As humans, we simply react and try to minimize damage; we don't have time to ponder life-and-death questions while careening toward a shopping mall. Now we have to teach a machine how to react in such circumstances, and suddenly every option seems wrong. Which way do you go in a no-win scenario?
MIT has found a clever way, or at least a way, to tackle such problems in the future: gathering data by presenting you with a series of dilemmas and asking what you would do in each. After the test, you get an overview of your choices and can see how others answered the same questions.
Feel free to give it a try and share your results in the comments section below.
Here is my latest result:
Thanks for reading!