So I just read an article about the ethical questions surrounding the safety goals of autonomous cars. The core question is, "Do you want your autonomous car to minimize casualties?" People generally say, "Yes, I want an autonomous car that does what it can to minimize how many people die in a crash." That opinion changes when people are told this prioritization might mean their car would let them die in a crash to save a bunch of toddlers (I'm adding that last bit).

The debate is built on a classic philosophy question, the trolley problem: you are standing next to a switch that controls which path a trolley will take, and for some bizarre reason there are people on both paths. On one track there are five people; on the other, just one. If you do nothing, the trolley stays on its path and kills five people; if you pull the switch, it goes down the path with the single person. What do you do? (You don't need to answer, unless you want to leave a comment.) My answer: who cares? The scenario is so abstract and unrealistic that it barely matters in real terms.
The likelihood that you or an autonomous car will run into a situation where the only options are to kill a passenger or a group of pedestrians is negligible. I am more likely to win the lotto while being launched into space on my way to visit my empire of sexy cyborgs (okay, maybe not that last detail). Why can't I shout at the lone person on the track, or at the five on the other one? If a car suddenly finds there are people in its way, why can't it lay on the horn? I don't want to get pedantic, that wastes everyone's time, but I have a hard time envisioning a scenario where the options are so binary. Rather than ethicists telling researchers to design around a one-in-a-billion freak occurrence, that effort would be better spent on the failures that actually happen. While I do believe engineers have a responsibility to make products as safe as they can, and ideally of the greatest possible benefit to society, we should not ignore solutions that are better than our current options 80% of the time and no worse than what we currently have the rest of the time. History remembers radical changes more vividly, but that doesn't mean we should ignore slow, iterative improvements.
Please feel free to leave a comment; if you agree or disagree, please say why.