As technology evolves rapidly and cars gain computers and become autonomous, it all sounds wonderful and feels safer, yet some accidents will remain inevitable. The serious questions, then, are these: how could a self-driving car possibly determine what to hit when faced with a situation where a crash is unavoidable? And how should it be programmed to correctly grasp the gravity of such a situation beforehand?
The chances are high that these autonomous or self-driving cars will, for the most part, stay on highways and freeways, where traffic disruptions are minimal and little tends to go wrong. This speculation is based on the record of Google’s self-driving car, which has successfully travelled more than 700,000 miles without crashing into anything while manoeuvring down California’s freeways.
These results are hardly surprising: freeways are considered among the safest places to drive, and the chance of an unpredictable event is next to none, so it is easier for the on-board computer to keep going without facing any misfortune.
Last month, Google announced that its self-driving car had begun logging miles in urban areas, which stirred a little unrest. How could a car with no free will, no emotions, and no feelings possibly be programmed to respond promptly and correctly in a no-win situation? Put plainly, how on earth will it decide what to hit when a crash is imminent?
Whatever programming option the designers choose, each has its own drawbacks, so for now the questions outnumber the answers; a crude sketch of one such option, and its drawback, follows below. As with any other ethical problem, how the answer is reasoned out is ultimately of as much significance as the answer itself.
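To make the drawback concrete, here is a deliberately simple, entirely hypothetical sketch of one programming option: pure harm minimization. Every name in it (the Outcome class, the expected_harm scores, the choose_outcome function) is invented for illustration and does not reflect how any real autonomous-driving system works; the point is only that the numbers themselves encode an ethical judgment someone had to make in advance.

```python
# Hypothetical sketch: a car that always picks the lowest-harm outcome.
# The harm scores are made-up numbers; assigning them IS the ethical choice.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_harm: float  # an estimate of injury severity, decided by a human in advance

def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    # The machine's "decision" is just arithmetic over pre-assigned values.
    return min(outcomes, key=lambda o: o.expected_harm)

# A no-win scenario: every available option carries some harm.
crash_options = [
    Outcome("swerve left into the barrier", expected_harm=0.8),
    Outcome("brake hard and hit the vehicle ahead", expected_harm=0.6),
    Outcome("swerve right toward the motorcycle", expected_harm=0.9),
]

print(choose_outcome(crash_options).description)
# -> brake hard and hit the vehicle ahead
```

The drawback is visible immediately: the car did not weigh anything at the moment of the crash; it merely executed a ranking that programmers, or regulators, or insurers baked in long before. Change the scores and the "decision" changes, which is exactly why how the answer is reasoned matters as much as the answer.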