Are Self-Driving Cars Ethically Imperative? Can Technology Be Trusted?

Ethics is about sharing the world we live in.
It is about making the world you live in safer and more habitable for all generations. In a world so fixated on safety, the very thought of riding in a driverless car seems to tamper with the safe world we aspire to maintain.
Teams around the world designing self-driving cars keep hitting a dead end at the same point: building a decision-making system that acts within the bounds of ethics. The line is drawn at the tough questions about how these cars will avoid and reduce collisions on the road. While experts argue that self-driving cars could be much safer than human-driven ones, there is still much thought to be put in.
When we live in a world that is so ethically aware, we also live in a world of mutual trust. When you take the driver's seat, you assume the responsibility to consider the potential implications of your actions on everyone who may be affected by them. Simply put, driving gives you control not just of the vehicle you drive, but also of the significant consequences that flow from it. These do not have to be negative: driving your grandma for a visit, or helping a friend move, are positive consequences of your ability to drive. However, indecisive turns, abrupt lane changes, and the like pose serious risks to other people. Not everyone understands the stakes of irresponsible behaviour behind the wheel, and recklessness, to say the least, puts the lives of others in imminent danger.
It is therefore easy to deduce that not every person who sits behind the wheel operates within ethical logic. This is part of what appeals most about self-driving cars: they promise to lift the responsibility of driving off human shoulders. Properly programmed, a self-driving car can take complete control, respond effectively to the information it receives, and use logical computing algorithms to carry out the appropriate action. Harm to humans, property, and animals due to driver error would plummet.
You would then think, “Wow! The idea seems quite plausible. It should roll out right away.” Nevertheless, I am afraid there is a slight hitch in that plan.
One of the most important skills in driving is the ability to predict the other drivers on the road and react accordingly. In a system that depends on many mechanisms working in coordination, knowing and predicting your fellow drivers is quite important. Careful engineering may give self-driving cars a reasonable model of the likely behaviour of human drivers; however, such a model will be limited in the range of scenarios it can cover.
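To make this concrete, here is a deliberately simple sketch of what "a model of the likely behaviour of human drivers" might look like at its most basic: assume a nearby vehicle keeps its current speed and heading, extrapolate its position a few seconds ahead, and check whether the predicted positions come dangerously close. All function names and numbers here are hypothetical illustrations, not drawn from any real autonomous-vehicle system.

```python
# Toy illustration (not a real AV system): constant-velocity prediction
# of another vehicle's position, plus a crude proximity check.

def predict_position(x, y, vx, vy, horizon_s):
    """Linearly extrapolate a vehicle's (x, y) position horizon_s seconds ahead."""
    return (x + vx * horizon_s, y + vy * horizon_s)

def too_close(p1, p2, safe_gap_m=10.0):
    """Flag a predicted conflict if two positions fall within a safety gap."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    return (dx * dx + dy * dy) ** 0.5 < safe_gap_m

# Our car and a neighbouring car, both heading roughly the same way
# (positions in metres, speeds in metres per second).
ours = predict_position(0.0, 0.0, 20.0, 0.0, 2.0)      # → (40.0, 0.0)
theirs = predict_position(35.0, 2.0, 15.0, -1.0, 2.0)  # → (65.0, 0.0)

print(too_close(ours, theirs))  # 25 m apart → False, no predicted conflict
```

The limitation described above falls directly out of the assumption: the moment the human driver brakes, swerves, or hesitates, the constant-velocity prediction is wrong, which is exactly the kind of scenario such a simple model cannot cover.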
The teams designing the guidance systems for self-driving cars are working hard to build in better responses to situations that human drivers often get wrong. However, it is worth asking whether cars that display different (“better”) responses to these situations could also be changing the situation for human drivers in a way that makes their own decision-making harder.
In weighing these considerations, we may decide that an embrace of self-driving cars is the right choice for us to make. Deciding to trust the ethical judgements built into our self-driving cars — to give the self-driving mechanisms control over how to respond to tricky situations — is itself an ethical choice. How that makes us feel as ethical agents may be at least as important to us as how the car feels driving down the freeway.
The Wall Street Journal shares its take on self-driving, or driverless, cars.
