As we have covered in these pages, the roadways of the future will likely be populated with more and more autonomous vehicles. Autonomy, however, is a question of degree. The Society of Automotive Engineers has defined six levels of driving automation, ranging from Level 0, where the driver must constantly supervise any support features, to Level 5, where the person in the driver’s seat will never be required to take over driving.
Under Level 5 automation, the human in the driver’s seat is never required to intervene: autonomous features are always in effect, and the vehicle can drive itself under all conditions and circumstances.
Level 5 automation has yet to appear on our roads, but many commentators believe that such vehicles will one day be a common sight. The hope is that widespread use of self-driving cars will have a profound impact on society. At present, human error, such as inattention, distraction, or misjudgment, is a major cause of car collisions. While these mistakes are, for the most part, preventable, they continue to happen on roads across the nation, causing serious injuries to innocent motorists. Fully autonomous “self-driving” cars represent an opportunity to reduce the number of crashes, and corresponding injuries, on our roads.
Widespread adoption of fully autonomous vehicles will also raise difficult ethical questions. Self-driving cars present vexing questions not only of technology and engineering, but also philosophical dilemmas that autonomous vehicle programmers will have to grapple with in the laboratory.
A recent article posted on the website of the Stanford University Institute for Human-Centered Artificial Intelligence explores some of these issues, including how autonomous vehicles should handle an “unavoidable collision.” The researchers illustrate the ethical questions soon to be raised by autonomous vehicles through the classic philosophical dilemma known as the “trolley problem,” which asks whether someone should pull a lever to redirect a runaway trolley so that it kills one person instead of five. On one hand, pulling the lever would save the lives of five people at the expense of one, a net savings of four lives. On the other hand, but for the pulling of the lever, that one person would have lived.
Human drivers often face emergency situations that call for instinctual, reflexive reactions, reactions that do not always qualify as conscious, deliberate choices. Autonomous vehicles, by contrast, can be preprogrammed to react to certain stimuli and make certain decisions based on what’s on the road ahead. Ethics matter for autonomous cars because those preprogrammed decisions can affect roadway safety.
How will autonomous vehicle programmers account for the roadway equivalent of a “trolley problem”?
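To make the dilemma concrete, here is a minimal, purely hypothetical sketch of how one such rule might be written down. Everything in it, the Maneuver class, the casualty estimates, and the choose_maneuver function, is invented for illustration and reflects no real manufacturer’s software; it simply encodes a utilitarian “minimize expected casualties” policy in Python:

```python
# Hypothetical illustration only: a toy, rule-based policy for an
# "unavoidable collision," echoing the trolley problem's trade-off.
# Nothing here reflects any real autonomous vehicle's software.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_casualties: int  # invented estimate from perception/prediction


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # A utilitarian rule: pick the maneuver with the fewest expected
    # casualties. This is only one possible ethical policy; a programmer
    # could instead encode, say, "never actively redirect harm."
    return min(options, key=lambda m: m.expected_casualties)


if __name__ == "__main__":
    # Toy numbers standing in for the trolley problem's five-versus-one choice.
    options = [
        Maneuver("stay on course", expected_casualties=5),
        Maneuver("swerve", expected_casualties=1),
    ]
    print(choose_maneuver(options).name)  # prints "swerve"
```

Even this toy example shows why the choice of rule is an ethical decision, not merely an engineering one: swapping in a different rule changes who bears the harm.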
The attorneys at William Mattar, P.C. have been monitoring advances in motor vehicle technology. While today’s vehicles are equipped with assistive technologies, such as forward collision, lane departure, rear cross traffic, and blind spot warnings, these technologies merely assist the human driver, who remains ultimately in control of, and responsible for, the vehicle’s maneuvering, including any injuries to pedestrians and other motorists.
If you were hurt by a vehicle with assistive technology, please do not hesitate to contact our attorneys at (844) 444-4444. It is no excuse that a certain assistive technology was not sufficiently assistive: the driver remains at the helm and must operate the vehicle in a reasonably safe manner.