Who is Responsible for an Accident in Which an Autonomous Car was Involved?
One day it is likely that children and grandchildren will ask their elders in amazement, “You mean that cars used to be able to go off-course, make mistakes, and collide?” The shift in mindset is apparent because the question presupposes that cars can independently follow an automatically generated path that prevents accidents and keeps motorists safe. Furthermore, their starting point is that the car itself, and not the driver, could have made the mistake. However, before we reach a quasi-utopian world of highway safety where technology has effectively eliminated the possibility of automobile crashes, there are bound to be some growing pains and stumbles.
Technological innovations in the automotive industry are already changing the focus of liability that was traditionally premised on aspects such as mechanical defects, physical design flaws, and human error. Today, the focus on vehicle defects is shifting to the failure of automotive software and computer hardware. In fact, one famous vehicle hacker now refers to many cars and trucks as nothing more than “rolling computers.” Autonomous cars and crash warning technology will undoubtedly change the way that lawyers, insurance companies, and manufacturers approach and pursue truck and car accident cases.
The Rise of Advanced Passive and Active Safety Systems
For the last few years, the availability of electronic safety systems that assist drivers and help avoid accidents has seemingly developed at near warp speed. However, these developments have been in the planning and prototype stages for years and, in reality, are a long time coming.
Autonomous and semi-autonomous vehicles evolved over time, beginning with anti-lock brakes and progressing in the late 1990s to electronic stability control (ESC). ESC grew from a feature first implemented in mid-1990s luxury vehicles into standard equipment on most vehicles today. ESC intervenes when it detects a possible loss of steering control, i.e., when the vehicle is not going where the driver is steering because it has entered a skid. The system reacts to and corrects skidding much faster and more effectively than the typical human driver. NHTSA concluded that ESC reduced crashes by 35% compared to vehicles without the technology and, in the case of sport utility vehicles (SUVs), by 67%.
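The intervention logic described above can be made concrete with a simplified sketch. This is not a production ESC controller; the bicycle-model estimate, wheelbase, and threshold below are illustrative assumptions, but they capture the core idea: compare the yaw rate the driver is requesting with the yaw rate the vehicle is actually experiencing, and brake an individual wheel when the gap is too large.

```python
import math

WHEELBASE_M = 2.7           # assumed wheelbase for the example
YAW_ERROR_THRESHOLD = 0.15  # rad/s; illustrative intervention threshold

def intended_yaw_rate(steering_angle_rad: float, speed_mps: float) -> float:
    """Bicycle-model estimate of the yaw rate the driver is asking for."""
    return speed_mps * math.tan(steering_angle_rad) / WHEELBASE_M

def esc_intervention(steering_angle_rad: float, speed_mps: float,
                     measured_yaw_rate: float) -> str:
    """Decide whether to brake an individual wheel to correct a skid."""
    error = measured_yaw_rate - intended_yaw_rate(steering_angle_rad, speed_mps)
    if abs(error) < YAW_ERROR_THRESHOLD:
        return "none"  # the vehicle is tracking the steering input
    # Oversteer: the car rotates faster than commanded, so brake toward
    # the outside of the turn; understeer is the opposite case.
    return "brake_outer_front" if error > 0 else "brake_inner_rear"
```

A real system runs this loop many times per second on dedicated hardware, which is why it outpaces human reactions.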
Drawing from the success of these technologies, most governments worldwide have adopted long-term safety goals that ultimately target zero vehicular fatalities and zero injuries. Since 90 to 95 percent of all motor vehicle incidents and crashes result from human error, eliminating human mistakes has the largest potential impact on reaching the target of zero casualties.
America is at a historic turning point for automotive travel and motor vehicles. Driver relationships with vehicles are likely to change significantly in the next ten to twenty years, perhaps more than they have in the last 100. Ford and GM have announced that they plan to have affordable and successful autonomous technology in their vehicles within the next five years. That is, by 2020 General Motors and Ford expect autonomous safety technology to be present in a broad array of vehicles ranging from luxury to economy models.
What is Considered an Autonomous Vehicle?
NHTSA is responsible for developing, setting, and enforcing Federal Motor Vehicle Safety Standards (FMVSS) and regulations for motor vehicles or motor vehicle equipment. On May 30, 2013, NHTSA announced a new policy concerning vehicle automation and stated that self-driving vehicles are those vehicles in which the operation of the vehicle occurs without direct driver input to control the steering, acceleration, and braking. They are designed such that the driver is not expected to constantly monitor the roadway while operating in self-driving mode.
In the fully automated vehicle, the driver would only provide destination inputs, and the vehicle’s safe operation would rely on automated systems. The technology for a fully autonomous vehicle currently exists. Google is testing its technology on a fleet of Toyota Prius, Audi, and Lexus vehicles. The Google vehicle navigates, detects traffic, and measures and analyzes its surroundings through the integrated use of radar sensors, laser range sensors, video cameras, global positioning systems, and maps. The Google car has been driven in excess of two million road miles without any major accident attributed to the automated car’s fault. Autonomous commercial trucks are also being tested on public highways, and the trucking industry is employing integrated and advanced safety systems such as “Bendix Wingman Fusion,” “OnGuard,” “OnGuard Active Collision,” and “Detroit Assurance.”
Furthermore, the technology does not exist only in prototype settings. A number of high-end vehicles, such as the Mercedes-Benz, already employ limited autonomous features. These vehicles coordinate advanced sensor systems integrated with various onboard computers that work together to actively keep the vehicle in its lane, warn of blind-spot obstacles, and alert the driver to pedestrians or objects ahead, applying the brakes as necessary. The Mercedes-Benz, like many other vehicles, can even park itself with an active parking assist feature. It also detects whether the driver is distracted and issues a warning when signs of distraction appear. This safety feature responds to studies showing that more than 35% of all automobile accidents are caused by distracted drivers.
Similarly, third-party aftermarket solutions already exist on the market. Cost-effective systems for cars, such as Mobileye, are available for roughly $700 to $1,000 installed. Mobileye operates by using a sophisticated vision algorithm and a single camera that scans the road, feeds what it sees to the algorithm, and warns drivers of potential hazards before a collision can occur. It tracks lane departures and blind spots and warns when a safe braking distance is not being maintained. The New York Taxi and Limousine Commission has stated that it intends to use Mobileye technology in a pilot program to alert drivers when they are too close to the vehicle in front of them, and approximately 29 manufacturers have formed relationships with Mobileye to use its accident avoidance and detection technology.
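The following-distance alert just described can be illustrated with a minimal sketch. To be clear, this is not Mobileye’s actual algorithm; the function names and the two-second threshold are assumptions chosen for illustration. The common approach is to convert a camera-derived range estimate into a time headway and warn when it drops below a threshold.

```python
def time_headway_s(gap_m: float, speed_mps: float) -> float:
    """Seconds until the car would cover the gap to the vehicle ahead."""
    if speed_mps <= 0:
        return float("inf")  # stopped or reversing: no closing risk modeled
    return gap_m / speed_mps

def headway_warning(gap_m: float, speed_mps: float,
                    threshold_s: float = 2.0) -> bool:
    """True when the driver is following too closely (illustrative threshold)."""
    return time_headway_s(gap_m, speed_mps) < threshold_s
```

For example, a 10-meter gap at highway speed (25 m/s) gives only 0.4 seconds of headway and would trigger a warning, while a 60-meter gap would not.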
Challenges Presented by the Shift to Autonomous and Assistive Vehicles
Currently, as long as human drivers are required to monitor computer systems and traffic situations, liability still rests fully or partially with drivers. However, for systems that do not require driver monitoring, as with a completely autonomous vehicle, the manufacturer will become responsible. The manufacturer is responsible for the period when the driver is not operating the vehicle unless, of course, an accident is caused by a third party or by the driver overriding the automatic systems. It is clear, however, that the driver-fault model will no longer accurately assign responsibility for the causes of an accident once autonomous systems take over.
There are a number of challenges in dealing with product liability issues for lawyers litigating accidents involving autonomous vehicles. These challenges include:
- How will it be possible for manufacturers to design for any foreseeable misuse of the autonomous technology?
- How will we be able to distinguish between foreseeable misuse and system abuse?
- How demanding will the requirements for instructions, warnings, redundancies, and the design of machine interfaces become?
- How will the hacking of autonomous systems be prevented? Cybersecurity, i.e., the risk that hostile external actors can penetrate the protections that manufacturers build into advanced autonomous systems, is a growing concern.
In fact, Wired Magazine published an article concerning the hacking of a Jeep by computer experts operating a laptop. Working remotely, they were able to seize control of the Jeep from across the country. They sent commands through the Jeep’s entertainment system to its dashboard functions, steering system, brakes, and transmission. As automakers rush to add wireless features to their cars, they open the door to cyber attacks, hackers, and viruses. Automobile manufacturers such as BMW and Tesla have sent over-the-air software patches to millions of vehicles after security holes were discovered in their software. But if you can patch a car through the Internet, you can most certainly hack a car through the Internet.
Many manufacturers and governments are concerned about the integrity and protection of their software systems. Automakers have recently signed an agreement to work together on cybersecurity issues, and members of Congress have introduced their own cybersecurity bill.
Software Glitches in Autonomous Cars Are Only One Cause for Concern and Manufacturer Liability
In the last year, we have seen millions of cars and trucks recalled due to vehicle software glitches, bugs, and vulnerabilities. Many cars currently on the market have semi-autonomous features such as adaptive braking or lane-assist systems paired with cruise control, which use radar or camera sensors to supply information that brakes the car or keeps it in its lane when an object or obstacle is detected in the vehicle’s path. Unfortunately, these mitigation systems have at times failed by misinterpreting certain roadside objects or obstacles. When the systems that drivers rely upon fail, the manufacturer bears accountability for the failure and the resulting accident.
Even where systems do not fail, consider the many legal and ethical issues that will arise when a totally autonomous vehicle is confronted with a situation where a collision is not avoidable. The autonomous vehicle abides by the rules of its computer’s algorithms, and those algorithms will, for the foreseeable future, still be programmed by humans. How does the programmer decide who lives and who dies? Who suffers catastrophic injuries in a violent wreck, and who escapes? Should the criteria be the ages of the vehicle occupants? Projections about the severity of the injuries likely to be sustained? The makes and models of the other vehicles involved? These are all open questions with no clear answers.
We are just beginning to scratch the surface of the complex legal issues surrounding how computers in autonomous vehicles should think and act, and we will continue to explore this topic. As a renowned consultant in the computer engineering field stated: “It is safe to assume that software is unsafe until you accumulate enough evidence that you can demonstrate that it is safe.” As we move closer to fully autonomous or fully automated vehicles, more of the decision-making must be made by crash-optimization algorithms created by humans. Information received from various sensors, lasers, cameras, lidar systems, and radar must be processed by an algorithm to make split-second decisions. This is sure to raise compelling legal and ethical issues.
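To make the article’s point concrete, here is a deliberately simplified thought experiment, not any manufacturer’s actual logic: every number and rule below is a value judgment that a human programmer would have had to encode in advance, which is precisely where the legal and ethical questions arise.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    predicted_injury_severity: float  # 0 (no injury) .. 1 (fatal); an estimate
    occupants_at_risk: int

def expected_harm(m: Maneuver) -> float:
    # Even multiplying severity by head count is an ethical choice made
    # by a programmer, not a neutral calculation.
    return m.predicted_injury_severity * m.occupants_at_risk

def least_harm(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver the encoded values deem least harmful."""
    return min(options, key=expected_harm)
```

Whoever writes `expected_harm` has effectively decided, in advance, whose injuries count for how much; that is the open question the section raises.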