What Challenges Must Automakers Handle Before Self-Driving Vehicles Are Viable?
For good reasons, regulators in California and the federal government have taken slow, incremental steps toward autonomous and self-driving vehicles. While cars and trucks that can drive themselves present many potential benefits, the technology must develop and mature before the promised safety benefits can be realized. If the vehicles are deployed before essential systems are fully developed, the result could be accidents, gridlock, and consumer pushback that sets the technology back by years or decades.
Automakers must meet an array of challenges before the vehicles are ready for deployment on a broad scale. Among these are regulatory concerns, user-interface (UI) concerns, algorithmic concerns, and communication concerns. Each is a piece of a puzzle that must be completed before these vehicles can be considered safe.
User Concerns in the Development of Autonomous Vehicles
Unfortunately, some automakers are already deploying semi-autonomous cars and trucks despite known rough edges in the technology. For instance, Tesla has recently rolled out an auto-drive mode and a “summoning” feature, which allow the car to operate with little or no human involvement. However, these features make an error common to new and existing technologies alike: they require the user to conform his or her behavior to the design of the machine. This takes a significant amount of research and self-education on the driver’s part. Unfortunately, most users want technology to “just work.” Thus, there are frequent and long-running online discussions where some users bemoan the lack of an intuitive interface while others chide them for failing to educate themselves, often with colorful acronyms such as RTFM (Read the ____-ing Manual) and other choice phrases.
However, if decades of experience with computers, smartphones, or even the classic blinking 12:00 on the venerable VCR is any indication, a large subset of users will never read the manual. Furthermore, anyone who has watched a less tech-literate friend or family member attempt to navigate a computer after a program or operating system (OS) upgrade knows that users generally expect things to work the same way they did previously. These are valid consumer expectations, and, in large part, most tech companies and products have satisfied them poorly.
Unfortunately, when it comes to self-driving cars and trucks, this same line of reasoning fails to hold water: a failure to anticipate a public that will not read the manual moves from creating annoyances to creating real safety hazards. Additionally, these systems must make users feel safe. A failure to build trust through hand-off procedures and other interface elements may lead users to forgo the system entirely, take control unnecessarily, or otherwise prevent the system from doing its job, perhaps interfering with its safe operation as user and system battle for control.
Federal Regulatory Concerns Self-Driving Cars Must Satisfy
In March 2013, the National Highway Traffic Safety Administration (NHTSA) published an official Preliminary Statement of Policy Concerning Automated Vehicles. The statement recognizes that self-driving vehicles represent a potential “historic turning point for automotive travel.” The majority of the statement focuses on the potential safety benefits presented by these vehicles. However, it also contemplates NHTSA’s role as the development and enforcement arm for Federal Motor Vehicle Safety Standards (FMVSS). Furthermore, the document sets forth standards by which autonomous vehicles are classified. These classification standards are:
- Level 0 – No-Automation – This level describes, essentially, the classic human-driven car. While the human retains full control over the vehicle, the classification does permit systems that provide warnings to the human operator. Forward collision warning is an example of a technology permitted under this classification.
- Level 1 – Function-specific Automation – This level of classification exists for vehicles that automate single functions independently. However, the human driver still maintains ultimate control over the vehicle. Systems permitted in this classification level would include dynamic braking or adaptive cruise control.
- Level 2 – Combined Function Automation – At this level, two or more primary vehicle functions are automated and synchronized. Furthermore, the driver and computer systems share responsibility for the safe operation of the vehicle. The driver may disengage from operating the vehicle, but the vehicle may also return control to the human without warning.
- Level 3 – Limited Self-Driving Automation – At this level, the driver may safely cede full control to the vehicle for all operation and safety-critical functions. The driver may be required to take back control from the vehicle at times, but the vehicle will provide ample warning.
- Level 4 – Full Self-Driving Automation – At this level, the vehicle is fully autonomous and responds to dangers and threats without user intervention. The only input required of the driver may be a destination.
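For readers who think in code, the five-level classification above can be sketched as a simple enumeration. The type name, comments, and helper function below are illustrative assumptions, not part of any NHTSA specification; the one property they capture is that Levels 0 through 2 demand constant human monitoring, while Levels 3 and 4 do not:

```python
from enum import IntEnum

class NhtsaLevel(IntEnum):
    """NHTSA (2013) automation levels, as summarized above (hypothetical model)."""
    NO_AUTOMATION = 0          # classic human-driven car; warning systems only
    FUNCTION_SPECIFIC = 1      # single functions automated independently
    COMBINED_FUNCTION = 2      # two or more primary functions automated together
    LIMITED_SELF_DRIVING = 3   # driver may cede control; ample warning before hand-back
    FULL_SELF_DRIVING = 4      # fully autonomous; driver may supply only a destination

def driver_must_monitor(level: NhtsaLevel) -> bool:
    """At Levels 0-2, the human shares responsibility and may receive
    control back without warning, so constant monitoring is required."""
    return level <= NhtsaLevel.COMBINED_FUNCTION

print(driver_must_monitor(NhtsaLevel.COMBINED_FUNCTION))    # True
print(driver_must_monitor(NhtsaLevel.LIMITED_SELF_DRIVING)) # False
```

The integer ordering makes the regulatory cutoff a single comparison, which is why California's driver-in-the-seat requirement (discussed below) effectively draws its line at Level 2.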
The document also addresses many of the concerns we have expressed above.
Recent Developments Regarding the Regulation of Self-Driving Cars
In January 2016, the California Department of Motor Vehicles issued and approved new regulations setting minimum requirements for self-driving vehicles used on public roadways in the state. The regulations were disappointing for autonomous vehicle evangelists; however, they set forth a common-sense approach while the vehicles and systems continue to mature from their incipient state. The requirement causing the most consternation at Google and other companies is that a licensed driver be in the vehicle at all times. The regulation states that:
The operator will be responsible for monitoring the safe operation of the vehicle at all times, and must be capable of taking over immediate control in the event of an autonomous technology failure or other emergency. In addition, operators will be responsible for all traffic violations that occur while operating the autonomous vehicle
Furthermore, vehicles must be equipped with steering wheels and other controls that allow a human driver to take manual control in case the autonomous technology fails or the vehicle is hacked. Thus, under the California regulations, only level 2 vehicles will be legal.
In addition, in response to Google’s request for clarification, NHTSA indicated that it may permit an autonomous system to be considered a driver. However, the agency rejected Google’s claim that its driverless vehicles complied with FMVSS regulations, including requirements for braking, steering wheels, and other manual controls. Rather, perhaps in recognition that the standards were designed for manually controlled vehicles, the agency stated that Google may apply for exemptions, though some exemptions would likely require compliance with federal rule-making procedures. However, an agency spokesman did state that “the burden remains on self-driving car manufacturers to prove that their vehicles meet rigorous federal safety standards.”
Algorithmic Concerns with Self-Driving Cars
Self-driving cars are powered by an array of powerful onboard computers. Without the underlying software and algorithms, however, these computers would do little more than take up space in the vehicle. Enormous strides have already been made: vehicles with lane-assist, emergency braking, automatic parallel parking, and other advanced features are evidence of this progress, as is Tesla’s auto-driving feature. But, at best, these vehicles are still level 2 autonomous vehicles; they still require a human operator and may abruptly return control to that operator. Despite the impressive hardware and software engineering behind Tesla’s feature, the automaker was forced to roll out software-imposed restrictions on the mode because drivers assumed the technology was more capable than it truly was.
Thus, there is certainly room for improvement in the software that controls these vehicles. While progress has been swift, there is a well-known aphorism in computer programming:
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time — Tom Cargill, Bell Labs
Only time will tell whether vehicle software can keep up its rapid pace of development.
Vehicle Communication Safety Concerns Presented by Self-Driving Cars
While we have addressed this topic in the past, vehicles that can communicate with each other, or “talk,” should provide important safety benefits. A vehicle that knows where every other relevant car or truck is, and that can use other vehicles’ sensor data to anticipate hazards, improve fuel efficiency, and achieve other goals, is a significantly more capable and useful vehicle. However, mesh networks, and networks of any kind, bring challenges of their own. Chief among them is the development of a communications standard so that all vehicles can communicate regardless of make or model. While TCP/IP plays this role in traditional electronic communications over the Internet, any standard developed for vehicle-to-vehicle (V2V) communication would undoubtedly require additional security features.
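As a minimal sketch of why V2V messages need security features beyond what plain TCP/IP provides, the snippet below authenticates a broadcast with an HMAC so a receiver can detect tampering. The field names and the single pre-shared key are illustrative assumptions only; a real V2V standard would rely on certificate-based digital signatures rather than a shared secret:

```python
import hashlib
import hmac
import json

# Hypothetical shared key for illustration; an actual V2V deployment would
# use per-vehicle certificates issued by a public-key infrastructure.
SHARED_KEY = b"demo-fleet-key"

def sign_v2v_message(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag so receivers can detect tampering."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_v2v_message(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_v2v_message({"vehicle_id": "A123", "speed_mph": 42, "braking": True})
print(verify_v2v_message(msg))   # True

msg["payload"]["speed_mph"] = 90  # an attacker alters the broadcast...
print(verify_v2v_message(msg))    # False: tampering is detected
```

Authentication of this kind only addresses message integrity; a full standard would also need to handle key distribution, replay protection, and driver privacy, which is precisely why V2V security remains an open challenge.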
In fact, one concern often raised at the federal and state levels is the threat presented by vehicle hackers. The recent California regulations raised concerns relating to cybersecurity and driver privacy. Security experts have already shown that it is possible to hack today’s Internet-connected vehicles, and automakers have often been less than prompt in patching security holes: in one case, GM took five years to patch a full-control hack of its OnStar system. Furthermore, there is currently no such thing as a hack-proof system, which is itself cause for concern.
Despite Challenges and Injury Risks, Autonomous Development Pushes Forward
Despite these challenges and risks, automakers and the federal government seem willing to push forward and take calculated risks with autonomous vehicles. However, consumer confusion and the failure to provide adequate warnings about the limits of these systems have already surfaced in the Tesla debacle. While no injuries or accidents resulted from that issue, the potential for death or serious bodily injury remains. That is not to say that we should not push forward, but automakers must more thoroughly consider consumer expectations as they roll out these systems. Consumers, in turn, should carefully weigh the benefits of autonomous features, as current law in California and other states will still hold the driver liable for mistakes in engaging the system, taking back control, and potentially even for bugs and glitches that could lead to a loss of vehicle control.
Contact a Car Accident Lawyer at The Reiff Law Firm
If you’ve been hurt due to an autonomous vehicle malfunction or defect, or in any other type of serious car or truck accident, contact an experienced personal injury lawyer at The Reiff Law Firm by calling (215) 709-6940. Over the decades, we have earned a reputation as strategic and aggressive litigators you can trust to handle your matter so that you can receive the compensation you need for your medical bills and recovery.