
Autonomous Vehicle Crash Causes Uber to Suspend Self-Driving Car Testing

Automakers, insurance companies, and even federal agencies seem to believe that autonomous cars and trucks are the best path toward a future free of car accidents. Judging by their investments in self-driving vehicles and intensive testing programs, ride-sharing companies such as Uber apparently share that view. While one might disagree about Uber's proximate motivation, few would dispute that reducing accidents caused by human error would save the company money in both the short and the long run.

However, the path to fully autonomous, self-driving vehicles was never going to be free of obstacles and bumps along the way. Unfortunately, a recent accident involving a self-driving Uber vehicle has resulted in a temporary halt to the testing program. The incident illustrates not only the dangers of still-fledgling autonomous systems but also the challenges that automakers and regulators must overcome before the promised benefits of self-driving vehicles can become a reality.

What We Know About the Self-Driving Car Crash

Despite initial reports that sounded the alarm about a problem with the autonomous vehicle, it now appears that the cause of the accident was a far less novel factor: human error. According to a police report first obtained by Business Insider, the self-driving car was not at fault in the accident. Still, the incident has raised questions about how humans and computers will interact on the roads. But first, the facts of the incident.

According to the police report, Uber's self-driving car was traveling at approximately 38 miles per hour, just below the posted 40 mph speed limit, as it approached and passed through a yellow light at an intersection.

As the Uber vehicle approached the intersection, a Honda CR-V approached from the opposite direction. The driver of the CR-V told police that she was attempting to make a left turn when the accident occurred.

To complete the left turn at this intersection, the CR-V had to cross three lanes of traffic. In the police report, the driver stated, “As far as I could tell, the third lane had no one coming in it so I was clear to make my turn.” However, partway through the turn, “Right as I got to the middle lane about to cross the third I saw a car flying through the intersection, but couldn’t brake fast enough to completely avoid the collision.”

Witness to Accident Raises Questions As to Whether Accident Would Have Occurred with Human Drivers

Despite the characterization of the accident in the police report, one witness questions whether the police findings were accurate or reasonable under the circumstances. In any event, the witness's statements raise new questions about how human drivers and computer drivers will interact on roads and highways.

According to the eyewitness, “We saw the [Honda] car, it was coming fine on her end, but the other person [the Uber] just wanted to beat the light and kept going. All I want to say is it was good on the end of the [Honda] driving toward us, it was the other driver’s fault [Uber] for trying to beat the light and hitting the gas so hard.”

While the above characterization is certainly open to interpretation, Uber maintains that it is not possible for the vehicle to have accelerated at the light. According to the company, its vehicles are programmed to continue through a yellow light at their current speed when it is possible to do so. In other words, Uber's self-driving cars will not speed up to beat the light.

However, as nearly any experienced driver would note, while this method of driving is technically correct and legal, it isn't how human drivers actually operate. Humans typically estimate, or "ballpark," the timing of events on the road. By contrast, a purpose-built computer algorithm can calculate vehicle timing and positioning to a much finer degree of accuracy.

Thus, when a human driver approaches a yellow light, he or she is likely either to speed up to beat the light or to slow down and stop. Except where it is clear that the driver will make the light, it is highly unusual for a human driver to hold a constant speed through a yellow. By contrast, the computer and sensors in a self-driving car can use factors such as the speed of travel, the expected traffic signal duration, and the distance to the intersection to compute exact positioning. As such, a computer-driven vehicle programmed in the same fashion as Uber's will either maintain its speed or stop at the intersection.
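To make that contrast concrete, here is a minimal sketch of the kind of stop-or-go decision described above. It is purely illustrative and is not Uber's actual control logic; the function name, the inputs, and the 3 m/s² "comfortable braking" threshold are all assumptions made for the example.

```python
# Illustrative sketch only -- not Uber's real code. Assumes the vehicle
# knows its speed, its distance to the intersection, and how long the
# yellow light will last. Encodes the rule described above: clear the
# light at current speed when possible, otherwise stop; never accelerate.

def yellow_light_decision(speed_mps: float,
                          distance_to_intersection_m: float,
                          yellow_remaining_s: float,
                          comfortable_decel_mps2: float = 3.0) -> str:
    """Return 'maintain' or 'stop' for a vehicle approaching a yellow light."""
    # Time needed to reach and clear the intersection at the current speed.
    time_to_clear = distance_to_intersection_m / speed_mps

    # Distance needed to brake to a stop at a comfortable deceleration.
    stopping_distance = speed_mps ** 2 / (2 * comfortable_decel_mps2)

    if time_to_clear <= yellow_remaining_s:
        return "maintain"   # can legally clear the light at current speed
    if stopping_distance <= distance_to_intersection_m:
        return "stop"       # enough room to brake before the stop line
    return "maintain"       # too close to stop safely: proceed, never speed up

# Example with numbers roughly like those reported: 38 mph is about 17 m/s.
print(yellow_light_decision(speed_mps=17.0,
                            distance_to_intersection_m=30.0,
                            yellow_remaining_s=2.5))   # -> "maintain"
```

Note that nothing in this logic ever returns an "accelerate" decision, which is the basis for Uber's claim that its car could not have sped up to beat the light.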


This accident raises real questions about interactions between human-driven and computer-driven cars and trucks. Can human and computerized vehicle operators speak the same language? Autonomous systems still seem to have trouble with the often informal courtesies and practices that human drivers follow. Meanwhile, humans seem to have difficulty adjusting to the "by the book" driving that autonomous vehicles are programmed to perform.

Some See Shades of Earlier Self-Driving Tesla Crash in Latest Incident

Aside from the issue raised above, other commentators have questioned the inherent safety of Level 3 automated driving systems. A Level 3 system is a conditionally automated vehicle: it comes equipped with a fully autonomous mode, but there is an expectation that the driver will stay alert and intervene appropriately when the system cannot handle a challenge.

Here, a human safety driver was present in the Uber. However, he states that he simply did not have time to take control of the vehicle and avoid the collision. Unlike the Tesla incident, where the driver had approximately 7 seconds to intervene but did not, the timing of these events was much tighter.

While it isn’t clear exactly how much time the Uber “safety” driver had to react, the incident raises additional questions about whether a human operator can respond appropriately to a sudden hazard on the road. In the Tesla incident, the driver appears to have tuned out due to overreliance on the system. But when drivers must act in a split second, it isn’t clear that they can orient themselves to conditions quickly enough. Furthermore, at least some drivers may delay action because they still expect the automated system to take evasive measures. By the time the driver realizes that the system has handed off control, it may already be too late.

 

