
“Knowing What I Know about Computer Vision, I Wouldn’t Take My Hands off the Steering Wheel.”

Many technology evangelists have flocked to cutting-edge vehicles like the Tesla Model S and Model X equipped with Autopilot. The Autopilot feature is marketed on Tesla’s website as allowing a vehicle to:

steer within a lane, change lanes with the simple tap of a turn signal, and manage speed by using active, traffic-aware cruise control. Digital control of motors, brakes, and steering helps avoid collisions from the front and sides, and prevents the car from wandering off the road. Autopilot also enables your car to scan for a parking space and parallel park on command. And our new Summon feature lets you “call” your car from your phone so it can come greet you at the front door in the morning.

Essentially, the Autopilot feature is marketed as able to control the operation of a vehicle with little or no input from the driver. Competing systems have been developed by other luxury and mainstream auto manufacturers: Volvo, for instance, has launched and marketed a “Pilot Assist” feature, while BMW and Mercedes-Benz have introduced semi-autonomous systems of their own.

However, a recent article in the New York Times raises the possibility that certain aspects of self-driving and semi-autonomous technology may not be ready for broad deployment. In particular, the article focuses on shortcomings in digital vision systems. Perhaps most pointedly, one of the scientists interviewed, Jitendra Malik, a computer vision researcher for three decades, states that “Knowing what I know about computer vision, I wouldn’t take my hands off the steering wheel.”

Were you injured in a vehicle accident in Pennsylvania? Contact a Philadelphia car accident attorney today at (215) 709-6940.

Following Model S Autopilot Crash, Tesla Modified System Functionality

Dr. Malik was likely referring to the fatal crash that took the life of a 40-year-old Ohio entrepreneur. The man was killed when he failed to keep his focus and attention on the road while using the Tesla Autopilot system. Apparently, the cameras used by the Autopilot system were unable to distinguish the broadside of a commercial truck crossing an intersection from the sky behind it. Because the vehicle’s cameras did not “see” the truck, the car continued into the intersection, where the fatal crash occurred.


Tesla subsequently modified Autopilot, with the changes slated to roll out in mid-September 2016. The updates reduce the amount of time drivers can keep their hands off the wheel and enhance the vehicle’s radar capabilities. Tesla CEO Elon Musk indicated that he believed these changes would have prevented the fatal crash that occurred in May.

Digital Camera Systems Require Additional Data to Improve Recognition Accuracy

Essentially, this accident shows that work remains to be done on the accuracy and efficacy of these systems. There is an important distinction to draw here, however: the crash does not mean the technology is a failure. Rather, it means the underlying software and data are still maturing. Even a well-designed algorithm is limited by the data the system has to work with.

Thus, one of the efforts to improve the efficacy of computer vision systems involves feeding millions of images into them to train their algorithms. ImageNet, a collaborative research project led by scientists at Stanford, has sorted, categorized, and labeled more than 14 million images in some 22,000 categories. To train for variation among similar objects, the database contains numerous representations of the same kind of object; for instance, it is stocked with more than 60,000 images of cats.

Visual Systems Still Need to Learn How to Process Images Contextually

Another area where more development is needed is the contextual analysis of the scenes and images detected by digital cameras. One example provided by researchers involves a digital camera that “witnesses” a dinner party. The article sums up the problem of context as follows:

A person carrying a platter will serve food. A woman raising a fork will stab the lettuce on her plate and put it in her mouth. A water glass teetering on the edge of the table is about to fall, spilling its contents. Predicting what happens next and understanding the physics of everyday life are inherent in human visual intelligence, but beyond the reach of current deep learning technology.

While certain advancements have been made in recent years, we are unfortunately still years away from systems that can approximate a human’s ability to contextualize, according to Dr. Farhadi, a computer scientist at the University of Washington and a researcher at the Allen Institute for Artificial Intelligence. As Dr. Farhadi puts it, “…we’re still very, very far from visual intelligence, understanding scenes and actions the way humans do.”

Recommendations before Autonomous Cars Are Mainstreamed

The scientists interviewed identify a number of technological advances and safety improvements they believe are necessary to prevent injury or death before self-driving cars go mainstream. Gains in the efficacy of radar and LIDAR are still required, researchers say, and the resolution of current digital maps leaves something to be desired. Furthermore, the scientists state that autonomous vehicles and their deep learning algorithms should be subjected to millions of additional miles of driving before they are sold to consumers.


The scientists do note that the pace of advancement has been extremely rapid, and they stress that a system perfectly replicating human vision is not necessary for the development of a “safe” self-driving car. However, where the line is drawn between a “perfect” system and one that is merely “good enough” will likely determine vehicle accident and safety outcomes.

Our Philadelphia Car Accident Lawyers Can Help

With an array of sources predicting widespread adoption of these vehicles sometime between 2018 and 2021, significant work remains if potentially serious accidents caused by technological glitches are to be avoided. Some have posited, however, that we live in an “exponential age” in which increases in computing power will usher in an array of technologies – including autonomous cars – more rapidly than most people would imagine. Whether the auto industry will put first-to-market status, and the profits that follow, ahead of the safety of motorists throughout the nation will be revealed over the coming years. If you were hurt in an accident involving an autonomous vehicle, call the Philadelphia car accident lawyers of The Reiff Law Firm at (215) 709-6940.
