Software reportedly at fault in Uber’s fatal self-driving accident
On March 19, 49-year-old pedestrian Elaine Herzberg was struck by an Uber self-driving car in Tempe, AZ, USA, and later succumbed to her injuries. At the time of the crash, the specific cause of the accident was not known. Now, it is being reported that a software malfunction is to blame.
An article in The Information by Amir Efrati reports that, according to two anonymous sources who spoke with Efrati, Uber’s sensor suite did in fact detect Herzberg as she crossed the street with her bicycle. However, the software reportedly classified her as a “false positive” and did not stop the vehicle for her.
Following the crash, industry experts offered a range of reactions and commentary. One such expert was Matthew Johnson-Roberson, an engineering professor at the University of Michigan who works with Ford Motor Co. on autonomous vehicle research; he was quite confident that the issue lay with the autonomous vehicle’s software.
"The real challenge is you need to distinguish the difference between people and cars and bushes and paper bags and anything else that could be out in the road environment," he told Bloomberg. "The detection algorithms may have failed to detect the person or distinguish her from a bush."
Raj Rajkumar, a professor of electrical and computer engineering at Carnegie Mellon University who works on autonomous vehicles, offered a similar sentiment in the Bloomberg article, noting that any shortcoming would likely be ascribed to classification software "because it was an interesting combination of bicycle, bags and a pedestrian standing stationary on the median." Had the software recognized a pedestrian standing close to the road, he added, "it would have at least slammed on the brakes."
Software designers, wrote Ars Technica, face a basic tradeoff: If the software is programmed to be too cautious, the ride will be slow and jerky, as the car will often slow down for objects that pose no threat to the car or aren't there at all. If the software is tuned in the opposite direction, it will produce a smooth ride most of the time, but at the risk that the software will occasionally ignore a real object. This, according to Efrati, is what happened in Arizona this past March.
"There's a reason Uber would tune its system to be less cautious about objects around the car," Efrati wrote. "It is trying to develop a self-driving car that is comfortable to ride in. Uber had been racing to meet an end-of-year internal goal of allowing customers in the Phoenix area to ride in Uber’s autonomous Volvo vehicles with no safety driver sitting behind the wheel," he added.
This software glitch is an all-too-real reminder that fully autonomous self-driving cars are still a work in progress.
At the time of this writing, Uber had declined to comment to either The Information or Ars Technica, citing confidentiality requirements related to an ongoing investigation by the National Transportation Safety Board.
View the Ars Technica article.
View The Information article.
About the Author
James Carroll
Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.