Autonomous driving system learns by watching other drivers
If human drivers have an advantage over autonomous driving systems, it is the ability to improvise. Informed by prior experience, a human driver can make quick decisions, even without training to handle a specific situation.
Autonomous driving systems, on the other hand, must be trained on data that account for as many variables as possible to achieve results of the same general quality as a human driver. This means generating tremendous amounts of training data. In a paper titled “Learning by Watching” (bit.ly/VSD-LBW), researchers at Boston University (Boston, MA, USA; www.bu.edu) suggest that algorithms carefully observing cars on the road can generate training data for autonomous driving systems.
The proposed Learning by Watching (LbW) model first calculates a bird’s-eye view (BEV) of the autonomous vehicle’s immediate environment by combining LiDAR and RGB data and tracking the 3D position of surrounding vehicles over time. This allows an algorithm to trace the movement of each car in range of observation relative to that car’s environment, i.e., to estimate why each observed vehicle moves the way it moves.
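The paper does not publish its BEV code, but the idea of rasterizing tracked vehicle positions into a top-down grid around the ego vehicle can be sketched as follows. This is a minimal illustration, not the authors' implementation; the grid size, cell resolution, and input format are assumptions.

```python
# Minimal sketch (not the paper's implementation): rasterize tracked
# vehicle positions, given in the ego vehicle's frame in meters, into
# a coarse bird's-eye-view occupancy grid centered on the ego vehicle.

def bev_occupancy(tracked_positions, grid_size=20, cell_m=2.0):
    """tracked_positions: list of (x, y) offsets from the ego vehicle,
    x forward, y lateral, in meters. Returns a grid_size x grid_size
    grid of 0/1 occupancy cells."""
    grid = [[0] * grid_size for _ in range(grid_size)]
    half = grid_size // 2
    for x, y in tracked_positions:
        col = half + int(x // cell_m)   # forward axis maps to columns
        row = half + int(y // cell_m)   # lateral axis maps to rows
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row][col] = 1
    return grid

# Two tracked vehicles: one 6 m ahead, one 4 m behind and 4 m to the side.
bev = bev_occupancy([(6.0, 0.0), (-4.0, -4.0)])
```

A full system would accumulate such grids over successive frames, which is what lets the model trace each observed vehicle's trajectory through its surroundings.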
“The algorithm then translates actions of surrounding vehicles to its own frame of reference to train itself how to drive better,” says Eshed Ohn-Bar, one of the researchers on the project.
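The frame-of-reference translation Ohn-Bar describes can be illustrated with a standard 2D rigid transform: a waypoint expressed in an observed vehicle's local frame is rotated by that vehicle's heading and translated by its position to land in the ego vehicle's frame. This is a generic geometry sketch, not the project's code; the pose format is an assumption.

```python
import math

# Illustrative sketch (not the authors' code): express a point observed
# in another vehicle's local frame in the ego vehicle's frame, given
# that vehicle's pose (position and heading) relative to the ego.

def to_ego_frame(point, other_pose):
    """point: (x, y) in the observed vehicle's local frame.
    other_pose: (x, y, heading_rad) of that vehicle in the ego frame.
    Returns the point expressed in the ego frame."""
    px, py = point
    ox, oy, th = other_pose
    # Rotate by the observed vehicle's heading, then translate.
    ex = ox + px * math.cos(th) - py * math.sin(th)
    ey = oy + px * math.sin(th) + py * math.cos(th)
    return ex, ey

# A vehicle 10 m ahead of the ego, heading 90 degrees to the left,
# plans to move 2 m forward in its own frame.
wx, wy = to_ego_frame((2.0, 0.0), (10.0, 0.0, math.pi / 2))
# Approximately (10.0, 2.0) in the ego frame.
```

Mapping observed maneuvers into its own frame this way is what lets the system treat other drivers' behavior as if it were its own demonstration data.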
The researchers also proposed a supervised learning approach tailored to safely use data generated by the LbW method. To test a model trained by the LbW method, researchers used the CARLA benchmark (carla.org). CARLA allows the user to create driving simulations, controlling variables such as different maps, types of sensors used, and weather conditions. It also includes a state-of-the-art autonomous driving model for comparisons.
“While models generally require many hours of training data to learn to drive, the LbW model was able to drive within minutes in a novel town or scenario,” says Ohn-Bar.
A series of experiments compared the CARLA model with the LbW model based on 10 minutes, 30 minutes, and one hour of training data for each model, using varied methods for processing training data generated by the LbW method. The researchers’ method showed improvements both in autonomous driving performance and in how efficiently it used the training data.
Future research will incorporate the sort of “noisy” data, i.e., data gathered from imperfect drivers with a wide variety of driving styles, that the LbW system might encounter in live conditions, rather than the simulated data used in these experiments.
Dennis Scimeca
Dennis Scimeca is a veteran technology journalist with expertise in interactive entertainment and virtual reality. At Vision Systems Design, Dennis covered machine vision and image processing with an eye toward leading-edge technologies and practical applications for making a better world. Currently, he is the senior editor for technology at IndustryWeek, a partner publication to Vision Systems Design.