Held by the New England Chapter of the Association for Unmanned Vehicle Systems International (AUVSI) from June 6-9, ROBOTICA 2016 brought together experts in unmanned and robotic systems to discuss the latest technologies and trends in the industry.
On Wednesday, June 8, I had the opportunity to attend ROBOTICA, which featured industry talks, workshops, and live demonstrations in the exhibitor tent. I arrived shortly after John Leonard of MIT, who also serves as Area Lead for Autonomous Driving at the Toyota Research Institute, began his keynote speech that morning. In this article, I will recap that speech, along with the other sessions I attended.
Challenges and opportunities in autonomous driving
During Leonard’s keynote, as the title would suggest, he covered the potential benefits, challenges, and opportunities of self-driving vehicles and robot perception and navigation.
Potential benefits of self-driving vehicles included:
- Safety
- Efficiency
- Recovery of time lost due to commuting
- Reduced need for parking
- New models for personal mobility
Challenges in robot perception and navigation include the need to push robots toward human-level perception, and the need for better inference, an area where Leonard said we are hungry for data. In terms of self-driving cars, Leonard suggested that we might be “further away than we think,” and offered several reasons why that might be the case. Perhaps the most persuasive was a video he showed of situations that remain difficult for self-driving cars, including bad weather, crossing guards and police directing traffic, left turns across traffic, and changes to road surface markings.
Such examples highlight the need for better sensors and algorithms, he suggested. Additionally, he noted that "having a network and connectivity between vehicles could be a game changer."
How can this be achieved? Leonard walked the audience through a number of opportunities for improving the technology behind self-driving cars and robot perception. He first mentioned using NVIDIA GPUs for dense 3D mapping, citing a project called Kintinuous, an extension of KinectFusion that creates accurate models and point clouds. It achieves this through real-time dense loop closure using mesh deformation, an algorithm that quickly builds accurate models, which Leonard said would be useful for tasks like obstacle avoidance and navigation.
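For readers curious what that kind of dense fusion actually does, here is a minimal, illustrative sketch of the volumetric depth-fusion step at the heart of KinectFusion-style mapping (Kintinuous extends this idea with real-time loop closure via mesh deformation, and real systems run the per-voxel update on the GPU). This is not code from Kintinuous; the function name, grid sizes, and parameters below are assumptions for illustration only.

```python
import numpy as np

VOXEL_SIZE = 0.01   # 1 cm voxels (assumed)
TRUNC = 0.04        # truncation distance for the signed distance (assumed)

def integrate_depth(tsdf, weights, origin, depth, K, cam_pose):
    """Fuse one depth frame into a truncated signed distance field (TSDF).

    tsdf, weights -- (X, Y, Z) float voxel grids (assumed contiguous)
    origin        -- world coordinates of voxel (0, 0, 0)
    depth         -- (H, W) depth image in meters
    K             -- 3x3 camera intrinsic matrix
    cam_pose      -- 4x4 camera-to-world transform from the tracker
    """
    h, w = depth.shape
    # World coordinates of every voxel center.
    ix, iy, iz = np.indices(tsdf.shape)
    pts = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3) * VOXEL_SIZE + origin
    # Move voxel centers into the camera frame.
    world_to_cam = np.linalg.inv(cam_pose)
    pts_cam = pts @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)  # avoid dividing by zero
    # Project voxel centers into the depth image.
    u = np.round(K[0, 0] * pts_cam[:, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_cam[:, 1] / z_safe + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Signed distance along the viewing ray, truncated to [-1, 1].
    sdf = (d - z) / TRUNC
    update = valid & (d > 0) & (sdf >= -1.0)
    sdf = np.clip(sdf, -1.0, 1.0)
    # Running weighted average -- the standard KinectFusion update rule.
    t, wgt = tsdf.reshape(-1), weights.reshape(-1)
    t[update] = (t[update] * wgt[update] + sdf[update]) / (wgt[update] + 1.0)
    wgt[update] += 1.0
```

In a full pipeline this update runs once per frame, and a surface mesh is extracted from the fused volume (for example with marching cubes) for the obstacle avoidance and navigation uses Leonard mentioned.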
Next, he discussed why understanding the world in terms of objects is important.
"Vision is the process of discovering from images what is present in the world and where it is," he said, quoting David Marr.
Leonard then followed up: "We will need an object-based understanding of the environment that facilitates life-long learning," he said. "Let’s build rich representations that leverage knowledge of location to better understand objects, and concurrently use information about objects to better understand location."
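To make that "objects help location, location helps objects" idea concrete, here is a toy sketch of the mutual reinforcement, not Leonard's actual system: a discrete belief over places is updated by detected objects, and the sharpened place belief in turn supplies a context prior over which objects to expect. The places, objects, and co-occurrence probabilities are all invented for illustration.

```python
import numpy as np

places = ["kitchen", "office", "garage"]
objects = ["mug", "monitor", "wrench"]

# P(object visible | place) -- an assumed co-occurrence model.
p_obj_given_place = np.array([
    [0.80, 0.10, 0.10],   # kitchen
    [0.50, 0.90, 0.05],   # office
    [0.20, 0.10, 0.90],   # garage
])

belief = np.full(len(places), 1.0 / len(places))  # uniform place prior

def observe_object(belief, obj_idx):
    """Bayes update of the place belief from one detected object."""
    posterior = belief * p_obj_given_place[:, obj_idx]
    return posterior / posterior.sum()

def expected_objects(belief):
    """Object likelihoods given where we think we are, usable as a
    context prior to disambiguate a weak detection."""
    return belief @ p_obj_given_place

belief = observe_object(belief, objects.index("monitor"))
print(dict(zip(places, belief.round(3))))   # seeing a monitor -> probably the office
print(dict(zip(objects, expected_objects(belief).round(3))))  # so expect monitors and mugs, not wrenches
```

Rich object-based maps in the sense Leonard described would carry this coupling much further, but even this toy loop shows how each estimate can sharpen the other.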
Before concluding, he noted how important it is to educate the world about autonomy, how it works, and the varying levels at which the technology can operate.
"My dream is to achieve persistent autonomy and lifelong map learning in highly dynamic environments," he said, during his conclusion. "Can we robustly integrate mapping and localization with real-time planning and control?"
He added, "It is an exciting time to work in mobile sensing."