Back in March, I wrote about ADAS (advanced driver assistance systems), which I noted was "quietly but quite rapidly becoming a huge technology success story." This week, I'm revisiting the topic, along with the related subject of full vehicle autonomy, by showcasing several presentations from the recent Embedded Vision Summit.
First off, I'd like to encourage you to take a look at the talk given by Marco Jacobs of videantis. Jacobs, in his presentation "Computer Vision in Cars: Status, Challenges, and Trends," gives a thorough overview of the state of ADAS today, along with a preview of the increasingly autonomous future of vehicles. He highlights technology trends, challenges, and lessons learned, with a focus on the crucial role that computer vision plays in these systems. Here's a preview:
Next up is Tom Wilson of NXP Semiconductors, with the presentation "Sensing Technologies for the Autonomous Vehicle." My March column noted that although computer vision-centric ADAS approaches are increasingly common, higher-end ADAS implementations combine camera data with that from other sensor technologies such as LIDAR and radar, since conventional image sensors aren't optimal after dark or in inclement weather, for example. Wilson's talk compares vision-based sensing with complementary sensing technologies, explores key trends in sensors for autonomous vehicles, and analyzes challenges and opportunities in fusing the output of multiple sensor technologies to enable robust perception and mapping. Here's a preview:
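To make the fusion rationale concrete, here's a minimal, illustrative sketch (my own, not taken from Wilson's talk) of one simple fusion approach: inverse-variance weighting of two noisy range estimates, so that the less certain sensor, such as a camera after dark, contributes less to the combined result. The names here (RangeEstimate, fuse_range_estimates) are hypothetical, and production ADAS stacks use far more sophisticated filtering over calibrated, time-synchronized sensor streams:

```python
from dataclasses import dataclass


@dataclass
class RangeEstimate:
    """A single sensor's distance estimate plus its uncertainty (hypothetical type)."""
    distance_m: float  # estimated distance to the object, in meters
    variance: float    # uncertainty; grows for cameras after dark or in bad weather


def fuse_range_estimates(camera: RangeEstimate, radar: RangeEstimate) -> float:
    """Fuse two noisy estimates by inverse-variance weighting, so the
    less certain sensor contributes less to the combined result."""
    w_cam = 1.0 / camera.variance
    w_rad = 1.0 / radar.variance
    return (w_cam * camera.distance_m + w_rad * radar.distance_m) / (w_cam + w_rad)


if __name__ == "__main__":
    # At night, the camera reading is noisy (high variance); radar is largely unaffected.
    camera = RangeEstimate(distance_m=42.0, variance=9.0)
    radar = RangeEstimate(distance_m=38.5, variance=1.0)
    print(f"Fused range: {fuse_range_estimates(camera, radar):.1f} m")  # ~38.9 m, leaning toward radar
```

This weighting is essentially the one-dimensional core of a Kalman filter's update step; real systems extend the idea across time, multiple dimensions, and many tracked objects.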
Finally, I recommend viewing the tutorial "Making Existing Cars Smart Via Embedded Vision and Deep Learning," delivered by Stefan Heck, CEO and co-founder of NAUTO. The NAUTO system provides an easy way to upgrade any vehicle with ADAS capabilities. NAUTO's product contains video cameras and other sensors that capture data inside and outside the car, both warning the driver of dangers in real time and (in the near future) networking with other vehicles to provide per-lane traffic information, a view of available street parking spots, and warnings about upcoming road hazards. In his presentation, Heck explains how his company uses deep learning, computer vision, and embedded processors to deliver these capabilities. Here's a preview:
I also encourage you to check out NXP Semiconductors' and videantis' Embedded Vision Summit demonstration videos, along with many others, on the Alliance's YouTube channel. As you'll see from the demos, Marco Jacobs' company, videantis, is a vision processor core developer with a particular focus on ADAS applications. Similarly, Tom Wilson originally worked for CogniVue, another ADAS-focused vision processor IP supplier. As I mentioned last month, CogniVue was acquired last September by Freescale, which subsequently merged with NXP Semiconductors; NXP's demo video showcases the resultant ADAS synergy. Last month, I also suggested that "this isn't the first time that two or more Embedded Vision Alliance member companies have joined up via partnership and/or acquisition arrangements, and it assuredly won't be the last."
Shortly thereafter, in fact, Alliance member Intel announced that it had acquired fellow member Itseez, a well-known developer of OpenCV-based and other computer vision algorithms and implementations for embedded and specialized hardware; Intel cited automotive applications as one key rationale for the acquisition, along with Internet of Things (IoT) opportunities. After all, as I also mentioned last month, "one key benefit of Alliance membership is the connections each member company acquires with ecosystem partners, along with increased visibility among vision system and application developers, and the insights obtained into market research, technology trends, and customer requirements." For more information on Embedded Vision Alliance membership, please email [email protected].
Regards,
Brian Dipert
Editor-in-Chief, Embedded Vision Alliance
[email protected]