By Jeff Bier
If you told me that you could do deep-learning image classification in a few seconds on a $5 microcontroller+camera board, and that you could do so without being some sort of ninja-level computer vision expert, well, my first instinct might be to dismiss you as nuts.
But at the Embedded Vision Summit, my second instinct would be to say: “Show me!”
If you’re not familiar with the Summit, it’s the premier conference for innovators incorporating vision into products. It’s focused 100% on practical, deployable computer vision and edge AI.
And that challenge—“Show me!”—is why demos have always been a key part of the Summit. Demos are where the rubber meets the road: They’re how you find out which technologies are real, how they work, and what they can be used for. Demos are where you get to see innovations with your own eyes and discover things you didn’t think were possible. Demos give you the best ideas for what to use in your next product.
Here’s a rundown of just some of the trends I’m seeing in the more than 75 demos you’ll be able to watch at the Embedded Vision Summit coming up online May 25-28.
It’s faster, easier, and cheaper than ever to build embedded vision systems these days.
- That $5 deep-learning image classifier I mentioned? That’s a real thing. And, Edge Impulse will show you how to use its Edge Impulse Studio software to train an image classifier neural network using transfer learning and run it on a $4.99 ESP32-CAM—no ninjas required. (If you’re curious what transfer learning looks like under the hood, see the sketch after this list.)
- Intel will be showing how to get edge applications running in minutes using its DevCloud for the Edge, which allows you to develop deep-learning-based computer vision applications starting with pre-built samples—all you need is a web browser.
- Perceptilabs will demonstrate its TensorFlow-based visual modeling tool, which enables rapid creation and visualization of deep learning models, by using image classification and transfer learning to train a model that classifies brain tumors in MRI images.
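If you’re wondering what “transfer learning” means in practice, here’s a minimal sketch in TensorFlow/Keras. To be clear, this is not Edge Impulse’s or Perceptilabs’ code; it’s just a generic illustration of the technique their tools automate: reuse a pretrained feature extractor and train only a small new classifier head on your own images. The class count, image size, and dataset path are placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 3      # placeholder: e.g., "person", "cat", "background"
IMG_SIZE = (96, 96)  # small inputs are typical for microcontroller targets

# Pretrained MobileNetV2 backbone; its ImageNet weights stay frozen, so
# only the new classifier head below gets trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With only the small head to train, a few hundred labeled images and a
# few epochs are often enough:
#   train_ds = tf.keras.utils.image_dataset_from_directory(
#       "data/", image_size=IMG_SIZE)
#   model.fit(train_ds, epochs=10)
```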
3D sensing is becoming more important in all sorts of real-world applications.
- Luxonis will show multiple demos of spatial AI and computer vision, covering safety applications from machines that know where your hands are to ways bicyclists can avoid rear-end collisions.
- Synopsys will be showing off a simultaneous localization and mapping (SLAM) implementation on its DesignWare ARC EV7x processor.
- eYs3D Microelectronics will demonstrate stereo vision for robotic automation and depth sensor fusion for autonomous mobile robots. (For the basic math that turns a stereo pair into depth, see the sketch after this list.)
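At the heart of many of these 3D demos is depth from stereo disparity. Here’s a minimal sketch using OpenCV’s block matcher; it’s a generic illustration of the principle, not any exhibitor’s implementation, and the focal length, baseline, and image filenames are placeholders.

```python
import cv2
import numpy as np

FOCAL_PX = 450.0    # hypothetical focal length, in pixels
BASELINE_M = 0.06   # hypothetical spacing between the two cameras, meters

# Rectified left/right views from a stereo camera pair (placeholder files).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, the horizontal shift (disparity)
# between the two views.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

# Depth is inversely proportional to disparity: Z = f * B / d.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

print("median scene depth: %.2f m" % np.median(depth_m[valid]))
```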
Small is beautiful.
- How much can you compress a neural network? Nota will be demoing Netspresso, an AI model compression tool designed for deploying lightweight deep learning models in the cloud and at the edge. (For one basic compression technique, see the quantization sketch after this list.)
- In a similar vein, Deeplite will show how to enable deep neural networks on edge devices with constrained resources.
- And, Syntiant will demonstrate low-power visual and audio wake words on the same device.
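One workhorse technique behind demos like these is quantization: storing weights and activations as 8-bit integers instead of 32-bit floats, which cuts model size roughly 4x. Here’s a minimal sketch of post-training int8 quantization with TensorFlow Lite. The vendor tools above go well beyond this (pruning, architecture search, and so on), and the tiny model and random calibration data below are placeholders.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: in practice this would be your trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

def rep_images():
    # A representative dataset lets the converter calibrate activation
    # ranges; real code would yield a few hundred actual training images.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_images
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

print("quantized model size: %d bytes" % len(tflite_model))
```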
Come to the Summit and challenge an exhibitor or two to show you something surprising!
Jeff Bier is the President of consulting firm BDTI, founder of the Edge AI and Vision Alliance, and the General Chair of the Embedded Vision Summit—the premier event for innovators incorporating vision and AI in products—which will be held online May 25-28. Be sure to register using promo code VSDSUMMIT21 to save 15%!