Memory Allocation in AI and Computer Vision Applications
Efficient edge computing is the foundation underlying AI and computer vision applications in autonomous vehicles, cameras, drones and many other products. Because deep neural networks (DNNs) are memory intensive, implementing them efficiently requires careful use of memory capacity and memory bandwidth.
In this talk, Drucker presents CEVA’s novel approach to memory allocation, which enables implementing DNNs under strict size and power constraints. The company’s approach works from a unified computational graph and takes into account the differing characteristics of each class of memory (on-chip L1 and L2 SRAM and external DDR). Drucker also introduces CEVA’s XM6 and SensPro processors for vision and AI.
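For intuition only, the sketch below shows one simple way a tool might place tensors from a computational graph across memory tiers, favoring faster on-chip SRAM for frequently accessed data and spilling larger tensors to DDR. The tier names, capacities, tensor sizes and greedy heuristic are all assumptions for illustration and do not represent CEVA’s actual algorithm.

```python
# Hypothetical sketch: greedy placement of DNN tensors into tiered memories.
# This is NOT CEVA's method; it only illustrates the general idea of assigning
# graph tensors to L1 SRAM, L2 SRAM, or external DDR based on size and access
# frequency. All names, capacities, and sizes below are invented.

from dataclasses import dataclass

# Invented tier capacities in bytes; real values depend on the target SoC.
TIERS = [("L1_SRAM", 128 * 1024), ("L2_SRAM", 1024 * 1024), ("DDR", float("inf"))]

@dataclass
class Tensor:
    name: str
    size: int        # bytes
    accesses: int    # estimated reads + writes during one inference

def place_tensors(tensors):
    """Assign each tensor to the fastest tier with remaining capacity,
    visiting the most frequently accessed tensors first."""
    remaining = {name: cap for name, cap in TIERS}
    placement = {}
    for t in sorted(tensors, key=lambda t: t.accesses, reverse=True):
        for tier, _ in TIERS:
            if t.size <= remaining[tier]:
                placement[t.name] = tier
                remaining[tier] -= t.size
                break
    return placement

if __name__ == "__main__":
    graph_tensors = [
        Tensor("conv1_weights", 64 * 1024, accesses=1),
        Tensor("conv1_activations", 512 * 1024, accesses=4),
        Tensor("fc_weights", 8 * 1024 * 1024, accesses=1),
    ]
    for name, tier in place_tensors(graph_tensors).items():
        print(f"{name:20s} -> {tier}")
```

In this toy example, the heavily reused activations land in L2 SRAM, the small weights fit in L1, and the large fully connected weights spill to DDR; a production allocator would also reason about tensor lifetimes, DMA scheduling and bandwidth, which is the subject of the talk.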
See here for a PDF of the slides.
To view the rest of the 2020 Embedded Vision Summit videos, visit the event's video archive.