Memory Allocation in AI and Computer Vision Applications

April 29, 2021
Presented by Rami Drucker, Machine Learning System Team Leader at CEVA, at the September 2020 Embedded Vision Summit.

Efficient edge computing is the foundation underlying AI and computer vision applications in autonomous vehicles, cameras, drones and many other products. Because deep neural networks (DNNs) are memory intensive, creating efficient implementations of DNNs requires efficient use of memory and memory bandwidth.

In this talk, Drucker presents CEVA’s novel approach to efficient memory allocation, which enables implementing DNNs under strict size and power constraints. The company’s approach utilizes a unified computational graph and takes into account the differing characteristics of each class of memory (on-chip L1 and L2 SRAM and external DDR). Drucker also introduces CEVA’s XM6 and SensPro processors for vision and AI.
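To give a feel for the general idea of placing DNN tensors across memory tiers, the following is a minimal, illustrative Python sketch. It is not CEVA's algorithm: the tier names, capacities, tensor lifetimes and the greedy placement policy are all assumptions chosen for clarity, showing only how lifetime overlap in a computational graph can drive allocation decisions between fast on-chip SRAM and external DDR.

```python
# Toy tiered-memory allocator for DNN tensors (illustrative sketch only).
# Tier capacities, the greedy policy, and the example tensors are assumptions;
# this does not represent CEVA's actual allocation method.
from dataclasses import dataclass

@dataclass
class Tensor:
    name: str
    size: int        # bytes
    first_use: int   # index of the producing node in the computational graph
    last_use: int    # index of the last consuming node

# Memory tiers ordered fastest/smallest to slowest/largest (assumed capacities).
TIERS = [("L1", 128 * 1024), ("L2", 1024 * 1024), ("DDR", float("inf"))]

def overlaps(a: Tensor, b: Tensor) -> bool:
    """Two tensors conflict if their live ranges in the graph overlap."""
    return a.first_use <= b.last_use and b.first_use <= a.last_use

def allocate(tensors: list[Tensor]) -> dict[str, str]:
    """Greedily place each tensor in the fastest tier that still has room,
    counting only already-placed tensors whose lifetimes overlap."""
    placement: dict[str, str] = {}
    placed: dict[str, list[Tensor]] = {tier: [] for tier, _ in TIERS}
    # Consider larger tensors first (a common, simple heuristic).
    for t in sorted(tensors, key=lambda t: t.size, reverse=True):
        for tier, capacity in TIERS:
            live_bytes = sum(o.size for o in placed[tier] if overlaps(t, o))
            if live_bytes + t.size <= capacity:
                placement[t.name] = tier
                placed[tier].append(t)
                break
    return placement

if __name__ == "__main__":
    graph_tensors = [
        Tensor("conv1_out", 96 * 1024, first_use=0, last_use=1),
        Tensor("conv2_out", 96 * 1024, first_use=1, last_use=2),
        Tensor("fc_weights", 4 * 1024 * 1024, first_use=2, last_use=3),
    ]
    print(allocate(graph_tensors))
    # e.g. {'fc_weights': 'DDR', 'conv1_out': 'L1', 'conv2_out': 'L2'}
```

In this sketch, short-lived activations fit in on-chip SRAM while the large weight tensor spills to DDR; a production allocator would also model bandwidth, tiling and data movement, which is the subject of the talk.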

See here for a PDF of the slides.

To view the rest of the 2020 Embedded Vision Summit videos, visit the event's video archive.

Register for the 2021 Embedded Vision Summit here.
