Researchers Create Experimental Camera System for Event-Based Imaging

Oct. 10, 2024
The machine vision solution could lead to improvements in perception for mobile robots and self-driving vehicles.

Researchers at the University of Maryland and several other universities have developed an experimental event-based camera that addresses a significant shortcoming of the technology, potentially leading to improvements in machine vision applications such as mobile robotics.

In event-based, or neuromorphic, sensors, each pixel operates independently and reports new data about a scene only when it senses movement in the form of changes in luminance. Because the pixels report only changes, the sensing technology typically uses less energy and is faster than conventional cameras. The technology shows promise in machine vision applications such as obstacle sensing and localization for mobile robots and self-driving vehicles.
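The article does not describe the sensor's exact pixel circuitry, but a common simplified model of an event pixel is that it compares the current log-brightness to the value recorded at its last event and fires whenever the difference crosses a contrast threshold. The following Python sketch illustrates that per-pixel behavior only; the function name and threshold value are illustrative, not taken from the DVXplorer hardware.

    import numpy as np

    def pixel_events(log_intensity, threshold=0.2):
        # Emit (sample_index, polarity) events for a single pixel, given a
        # 1-D array of log-brightness samples over time. An event fires when
        # the change since the last event exceeds the contrast threshold.
        events = []
        reference = log_intensity[0]
        for t, value in enumerate(log_intensity[1:], start=1):
            delta = value - reference
            if abs(delta) >= threshold:
                events.append((t, 1 if delta > 0 else -1))
                reference = value  # reset the reference after each event
        return events

    # A static pixel reports nothing; a brightening pixel reports a burst of events.
    print(pixel_events(np.full(10, 1.0)))           # []
    print(pixel_events(np.linspace(1.0, 2.0, 10)))  # positive-polarity events

This is why an unmoving edge produces no events at all, which is the shortcoming the researchers set out to address.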

But the researchers at the University of Maryland (College Park, MD, USA), Zhejiang University (Hangzhou, China), and Hong Kong University of Science and Technology (Hong Kong, China) say event-based cameras do not capture the edges of objects that run parallel to the camera’s motion, leaving gaps in the data they report.

This can be an issue for visual perception algorithms, they explain in a paper published in Science Robotics.

“It’s a big problem because robots and many other technologies—such as self-driving cars—rely on accurate and timely images to react correctly to a changing environment,” explains the paper’s lead author, Botao He, a doctoral student in computer science at the University of Maryland.

To solve this problem, the researchers developed a system inspired by human vision, which relies on small involuntary eye movements made while a person stares at an object or scene. These eye movements, known as microsaccades, allow humans to perceive an entire scene without losing the details at the edges.

“Our approach is the first to implement a microsaccade version” of event-based vision, says Yiannis Aloimonos, another study author and director of the Computer Vision Laboratory at the University of Maryland.

The Experimental Vision System

In their system, the researchers manipulate the direction of incoming light. “Our aim is to vary the direction between the scene texture and the image motion,” they write. “Moreover, if the direction of the incoming light can be steered continuously rather than in discrete steps, the efficacy also will be improved.”

The centerpiece of their solution is a rotating wedge-shaped prism mounted in front of the lens of an event camera. As the prism rotates about the camera’s Z-axis, it redirects incoming light along a continuous circular trajectory, triggering the pixels to report new data. As a result, the event stream from the camera includes all boundary information in a scene.

The event camera they used is a DVXplorer from iniVation (Zurich, Switzerland). The camera has a resolution of 640 x 480 pixels, includes a built-in Nvidia (Santa Clara, CA, USA) computer, and can output data over either Gigabit Ethernet or USB-C.

The hardware device is integrated with a software framework, creating a system they call Artificial MIcrosaccade-enhanced Event Camera (AMI-EV). One of the software algorithms compensates for the movement of the prism as it rotates.
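The compensation algorithm itself is not detailed in the article. The sketch below shows only the basic idea, under the assumption that the rotating prism shifts the whole image along a circle of known radius at a known rotation rate, so that compensation amounts to subtracting that predictable offset from each event's pixel coordinates. The function name and parameters (radius_px, omega_rad_s) are illustrative, not from the authors' software.

    import numpy as np

    def compensate_events(events, radius_px, omega_rad_s, phase=0.0):
        # events: N x 3 array of (x, y, t) with pixel coordinates and
        # timestamps in seconds. The rotating wedge prism is assumed to shift
        # the image along a circle of radius radius_px, with the prism angle
        # advancing at omega_rad_s. Subtracting that offset leaves only the
        # motion attributable to the scene itself.
        x, y, t = events[:, 0], events[:, 1], events[:, 2]
        theta = omega_rad_s * t + phase      # prism angle at each event time
        dx = radius_px * np.cos(theta)       # circular image offset in x
        dy = radius_px * np.sin(theta)       # circular image offset in y
        return np.column_stack((x - dx, y - dy, t))

In a real system, the prism angle would more likely come from a motor encoder than from an assumed constant rotation rate.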

In numerous experiments, the researchers tested AMI-EV against a standard event camera and an RGB-D camera, a RealSense D435 from Intel (Santa Clara, CA, USA). The tests were designed to evaluate the AMI-EV’s performance in producing an event stream, feature detection and matching, motion segmentation, and human detection and pose estimation.

They found that the AMI-EV performed better than the other cameras. The experiments show that the experimental system has potential in both low-level tasks, such as detecting a moving person, and high-level tasks, such as recognizing what activity the person is doing, Aloimonos explains.

Future research should focus on several areas, they write in the paper. First, researchers must improve the energy efficiency of AMI-EV, which consumes more power than traditional event cameras because of the prism structure. Second, researchers should address the additional computational complexity introduced by the compensation algorithm.

About the Author

Linda Wilson | Editor in Chief

Linda Wilson joined the team at Vision Systems Design in 2022. She has more than 25 years of experience in B2B publishing and has written for numerous publications, including Modern Healthcare, InformationWeek, Computerworld, Health Data Management, and many others. Before joining VSD, she was the senior editor at Medical Laboratory Observer, a sister publication to VSD.         
