Proprietary CMOS sensor enables accurate, fast 3D acquisition of moving objects

March 12, 2020
3D sensing in motion poses various challenges, including limited exposure time, moving objects, and lack of light. To address these limitations, Photoneo (Bratislava, Slovakia; www.photoneo.com) developed a novel mosaic shutter CMOS sensor (bit.ly/VSD-PHTN) that allows the capture of a moving object by reconstructing a 3D image from a single shot of the sensor. The sensor consists of a [confidential] number of super-pixel blocks, which are further divided into sub-pixels.

The sensor works similarly to the Bayer filter mosaic, where each color-coded pixel has a unique role in the final, debayered output. Each sub-pixel can be controlled by a defined, unique modulation signal or pattern. This refers to the electronic modulation of internal pixels that controls when a pixel receives light and when it does not, hence the name mosaic shutter or matrix shutter CMOS sensor.
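As a rough illustration of the idea, the Python sketch below simulates per-sub-pixel shutter modulation within a super-pixel. The 2 x 2 block layout, the one-hot binary codes, and the number of time slots are illustrative assumptions only; Photoneo's actual layout and coding scheme are not disclosed in this article.

```python
# Minimal sketch of mosaic-shutter exposure, assuming a hypothetical 2x2
# super-pixel layout and binary on/off modulation codes per sub-pixel.
# None of the layout, code length, or values come from Photoneo.
import numpy as np

H, W = 8, 8            # tiny sensor for illustration (multiples of 2)
T = 4                  # number of time slots in one exposure (assumed)

# One binary modulation code per sub-pixel position in the 2x2 block.
# Row = sub-pixel index (0..3), column = time slot: 1 = shutter open.
codes = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])

# Simulated time-varying irradiance on the sensor (e.g. a laser line
# sweeping across the scene), one H x W frame per time slot.
rng = np.random.default_rng(0)
irradiance = rng.random((T, H, W))

# Each pixel's sub-pixel index depends only on its position inside its
# 2x2 super-pixel, so the per-pixel gate can be built by lookup.
rows, cols = np.indices((H, W))
subpixel_index = (rows % 2) * 2 + (cols % 2)
gate = codes[subpixel_index]               # shape (H, W, T)

# Each pixel integrates light only while its own shutter is open.
raw_frame = np.einsum('thw,hwt->hw', irradiance, gate)
print(raw_frame.shape)                     # (8, 8): one raw mosaic frame
```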

At the end of the exposure, the raw image is demosaiced to gather a set of uniquely coded virtual images, the number of which corresponds to the number of sub-pixels per super-pixel, as opposed to capturing several images with modulated projection. This means that the pixel modulation (turning on or off) is done on the sensor rather than in the projection field (transmitter). Because there is full control over the pixel modulation functions, it is possible to use the same sequential structured light algorithms and obtain the same kind of information, but from one frame and at somewhat lower resolution.
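The demosaicing step can be sketched as simple strided slicing, again under the assumed 2 x 2 super-pixel layout from the previous example: each sub-pixel position yields one uniquely coded virtual image at half the native resolution.

```python
# Minimal sketch of the demosaicing step for an assumed 2x2 super-pixel
# layout; not Photoneo's actual pipeline.
import numpy as np

def demosaic_virtual_images(raw_frame, block=2):
    """Split a raw mosaic frame into block*block virtual images."""
    virtual = []
    for dr in range(block):
        for dc in range(block):
            # Strided slicing keeps only pixels sharing one modulation code.
            virtual.append(raw_frame[dr::block, dc::block])
    return virtual

# Example: an 8x8 raw frame yields four 4x4 virtual images.
raw_frame = np.arange(64, dtype=float).reshape(8, 8)
images = demosaic_virtual_images(raw_frame)
print(len(images), images[0].shape)        # 4 (4, 4)
```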

Based on the sensor, the company developed the MotionCam-3D camera, the only Platinum-level honoree in the 2019 Innovators Awards program. The camera uses parallel structured light technology, which refers to its ability to capture multiple images of structured light in parallel rather than sequentially. 

The sensor is combined with a proprietary pattern projector that acts similarly to a structured light projector but uses a rotating laser. The exact angle of the laser deflector (mirror) at the moment the laser line passes over a point in the scene is important, as this angle is encoded through pixel modulation and provides the 3D information.
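Once the deflector angle for a given pixel is known, depth follows from standard laser triangulation. The sketch below shows the generic 2D geometry, with the camera at the origin and the laser offset by an assumed baseline; the baseline value and angle conventions are illustrative and not taken from Photoneo's calibration model.

```python
# Minimal 2D sketch of laser triangulation: camera at the origin, laser
# deflector offset by a baseline along x, both angles measured from the
# optical (z) axis. Generic structured-light math, not Photoneo's model.
import math

def triangulate_depth(baseline_m, camera_angle_rad, laser_angle_rad):
    """Depth z where the camera ray meets the laser plane (2D case)."""
    denom = math.tan(camera_angle_rad) + math.tan(laser_angle_rad)
    if abs(denom) < 1e-9:
        raise ValueError("rays are (nearly) parallel; no intersection")
    return baseline_m / denom

# Example: 0.35 m baseline (assumed), pixel ray at 5 deg, laser at 20 deg.
z = triangulate_depth(0.35, math.radians(5.0), math.radians(20.0))
print(f"depth ~ {z:.3f} m")
```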

In the camera, the laser is on the entire time, and the pixels are repeatedly turned on and off using multiple storages and the ability to flush the pixels (transfer of charge from photodiode to storage). In this way, the scene is "paralyzed" to acquire the 3D image of a moving object without motion blur, with "paralyzed" referring to the fact that the per-pixel shutter is very short, at approximately 10 µs.
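Conceptually, the decoding step then recovers, per super-pixel, the point in the exposure at which the sweeping laser line passed, which maps to a deflector angle. The sketch below uses the hypothetical one-hot codes from the earlier example, where each virtual image corresponds to one time slot; real coding schemes are more elaborate, and the sweep range used here is an assumption.

```python
# Minimal sketch of recovering laser-sweep timing per pixel from the
# demosaiced virtual images, assuming the hypothetical one-hot codes
# from the earlier sketch. Not Photoneo's actual decoding algorithm.
import numpy as np

def decode_time_slot(virtual_images):
    """Return, per super-pixel, the time slot with the strongest response."""
    stack = np.stack(virtual_images, axis=0)     # (num_codes, H/2, W/2)
    return np.argmax(stack, axis=0)              # index of the winning code

def slot_to_deflector_angle(slot, num_slots, sweep_deg=30.0):
    """Map a time slot to a deflector angle, assuming a linear sweep."""
    return (slot + 0.5) / num_slots * sweep_deg

# Example with four fake virtual images.
rng = np.random.default_rng(1)
virtual_images = [rng.random((4, 4)) for _ in range(4)]
slots = decode_time_slot(virtual_images)
angles = slot_to_deflector_angle(slots, num_slots=4)
print(slots.shape, angles.min(), angles.max())
```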

The MotionCam-3D enables the capture of high-resolution images of objects moving at speeds up to 40 m/s. Featuring an NVIDIA (Santa Clara, CA, USA; www.nvidia.com) Maxwell GPU, the camera incorporates the proprietary CMOS image sensor and can acquire 1068 x 800 point clouds at up to 20 fps. A Class 3R visible red-light laser (638 nm) is used for illumination.

About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
