Understanding the latest in high-speed 3D imaging: Part two
(Read part one of this story here.)
In structured light or active illumination setups, a light source such as a laser or LED projects a narrow band of light or pattern onto a target surface. When imaged from an observation perspective other than that of the light source, the pattern appears distorted. The pattern is acquired by a camera and used for geometric reconstruction of the surface shape.
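As a rough illustration of the geometry at work (a generic sketch, not any vendor's implementation), the depth of a point lit by a projected stripe can be recovered by triangulation from the stripe's apparent shift in the camera image; the focal length, baseline, and pixel shifts below are hypothetical values chosen only for the example.

```python
import numpy as np

def stripe_depth(shift_px, focal_px, baseline_m):
    """Depth of a surface point from the observed shift of a projected
    stripe: the shift is inversely proportional to range, just as
    disparity is in a two-camera stereo rig (Z = f * B / shift)."""
    return focal_px * baseline_m / np.asarray(shift_px, dtype=float)

# Hypothetical rig: 1400 px focal length, 0.35 m projector-camera baseline.
print(stripe_depth([100.0, 200.0, 400.0], 1400.0, 0.35))  # ~[4.9, 2.45, 1.23] m
```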
Structured light technology is used in numerous 3D imaging products on the market today, including those offered by Photoneo (Bratislava, Slovakia; www.photoneo.com). The company’s MotionCam-3D camera, for example, uses Photoneo’s patented “parallel structured light” technology, which refers to its ability to capture multiple images of structured light in parallel rather than sequentially. The MotionCam-3D captures high-resolution 3D images of objects moving at speeds of up to 40 m/s. The camera features an NVIDIA Maxwell GPU and a proprietary CMOS image sensor, and can acquire 1068 x 800 point clouds at up to 20 fps. The camera, which is available in two models, uses a class 3R visible red-light laser (638 nm) for illumination.
The camera was designed for bin picking applications and to enable robots to handle small and sensitive objects in palletizing, depalletizing, machine tending, quality control, and metrology, according to Tomas Kovacovsky, CTO and co-founder of Photoneo.
“One of the things our camera does very well is capture moving scenes from different viewpoints and create its complete 3D reconstruction,” says Kovacovsky.
Additionally, the company offers the PhoXi line of 3D scanners in five models with different specifications such as baseline, scanning range and area, and calibration accuracy. These scanners use class 3R visible red-light (638 nm) laser projection (models M, L, and XL are also available with class 2R) and NVIDIA Maxwell GPUs, with data acquisition times of 250 to 2000 ms (XS model), 250 to 2250 ms (S model), 250 to 2500 ms (M model), 250 to 2750 ms (L model), and 250 to 3000 ms (XL model).
Ensenso cameras from IDS Imaging Development Systems (Obersulm, Germany; www.ids-imaging.com) operate on the stereo vision principle. Each model has two integrated CMOS image sensors (ranging from 752 x 480 pixels to 5 MPixels each) and a projector that uses a pattern mask to cast a high-contrast texture onto the object being measured. The two cameras acquire images of the same scene from two different positions, so although they see the same scene, each object appears at a different position along each camera’s projection rays. Matching algorithms compare the two images, search for corresponding points, and record all of the point displacements in a disparity map.
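For readers unfamiliar with what such a disparity map looks like in practice, the short sketch below computes one from an already-rectified left/right image pair using OpenCV’s semi-global block matching. It is a generic illustration, not IDS’s matching algorithm; the file names and matcher parameters are placeholders.

```python
import cv2

# Load an already-rectified stereo pair (placeholder file names).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: for each point in the left image, search
# along the same row of the right image for the best corresponding point.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range in pixels, must be divisible by 16
    blockSize=7,          # matching window size
)

# The result encodes per-pixel displacement (disparity); OpenCV returns
# it as a fixed-point value scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Nearby objects shift more between the two views, so larger disparity
# means shorter range; normalize for display as a disparity map image.
disparity_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity_map.png", disparity_vis.astype("uint8"))
```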
Ensenso cameras target applications including 3D object recognition and reconstruction, robotics, logistics, and conveyor belt picking, and come in three product lines: the N series (USB or GigE, working distances up to 3 m), the X series (GigE, working distances up to 5 m), and the XR series (GigE, Wi-Fi, onboard processing, working distances up to 5 m). Additionally, the company’s FlexView technology further improves the level of detail in the disparity map for static scenes, as the position of the pattern mask in the projection rays can be translated in small steps by a mechanical system using a piezoelectric actuator.
“As currently measured, the 3D frame/data rate of the Ensenso XR is comparable to using an Ensenso X with a Core i7 four-core CPU, which will be even faster in the future,” says Martin Hennemann, Product Manager, 3D. “Ensenso 3D cameras can deliver 3D data rates from approximately 1 to 15 Hz while delivering spatial resolution of about 1 mm over a 1 m field, and better.”
Available from SICK (Waldkirch, Germany; www.sick.com) are the Visionary-S 3D snapshot cameras, which use infrared laser light (808 nm) and stereo technology to capture up to 30 color depth images/s at a resolution of 640 x 512 pixels, with depth resolution down to the sub-millimeter range. These cameras target applications such as bin picking, robotic navigation and positioning, quality control, and palletizing/depalletizing.
With its 3D-A5000 series, Cognex (Natick, MA, USA; www.cognex.com) offers a 3D camera that uses 3D LightBurst technology, which casts a blue light pattern onto a part to acquire full field-of-view 3D point cloud images in as little as 200 ms. The camera features a 10 Gigabit Ethernet interface and targets applications in the automotive, consumer goods, and logistics industries.
Designed for 3D robotic vision applications, Zivid’s (Oslo, Norway; www.zivid.com) One and One Plus color 3D structured light cameras output 1920 x 1200 3D RGB images (X, Y, Z and R, G, B values for each pixel) at acquisition rates above 13 Hz. Zivid One Plus cameras are offered in small, medium, and large models, each with different working distances, fields of view, spatial resolutions, and point precisions. Additionally, the cameras feature a USB 3.0 interface, a rugged dustproof and waterproof aluminum casing, and a passive cooling system.
In the Scorpion Vision line of 3D products from Tordivel (Oslo, Norway; www.scorpionvision.com), three types of 3D cameras are offered, including the Scorpion 3D Stinger camera (Figure 1), which targets robotics, assembly verification, and gauging applications. Offered in baseline options from 35 to 1500 mm, this stereo vision setup is based on cameras (VGA to 29 MPixel) from Sony (Tokyo, Japan; www.sony.com) and Basler (Ahrensburg, Germany; www.baslerweb.com) and comes in models with passive stereo, random pattern projection laser, multiline laser, and red laser (660 nm) options.
At VGA resolution, the Scorpion 3D Stinger camera supports up to 30 fps, but in more demanding applications, such as 3D picking of pallets and tea sacks, speeds slow down somewhat, explains Thor Vollset, CEO, Tordivel.
“In a system with a 5 m working distance, 1500 x 1500 mm working area, and up to 2500 mm working height, we can reach 2 or 3 fps while generating dense 3D images to make reliable 3D picking coordinates in 0.5 to 2 seconds,” he says.
Targeting ID tracking, object tracking, height measurement, object counting, and assembly verification, the Scorpion 3D Venom camera uses a single color or monochrome camera with resolutions from VGA to 20 MPixels. A mirror design creates two virtual 3D cameras that focus at a user-specified working distance, and the camera can achieve frame rates of up to 200 fps. Additionally, the company offers the Scorpion 3D Stinger Scanner, which is based on FPGA 3D laser triangulation. This scanner, according to Vollset, reaches speeds of 50,000 laser lines per second and has an integrated encoder interface for working on running conveyor lines.
Tordivel also offers the Scorpion 3D Box camera, which comes in a standard version with integrated white or infrared LEDs, or in a random pattern projection version with red or infrared illumination. In addition to the lighting options, the full setup consists of two or more Scorpion Box cameras and a flexible-length Stinger camera bracket. This stereo vision camera is available with camera resolutions up to 10 MPixels and is designed for 3D measurement and object location tasks such as pallet picking.
Visio Nerf’s (Nuaillé, France; www.visionerf.com) cirrus3D scanners use a structured light approach with blue LEDs, combined with two 4 MPixel cameras for a stereo vision system, and an integrated processor for 3D point calculations. Suited for applications such as bin picking, localization, identification, and quality inspection, these scanners can acquire 1 million 3D points in 0.2 s and are offered in six models with varying working volumes and 3D image resolution options.
For its 3D structured light DepthScan system, Ajile Light Industries (Ottawa, ON, Canada; www.ajile.ca) uses a digital micromirror device (DMD) projector for pattern generation, RGB LEDs, a 4 MPixel CMOS image sensor, an FPGA and GPU for processing, and a proprietary lighting controller. DepthScan achieves a scan rate of 2 Hz at maximum accuracy and resolution, and up to 30 Hz at lower resolution.
Intel’s (Santa Clara, CA, USA; www.intel.com) RealSense series represents a hugely popular, mainstream example of stereo vision technology. The D435e depth camera from FRAMOS (Taufkirchen, Germany; www.framos.com) features Intel’s D430 depth module, offers a 0.9 MPixel global shutter depth sensor and a 2 MPixel rolling shutter RGB module, and is appropriate for 3D vision in robotics, automated vehicle, and smart machine applications. Additionally, the camera has a frame rate of 30 fps for simultaneous RGB and depth streams.
Also using stereo vision technology are the Tara and TaraXL cameras from e-con Systems (San Jose, CA, USA; www.e-consystems.com). Both the Tara and TaraXL feature the MT9V024 CMOS image sensor from ON Semiconductor (Phoenix, AZ, USA; www.onsemi.com) and support WVGA at 60 fps over USB 3.0. While the Tara camera targets customers looking to integrate stereo cameras into product designs for applications such as machine vision, drones, surgical robotics, and depth sensing, the TaraXL is optimized for NVIDIA’s Jetson AGX Xavier GPU development kit.
Also offered by the company is the STEEReoCAM, a 2 MPixel 3D MIPI stereo camera designed for NVIDIA’s Jetson Nano, AGX Xavier, and TX2 developer kits. Based on the OV2311 CMOS image sensor from OmniVision Technologies (Santa Clara, CA, USA; www.ovt.com), the camera is bundled with a proprietary CUDA-accelerated Stereo SDK that runs on the GPU of an NVIDIA Tegra processor. The camera provides 3D depth mapping at 30 fps and suits applications such as autonomous vehicles, robotics, and facial recognition.
Another stereo vision camera available today is the Bumblebee 2 from FLIR Machine Vision (Richmond, BC, Canada; www.flir.com/mv), which is capable of reaching 48 fps. This 0.3 MPixel camera is available in color or monochrome versions and has a GPIO connector for external trigger and strobe functionality.
3DPIXA cameras (Figure 2) from Chromasens (Konstanz, Germany; www.chromasens.de/en) are based on stereo vision techniques but use line scan sensors instead of area scan sensors to generate 3D data. Available in compact and dual models, the cameras reach line scan speeds of up to 30 kHz at full resolution and maximum speeds of 147 mm/s (compact model) and 148 mm/s (dual model). These cameras feature quad-linear (compact) and tri-linear (dual) RGB CCD line sensors and target applications including 3D web inspection, high-speed in-line height measurement, wire bond inspection, PCB inspection, and metal surface inspection.
One mature product on the market using stereo vision technology is the iRVision 3D camera from FANUC (Oshino, Japan; www.fanuc.com). Designed to work only with FANUC robots, this camera offers a stereo structured light imaging system that ties in directly with a robot controller.
How to interpret structured light speeds
Structured light, or active illumination, speeds are more straightforward. When looking at raw speeds, the main thing to understand is that they are the result of highly complex stereo or single-camera 3D analysis, usually involving many images, anywhere from a few to a few dozen. These systems must do a large amount of processing on multiple images, plus stereo correlation, to obtain a full 3D image, according to David Dechow, Principal Vision Systems Architect, Integro Technologies.
“These cameras are taking greyscale images of structured light patterns, rectifying the images, performing correspondence between points, extracting the disparity image from the individual correspondence points, and turning that into a depth map image that provides X, Y, and Z locations for every pixel,” he says. “Not only does this require many images to do this, but such systems also do a lot more processing over a full-area 3D image than other systems.”
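To make the last step of that pipeline concrete, the hedged sketch below converts a disparity map into per-pixel X, Y, and Z coordinates with OpenCV. In a real system the perspective-transform matrix Q comes from stereo calibration and rectification (cv2.stereoRectify); the focal length, baseline, principal point, and stand-in disparity values here are illustrative only.

```python
import numpy as np
import cv2

# Illustrative calibration values: 1400 px focal length, 0.10 m baseline,
# principal point at (640, 400). Q normally comes from cv2.stereoRectify.
fx, cx, cy, baseline = 1400.0, 640.0, 400.0, 0.10
Q = np.array([[1, 0, 0, -cx],
              [0, 1, 0, -cy],
              [0, 0, 0,  fx],
              [0, 0, 1.0 / baseline, 0]], dtype=np.float32)

# 'disparity' would be the float32 map produced by a stereo matcher;
# a constant stand-in array is used here so the sketch runs on its own.
disparity = np.full((800, 1280), 35.0, dtype=np.float32)

# Reproject every pixel into 3D: the result holds X, Y, and Z per pixel,
# i.e. the depth map image described in the quote above.
points_xyz = cv2.reprojectImageTo3D(disparity, Q)
print(points_xyz.shape)      # (800, 1280, 3)
print(points_xyz[400, 640])  # X, Y, Z of the center pixel in meters (~4 m range)
```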
Furthermore, structured light techniques involve a tradeoff between speed and resolution. Compared with Time of Flight techniques, structured light-based products deliver significantly more resolution and precision in the 3D image, but at much slower speeds.
Confocal imaging
Line confocal imaging (LCI) is a patented 3D measurement method in which white light emitted from a sensor’s transmitter is split into a continuous spectrum of wavelengths. Each wavelength comes to focus at a different distance from the sensor, forming a perpendicular focal plane, and the distance to the measured surface is determined from the dominant wavelength of the reflected light. Recently acquired by the TKH Group (Haaksbergen, Netherlands; www.tkhgroup.com) and joining the LMI Technologies group, FocalSpec (Oulu, Finland; www.focalspec.com) deploys this technology in its LCI sensors.
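As a simplified illustration of that principle (not FocalSpec’s implementation), a chromatic confocal sensor can be modeled as a calibration curve that maps the dominant reflected wavelength back to distance; the wavelengths, distances, and simulated spectrum below are invented for the example.

```python
import numpy as np

# Hypothetical factory calibration: each wavelength in the projected
# spectrum comes to focus at a known distance from the sensor.
calib_wavelength_nm = np.array([450.0, 500.0, 550.0, 600.0, 650.0, 700.0])
calib_distance_mm   = np.array([10.0,  10.8,  11.6,  12.4,  13.2,  14.0])

def distance_from_spectrum(wavelengths_nm, intensities):
    """Find the dominant (peak) reflected wavelength and convert it to a
    distance by interpolating along the calibration curve."""
    peak_wavelength = wavelengths_nm[np.argmax(intensities)]
    return np.interp(peak_wavelength, calib_wavelength_nm, calib_distance_mm)

# Simulated spectrometer reading with a reflection peak near 575 nm.
wl = np.linspace(450.0, 700.0, 251)
spectrum = np.exp(-0.5 * ((wl - 575.0) / 8.0) ** 2)
print(distance_from_spectrum(wl, spectrum))  # ~12.0 mm
```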
FocalSpec’s sensors were designed to overcome the limitations of standard optical 3D technologies with transparent materials and/or glossy surfaces. Its latest models, the LCI1220 and LCI1620 (Figure 3), simultaneously capture surface 3D topography, 3D tomography, and 2D intensity data at speeds of up to 16,000 profiles/s. With 1,728 points/profile, the sampling speed can reach up to 27,000,000 data points/s, depending on depth of field.
These sensors target machine vision inspection applications including curved edge mobile phone display measurement, roughness analysis of transparent/non-transparent surfaces, defect detection on multi-layer components, and burr height analysis in the metal industry.
How to interpret confocal imaging speeds
Products using confocal imaging technology, such as those from FocalSpec, are particularly well suited to inspections with extremely small fields of view. For example, these sensors would inspect a computer chip as opposed to an entire computer board. Additionally, while perhaps not as fast as Time of Flight, laser-based, or some structured light techniques, confocal imaging sensors trade some speed for higher resolution, according to Dechow.
“In terms of the speeds, confocal imaging sensors offer good speeds at reasonable production rates but provide much higher levels of accuracy that cannot be achieved with the other types of products,” he says.
Conclusion
Several other companies, of course, offer 3D imaging products for machine vision today, including Canon USA (Melville, NY, USA; www.usa.canon.com), Ricoh (Tokyo, Japan; https://industry.ricoh.com/en), trinamiX (Ludwigshafen, Germany; www.trinamix.de), and Creaform (Lévis, QC, Canada; www.creaform.com). Just as with the products described here and in part one, it is important to understand the listed speeds and how they might apply to the requirements of a given machine vision application.
This story was originally printed in the November/December 2019 issue of Vision Systems Design magazine.
About the Author
James Carroll
Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.