Factory Automation

3-D Machine Vision Guides Robots into Action

Suppliers and integrators are merging the best features of machine-vision and robotic-control systems by leveraging cameras, boards, and software that provide 3-D capabilities.
Jan. 1, 2013

Andrew Wilson, Editor

Two technologies -- machine vision and robotics -- are combining to propel automation to higher levels of reliability. Systems that integrate these technologies differ dramatically from those deployed in the early days of manufacturing automation. With the advent of low-cost smart cameras, PC-based frame grabbers, and pattern-matching software, many suppliers and system integrators are merging the best features of machine-vision and robotic-control systems.

Nowhere was this more apparent than at the VISION 2012 tradeshow in Stuttgart, where a number of companies demonstrated vision-based robotic systems designed to perform industrial automation tasks. Many of the systems on display used 3-D imaging techniques to locate and analyze objects in a variety of scenarios.

Lasers and robots

At its booth, ImagingLab demonstrated a system developed for SINTEF to automate the process of fish filleting (see Fig. 1). The system must first reconstruct a 3-D image of each fish as it is transported along a conveyor, so structured laser light from Z-Laser is used to illuminate the profile of the fish. This laser profile is then captured, along with a visible-light image, by a 3-D Ranger camera from SICK.

FIGURE 1. To automate the process of fish filleting, structured laser light is used to illuminate the profile of the fish. Inset: Images captured by the camera are then used to render a color image and a 3-D model of the fish.

In the demonstration at the show, both the structured-light source and the camera were mounted on a robot from DENSO Robotics, which moved them across a stationary model of a fish.

By incorporating ImagingLab's robotics library for DENSO robots with the 3D Machine Vision Library (MVL) from AQSENSE, images captured from the robot can be rendered as both a 3-D profile and a color image. This enables the size and quality of the fish to be determined, as well as how to position a cutting mechanism to fillet the fish. At present the system uses structured-light and color image data; future implementations may incorporate ultraviolet (UV) and infrared (IR) imagers to provide additional information about the quality of the fish.
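
As a rough illustration of the sheet-of-light principle behind this kind of capture, the following sketch converts a range image (one laser-line profile per conveyor step) into a 3-D point cloud. It assumes a simplified linear calibration; the resolution values are hypothetical placeholders, not SICK or ImagingLab parameters.

```python
# A minimal sheet-of-light triangulation sketch. Real systems use a full
# camera/laser-plane calibration; the linear model here is an assumption.
import numpy as np

def range_image_to_points(range_img, x_res=0.2, y_res=0.5, z_res=0.05):
    """Convert a range image (laser-line row offsets, one profile per row)
    into an N x 3 point cloud.

    range_img : 2-D array (num_profiles, sensor_columns); each entry is the
                sub-pixel row at which the laser line was detected.
    x_res     : mm per sensor column (across the conveyor).
    y_res     : mm per profile (conveyor travel between profiles).
    z_res     : mm of height per row of laser-line displacement.
    """
    rows, cols = np.indices(range_img.shape)
    x = cols * x_res                # across-track position
    y = rows * y_res                # along-track position (conveyor motion)
    z = range_img * z_res           # height from laser-line displacement
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[np.isfinite(pts).all(axis=1)]  # drop NaN (missing) samples

# Example: a synthetic fish-like bump on a flat belt
profile = np.zeros((100, 512))
profile[30:70, 150:350] = 40.0 * np.hanning(200)   # raised body
cloud = range_image_to_points(profile)
print(cloud.shape)
```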

To ease the task of configuring 3-D systems, AQSENSE demonstrated its 3DExpress preprocessing software at VISION 2012. Designed to work with a number of structured-light systems and 3-D cameras, the software automatically generates a 3-D representation from these data sources, lets the user manipulate the resulting point cloud, and exports the result so it can be further processed by standard third-party imaging tools such as Sherlock from Teledyne DALSA or with programming languages such as C++ or .NET. A video tutorial about 3DExpress can be found at http://bit.ly/SLxmQw.
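
The export step such tools automate can be as simple as writing the point cloud in a standard interchange format that downstream packages can read. A minimal sketch, assuming an ASCII PLY output; the file name and data are illustrative, not part of the 3DExpress workflow:

```python
# Write an N x 3 point cloud as an ASCII PLY file (standard PLY header).
import numpy as np

def write_ply(path, points):
    """Write an N x 3 float array as an ASCII PLY point cloud."""
    points = np.asarray(points, dtype=np.float32)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Stand-in for a real scan; any N x 3 array works here
write_ply("scan.ply", np.random.rand(1000, 3) * 100.0)
```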

Understanding that structured-light systems may need to be tested while still under development, Tordivel released Version X of its Scorpion Vision software package at the show. The software includes a 3-D modeling tool that allows developers to simulate 3-D stereo vision from 3-D CAD models. By importing a CAD model of the part to be inspected, a developer can simulate the effects of imaging the part with a structured line light. The simulated line light regenerates a virtual 3-D model of the part as it would be imaged, allowing developers to optimize laser fan angles and distances before any structured-light system is installed.
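
The core geometry of such a simulation is the intersection of the laser fan plane with the triangles of the CAD mesh. Below is a minimal sketch with a hypothetical plane and a single triangle; Tordivel's actual tool additionally handles full meshes, occlusion, and camera projection:

```python
# Intersect the laser 'fan' plane {p : n.p = d} with one mesh triangle.
# Degenerate cases (vertex exactly on the plane) are ignored for brevity.
import numpy as np

def plane_triangle_intersection(n, d, tri):
    """Return the intersection segment (2 x 3 array) of the plane with a
    triangle (3 x 3 array of vertices), or None if the plane misses it."""
    dist = tri @ n - d                      # signed distance of each vertex
    pts = []
    for i in range(3):
        a, b = dist[i], dist[(i + 1) % 3]
        if a * b < 0:                       # this edge crosses the plane
            t = a / (a - b)
            pts.append(tri[i] + t * (tri[(i + 1) % 3] - tri[i]))
    return np.array(pts) if len(pts) == 2 else None

# Example: laser plane x = 5 cutting a single triangle
n, d = np.array([1.0, 0.0, 0.0]), 5.0
tri = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
print(plane_triangle_intersection(n, d, tri))
```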

Three-dimensional imaging techniques have also been used by Rick van de Zedde, business development manager at Wageningen UR–Food and Biobased Research (FBR), in a system designed to inspect and sort tomato seedlings (see "Vision system sorts tomato seedlings," Vision Systems Design, June 2012).

At VISION 2012, van de Zedde described other projects that FBR has successfully completed, including a harvesting system for automatically picking roses. A moving gutter system transports the roses under a 3-D vision system that measures their ripeness and position. These 3-D positions are fed to a robotic controller that signals a picking robot to grip an individual rose; a second vision system then tracks the stem while a second robot retrieves the rose. A video of the system in action can be seen at http://bit.ly/V65iUR.

Bin picking

One of the major industrial uses of vision-guided robotics is bin picking: the task of picking unordered objects from a container or bin. These tasks present different challenges for the designers of vision-guided robotic systems depending on the types of parts to be picked. This was most dramatically demonstrated by Kevin Ackerman, controls specialist with JMP Engineering, in his recent webcast "How to implement 2.5D vision guided robotics." In the presentation, Ackerman showed a number of systems, built from off-the-shelf cameras, lasers, and software, that could perform bin picking on materials ranging from cereal cartons to automotive parts.
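
The "2.5-D" idea combines a 2-D pattern match (x, y, and rotation in the image) with a single height measurement to produce a full pick pose. A minimal sketch, using hypothetical calibration values rather than anything from Ackerman's presentation:

```python
# 2.5-D pick-pose sketch: a 2-D match supplies x, y, rotation; a laser
# measurement or known layer count supplies z. Calibration values are
# hypothetical, and the camera axes are assumed aligned with the robot's.
def pick_pose_from_match(u, v, angle_deg, layer_height_mm,
                         mm_per_px=0.35, cam_origin_mm=(120.0, -40.0)):
    """Map an image-space match (pixels, degrees) plus a measured height
    to a robot pick pose (x, y, z in mm, yaw in degrees)."""
    x = cam_origin_mm[0] + u * mm_per_px
    y = cam_origin_mm[1] + v * mm_per_px
    return {"x": x, "y": y, "z": layer_height_mm, "yaw": angle_deg}

# Carton found at pixel (812, 430), rotated 12 deg, on the third 95-mm layer
pose = pick_pose_from_match(812, 430, 12.0, layer_height_mm=3 * 95.0)
print(pose)
```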

At VISION 2012, several companies demonstrated how such capabilities could be realized. At its booth, MVTec Software teamed with Convergent Information Technologies to show how MVTec's HALCON 11 image-processing software and Convergent's AutomAPPPS path-planning software can be used to rapidly configure bin-picking applications. Using the development environment, engineers can choose among different robot manufacturers, grippers, and gripping modes, as well as multiple 2-D and 3-D sensors. At the show, the companies demonstrated an industrial robot that identified objects in arbitrary poses and then picked and placed them (see Fig. 2).

FIGURE 2. MVTec Software teamed with Convergent Information Technologies to show how MVTec's HALCON 11 image-processing software and Convergent's AutomAPPPS path-planning software could be used with an industrial robot to identify objects in arbitrary poses and subsequently pick and place these objects.

Vision-guided robots were also on show from the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA). In the system the institute demonstrated, the object-recognition task was performed by software that finds the best fit of geometric primitives within a 3-D point cloud. With the Fraunhofer IPA 3-D object recognition and localization algorithms, bin picking of objects with dominant geometric features is accomplished in approximately 0.5 sec. According to Fraunhofer, a typical system can localize parts with an accuracy of ±0.5 mm.
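
To illustrate the primitive-fitting idea, the sketch below uses a basic RANSAC plane fit on synthetic data; Fraunhofer IPA's production algorithms handle more primitive types (cylinders, spheres) and are far more optimized:

```python
# A minimal RANSAC plane fit: repeatedly fit a plane to three random points
# and keep the plane with the most inliers within a distance tolerance.
import numpy as np

def ransac_plane(points, iters=200, tol=0.5, seed=0):
    """Return (normal, d, inlier_mask) for the plane n.p = d with the most
    points within 'tol' (same units as the cloud, e.g. mm)."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                     # degenerate (collinear) sample
            continue
        n /= norm
        d = n @ p0
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Synthetic bin floor at z = 0 with noise, plus random clutter above it
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 100, 500), rng.uniform(0, 100, 500),
                         rng.normal(0, 0.2, 500)])
clutter = rng.uniform(0, 100, (100, 3))
n, d, mask = ransac_plane(np.vstack([floor, clutter]))
print(n, d, mask.sum())
```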

To inspect specular surfaces, the institute also demonstrated a phase measurement technique that uses a diffuse display as a pattern generator to illuminate the object to be measured. When mounted on a robot, the system can be used to measure very large specular surfaces such as auto body panels (look for more on this in the February issue of Vision Systems Design).
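
Phase-measuring systems of this kind typically recover a wrapped phase map from a series of shifted sinusoidal fringe patterns shown on the display; the phase at each pixel encodes where on the display a specular surface point reflects, and hence its slope. The sketch below shows the standard four-step phase calculation on synthetic data; it is a generic illustration of the technique, not Fraunhofer's implementation:

```python
# Four-step phase shifting: with fringe images I_k = A + B*cos(phi + k*pi/2),
# the wrapped phase is phi = atan2(I3 - I1, I0 - I2).
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Per-pixel wrapped phase, in (-pi, pi], from four fringe images
    shifted by 0, 90, 180, and 270 degrees."""
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)

# Synthetic test: build the four images from a known phase map, recover it
phi_true = np.linspace(-np.pi, np.pi, 256)[None, :] * np.ones((64, 1))
imgs = [128 + 100 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*imgs)
print(np.allclose(phi, phi_true, atol=1e-6))   # True
```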

However, perhaps the most impressive demonstration at the show was at the Robomotive booth. There, Michael Vermeer, general manager, demonstrated a "humanoid robot" from Yaskawa Motoman Robotics used in conjunction with a 3-D structured-light imaging system to pick mechanical shopping-cart components at random from a bin (see Fig. 3). After picking these parts, the dual-arm robot was shown assembling a complete shopping-cart wheel. A video of the system in action can be found at http://bit.ly/Ud6XqW.

FIGURE 3. Robomotive's "humanoid robot," used in conjunction with a 3-D structured-light imaging system, picks mechanical shopping-cart components at random from a bin and assembles a complete part.

Companies Mentioned in this Article

AQSENSE
www.aqsense.com

Convergent Information Technologies
http://bit.ly/TLOOBk

DENSO Robotics
www.densorobotics.com

Fraunhofer Institute for Manufacturing Engineering and Automation (IPA)
www.ipa.fraunhofer.de

ImagingLab
www.imaginglab.it

JMP Engineering
www.jmpeng.com

MVTec Software
www.mvtec.com

Robomotive
www.robomotive.nl

SICK
www.sick.com

SINTEF
www.sintef.no

Teledyne DALSA
www.teledynedalsa.com

Tordivel
www.scorpionvision.com

Wageningen UR–Food and Biobased Research
www.fbr.wur.nl

Yaskawa Motoman Robotics
www.motoman.com

Z-Laser
www.z-laser.com

For comprehensive listings of component vendors by product category, visit the Vision Systems Design Buyer's Guide (http://bit.ly/NNgN5v). To locate robotics vendors, visit the Robots section (http://bit.ly/122FAqe).
