VISION-GUIDED ROBOTICS: Sophisticated software speeds automotive assembly
In the automotive industry, roller hemming is used to join independently stamped inner panels such as hoods, decks, and fender reinforcements to the outer panels that make up a car’s body. The edge of the outer panel is first formed so that it extends perpendicularly over the inner panel. A hemmer then folds the edge of the outer panel 45° over the inner panel. This “pre-hem” is followed by the final hem, which folds the outer panel’s edge flat over the inner reinforcement.
Traditionally, hemming is performed in large presses that are not flexible. Recently these presses have been replaced by roller hemming technology, in which a flexible hemming head with two rollers is mounted to the end of a robot arm. The robot can be programmed to follow an almost infinite number of hem paths, allowing it to hem an endless variety of panels. Because the parts must be accurately located for the robot to trace the path with precision, a robotic guidance system is employed.
“In the past,” says Joseph Cyrek, director of joining and vision technology at Comau (Southfield, MI, USA), “vision was never needed because the part was placed in an anvil for precise locating. With our wheelhouse hemming we are taking the anvil to the part.”
Many vision systems for this task use stereo camera systems to locate fiducials in 3-D space and require calibration of 3-D targets before each system can be deployed. Cyrek and his colleagues have developed a system that requires no calibration and allows robotic guidance without fiducial location.
The system, known as RecogniSense, overcomes the problems of inaccurate part positioning and requires a single robot-mounted camera to perform 3-D part recognition and robot guidance. The system has been deployed at several major automotive manufacturers worldwide.
In the hemming operation, a fully framed vehicle enters the station and RecogniSense locates the position of the wheel arc in six degrees of freedom (DOF). To keep the car body from moving during hemming, the wheel house is held firmly in position using a fixed structure known as an anvil that is mounted to a six-DOF robot from Fanuc (see figure).
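A six-DOF part location of this kind is conventionally expressed as a translation (x, y, z) plus three rotations (Rx, Ry, Rz), packed into a 4 × 4 homogeneous transform that can shift every taught robot path point by the measured offset. The sketch below is illustrative only; the function names and the Rz·Ry·Rx rotation order are assumptions, not details of the Comau or Fanuc software.

```python
import numpy as np

def pose_to_matrix(x, y, z, rx, ry, rz):
    """Build a 4x4 homogeneous transform from a translation (mm) and
    fixed-axis rotations Rx, Ry, Rz (degrees), composed as Rz @ Ry @ Rx.
    Illustrative only -- not an actual Comau/Fanuc API."""
    rx, ry, rz = np.radians([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Apply a measured part offset to one taught path point (homogeneous coords):
offset = pose_to_matrix(10.0, -5.0, 2.0, 0.0, 0.0, 1.5)  # hypothetical shift
taught_point = np.array([500.0, 200.0, 300.0, 1.0])
corrected_point = offset @ taught_point
```

In practice the guidance system would apply such a correction to the entire hem path, not to points one at a time.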
The anvil is fitted with two MetaWhite ExoLight LEDs from Metaphase (Bensalem, PA, USA) that provide illumination for a TXG06-I7, 776 × 582-pixel GigE camera from Baumer (Radeberg, Germany) mounted onto the center of the anvil. Images from this camera are transferred to an industrial vision controller, equipped with an Intel Pro/1000 network interface card (NIC), via the power-over-GigE interface.
Before parts can be hemmed, the system must be trained to recognize where each part is located in 3-D space. Unlike other vision systems that use two or three cameras to perform this task, Comau’s RecogniSense system uses a single camera and Cortex Recognition software that Comau has licensed from Recognition Robotics (Elyria, OH, USA).
The Cortex Recognition software can learn thousands of objects within images and then recognize those learned objects within milliseconds when presented to the system. Objects to be recognized can be present anywhere in space—whether far away, rotated, tilted, or tipped.
With RecogniSense, an operator draws a region of interest around the object and actuates a teach button, with no calibration target required. With a single camera, a robot can be guided to pick an 8-ft-wide part with accuracy on the order of 0.005 in.
In a wheelhouse hemming system, RecogniSense has achieved positional accuracy of ±0.5 mm while tolerating part variation of up to ±75 mm in x, y, and z and ±5° in Rx, Ry, and Rz.
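As a minimal sketch of what those figures mean in operation, a guidance system might reject any measured part pose that falls outside the quoted variation envelope before correcting the robot path. The function and limit names below are hypothetical, chosen only to mirror the numbers in the text.

```python
# Assumed envelope from the article: ±75 mm in x, y, z and ±5° in Rx, Ry, Rz.
TRANSLATION_LIMIT_MM = 75.0
ROTATION_LIMIT_DEG = 5.0

def pose_within_envelope(pose):
    """pose = (x, y, z, rx, ry, rz): offsets from the taught pose,
    translations in mm, rotations in degrees. Returns True when every
    component lies inside the allowable part-variation envelope."""
    x, y, z, rx, ry, rz = pose
    return (all(abs(t) <= TRANSLATION_LIMIT_MM for t in (x, y, z))
            and all(abs(r) <= ROTATION_LIMIT_DEG for r in (rx, ry, rz)))

pose_within_envelope((40.0, -12.5, 60.0, 1.2, -3.0, 4.9))  # → True
pose_within_envelope((80.0, 0.0, 0.0, 0.0, 0.0, 0.0))      # → False
```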