Factory Automation

Vision enables freestyle bin picking

A layered approach uses two cameras, a supervisory program, a robot controller, and a custom compensating gripper.
June 1, 2007

By Winn Hardin, Contributing Editor

Vision-guided robots are used widely in the automotive industry in structured applications such as paint dispensing, welding, and assembly. Traditionally, these applications use hard fixtures, material-handling equipment, or both to singulate or orient a part or assembly before it reaches the robot, creating a repeatable and controlled environment. Even in tightly controlled environments, however, robots often require a machine-vision system to fine-tune robot guidance when picking up and locating parts in complex applications, such as autoracking.

Taking the application to the next step, picking objects from an unstructured environment, has stymied the best combinations of vision and robots. This inability to pick and place a part from a randomly ordered pile poses a problem for automotive and durable-goods manufacturers because most components arrive randomly packed in bins or similar protective dunnage.

There are signs that unstructured bin-picking applications are coming of age. In spring 2007, DaimlerChrysler purchased several bin-picking systems from automation integrator Auto/con after being shown the benefits of a bin-picking workcell developed jointly by American Axle Manufacturing's Cycle Time Improvement and Automation group, vision-guided-robot software provider Shafi Inc., and Auto/con. The system uses a pair of Sony cameras and the Cognex VisionPro 3.4 image-processing library, along with Shafi Reliabot bin-picking software, to control a Motoman UP130 robot arm with custom end-of-arm tooling that unloads bins of 240 axles at a rate of one axle every 15 s (see Fig. 1).

FIGURE 1. In a layered application, a pair of Sony XC-HR70 cameras is mounted orthogonally on a custom gripper for bin picking. The Auto/con gripper is designed to accommodate ±5 mm of positional error in the horizontal direction and ±12 mm in the vertical direction.

The challenges to machine-vision systems for bin picking are not new, and most traditional solutions are not applicable. Typically, the designer of a vision cell must simplify the application as much as possible, identifying and isolating each variable in the process (for example, part geometries, material handling, lighting) and then controlling those variables cost-effectively before designing the vision system. This helps limit the requirements placed on the vision system, which also lowers cost and increases reliability. Process variability and unstructured environments are a problem for all automated industrial processes.

With bin picking, however, the designer is limited in how much the application can be simplified. The parts come loose in a box, and the application demands that they be picked up, one at a time, and placed on a conveyor or other material-handling system without damaging the part, its neighbors, or the conveyor. It would take an unreasonable number of feeding conveyors to singulate the axles from one bin using only hard fixtures; and given the part's size and weight, such a solution would likely create defective and damaged parts.

In current practice, the parts are unloaded manually, which poses health risks because of the weight of the parts and limits throughput. In addition, a worker cannot safely move as fast as a properly specified robot.

The successful bin-picking vision solution requires intelligent automation. However, vision systems have difficulty identifying similarly colored and shaped parts that lie haphazardly on top of one another. “This is a very complicated scene to work with,” notes Adil Shafi, president and owner of Shafi Inc. “If you were to look at a black-and-white 2-D image of a bin filled with axles, most of what you would see is a lot of crisscrossed lines. You would have no sense of what’s on top and what’s below. Moreover, depending on lighting, these axle shafts can have a shiny or rough finish to them, and we don’t know this information in advance.”

FIGURE 2. Standard ceiling-mounted halogen lamps illuminate the wire bin-picking application. The end user demanded that the application work without using structured lighting or redesigned dunnage. The layered approach requires two cameras, a controller to direct the robot, and a special gripper built for the specific part to be handled.

The solution requires a layered technical approach: image processing acquires a wide-angle image of a group of parts, a second vision sensor acquires an image of the candidate part chosen from that group, a supervisory program converts the part coordinates from the vision system into a program path for the robot, a controller directs the robot, and finally, a special compensating gripper built specifically for the part (the axle) accounts for small fluctuations in position and the physical constraints imposed by the dunnage container (see Fig. 2).
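
A minimal structural sketch of one pick cycle through those layers is shown below. Every interface, class, and method name here is a hypothetical placeholder meant only to illustrate the division of labor described above; none of it corresponds to the actual Reliabot, Cognex, or Motoman APIs.

```python
# Hypothetical sketch of one pick cycle in a layered bin-picking workcell.
# The objects passed in (wide_camera, vision, supervisor, robot, gripper)
# are assumed placeholders, not real vendor APIs.

def pick_one_part(wide_camera, close_camera, vision, supervisor, robot, gripper):
    """Run a single pick cycle through the four layers described above."""
    # Layer 1: wide-angle view of a group of parts
    overview = wide_camera.acquire()
    candidates = vision.find_candidates(overview)

    # Layer 2: close-up view of the highest-priority candidate
    target = supervisor.prioritize(candidates)[0]
    robot.move_near(target.rough_pose)
    refined_pose = vision.refine_pose(close_camera.acquire(), target)

    # Layer 3: the supervisory program converts part coordinates into a robot path
    path = supervisor.plan_path(refined_pose)

    # Layer 4: the controller executes the path; the compensating gripper absorbs
    # the remaining positional error when the part is grasped
    robot.execute(path)
    gripper.grasp()
    robot.place_on_conveyor()
```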

Layered Vision

The axle application began with the creation of a 3-D representation of the axle. Most vision applications singulate parts so that they can treat 3-D objects as 2-D objects and, therefore, dramatically reduce the amount of image processing required to find key features. American Axle provided test parts to Shafi Inc., which processed multiple images of the part to create a 3-D representation: a file of key features and their spatial relationships. This 3-D representation, or model, can also be generated from CAD files when available.
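
The article does not describe the model file itself, but the sketch below shows what a minimal feature-based part representation might look like; the feature names and dimensions are illustrative assumptions, not data from the actual axle model.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class PartFeature:
    """A key feature expressed in the part's own coordinate frame."""
    name: str
    position_mm: np.ndarray


@dataclass
class PartModel:
    """Stand-in for a 3-D part representation: key features plus their
    spatial relationships (here, pairwise distances)."""
    features: list

    def distance_mm(self, a: str, b: str) -> float:
        lookup = {f.name: f.position_mm for f in self.features}
        return float(np.linalg.norm(lookup[a] - lookup[b]))


# Illustrative axle: a rod roughly 3 ft long with a flange at one end.
axle = PartModel(features=[
    PartFeature("flange_center", np.array([0.0, 0.0, 0.0])),
    PartFeature("rod_tip", np.array([914.0, 0.0, 0.0])),
])
print(axle.distance_mm("flange_center", "rod_tip"))  # ~914 mm
```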

The 3-D representation was uploaded to the Reliabot software running on a nearby PC. Shafi recommends a PC with at least an 850-MHz Intel Pentium or comparable microprocessor, 256 Mbytes of RAM, a 10-Gbyte hard drive, a floppy drive, a Windows XP Professional or 2000 operating system, at least two free PCI slots, a standard parallel port, an RS-232 serial port, a 10/100 Ethernet port, and USB ports for quick backups.

3-D Calibration

Before initiating the bin-picking operation, the vision system and robot controller must be calibrated to the same global coordinate system and to the shape-specific physics of the part, in this case a 3-ft metal rod with a heavy metal flange at one end. Reliabot has calibration routines based on 12 standard 3-D part geometries that are common to the automotive industry, such as rods for pistons and drive shafts, flanges for axles, and so forth.

An automotive axle is essentially a rod connected to a flanged, plate-like structure at one end. The calibration routines are designed to help the Reliabot program generate a 3-D location for the part to within ±5 mm along the x and y axes and ±12 mm along the z axis.
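
As a worked illustration of that error budget, the check below verifies that an estimated 3-D location falls inside those tolerances. It is a sketch of the idea only, not part of the actual Reliabot calibration routine.

```python
import numpy as np

# Tolerances quoted in the article: +/-5 mm in x and y, +/-12 mm in z.
XY_TOL_MM = 5.0
Z_TOL_MM = 12.0


def within_pick_tolerance(estimated_mm: np.ndarray, actual_mm: np.ndarray) -> bool:
    """Return True if the location estimate stays inside the error budget
    that the compensating gripper is designed to absorb (illustrative only)."""
    dx, dy, dz = np.abs(estimated_mm - actual_mm)
    return bool(dx <= XY_TOL_MM and dy <= XY_TOL_MM and dz <= Z_TOL_MM)


# Example: 3.5 mm and 3 mm of error in x and y, but 16 mm in z -> out of tolerance.
print(within_pick_tolerance(np.array([100.0, 50.0, 20.0]),
                            np.array([103.5, 47.0, 36.0])))  # False
```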

The calibration routine begins by triggering one of two Sony XC-HR70 cameras mounted on the robot gripper. The first camera is designed to capture a wide-area view of one of four areas inside the bin and to pass that image, over custom flex cables from Orri Corp. designed for robot-vision applications, to a Cognex MVS-8504 digital frame grabber, which decodes the digital packet data and delivers the image to PC RAM (see Fig. 3).

FIGURE 3. One camera captures a wide-angle view and locates several candidates for picking; the orange window is the search window (top). Another camera shows a close-up view from a different viewing angle; the purple window shows that the vision system is now focusing on a single part. In this view the part angle is also detected (bottom).

Angled edge-detection algorithms in the Cognex image-processing library running on the PC take the images and identify the angled edges, outlining all visible axles based on histogram thresholds. The axles with the greatest length of identified edge are considered to be on the top layer of the bin.
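
The sketch below illustrates the same idea of ranking candidates by detected edge length, using OpenCV as a stand-in; the production system uses the Cognex VisionPro tools, and the threshold values here are placeholder assumptions that would depend on lighting and surface finish.

```python
import cv2
import numpy as np


def find_top_layer_candidates(gray: np.ndarray, min_edge_px: float = 200.0):
    """Rank axle candidates by total detected edge length; the longest,
    least-broken edges are assumed to belong to axles on the top layer."""
    # Placeholder Canny thresholds; real values depend on lighting and on
    # whether the shafts have a shiny or rough finish.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    ranked = sorted(contours, key=lambda c: cv2.arcLength(c, False), reverse=True)
    return [c for c in ranked if cv2.arcLength(c, False) >= min_edge_px]
```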

Reliabot determines the rough 3-D locations of these top-layer axles from the known position of the robot and the angle of the camera on the end-effector, which the Motoman NX100 robot controller supplies to the Reliabot PC via Ethernet. The geometric search algorithm roughly estimates the height of each identified axle from the height of the camera on the robot and the size of the blob each axle forms in the wide-angle image.
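
One plausible way to turn blob size into a rough height estimate is a size-from-scale calculation under a pinhole-camera assumption, sketched below. This is an assumed interpretation of the step above, not the published Reliabot algorithm, and the numbers are illustrative.

```python
def estimate_axle_height_mm(observed_length_px: float,
                            reference_length_px: float,
                            reference_distance_mm: float,
                            camera_height_mm: float) -> float:
    """Under a pinhole-camera model, apparent length scales inversely with
    distance, so a longer-looking axle is closer to the camera, i.e. higher
    in the bin. Returns the estimated height above the datum from which
    camera_height_mm is measured."""
    distance_mm = reference_distance_mm * reference_length_px / observed_length_px
    return camera_height_mm - distance_mm


# An axle appearing 10% longer than the calibrated reference view sits
# roughly 136 mm closer to the camera, i.e. higher in the bin.
print(estimate_axle_height_mm(1100.0, 1000.0, 1500.0, 1800.0))  # ~436 mm
```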

Reliabot software then prioritizes each axle for pickup according to how little interference it is likely to encounter from neighboring axles or the bin walls. This prioritization is based on a set of rules tied to the axle's physical dimensions and properties and is included in the Reliabot calibration routine for axle bin picking (see Fig. 4).
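
The actual rule set belongs to Reliabot and is not published; the sketch below shows only the general flavor of such rule-based prioritization, with made-up field names, clearance thresholds, and penalty weights.

```python
def prioritize_axles(axles, min_wall_clearance_mm=50.0, min_neighbor_gap_mm=30.0):
    """Order pick candidates so that higher, less-obstructed axles come first.
    All thresholds and weights are illustrative assumptions."""
    def score(axle):
        s = axle["height_mm"]                        # prefer axles on top
        if axle["wall_clearance_mm"] < min_wall_clearance_mm:
            s -= 1000.0                              # heavily penalize near-wall picks
        if axle["neighbor_gap_mm"] < min_neighbor_gap_mm:
            s -= 500.0                               # penalize likely entanglement
        return s
    return sorted(axles, key=score, reverse=True)


candidates = [
    {"height_mm": 620.0, "wall_clearance_mm": 20.0, "neighbor_gap_mm": 80.0},
    {"height_mm": 580.0, "wall_clearance_mm": 150.0, "neighbor_gap_mm": 60.0},
]
print(prioritize_axles(candidates)[0])  # the lower but unobstructed axle wins
```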

FIGURE 4. Reliabot software prioritizes each axle for pickup according to how little interference it is likely to encounter from neighboring axles or bin walls.

To further refine the position of the axle in 3-D and create a program path for the robot based on the physical properties and weight of the axle and the end-effector, Reliabot moves the robot arm to a position near the side of the first axle to be picked. A second Sony XC-HR70 camera captures a closer image of the target axle, and the Reliabot software uses a modified geometric pattern search for 3-D objects to refine the position, yaw, pitch, and roll of the axle. This final position data, along with any relevant rules (such as an axle lying near a bin wall or an anticipated center of gravity located x distance from the flange end), is then used by Reliabot to create the final robot path, which is passed along to the NX100 controller for final implementation.
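
A hedged sketch of turning that refined six-degree-of-freedom pose into a short pick path is shown below. The offsets are invented placeholders, and the real path rules (bin-wall avoidance, center-of-gravity offsets) live inside Reliabot and are not reproduced here.

```python
def plan_pick_path(axle_pose, approach_offset_mm=150.0, lift_mm=300.0):
    """Turn a refined axle pose (x, y, z in mm; yaw, pitch, roll in degrees)
    into a minimal three-waypoint pick path. Offsets are illustrative."""
    x, y, z, yaw, pitch, roll = axle_pose
    approach = (x, y, z + approach_offset_mm, yaw, pitch, roll)  # hover above the rod
    grasp = (x, y, z, yaw, pitch, roll)                          # close the pinchers here
    retreat = (x, y, z + lift_mm, yaw, pitch, roll)              # lift clear of neighbors
    return [approach, grasp, retreat]


# Example: axle lying 450 mm up in the bin, rotated 30 degrees about the vertical axis.
print(plan_pick_path((820.0, 400.0, 450.0, 30.0, 0.0, 0.0)))
```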

Even with the power of the Pentium, staged image-processing paradigms, and contextual rules for 3-D applications, the success of a vision-guided robot workcell still depends on a little help from a metal widget. The robot gripper design uses a single set of pinchers flanked fore and aft by two V-grooved supports.

The gripper uses a two-finger pincher configuration to grasp the axle rod and then pull it upward against the flanking V-grooved nests. Auto/Con designed the gripper this way so it could accommodate a 10-mm side-to-side and 15-mm height margin of error while still successfully picking up 100% of the axles.

Features, Advantages, Benefits

Dan Bickersteth

“This application presented a lot of technical complexities,” says American Axle Manufacturing corporate manager Dan Bickersteth. “We needed to pick axle shafts in a semi-oriented unit that was loaded. The axles are placed in a wire-basket dunnage in layers, which is standard for our processes. We need to stick to these parts in these dunnages because if you drive them into custom dunnage, it erodes the business case pretty fast.

“So the system designers took axle and drive-train bin picking challenges and pulled them both off. We didn’t end up doing an installation on the axle application because we rearranged work in our facility and were able to have that load operation combined with another manual operation that we couldn’t design out of our process. But if we hadn’t been able to do that we’d have three cells on our floor right now.”
