
3D vision helps robots pick and disentangle hooks

A 3D scanner, vision and motion planning software, and a collaborative robot combine to solve one of machine vision’s toughest tasks.
Nov. 16, 2020

Smash, drop. Smash, drop. Many factories today have tasks that people no longer want, or need, to do. In a spring manufacturing facility, for example, employees had to manually insert hooks into a press and flatten them during spring assembly, over and over, throughout the course of a day. Innovative machine vision technology has removed this need and helped increase productivity and efficiency, while freeing human workers for more meaningful work elsewhere.

Many manufacturing environments deploy robotic bin picking systems to automate material handling tasks. Such a system typically comprises a 2D or 3D machine vision system, a robotic manipulator, and control software, and it picks objects out of a container and places them onto a conveyor or pallet or into a downstream process. While these systems streamline production and protect workers from repetitive and potentially hazardous tasks, bin picking remains challenging because the parts lie loose and randomly piled in the bin, making them hard to differentiate and pick.
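In code terms, such a cell boils down to a sense-plan-act loop. The sketch below is a generic, hypothetical illustration of that loop; the camera, vision, and robot objects are placeholder interfaces, not any vendor's API.

```python
# Minimal sense-plan-act loop for a bin picking cell (illustrative only;
# camera, vision, and robot are hypothetical placeholder interfaces).

def bin_picking_cycle(camera, vision, robot, bin_is_empty):
    while not bin_is_empty():
        cloud = camera.capture_point_cloud()          # 2D/3D acquisition
        picks = vision.detect_graspable_parts(cloud)  # locate loose, piled parts
        if not picks:
            break                                     # nothing reachable; call an operator
        grasp = picks[0]                              # highest-ranked candidate
        robot.move_to(grasp.approach_pose)
        robot.close_gripper()
        robot.move_to(grasp.retreat_pose)
        robot.place_at(grasp.destination)             # conveyor, pallet, or next process
```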


Bin picking becomes even more challenging with objects like hooks or springs (Figure 1). Ace Wire Spring (McKees Rocks, PA, USA; www.acewirespring.com), for example, bends metal wire into various shapes, including springs and wire forms. Because springs and wires become entangled when dumped into a bin, automatically picking and placing these parts is particularly difficult. Previously, hooks were formed on one machine and dumped into a bin that was moved to another side of the shop. An employee would then pick hooks out of the bin one at a time and insert the end of each into a press that flattens and widens it. The hook would then move to another station, where a second employee would add a bead and press it together with a spring to hold the spring in place. These hook and spring assemblies are used in tractor trailers and garage door openers, for example.

Seeking to automate the hook flattening portion of the process, Ace Wire Spring enlisted CapSen Robotics (Pittsburgh, PA, USA; www.capsenrobotics.com), a company specializing in 3D vision and motion planning software. Beyond overlapping parts, the system had to handle hooks as small as 2 in. and the fact that hooks come up entangled when picked. CapSen Robotics' 3D vision, motion planning, and control software, CapSen PiC, targets exactly these advanced bin picking problems.

Because the parts are small, the system needs a 0.75 m working distance from the camera to the bin (Figure 2), which measures about 16 x 13 x 8 in., and therefore requires a high-end 3D camera, according to Jared Glover, CEO of CapSen Robotics. This led the team to choose the PhoXi 3D Scanner Model S from Photoneo (Bratislava, Slovakia; www.photoneo.com), which uses Class 3R visible red (638 nm) laser projection and an NVIDIA (Santa Clara, CA, USA; www.nvidia.com) Maxwell graphics processing unit (GPU) and offers a data acquisition time of 250 to 2250 ms.

During operation, the 3D scanner projects a pattern into the bin, and a PAVS6 six-axis collaborative robot from Precise Automation (Fremont, CA, USA; www.preciseautomation.com) picks up a hook (Figure 3) using an LEHZ32K2-22-S56P1D actuator from SMC Pneumatics (Yorba Linda, CA, USA; www.smcpneumatics.com). The robot disentangles the hook by rotating its arm and letting any other hooks fall back into the bin. This portion of the process posed two challenges, the first involving the gripper. Grasping small hooks stably without bumping into other hooks when entering the bin proved difficult, so CapSen Robotics designed custom fingers for the robot's end effector that can precisely pick up hooks with either the fingertips or a full-hand grasp.
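The disentangling motion itself is conceptually simple: lift the grasped hook clear of the pile and rotate the wrist so entangled neighbors slide off under gravity. The following sketch illustrates that idea; the robot and gripper calls are hypothetical placeholders rather than the Precise Automation or SMC interfaces, and the lift height and rotation angles are assumed values.

```python
import math

def pick_and_disentangle(robot, gripper, grasp_pose, lift_height_m=0.15,
                         shake_angles_deg=(90, -90, 45)):
    """Pick one hook, lift it above the bin, and rotate the wrist so any
    entangled hooks fall back in. All interfaces here are placeholders."""
    robot.move_to(grasp_pose)
    gripper.close()                                  # fingertip or full-hand grasp
    robot.move_to(grasp_pose.translated(z=lift_height_m))  # raise the hook above the pile
    for angle in shake_angles_deg:
        robot.rotate_wrist(math.radians(angle))      # let entangled neighbors slide off
    return robot.holding_part()                      # e.g. confirmed by a magnetic sensor
```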

Second, when the robot picks a hook from the bin, it typically is not holding it by the side that goes into the press (Figure 4), and even when it is, the grasp may not be stable. To solve this, the team programmed the robot to place a picked hook onto a peg fixture, allowing the arm and gripper to reorient and regrip the hook properly for placement into the press.
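The regrasp itself reduces to a short, fixed sequence. This sketch uses the same hypothetical robot and gripper interfaces as above; the peg and press-side grasp poses are assumed, taught locations.

```python
def regrasp_on_peg(robot, gripper, peg_pose, press_grasp_pose):
    """Place the hook on the peg fixture, release, and re-approach from the
    side that fits the press. Poses and interfaces are illustrative."""
    robot.move_to(peg_pose)
    gripper.open()                       # hook now rests on the peg (held by a magnet)
    robot.move_to(press_grasp_pose)      # approach from the press-compatible side
    gripper.close()
    return press_grasp_pose
```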

After the press comes down, the hook drops into a chute or destination bin and the process starts over. Once the robot has picked a hook and the arm has moved out of the way of the bin, the camera snaps the next image and CapSen's software begins processing it and planning the next pick, so the robot never needs to stop between picks. When a bin runs empty, a light stack from Banner Engineering (Minneapolis, MN, USA; www.bannerengineering.com) beeps and turns red, signaling a human operator to refill it.
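Overlapping perception with motion in this way is a standard pipelining pattern: as soon as the arm clears the camera's view of the bin, the next scan and pick computation run in the background while the current hook is being pressed. A minimal sketch of that overlap using a Python thread pool follows; the camera, vision, and robot objects are again hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def run_cell(camera, vision, robot, bin_is_empty):
    """Pipeline perception and motion: plan pick N+1 while executing pick N."""
    executor = ThreadPoolExecutor(max_workers=1)

    def sense_and_plan():
        cloud = camera.capture_point_cloud()
        return vision.plan_next_pick(cloud)          # detection + grasp/motion planning

    next_pick = executor.submit(sense_and_plan)      # first scan
    while not bin_is_empty():
        pick = next_pick.result()                    # wait only if planning isn't done yet
        robot.execute_pick(pick)                     # arm leaves the camera's view...
        next_pick = executor.submit(sense_and_plan)  # ...so the next scan can start now
        robot.press_and_drop(pick)                   # flatten the hook, drop it into the chute
```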

E1210 and E12100 Ethernet remote input/output (I/O) modules from Moxa (Brea, CA, USA; www.moxa.com) connect the end effector's I/O to a custom Linux computer equipped with an NVIDIA GTX 1080. The 3D camera also connects to this computer, which runs CapSen Robotics' software. A magnetic sensor signals whether the robot has a hook in place, and another magnet in the fixture holds the hook steady so that when the robot picks it up again, it remains in the same orientation. This made that portion of the task much more predictable, according to Glover.
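Moxa's ioLogik remote I/O modules are typically polled over Modbus/TCP. Assuming the magnetic hook-present sensor is wired to one of the module's discrete inputs, reading it from the Linux computer might look like the sketch below; the IP address, input channel, and choice of the pymodbus library are assumptions for illustration, not details from the installation.

```python
# Polling a hook-present sensor over Modbus/TCP (sketch; address, channel,
# and wiring are assumptions, not taken from the article).
from pymodbus.client import ModbusTcpClient

MOXA_IP = "192.168.1.10"      # placeholder address of the remote I/O module
HOOK_SENSOR_CHANNEL = 0       # placeholder discrete-input channel

def hook_in_place():
    client = ModbusTcpClient(MOXA_IP)
    try:
        if not client.connect():
            raise ConnectionError("remote I/O module unreachable")
        result = client.read_discrete_inputs(HOOK_SENSOR_CHANNEL, count=1)
        if result.isError():
            raise IOError("Modbus read failed")
        return bool(result.bits[0])   # True when the magnetic sensor sees a hook
    finally:
        client.close()
```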

Motion planning for the robot represents another critical factor in the task, explains Glover, who says that portions of the path that always remain the same are pre-recorded by hand-guiding the robot.

“A short action script provides a list of step by step instructions for a particular task, such as locating objects in the bin, selecting the easiest hook to pick, and picking the hook,” he says. “Without that script our software doesn’t know anything about the task or the object it is picking.”

Additional tasks in the action script include holding a hook above the bin, disentangling it, and placing it onto the fixture. The script also refers to action locations, which let the robot move to the various locations that are important to the task.
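The article does not show CapSen's script format, but the idea of an action script can be illustrated as an ordered list of steps referencing named action locations. Everything below is an assumed, illustrative representation, not CapSen PiC's actual syntax.

```python
# Hypothetical action script for the hook-flattening task: an ordered list
# of steps that reference named action locations. Illustrative only.
HOOK_FLATTENING_SCRIPT = [
    {"action": "locate_objects", "where": "bin"},
    {"action": "select_pick",    "strategy": "easiest_first"},
    {"action": "pick",           "where": "bin"},
    {"action": "hold_above",     "where": "bin"},
    {"action": "disentangle",    "motion": "wrist_rotation"},
    {"action": "place",          "where": "peg_fixture"},
    {"action": "regrasp",        "where": "peg_fixture"},
    {"action": "insert",         "where": "press"},
    {"action": "retract",        "where": "press"},
]
```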

“For example, when the robot has to insert the hook into the press, it must be taught what that means – hold it here, then here, and then back out again, and those motions/paths are recorded,” he says. “Some of the locations are not a fixed position with respect to the robot, so it is not going to the same place every time but rather a location that has something to do with the object or other parts of the environment.”


Based on the recorded, relevant locations, the motion planning software plans how to accomplish the tasks. It isn’t just figuring out how to get from Point A to Point B, but also which Point A and which Point B should be chosen. A lot of the user interface beyond just start and stop involves teaching the robot different locations, explains Glover.
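In planning terms, that means the planner evaluates combinations of candidate grasps (Point A) and candidate placements (Point B) and keeps the pair with the lowest overall cost, rather than planning a single fixed motion. The sketch below illustrates that selection; the plan_path and path_cost functions are hypothetical placeholders.

```python
def choose_endpoints(candidate_grasps, candidate_placements, plan_path, path_cost):
    """Pick which 'Point A' (grasp) and 'Point B' (placement) to use, not just
    how to move between them. plan_path and path_cost are placeholders."""
    best = None
    for grasp in candidate_grasps:
        for placement in candidate_placements:
            path = plan_path(grasp, placement)   # may return None if infeasible
            if path is None:
                continue
            cost = path_cost(path)               # e.g. length, clearance, cycle time
            if best is None or cost < best[0]:
                best = (cost, grasp, placement, path)
    return best   # None if no feasible grasp/placement pair exists
```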

The team went through 20 to 30 different finger designs before getting them right, one reason CapSen Robotics handled much of the integration work itself even though the company primarily focuses on software, explains Glover.

Over the past few years, the team has continuously improved its software to handle bin picking in clutter and to disentangle parts that were previously difficult to deal with. The software uses 3D models of objects; when no CAD models exist, the team scans its own using the CapSen Scanner scan table. The GPU-accelerated software pairs geometry-based vision algorithms with machine learning techniques to improve object detection accuracy. Essentially, says Glover, using 3D models turns the task into a CAD matching problem, in which the software searches among all possible 3D locations (positions and orientations) of the object in the bin.
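The CAD-matching idea can be illustrated with a simplified pose search: sample candidate positions and orientations of the object model, transform the model's points, and score each pose by how closely the transformed points match the scanned cloud. The sketch below, using NumPy and SciPy, is a deliberately brute-force illustration of the concept, not CapSen's GPU-accelerated algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def match_model_to_scene(model_points, scene_points, candidate_poses):
    """Score candidate 6-DOF poses of an object model against a scene cloud.

    model_points, scene_points: (N, 3) arrays of 3D points.
    candidate_poses: iterable of (R, t) with R a 3x3 rotation and t a 3-vector.
    Returns the pose whose transformed model points lie closest to the scene.
    """
    scene_tree = cKDTree(scene_points)
    best_pose, best_score = None, np.inf
    for R, t in candidate_poses:
        transformed = model_points @ R.T + t
        dists, _ = scene_tree.query(transformed)   # nearest scene point per model point
        score = np.mean(dists)                     # lower = better geometric match
        if score < best_score:
            best_pose, best_score = (R, t), score
    return best_pose, best_score

# Example: coarse sampling of candidate orientations (illustrative only)
coarse_rotations = Rotation.random(64, random_state=0).as_matrix()
```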

“Having 3D models of objects removes the need for a machine learning system that is flexible enough to recognize any animal or a person’s face, for example,” says Glover. “In this case, the software knows exactly what to look for in a 3D model, so it is a matter of figuring out where a good match exists or not. There is a lot more variability in what a cat could look like compared to what a particular hook would look like in this bin in a factory.”

For that reason, the software's machine learning models need far fewer parameters to describe: a few thousand suffice, rather than the million or more commonly required for deep learning applications, which means a much smaller training dataset is needed.

Previously, when the company had employees manually flattening hooks, an operator could do about 200 pieces per hour; working without stopping for an entire eight-hour shift, that equates to 1,600 pieces. Human operators cannot work non-stop for an entire shift, but the new robot installed by CapSen Robotics can, according to Rich Froehlich, owner of Ace Wire Spring.

“The machine can flatten about 250 pieces per hour, which works out to a 25% increase,” he says. “The reality is that it’s more than that, because people require periodic breaks while the robot runs 24 hours a day.”
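The arithmetic is easy to check, and a quick calculation also shows what round-the-clock operation adds beyond the hourly rate increase; the 24-hour figure below is an extrapolation from the quoted rates, not a number reported by the company.

```python
manual_rate, robot_rate = 200, 250                 # pieces per hour
shift_hours, robot_hours = 8, 24

print(manual_rate * shift_hours)                   # 1600 pieces in an ideal 8-hour shift
print((robot_rate - manual_rate) / manual_rate)    # 0.25 -> 25% higher hourly rate
print(robot_rate * robot_hours)                    # 6000 pieces if run around the clock
```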

About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
