
VISION-GUIDED ROBOTICS: 3-D measurement systems verify pallet packing

One of the problems faced by developers of fully or semi-automated packaging facilities is ensuring that packages are correctly stacked on pallets.
Sept. 1, 2009

One of the problems faced by developers of fully or semi-automated packaging facilities is ensuring that packages are correctly stacked on pallets. Whether the pallets are placed on automated guided vehicles (AGVs) or moved through a warehouse or packaging facility via conveyor, correct placement ensures that the goods will be transported without incident. Incorrectly stacked packages may clip the side of other pallets or objects in a storage area or, worse, topple from the pallet.

To ensure pallets are packed correctly, developers of automated pallet packaging and transportation systems currently rely on two technologies. Both achieve the same result, but they do so in distinctly different ways.

At the June 2009 International Robots, Vision & Motion Control Show (Rosemont, IL, USA), two companies showed how the technologies could be used to ensure correct transportation of packages placed on pallets. The first, dubbed the Sentinel AGV, from Nagle Research (Cedar Park, TX, USA; www.nagleresearch.com) uses two LMS 400 phase time-of-flight sensors from SICK USA (Minneapolis, MN, USA; www.sickusa.com) to map a 3-D field as the pallet passes under a gantry.

“In the design of the system,” says John Nagle, president of Nagle Research, “the two sensors are positioned approximately 8 ft above a gantry. Because each sensor can scan an area of approximately 70°, positioning two sensors at opposite corners at the top of the gantry will ensure that the complete pallet is scanned as it moves under the gantry.” Measuring larger loads is also possible using different LMS models.
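The figures Nagle quotes allow a quick sanity check of the gantry geometry. As a rough illustration only, assuming a symmetric fan beam and a flat floor (neither of which the article spells out), the swath one scanner covers from that mounting height can be estimated in a few lines of Python:

```python
import math

def scan_width(mount_height_ft: float, scan_angle_deg: float) -> float:
    """Floor width covered by one scanner with a symmetric fan of scan_angle_deg."""
    half_angle = math.radians(scan_angle_deg / 2.0)
    return 2.0 * mount_height_ft * math.tan(half_angle)

# Figures quoted in the article: sensors roughly 8 ft up, ~70-degree scan each.
print(f"Coverage per sensor: {scan_width(8.0, 70.0):.1f} ft")  # ~11.2 ft
```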

As the laser scans the pallet, the reflected light is detected and the distance from object to sensor is calculated from the phase shift between the emitted and returned light. Using software developed by Nagle Research, data from both sensors are then combined into a 3-D point cloud model (see figure).
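The sketch below illustrates the two steps this paragraph describes: the standard phase-to-range conversion for a continuous-wave measurement, and one simple way to merge two calibrated point clouds into a common gantry frame. The function names and the fixed 4 × 4 calibration transforms are hypothetical; this is not Nagle Research's software.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phase_rad: np.ndarray, mod_freq_hz: float) -> np.ndarray:
    """Continuous-wave phase measurement: range d = c * phi / (4 * pi * f)."""
    return C * phase_rad / (4.0 * np.pi * mod_freq_hz)

def merge_clouds(cloud_a: np.ndarray, cloud_b: np.ndarray,
                 T_a: np.ndarray, T_b: np.ndarray) -> np.ndarray:
    """Map each N x 3 cloud into a common gantry frame using a fixed 4 x 4
    calibration transform per sensor, then stack the results."""
    def apply(T: np.ndarray, pts: np.ndarray) -> np.ndarray:
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        return (homog @ T.T)[:, :3]
    return np.vstack([apply(T_a, cloud_a), apply(T_b, cloud_b)])
```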

Two different types of image sensors are now being used to render 3-D models for pallet packaging inspection. The first (above), from Nagle Research, uses two SICK laser scanners to capture image data. The second (not shown), from ifm efector, uses a solid-state 3-D sensor. In both cases, captured data are rendered as point cloud images.


“By examining this model,” says Nagle, “data about objects packed on each pallet can be analyzed to within 0.25 in.” Automated measurement of this data is then used to control a PLC that can reroute any AGV carrying an incorrectly stacked pallet to a rework station for repacking. Currently deployed at two major retailer warehouses, the $55,000 Sentinel AGV is being offered as a fully integrated system. According to Nagle, the company is also interested in marketing the software as a standalone product.
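The article does not describe the rework logic itself, but a minimal sketch of the kind of check the point cloud supports might look like the following, assuming a standard 48 × 40-in pallet footprint and treating the quoted 0.25-in figure as an overhang tolerance (both assumptions, not details of the deployed system):

```python
import numpy as np

def needs_rework(points_in: np.ndarray,
                 pallet_length_in: float = 48.0,
                 pallet_width_in: float = 40.0,
                 tolerance_in: float = 0.25) -> bool:
    """True if any point of the merged cloud (x, y in inches, origin at one
    pallet corner) overhangs the pallet footprint by more than the tolerance."""
    x, y = points_in[:, 0], points_in[:, 1]
    over_x = (x < -tolerance_in) | (x > pallet_length_in + tolerance_in)
    over_y = (y < -tolerance_in) | (y > pallet_width_in + tolerance_in)
    return bool(np.any(over_x | over_y))

# A supervisory program could then set a flag the PLC reads to divert the AGV
# to the rework station; the actual PLC interface is not described in the article.
```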

At the same trade show, ifm efector (Exton, PA, USA; www.ifmefector.com) showed a system based on technology from PMD Technologies (Siegen, Germany; www.pmdtec.com). Rather than scanning the scene with laser light, the company’s 3-D image sensor illuminates the part to be inspected with a rectangular array of red LEDs that are strobed at 20 MHz. The light reflected from the scene is captured by a 64 × 48 array of smart pixels.

“In a traditional time-of-flight image sensor system, the returning light must be mixed with a reference signal at the same frequency as the emitted light after it is detected by a pixel,” says Garrett Place, senior product manager with ifm efector. “But because the gate of each smart pixel within the 64 × 48 array is modulated at this frequency, this mixing process is performed on-chip, allowing each smart pixel to return phase and amplitude values.” The company showed the 3-D sensor in a system designed to measure the dimensions of a pallet as it sat under a 7-ft gantry.
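For readers unfamiliar with that demodulation step, the sketch below shows the textbook four-sample calculation a continuous-wave time-of-flight pixel performs to produce the phase and amplitude values Place mentions. It is a generic illustration, not PMD’s on-chip implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def demodulate(a0, a90, a180, a270, mod_freq_hz=20e6):
    """Four-sample demodulation for a continuous-wave ToF pixel. a0..a270 are
    per-pixel correlation samples taken at 0/90/180/270-degree offsets."""
    phase = np.arctan2(a90 - a270, a0 - a180) % (2.0 * np.pi)       # radians
    amplitude = 0.5 * np.sqrt((a90 - a270) ** 2 + (a0 - a180) ** 2)
    distance = C * phase / (4.0 * np.pi * mod_freq_hz)              # meters
    return phase, amplitude, distance

# At a 20-MHz modulation frequency the unambiguous range is c / (2f), about 7.5 m.
```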

“At this distance,” says Place, “the system can be used to measure an area of approximately 6 × 4 ft and resolve a minimum object of 1 × 1 in. across.” Like the system developed by Nagle Research, data from ifm efector’s image sensor are displayed as a point cloud and can be further processed to control a PLC-based factory automation system. Costing $1450, the 3-D sensor is shipped with software to render captured image data as point cloud information.
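A quick back-of-the-envelope calculation (an illustration, not vendor data) shows how the quoted 1 × 1-in minimum object size lines up with the 64 × 48 array over that field of view:

```python
# Field of view divided across the 64 x 48 pixel array.
fov_x_in, fov_y_in = 6 * 12, 4 * 12               # ~6 x 4 ft field at the 7-ft standoff
pixels_x, pixels_y = 64, 48
print(fov_x_in / pixels_x, fov_y_in / pixels_y)   # ~1.1 in and 1.0 in per pixel
```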
