April 2018 snapshots: Vision-guided harvesting robots, Amazon Go, flying taxi, and door-opening dog robots

April 1, 2018
In the April 2018 snapshots, learn about a robot being developed for the automated harvesting of cucumbers, the checkout-free Amazon Go store that relies on computer vision and deep learning, a vision-guided flying taxi that has begun carrying passengers, and a new video from Boston Dynamics that shows the SpotMini robot opening a door.

Researchers develop lightweight dual-arm robot system for cucumber harvesting

An international group of researchers is developing and testing a lightweight, dual-arm, vision-guided robot for the automated harvesting of cucumbers in Germany.

The “Cucumber Gathering – Green Field Experiments,” or “CATCH,” project team includes the Fraunhofer Institute for Production Systems and Design Technology IPK (Berlin, Germany; www.ipk.fraunhofer.de/en), the Leibniz Institute for Agricultural Engineering and Bioeconomy (Potsdam, Germany; www.atb-potsdam.de/en), and the Center for Automation and Robotics CSIC-UPM (Madrid, Spain; www.csic.es/home). In Germany, explains Fraunhofer IPK, manual harvesting of pickling cucumbers relies on farm vehicles with wing-like attachments, on which seasonal workers lie on their stomachs and pluck ripe cucumbers. Because this operation is labor-intensive and uneconomical, the researchers are studying the potential for automated cucumber harvests.

Dr. Roemi Fernandez Saavedra, a researcher at the Center for Automation and Robotics CSIC-UPM, the partner developing the vision system, explains that two different vision system options are being considered. The first pairs a CCD color camera with a time-of-flight (ToF) 3D camera. Reflectance measurements in the visible region from the color camera serve as the basic input for detecting areas of interest, while the ToF camera simultaneously supplies fast acquisition of accurate distance and intensity images of targets, enabling cucumbers to be localized in 3D space once the color and range information are registered.

A Prosilica GC2450 camera from Allied Vision (Stadtroda, Germany; www.alliedvision.com) was used as the color camera in this option. This GigE Vision camera features the 5 MPixel Sony (Tokyo, Japan; www.sony.com) ICX625 CCD image sensor and reaches frame rates of up to 15 fps at 2448 × 2050 pixels. The ToF camera provides a depth map and an amplitude image at a resolution of 176 × 144 pixels with 16-bit floating-point precision and a maximum frame rate of 54 fps, along with x, y, and z coordinates for each pixel in the depth map, according to Dr. Fernandez.
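
As a rough illustration of how such registered color and range data could be combined, the following Python sketch (our example, not the CATCH project's published code; the intrinsics and color thresholds are placeholder values, and the color image is assumed to be registered to the ToF resolution) segments a green blob in the color image and back-projects the corresponding depth pixels to a 3D centroid:

```python
# Illustrative sketch only: localize a cucumber-like blob in 3D by combining
# a registered color image with a ToF depth map. Intrinsics are placeholders.
import cv2
import numpy as np

FX, FY, CX, CY = 210.0, 210.0, 88.0, 72.0  # assumed ToF intrinsics (176 x 144)

def locate_cucumber(color_bgr, depth_m):
    """Return the 3D centroid (x, y, z) in meters of the largest green blob."""
    # Segment green, cucumber-like pixels in HSV space (thresholds are guesses).
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))

    # Keep the largest connected component as the candidate cucumber.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    ys, xs = np.where(labels == biggest)

    # Back-project the masked pixels through the ToF depth map to 3D points.
    z = depth_m[ys, xs]
    valid = z > 0                      # ToF reports 0 where range is invalid
    if not valid.any():
        return None
    x = (xs[valid] - CX) * z[valid] / FX
    y = (ys[valid] - CY) * z[valid] / FY
    return np.array([x.mean(), y.mean(), z[valid].mean()])
```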

The second system being considered, according to Fernandez, consists of the monochrome version of the Prosilica camera fitted with a custom-made filter wheel; a servomotor positions the wheel accurately, allowing up to five optical filters to be interchanged. This setup enables the analysis of spectral information, providing additional capabilities such as the early detection of diseases. Both vision systems used the SwissRanger SR4000 3D time-of-flight camera from Mesa Imaging (now Heptagon; Zürich, Switzerland; www.hptg.com). Fernandez pointed out that other cameras could be used for this application, as the key issue is the combination of the data and the processing algorithms.
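
The acquisition sequence for such a filter-wheel setup could look something like the sketch below; the `camera` and `wheel` objects and the filter set are hypothetical stand-ins for real drivers, not an actual Allied Vision or servo-controller API:

```python
# Hypothetical acquisition loop: step the servo-driven filter wheel through
# its slots and grab one monochrome frame per spectral band.
import time
import numpy as np

BANDS_NM = [450, 550, 650, 750, 850]   # assumed set of five optical filters

def acquire_cube(camera, wheel, settle_s=0.3):
    """Capture one frame per filter and stack them into a multispectral cube."""
    frames = []
    for slot in range(len(BANDS_NM)):
        wheel.move_to(slot)            # servo rotates the requested filter in
        time.sleep(settle_s)           # let the wheel settle before exposing
        frames.append(camera.grab())   # one frame for this spectral band
    return np.stack(frames, axis=-1)   # H x W x 5 multispectral cube
```

As the article notes, this setup trades acquisition speed for spectral information: each band requires a wheel move and a settle period before exposure.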

In terms of which vision system may be utilized, Fernandez explained: “The first set-up is simpler and provides a faster acquisition. The input data provided by this system is enough for cucumber detection. The second set-up can provide additional capabilities, such as early detection of disease on the crops, thanks to the multispectral information. Nevertheless, the acquisition time with this system is slower, since it is necessary to move the filter wheel to take images in different spectral bands.”

Fraunhofer IPK developed the robot arms based on hardware modules developed by igus (Cologne, Germany; www.igus.eu). The team is tasked with developing three gripper prototypes: grippers based on vacuum technology, a set of bionic gripper jaws (Fin Ray) and a customized “cucumber hand” based on OpenBionics robot hands.

IPK researchers are developing the software that plans, programs, and controls the behavior of the harvesting robots. These pre-programmed behavioral patterns will reportedly enable the robot to search for cucumbers much as a person would, including pushing leaves aside and changing its approach in order to grasp a cucumber, with up to 95% accuracy, according to Dr. Dragoljub Surdilovic, a scientist at Fraunhofer IPK.
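
Pre-programmed behaviors like these are often sequenced as a simple state machine. The sketch below is purely illustrative (not Fraunhofer IPK's software) of how a search/uncover/grasp cycle might be stepped from vision percepts, whose keys are assumed here:

```python
# Purely illustrative state machine for one harvesting cycle.
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()    # scan the canopy for candidate cucumbers
    UNCOVER = auto()   # push leaves aside to expose the target
    GRASP = auto()     # close the gripper around the cucumber
    DEPOSIT = auto()   # place the cucumber in the collection bin

def step(state, percept):
    """Advance one control tick based on a vision percept (keys are assumed)."""
    if state is State.SEARCH:
        return State.UNCOVER if percept["cucumber_seen"] else State.SEARCH
    if state is State.UNCOVER:
        return State.GRASP if percept["target_clear"] else State.UNCOVER
    if state is State.GRASP:
        return State.DEPOSIT if percept["grasped"] else State.SEARCH
    return State.SEARCH  # after depositing, resume the search
```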

In July 2017, the Leibniz Institute for Agricultural Engineering and Bioeconomy conducted initial field testing of the robot system at its test site, validating basic functionality, and since fall 2017, project partners have been conducting additional tests in a Leibniz Institute greenhouse.

Once testing is complete, the team will look to make the system commercially available; companies, cucumber farmers, and agricultural associations have already expressed considerable interest in the dual-arm robot.

Amazon Go checkout-free convenience store opens to public

Based on computer vision technologies and deep learning algorithms that enable shoppers to purchase goods without the need for lines or checkout, the Amazon (Seattle, WA, USA; www.amazon.com) Go convenience store is now open to the public.

Located in Seattle, WA, USA at the company’s headquarters, Amazon Go was previously only open to Amazon employees.

The shopping experience, according to Amazon, is made possible by the same types of technologies used in self-driving cars: computer vision, sensor fusion, and deep learning.

With “Just Walk Out” technology, shoppers enter the store with the Amazon Go app, shop for products, and simply walk out, without waiting in line or dealing with a conventional point-of-sale checkout.

The technology automatically detects when products are taken from or returned to shelves and tracks the selected products in a virtual shopping cart. When customers have finished shopping, they simply walk out of the store, and their Amazon account is charged shortly thereafter.
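
A toy model of this bookkeeping, purely our illustration and not Amazon's implementation, might look like the following: vision-derived take/return events update a virtual cart, and the total is charged when the shopper exits.

```python
# Toy virtual shopping cart driven by vision-derived shelf events.
from collections import Counter

class VirtualCart:
    def __init__(self):
        self.items = Counter()

    def on_take(self, sku):            # vision system saw an item leave a shelf
        self.items[sku] += 1

    def on_return(self, sku):          # vision system saw an item put back
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def on_exit(self, prices):         # shopper walks out; compute the charge
        return sum(prices[sku] * n for sku, n in self.items.items())

cart = VirtualCart()
cart.on_take("milk"); cart.on_take("bread"); cart.on_return("bread")
print(cart.on_exit({"milk": 3.49, "bread": 2.99}))  # -> 3.49
```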

As noted in our coverage of the announcement, the Amazon patent filings (http://bit.ly/VSD-AMPA) show that the cameras used in Amazon Go may include RGB cameras, depth sensing cameras, and infrared sensors. Within the patent filings, however, are some additional details that suggest simply using the app to enter may not be quite as simple as it sounds.

The filings note that upon detecting a user entering or passing through a transition area, the user is identified, and that various techniques may be used to do so. One such technique is a camera that captures an image processed using facial recognition; the filings also state that “in some implementations, one or more input devices may collect data that is used to identify when the user enters the materials handling facility.”

Reports last year suggested that the Amazon Go store ran into issues when it became crowded:

“Amazon has run into problems tracking more than about 20 people in the store at one time, as well as the difficulty of keeping tabs on an item if it has been moved from its specific spot on the shelf, according to the people,” The Wall Street Journal reported.

These issues must have been worked out, as the store is now open 7 AM to 9 PM, Monday through Friday, at the Amazon headquarters. Items that shoppers can purchase, sans-line, include breakfast, lunch, dinner, and snack options, as well as staple grocery items like bread, milk, cheese, and so on. In order to use Amazon Go, users must have an Amazon account, the free Amazon Go app, and a recent-generation iPhone or Android phone. You can find the Amazon Go app on the Apple App Store, Google Play, and Amazon Appstore.

“When you arrive, use the app to enter the store, then feel free to put your phone away—you don’t need it to shop. Then just browse and shop like you would at any other store. Once you’re done shopping, you’re on your way! No lines, no checkout,” according to Amazon.

Autonomous flying taxi seen carrying passengers on test flights

Chinese company EHANG (Guangzhou, China; www.ehang.com) has released footage of its EHang 184 vision-guided autonomous flying taxi carrying passengers on test flights.

The EHANG 184 passenger drone was unveiled at the Las Vegas Convention Center during CES 2016. The drone stands just under 5 ft. tall, is made of a composite material reinforced with carbon fiber and epoxy, and weighs approximately 573 lbs. EHANG notes that since the company was founded in 2014, more than 150 technical engineers have conducted thousands of test flights, including a vertical climbing test reaching nearly 1,000 ft (300 m), a loaded test flight carrying approximately 507 lbs. (230 kg), a routed test flight covering 9.3 mi. (15 km), and a high-speed cruising test that reached 80 mph (130 km/h).

The drone can be fully charged in one hour and fly for 25 minutes at sea level, and it features a downward-facing camera, air conditioning, and four sets of paired propellers that spin parallel to the ground. When passengers enter, they set a flight plan; from there, they need to give only two commands, “take off” and “land,” each issued with a single tap on a Microsoft Surface tablet.

EHANG notes that the flight tests are just the latest in a series of tests to ensure that the autonomous aerial vehicle (AAV) will be safe and ready for public use in the near future. Among the roughly 40 passengers who helped in this testing phase were Wang Dong, deputy mayor of Guangzhou, and Huazhi Hu, EHANG founder and CEO.

“Performing manned test flights enables us to demonstrate the safety and stability of our vehicles,” Hu said. “What we’re doing isn’t an extreme sport, so the safety of each passenger always comes first. Now that we’ve successfully tested the EHANG 184, I’m really excited to see what the future holds for us in terms of air mobility.”

EHANG is still improving the 184 AAV, with an emphasis on the passenger experience and the addition of an optional manual control. Additionally, according to a press release, the company has already developed and tested a two-seat AAV with a payload of up to 617 lbs. (280 kg).

In terms of when the AAV could be available for public use, it is clear that EHANG is still in the testing phase.

“This is a step-by-step process,” commented Hu, “and at EHANG, we have our own road map. When it comes to the development and application of any transformative technology, first the technological innovation makes an impact, then the relevant policies are created and developed. This goes on to push further development of the industry.”

However, EHANG already has an agreement with the state of Nevada and Dubai’s transport authority to carry out testing, according to New Atlas (https://newatlas.com).

Vision-guided quadruped robot from Boston Dynamics now opens doors

Boston Dynamics (Waltham, MA, USA; www.bostondynamics.com) has released a video of the latest version of its SpotMini vision-guided quadruped robot opening a door and using its foot to prop it open, letting another SpotMini robot walk through.

SpotMini is a four-legged robot that weighs just over 55 lbs. (25 kg), or 66 lbs. (30 kg) if you count its arm. The all-electric robot can operate for about 90 minutes on a charge, depending on the task, and is the quietest robot Boston Dynamics has built yet. The sensor suite, according to the company, includes stereo cameras, depth cameras, and position/force sensors in the limbs for navigation and mobile manipulation.

In an IEEE Spectrum (New York, NY, USA; https://spectrum.ieee.org) article that looked at an early version of the SpotMini, it was observed that the robot was equipped with a MultiSense S7 3D stereo camera from Carnegie Robotics (Pittsburgh, PA, USA; https://carnegierobotics.com). The stereo camera is fitted with either 2 MPixel CMV2000 or 4 MPixel CMV4000 color CMOS image sensors from CMOSIS (ams Sensor Belgium, Antwerp, Belgium; www.cmosis.com). All stereo processing is done on board the camera itself, and a ROS-based API enables users to view live image and 3D range data, adjust camera and stereo parameters, and log data.
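
For a sense of what viewing that data through a ROS-based API involves, here is a minimal rospy subscriber sketch; the topic name follows the pattern used by the open-source MultiSense ROS driver but should be treated as an assumption for this illustration:

```python
# Minimal ROS node that displays the camera's live rectified color stream.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def show(msg):
    # Convert the ROS Image message to an OpenCV BGR array and display it.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("multisense left", frame)
    cv2.waitKey(1)

rospy.init_node("multisense_viewer")
rospy.Subscriber("/multisense/left/image_rect_color", Image, show)
rospy.spin()
```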

IEEE Spectrum also noted that SpotMini was equipped with a Velodyne (San Jose, CA, USA; www.velodynelidar.com) VLP-16 LiDAR puck, which provides a 360° surround field of view.

Any fan of Black Mirror will notice a striking resemblance to the robot dog that terrorized the main character in the “Metalhead” episode of Season 4 of the Netflix series. Creator Charlie Brooker told Entertainment Weekly (New York, NY, USA; www.ew.com) that the robots were based on those that were developed in real life by Boston Dynamics. View the video of the SpotMini robot here: http://bit.ly/VSD-SPOT.
