Vision systems power innovations in automation: Insights from the 2019 A3 Business Forum
Robots and automation create more jobs than they take away, the robotics business is booming, and vision systems make some of the most exciting innovations in automation possible. These are the lessons to take away from the 2019 A3 Business Forum, held in Orlando, FL, USA from January 14 to January 16.
Vision systems are a supporting technology for the automation industry, the A3 conference's primary concern, and thus one of the three pillars of the conference. Vision systems at A3 are primarily represented by the Automated Imaging Association (AIA), and at the AIA's breakout session on Tuesday afternoon, conference attendees were given a brief look at three new initiatives that depend on vision systems for success.
Transdev operates dozens of transport services across the United States. The company's experience is multimodal, spanning public transportation, rail systems, and personal transportation like taxis and black car services. In his session "A Vision for Applied Automation in Mobility," Transdev president Andrew Chatham stressed that the company is concerned with making sure its automated transportation systems can be controlled and adjusted by users who are not experts in robotics, a key requirement for installing automated transportation systems in human communities.
In order for automated transportation systems to succeed, argued Chatham, these systems must be able to replicate all the functions performed by human drivers. Autonomous driving systems (ADS) replace the human driver but cannot alone ensure passenger safety and security. Safe robotization and virtual safety drivers have to supplement the ADS. User interface management, which covers managing stops and access to the vehicle's pathing generally, informing and reassuring passengers as to the status of the vehicle, and controlling and regulating the system as a whole, is just as important as the vision systems that enable the ADS.
Chatham presented a few of the projects Transdev is working on, like the Rouen-Normandy Autonomous Lab. Passengers use an app to request transportation from a mixed fleet of Nissan/Renault and Transdev/Lohr vehicles and are then picked up and dropped off, creating a system akin to an automated Lyft or Uber. The system consists of three loops totaling 6 miles of open roads, with 17 stops.
Transdev has partnered with the Jacksonville Transportation Authority (JTA) in Florida to help spearhead the JTA's Ultimate Urban Circulator initiative, and was awarded a BUILD grant for preliminary engineering work. The company is also working with Olympic Park in Montreal, QC, Canada, building a two-vehicle pilot on mixed-use, pedestrian-heavy paths, with plans to expand the service to the entire park beginning in May 2019.
Babcock Ranch is a community in South Florida designed from the ground up for sustainability, with a focus on wide implementation of solar power. Transdev is currently building the community's automated transportation system, using Babcock as a virtual laboratory for the company's mobility-as-a-service technologies.
In his session "The Future of Computer Vision in Manufacturing & Logistics," Eric Danziger, CEO and co-founder of Invisible AI, presented the company's goal of creating vision systems that further improve efficiency by tracking human movements. Currently available vision and automation systems, such as those built into automated warehouses, can track the position of products from their location in a warehouse to the moment they are loaded onto a truck for delivery. By monitoring the position of workers over time, including the positioning of their limbs, vision systems could provide data that informs the design of work areas and the creation of standard procedures to increase efficiency.
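Invisible AI has not published its software, but the kind of limb-level tracking Danziger describes can be sketched with off-the-shelf tools. The example below uses the open-source MediaPipe library (an assumption for illustration, not something named in the session) to pull per-frame joint positions from ordinary video:

```python
import cv2
import mediapipe as mp

# Illustrative sketch: MediaPipe's pose model stands in for whatever
# proprietary network a worker-tracking system might actually use.
mp_pose = mp.solutions.pose
cap = cv2.VideoCapture("workcell.mp4")  # hypothetical camera feed

with mp_pose.Pose(static_image_mode=False) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 body landmarks per frame, each with normalized x, y, z.
            wrist = results.pose_landmarks.landmark[
                mp_pose.PoseLandmark.RIGHT_WRIST]
            print(f"right wrist at ({wrist.x:.2f}, {wrist.y:.2f})")
cap.release()
```

Logged over hours or days, landmark streams like this are the raw data from which work-area layouts and standard procedures could be evaluated.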
Danziger's company draws in part on his previous experience working with autonomous driving systems. The challenge in implementing the sort of worker-tracking system Invisible AI proposes is no longer the algorithms, as the success of current self-driving vehicles demonstrates. Engineers already know how to design vision systems that let machines correctly see and process their environment. The challenge, Danziger believes, is infrastructure and the cost of deploying systems.
According to Danziger, the vision technology that could drive such a system is as common as the cameras installed in smartphones. The photo editing capabilities native to smartphones today were once available only via software such as Photoshop, and facial recognition routines now run natively on the devices. On-device processing can deliver real-time insights while requiring little bandwidth to receive and process the data, with no IT or server integrations required.
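Some back-of-the-envelope arithmetic makes the bandwidth argument concrete. The figures below are illustrative assumptions, not numbers from the session, but they show why shipping on-device results is cheap compared with shipping raw video:

```python
# Illustrative assumptions: 1080p color video at 30 fps versus
# 33 pose keypoints (3 x 32-bit floats each) at the same frame rate.
raw_video_bps = 1920 * 1080 * 3 * 8 * 30   # uncompressed pixels per second
keypoints_bps = 33 * 3 * 32 * 30           # on-device results per second

print(f"raw video: {raw_video_bps / 1e6:,.0f} Mbit/s")      # ~1,493 Mbit/s
print(f"keypoints: {keypoints_bps / 1e3:,.0f} kbit/s")      # ~95 kbit/s
print(f"reduction: {raw_video_bps / keypoints_bps:,.0f}x")  # ~15,700x
```

Even allowing for video compression, processing on the camera and transmitting only results shrinks the data stream by orders of magnitude, which is what removes the need for server or IT integration.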
David M.S. Johnson, PhD, CEO of Dexai Robotics, has a friend who owns a Subway sandwich shop. His friend has to work double shifts because he cannot find employees, a problem that plagues the food service industry. According to Johnson, 75% of restaurants are currently understaffed and service restaurants see 146% turnover. Automation would seem an obvious way to address the problem. Most current solutions available to restaurateurs, however, are built to automate specific, individual tasks. What's needed is a robot that can perform multiple tasks.
Enter Alfred, a "flexible food robot," and the focus of Dexai Robotics' research. Alfred is designed to change utensils on the fly, pick and drop items with tongs, and scoop and dispense with spoons and dishes. In an experiment in which Alfred assembled salads, the robot fulfilled 50 orders per hour, weighed portions with a 5% error rate, handled ingredients of multiple shapes, and adapted to various layouts. Alfred has also been used to serve ice cream to attendees at professional conferences.
One of the primary challenges in designing Alfred was the related set of vision issues, for which deep learning has proved to be the solution. When faced with a selection of components, RGB image-segmentation routines can separate objects into classes and label each point on the object in a point cloud. Once the objects are registered, a colorized point cloud that includes object labels is fed into Elastic Fusion software to produce a final, semantically labeled cloud. Once the system learns the characteristics of recurring objects, this solution can also help solve the problem of partial occlusion.
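Dexai has not published this pipeline, but its central step, carrying segmentation labels into 3-D, is straightforward to sketch. Assuming a depth image registered to the RGB frame and known pinhole camera intrinsics (fx, fy, cx, cy are assumed parameters here), every labeled pixel back-projects to a labeled point:

```python
import numpy as np

def labeled_point_cloud(depth, labels, fx, fy, cx, cy):
    """Back-project a depth image into a 3-D point cloud, carrying each
    pixel's segmentation label along with its point (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    point_labels = labels.reshape(-1)
    valid = points[:, 2] > 0                        # drop missing depth
    return points[valid], point_labels[valid]

# Hypothetical usage with a synthetic 480x640 frame; in practice the
# labels would come from the RGB segmentation network.
depth = np.random.uniform(0.4, 1.2, (480, 640))
labels = np.random.randint(0, 4, (480, 640))        # e.g. 0=bowl, 1=tongs
points, point_labels = labeled_point_cloud(
    depth, labels, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Fusing many such labeled clouds over time, as the Elastic Fusion step does, is what yields the final semantically labeled model.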
Dexai Robotics also experiments with pose interpreter networks, deep-learning routines that can predict the future position of an object based on its prior positions. This allows a robot such as Alfred to properly recognize, pick, and grasp components even as they move around within a work area. The routine in part creates the flexibility Alfred requires to work with varying types of food and recipes, rather than being limited to a single product, and also assists in obstruction avoidance.
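Dexai's networks are proprietary, so the following is only a toy stand-in: a small PyTorch model with the input/output contract described above, mapping a short history of observed object positions to a predicted next position:

```python
import torch
import torch.nn as nn

class PosePredictor(nn.Module):
    """Toy stand-in for a pose-prediction network: given the last k
    observed (x, y, z) positions of an object, predict the next one."""
    def __init__(self, k=5, pose_dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k * pose_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, history):             # history: (batch, k, pose_dim)
        return self.net(history.flatten(1))

# Hypothetical usage: five prior positions in, one predicted position out.
model = PosePredictor()
history = torch.randn(1, 5, 3)              # stand-in tracking data
next_pose = model(history)                  # shape (1, 3)
```

A position predicted a fraction of a second ahead is what lets a motion planner aim the gripper at where a moving container will be rather than where it was.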
Share your vision-related news by contacting Dennis Scimeca, Associate Editor, Vision Systems Design
Dennis Scimeca
Dennis Scimeca is a veteran technology journalist with expertise in interactive entertainment and virtual reality. At Vision Systems Design, Dennis covered machine vision and image processing with an eye toward leading-edge technologies and practical applications for making a better world. Currently, he is the senior editor for technology at IndustryWeek, a partner publication to Vision Systems Design.