Machine Vision and Imaging Trends in Automation: A View from the Trenches for 2024
Automation technicians, operators, engineers and managers generally can’t afford to rely on the latest “trends.” Of course, it is important to keep up with and evaluate promising new technologies. In the end, though, the latest highly hyped widget or app rarely turns out to be the broadly applicable silver bullet solution promised in marketing materials and staged demos. Instead, what’s critically important in the “trenches” is the practical implementation and use of technologies that successfully and reliably address the needs of a process or production environment.
As we consider the ever-expanding range of inspection, guidance and measurement tasks that benefit from vision and imaging technologies, some key topics and directions emerge. With that in mind, here are some thoughts on things that may be worth reviewing as we conclude this year.
Overcoming Challenges in “Easy-To-Use” and “No-Code” Solutions
An ongoing trend in the vision and imaging marketplace has been the promotion of “ease-of-use” products and configurable or “no-code” solutions. Ease of use, though, is a difficult concept because the metric for determining what is “easy” is not only subjective but also highly dependent on one’s skill set and training. The implication is that these technologies could, or even should, be used without any prerequisite skills. The fact is, however, that implementing imaging and analysis for vision applications has long been quite easy to accomplish, given a reasonable understanding of the methods. Nonetheless, ease of use is a compelling argument for a product.
In vision applications, however, there are some fundamental obstacles that must be overcome. The first, and perhaps most important, is the imaging and image-acquisition design. The products that must be inspected, the features that must be detected, and the automation environments that produce those products all contribute to a nearly incalculable range of variation in inspection tasks. A scratch on an engine block (or, for future consideration, an electric motor stator) presents very differently from a scratch on a cell phone or medical device display. Without the correct imaging component architecture, such features will not even be visible in the resulting image. This complicates ease of use, because a generic imaging solution is difficult, if not impossible, to apply to all potential use cases.
On the analysis side, the requirements of each application also differ significantly. For example, even for simple defect detection, it might be necessary to judge not just presence/absence but also defect size, color, geometry and more. Project requirements like these dictate a generally high level of configuration and customization of an application, sometimes extending to custom code, to achieve reliable results. The analysis obstacle for ease of use can therefore be summarized in a simple axiom: to some degree, configurability and capability must be limited to achieve the desired level of ease of use.
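As an illustration of how quickly even “simple” defect detection accumulates configuration, consider the following sketch using OpenCV in Python. The thresholds, limits and the `find_defects` helper are all hypothetical, chosen only to show that a defect judgment typically involves size and geometry checks, not just presence/absence.

```python
import cv2

# Hypothetical acceptance limits; in practice these come from the project's
# inspection specification, not from a generic default.
MIN_DEFECT_AREA_PX = 50
MAX_ASPECT_RATIO = 4.0

def find_defects(gray_image):
    """Locate candidate defects and report size and rough geometry for each."""
    # A simple fixed threshold is used here; real applications often need
    # adaptive thresholding or an entirely different segmentation per feature.
    _, mask = cv2.threshold(gray_image, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    defects = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < MIN_DEFECT_AREA_PX:
            continue  # too small to report as a defect
        x, y, w, h = cv2.boundingRect(c)
        aspect = max(w, h) / max(min(w, h), 1)
        defects.append({
            "area_px": area,
            "bbox": (x, y, w, h),
            "elongated": aspect > MAX_ASPECT_RATIO,  # crude scratch-vs-blob cue
        })
    return defects
```

Even this toy example embeds a handful of application-specific decisions, and that is exactly the configurability a fully generic, no-code tool must either expose to the user or quietly constrain.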
Even with these obstacles, vision products and systems with greater ease of use are emerging and should see continued growth. The key in most cases is to constrain the product to specific use cases that can be suitably generalized for both imaging and analysis. The application areas being addressed are fairly diverse, and one that has been prominent is 3D imaging, particularly for robotic guidance.
Growth in 3D Imaging Solutions for Vision Guided Robotics
No longer an emerging imaging technology, 3D is becoming a mature product offering for automated imaging with continued component growth and sophistication. Obtaining and processing a high-quality point cloud or depth image from a scene is now an expected capability, and the marketplace has a wide array of component choices. A recent industry trend takes advantage of the expansion of 3D capabilities in the form of application-specific solutions that purport to make 3D vision guided robotics (VGR) easy to use. Once called the “holy grail” of machine vision, bin picking as an application is now available as a stand-alone solution. Other applications—including common but sometimes complex tasks such as random product palletizing and depalletizing—are emerging as “packaged” systems. This approach to 3D VGR holds promise for expanding the use of these complementary technologies. To be clear, however, capabilities still need to be competently matched to project requirements, and as always there is not a one-size-fits-all solution. Still, 3D imaging continues to be increasingly accessible to the vision engineer for a variety of general use cases.
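To make the point concrete, here is a deliberately simplified sketch of the kind of point-cloud processing that sits behind a bin-picking cycle. It assumes the open-source Open3D library; the `candidate_picks` helper and all numeric parameters are hypothetical. The idea is to remove the dominant plane (bin floor or pallet) and cluster what remains into individual parts.

```python
import numpy as np
import open3d as o3d  # assumed 3D toolkit; other point-cloud libraries work similarly

def candidate_picks(cloud_path, voxel=0.003):
    """Crude pick-candidate search: drop the bin floor, cluster what is left."""
    pcd = o3d.io.read_point_cloud(cloud_path)
    pcd = pcd.voxel_down_sample(voxel_size=voxel)

    # RANSAC plane fit finds the dominant plane (bin floor or pallet) to discard.
    _, floor_idx = pcd.segment_plane(distance_threshold=0.005,
                                     ransac_n=3, num_iterations=500)
    parts = pcd.select_by_index(floor_idx, invert=True)

    # Euclidean (DBSCAN) clustering separates individual parts; label -1 is noise.
    labels = np.array(parts.cluster_dbscan(eps=0.01, min_points=30))
    points = np.asarray(parts.points)

    # One centroid per cluster as a crude pick point; a production VGR system
    # would also estimate part orientation, grasp pose, collisions and reach.
    n_clusters = int(labels.max()) + 1 if labels.size else 0
    return [points[labels == k].mean(axis=0) for k in range(n_clusters)]
```

The hard part of a packaged VGR product is everything this sketch leaves out: calibration of the sensor to the robot frame, grasp planning, collision avoidance and cycle-time optimization, which is exactly why capabilities still must be matched to project requirements.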
AI and Deep Learning as Automated Inspection Tools
When we talk about AI in automated inspection, let’s agree that, almost without exception, AI refers to deep learning in some form. In general-purpose vision for automation, deep learning has emerged as a valuable tool for segmentation and classification of images or features in images. The main benefit is that the desired images or features to be detected are learned by the software rather than explicitly defined algorithmically by their appearance and geometry within the image. Deep learning excels at identifying features that are more subjective than discrete, much as a human inspector does. While the technology has suffered for some years from extreme marketing hype and inflated expectations, deep learning now appears to be settling into a quieter role, and the trend will be to use it as one valuable tool within the broader capabilities of automated inspection.
Despite early promises, the market has learned the following lessons about deep learning:
- Image formation requirements are the same for deep learning applications as they are for standard machine vision, meaning that deep learning cannot overcome bad lighting and/or optical design.
- Deep learning is not a solution for all machine vision applications.
- Deep learning as a stand-alone technology requires a high level of skill and experience to implement.
The most successful product solutions using deep learning appear to be those that take a hybrid approach, combining analytical and deep learning tools. As for ease of use, the trend is toward solutions that also use pre-configured models for specific applications, so that extensive training and learning are not required. As noted above, these often employ both standard machine vision and deep learning tools.
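A minimal sketch of that hybrid pattern follows, assuming PyTorch/torchvision and OpenCV, a hypothetical fine-tuned ResNet saved as defect_classifier.pt, and a made-up `inspect` function. Classical tools localize the region of interest cheaply and deterministically, while the learned model handles the more subjective good/defect judgment on the crop.

```python
import cv2
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

# Hypothetical fine-tuned model; in practice this is trained on labeled
# good/defect crops for the specific product being inspected.
model = resnet18(num_classes=2)
model.load_state_dict(torch.load("defect_classifier.pt"))  # assumed weights file
model.eval()

to_tensor = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def inspect(gray_image):
    """Analytical tools localize the region; deep learning judges it."""
    # Step 1 (classical): locate the part via Otsu thresholding and the
    # largest contour; assumes the part is visible in the image.
    _, mask = cv2.threshold(gray_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = gray_image[y:y + h, x:x + w]

    # Step 2 (deep learning): classify the localized crop as good or defect.
    rgb = cv2.cvtColor(crop, cv2.COLOR_GRAY2RGB)
    with torch.no_grad():
        logits = model(to_tensor(rgb).unsqueeze(0))
    return "defect" if logits.argmax(dim=1).item() == 1 else "good"
```

The split of responsibilities is the point: the analytical stage is fast and easy to validate, while the learned stage absorbs the variation that is difficult to describe algorithmically.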
Vision and Imaging from the Trenches
Trends may or may not turn into common practice. Only those things that truly deliver long-term success when implemented will survive the hype curve. Machine vision and imaging in industrial automation are thriving and will continue to thrive, buoyed by a wealth of excellent and evolving technologies, including imaging components, systems and software, that are not celebrated as the latest “trends.” Ease of use, 3D, deep learning and the other directions discussed here are important to consider, but the wise technologist in the trenches will continue to apply proven and reliable tools when addressing most applications.
David Dechow
With more than 35 years of experience, David Dechow is the founder and owner of Machine Vision Source (Salisbury, NC, USA), a machine vision integration firm. He has been the founder and owner of two successful machine vision integration companies. He is the 2007 recipient of the AIA Automated Imaging Achievement Award honoring industry leaders for outstanding career contributions in industrial and/or scientific imaging.
Dechow is a regular speaker at conferences and seminars worldwide, and has had numerous articles on machine vision technology and integration published in trade journals and magazines. He has been a key educator in the industry and has participated in the training of hundreds of machine vision engineers as an instructor with the AIA Certified Vision Professional program.