On Vision: 2D or not 2D, That Is the Question
I often find that historical references help provide perspective on much of today’s “new” technology. I’m not talking about the playful title above, which, of course, is a cheap pun on one of Shakespeare’s most famous lines. The history I want to reference here is a personal anecdote about 3D imaging. Back in the mid-1980s I designed and programmed my first 3D machine vision robotic guidance system. It was for an automobile manufacturer, and the intended use was riveting (yes, really using rivets) in the assembly of truck frames. I devised an imaging system and software that analyzed the coincident angular relationship of two crossed laser lines on an expected planar surface to determine Z, yaw, and pitch, which was then combined with the X and Y of a rivet hole to provide a single 6-degree-of-freedom robot point. I didn’t know of anyone else doing this at the time, but I’m sure many others were also creating 3D machine vision solutions for industrial automation applications back then; drop me a note if you were one of those folks.
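For readers curious about the geometry behind that old system, here is a minimal, purely illustrative sketch in Python with NumPy. It is not the original implementation, and every function name, convention, and number in it is my own assumption: once the two projected laser lines have been triangulated into 3D directions lying on the surface, their cross product gives the surface normal, from which the tilt angles and the Z height at the rivet hole’s (X, Y) follow directly.

```python
import numpy as np

def plane_pose_from_crossed_lines(d1, d2, p0, hole_xy):
    """Recover surface tilt and the Z height at a hole location, given two
    laser-line directions (d1, d2) known to lie on a near-planar surface and
    one 3D point p0 on that surface (e.g., where the lines cross).
    All inputs are assumed to be in the same calibrated camera/robot frame.
    """
    # The plane normal is perpendicular to both in-plane line directions.
    n = np.cross(d1, d2)
    n = n / np.linalg.norm(n)
    if n[2] < 0:
        n = -n  # keep the normal pointing toward +Z

    # Express the surface tilt as two rotation angles of the normal.
    # (The column calls these "yaw" and "pitch"; conventions vary by robot.)
    tilt_about_y = np.arctan2(n[0], n[2])   # surface leaning along X
    tilt_about_x = np.arctan2(-n[1], n[2])  # surface leaning along Y

    # Plane equation n . (p - p0) = 0, solved for Z at the hole's (X, Y).
    x, y = hole_xy
    z = p0[2] - (n[0] * (x - p0[0]) + n[1] * (y - p0[1])) / n[2]
    return x, y, z, tilt_about_x, tilt_about_y

# Hypothetical example: lines crossing at (0, 0, 500) on a slightly tilted surface.
d1 = np.array([1.0, 0.0, 0.05])
d2 = np.array([0.0, 1.0, -0.02])
print(plane_pose_from_crossed_lines(d1, d2, np.array([0.0, 0.0, 500.0]), (25.0, 40.0)))
```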
My point is that with advanced machine vision technologies (and perhaps advanced technologies in general), knowing that something has been implemented for a long time might help “demystify” the current offerings. The bottom line is that 3D system capabilities have expanded almost geometrically over the decades, and imaging systems that construct 3D data from a scene have been at the forefront of that growth for many years. While more difficult in some cases than 2D imaging, 3D is a well-understood solution in today’s machine vision industry that can solve a wide range of applications, including some that are unachievable or would be unreliable in 2D. Still, there are plenty of valid use cases for 3D vision that are going unserved, while on the other hand, there have been some unrealistic expectations and exaggerated claims about how the technology performs in targeted applications. So, how can the industry move to greater adoption of 3D without “over-hyping” the capabilities?
In my opinion, these two admirable goals, greater adoption and less hype, go somewhat hand in hand. Engineers who make the final decision on machine vision technologies in the marketplace tend to be able to “sniff out” sales hyperbole but are more than willing to embrace solutions that deliver proven results. With 3D, proven results mean that the system can create a highly repeatable image at the proper resolution for the application, with associated software that can perform operations on the image (feature detection/location, measurements, etc.) reliably and within the required precision, accuracy, or trueness. For a component vendor, avoiding that hype aversion means demonstrating these results convincingly by showing function and operation on a real-world application, and most importantly, a nontrivial one. Certainly, the manufacturer wants to present systems using the most robust and easiest use cases. However, these examples may not be representative of the typically more difficult use cases encountered on the plant floor. For the end user, a good practice is to specify the needs of the application clearly, as always, and then diligently evaluate the technologies to fully understand if and how the proposed solution will succeed.
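To make “precision, accuracy, or trueness” a little more concrete: trueness is roughly the bias of the measurement mean against a known reference, and precision is the spread of repeated measurements. Purely as an illustration, and not as a prescribed procedure, here is a minimal Python sketch of the kind of acceptance check an end user might run on repeated measurements of a reference artifact; the function name, tolerances, and data are all hypothetical.

```python
import numpy as np

def evaluate_gauge(measurements, reference_value, tol_trueness, tol_precision):
    """Check trueness (bias of the mean vs. a known reference) and precision
    (spread of repeated measurements). Thresholds are application-specific.
    """
    m = np.asarray(measurements, dtype=float)
    bias = abs(m.mean() - reference_value)   # systematic error (trueness)
    sigma = m.std(ddof=1)                    # repeatability (1-sigma precision)
    return {
        "bias": bias,
        "sigma": sigma,
        "passes": bias <= tol_trueness and sigma <= tol_precision,
    }

# Example: 10 repeated height measurements of a 5.00 mm reference step.
print(evaluate_gauge([5.01, 4.99, 5.02, 5.00, 5.01, 4.98, 5.00, 5.02, 4.99, 5.01],
                     reference_value=5.00, tol_trueness=0.02, tol_precision=0.02))
```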
But let’s get back to that “2D or not 2D.” The underlying intent of the title is to highlight a key “best practice” in considering advanced machine vision technologies like 3D imaging: consider all the possibilities before turning to the most advanced and complex solution. Over the years, I’ve observed a multitude of component specifications where the end user or machine vision engineer was convinced that 3D imaging had to be implemented, but where less complex architectures really were available. Take, for example, a critical surface inspection where small, localized defects might exist above or below the surface (think “chips” or “pits”). While 3D imaging can identify and measure these defects to some level of resolution, is it necessary to quantify the defect geometrically, or is it enough simply to detect a condition where the surface is not smooth? If a discrete height/depth measurement is required, 3D imaging may well be indicated. For the latter specification, though, 2D imaging with creative illumination might perform very reliably. Similarly, there has been strong interest in 3D imaging for vision guided robotics (VGR). However, again, it is incorrect to assume that all VGR must or should be 3D. 2D imaging is suitable for the vast majority of VGR use cases, even where the object’s profile has 3D features. When an object is presented in one or more “stable resting states” on a consistent surface, and 2D imaging can locate the part as presented and differentiate which stable resting state is facing the camera, then 2D location and suitable tool offsets should be sufficient to successfully pick the part.
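To illustrate that last point with a hedged sketch (hypothetical names and numbers, not any particular vendor’s API): a 2D location gives an (x, y, theta) pose, and a pick point taught once per stable resting state can simply be rotated and translated by that pose, with a fixed approach height standing in for the depth a 3D system would otherwise have to measure.

```python
import numpy as np

def pick_pose_from_2d_location(x, y, theta, offset_for_state):
    """Compose a 2D part location (x, y, rotation theta) found by the camera
    with a pre-taught pick offset for the part's identified stable resting
    state, yielding a planar pick point plus a fixed pick height.

    offset_for_state = (dx, dy, dtheta, pick_z), taught once per resting state.
    """
    dx, dy, dtheta, pick_z = offset_for_state
    c, s = np.cos(theta), np.sin(theta)
    # Rotate the taught offset into the part's measured orientation,
    # then translate by the part's measured position.
    px = x + c * dx - s * dy
    py = y + s * dx + c * dy
    return px, py, pick_z, theta + dtheta

# Example: part found at (120.0, 45.5) mm, rotated 30 degrees, with a taught
# offset of (10, 0) mm and a fixed pick height of 80 mm for this resting state.
print(pick_pose_from_2d_location(120.0, 45.5, np.radians(30.0), (10.0, 0.0, 0.0, 80.0)))
```

That fixed pick height and per-state offset are exactly what the “stable resting state” assumption buys you; when the assumption breaks down, as with randomly oriented parts in a deep bin, 3D imaging earns its keep.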
A further observation is that not all robotic guidance must necessarily be related to “bin picking.” I’ve seen examples where parts are put into a bin in the process just so a bin picking solution can be implemented—perhaps a correct solution in some cases, but also perhaps not the most efficient automation approach.
3D imaging truly is a broadly viable technology for many use cases. I encourage end users and machine vision engineers alike to learn about the technology and analyze the applications carefully to ensure success.