Introduction to machine vision webcast questions and answers
On the January 23 webcast, "Introduction to machine vision: Definitions, components, benefits, applications," David Dechow, Staff Engineer, Intelligent Robotics/Machine Vision, FANUC America Corporation, provided an introduction to machine vision, including basic definitions, benefits, applications, and components.
During the webcast, Dechow answered common questions such as:
- What is machine vision?
- What can machine vision do?
- How does machine vision work?
- What are the parts of a vision system?
- What are the different machine vision system types and platforms?
At the end of the webcast, we took as many questions as we could, but there were quite a few that day, and we were not able to get to all of them on the live call. Fortunately, Dechow has provided answers to the questions that were not addressed:
Question:
Don't you have to balance the lens resolution with the sensor resolution to optimize the overall optical resolution?
Answer:
Yes, you are correct. I was referring to this briefly in my discussion on lenses, where I broadly mentioned using a "high-quality" lens.
The details of optical resolution were beyond this "fundamentals" discussion. However, in the vast majority of industrial machine vision applications, optical resolution is not a limiting factor. In my experience, only a small number of industrial applications approach Nyquist limits in feature detection, and the diffraction limits of any good-quality lens system should be sufficient overall.
To your point though, it certainly is important in certain applications to evaluate optical resolution to ensure feature detail.
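For readers who want to sanity-check a lens/sensor pairing themselves, here is a minimal Python sketch of the comparison. The pixel pitch and lens rating below are illustrative assumptions, not values from the webcast:

```python
# Back-of-the-envelope check: does the lens out-resolve the sensor?
# All values are illustrative assumptions.

pixel_size_um = 3.45            # assumed sensor pixel pitch, in micrometers
lens_resolution_lp_mm = 120.0   # assumed lens rating, in line pairs/mm (from a datasheet MTF)

# Sensor Nyquist frequency: one line pair needs at least two pixels.
pixel_size_mm = pixel_size_um / 1000.0
sensor_nyquist_lp_mm = 1.0 / (2.0 * pixel_size_mm)

print(f"Sensor Nyquist limit: {sensor_nyquist_lp_mm:.0f} lp/mm")
print(f"Lens resolution:      {lens_resolution_lp_mm:.0f} lp/mm")
if lens_resolution_lp_mm >= sensor_nyquist_lp_mm:
    print("Lens resolution meets or exceeds the sensor's sampling limit.")
else:
    print("Lens may be the limiting factor; consider a higher-resolution lens.")
```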
Question:
Is there any special equipment you can recommend for environments where you can’t control vibration?
Answer:
If you mean vibration that affects the imaging and causes blur, this can be overcome by using high-speed strobe lights. The light flashes for just a brief period (as short as a few microseconds) and virtually stops the motion. You can learn more by looking into strobe lights with almost any machine vision LED lighting supplier.
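As a rough illustration of why short strobe pulses help, here is a back-of-the-envelope Python sketch estimating the image smear caused by sinusoidal vibration. All of the numbers are assumed values for illustration only:

```python
import math

# How much does sinusoidal vibration smear the image during one exposure?
# All values are illustrative assumptions.

vib_freq_hz = 50.0     # assumed vibration frequency
vib_amp_mm = 0.2       # assumed peak vibration amplitude at the object, in mm
mm_per_pixel = 0.02    # assumed object-side resolution (field of view / pixels)

# Peak velocity of sinusoidal motion: v = 2*pi*f*A
v_peak_mm_s = 2.0 * math.pi * vib_freq_hz * vib_amp_mm

# Compare a conventional exposure with progressively shorter strobe pulses.
for exposure_s in (1e-3, 100e-6, 10e-6):
    blur_px = (v_peak_mm_s * exposure_s) / mm_per_pixel
    print(f"exposure {exposure_s * 1e6:7.0f} us -> worst-case blur ~ {blur_px:.2f} px")
```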
Question:
How fast can you take an image (smaller than 640 x 480 pixels, frame to frame)?
Answer:
Image acquisition and transmission time varies widely from camera to camera depending on model, capabilities, and interface type. (Smart cameras effectively have no "transmission" time because the sensor is integrated into the electronics of the camera processor.) Exposure time also adds to image acquisition and transmission time, and the number of pixels on the sensor, or in a constrained region of interest, affects acquisition and transmission time as well. With all those variables, it is hard to generalize the potential speed of image acquisition, but it can be extremely fast in some cases, possibly hundreds of images per second.
If you are asking specifically about FANUC machine vision, speed is not a major requirement in most of our applications. We normally acquire images at about 30 fps or less.
Of course, processing time must be considered. Most industrial machine vision applications do not function at "frame rate"; that is, the processing time is longer than the image acquisition and transmission time. It is common for our vision-guided robotics applications to run at about 100 ms per image.
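To make the arithmetic concrete, here is a small Python sketch of how processing time, rather than acquisition time, typically limits system throughput. The figures are illustrative, not FANUC specifications:

```python
# Rough sketch of effective system rate when processing dominates (illustrative numbers).

width, height = 640, 480     # image size in pixels
bytes_per_pixel = 1          # 8-bit monochrome
frame_rate_fps = 30.0        # assumed acquisition rate
processing_s = 0.100         # assumed processing time per image (~100 ms)

acquisition_s = 1.0 / frame_rate_fps          # ~33 ms per frame at 30 fps
cycle_s = max(acquisition_s, processing_s)    # the slowest stage limits throughput,
                                              # assuming acquisition overlaps processing

print(f"Image size:       {width * height * bytes_per_pixel / 1e6:.2f} MB")
print(f"Acquisition time: {acquisition_s * 1e3:.1f} ms/frame")
print(f"Processing time:  {processing_s * 1e3:.1f} ms/frame")
print(f"Effective rate:   {1.0 / cycle_s:.1f} inspections/s")
```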
Question:
The high-numerical-aperture lenses employed on factory floors normally correspond to shallow depth of field and consequent blurring when finding edges. Can you tell us about strategies for overcoming errors introduced by such blurring?
Answer:
First, just to clarify: "high numerical aperture" is an optical design term. When we talk about "aperture" in industrial lens systems, it usually refers to the mechanical iris or "pupil" and is expressed as the f-number, the ratio of focal length to aperture diameter (N = f/D). Overall, the lenses used in industrial machine vision do not necessarily have a high numerical aperture, at least not compared with lenses used in scientific research, astronomy, and elsewhere. Also, blurred edges due to shallow depth of field are not generally a problem. In industrial machine vision, objects usually are presented at repeatable positions, and therefore once the features of interest are in focus, they remain in focus even with a shallow depth of field.
A broader comment is that in industrial imaging, the physical aperture is not normally used in a wide-open configuration. As such, for most applications a reasonable, if not very good, depth of field can be achieved to overcome slight positional variation. In circumstances where the objects will be at widely varying positions, we specify camera position and lens selection so that the lens, focused effectively at infinity, keeps the space in which the objects will be imaged in focus.
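For reference, here is a short Python sketch of the f-number relationship mentioned above, together with a common close-range depth-of-field approximation. The lens and setup values are assumed for illustration:

```python
# Minimal sketch of the f-number and a common close-up depth-of-field approximation.
# All values are illustrative assumptions, not from the webcast.

focal_length_mm = 25.0         # assumed lens focal length (f)
aperture_diam_mm = 6.25        # assumed effective aperture diameter (D)
magnification = 0.1            # assumed optical magnification (sensor size / field of view)
circle_of_confusion_mm = 0.01  # assumed allowable blur spot on the sensor (~2-3 pixels)

# f-number: N = f / D
N = focal_length_mm / aperture_diam_mm

# Close-range depth of field approximation: DOF ~= 2*N*c*(m + 1) / m^2
# Closing the iris (larger N) increases depth of field, as noted above.
dof_mm = 2.0 * N * circle_of_confusion_mm * (magnification + 1.0) / magnification**2

print(f"f-number N = f/D = {N:.1f}")
print(f"Approximate depth of field: {dof_mm:.1f} mm")
```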
Question:
How can I prevent motion blur in high-speed applications?
Answer:
Motion blur can be reduced to an acceptable pixel or sub-pixel amount, but of course never eliminated. (When a part is moving, there will always be some time while the sensor is active during which the part slightly changes position.)
To reduce blur, use as short an exposure time as possible; the short exposure will in turn require higher-intensity lighting. Further, one can use strobed illumination, where the light is pulsed for a very short time. In most cases, the apparent motion in the image can be reduced enough to handle very fast part movement.
Be sure, though, to calculate how much blur will actually remain to ensure that it does not introduce error, particularly in measurement or location applications.
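Here is a minimal Python sketch of that calculation, using assumed values for part speed, field of view, and sensor width:

```python
# Back-of-the-envelope motion blur check for a moving part (illustrative assumptions).

part_speed_mm_s = 500.0    # assumed conveyor/part speed
field_of_view_mm = 100.0   # assumed horizontal field of view
image_width_px = 640       # assumed sensor width in pixels

mm_per_pixel = field_of_view_mm / image_width_px

# Compare a conventional exposure with short/strobed exposures.
for exposure_s in (1e-3, 100e-6, 20e-6):
    blur_px = (part_speed_mm_s * exposure_s) / mm_per_pixel
    print(f"exposure {exposure_s * 1e6:7.0f} us -> blur ~ {blur_px:.2f} px")
```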
Question:
Where can I find notable differences (or application-based differentiation) between the various platforms, e.g., Camera Link, CoaXPress, USB3, and GigE?
Answer:
With respect to camera interfaces like Camera Link, GigE, CoaXPress, USB3, or others, the differentiation will be based upon the required performance and architecture of the machine vision system for any specific application.
The different interfaces all have different physical characteristics, including cable lengths and connections to the computer and camera, as well as different maximum image transfer speeds, cost, and complexity. These are the specifications one would consider when selecting the right interface for a system. For ease of use, GigE and USB3 are quite popular, but the final decision should be based upon the application.
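One practical way to compare interfaces is to check whether a camera's data stream fits within each interface's nominal bandwidth. The Python sketch below uses approximate, commonly cited throughput figures; actual usable bandwidth varies with protocol overhead and implementation:

```python
# Sketch: does a given camera stream fit an interface's nominal bandwidth?
# Throughput figures are approximate nominal values, before protocol overhead.

width, height = 2048, 1536   # assumed sensor resolution
bytes_per_pixel = 1          # 8-bit monochrome
fps = 60.0                   # assumed frame rate

required_mb_s = width * height * bytes_per_pixel * fps / 1e6

interfaces_mb_s = {
    "GigE Vision (1 GigE)": 125,
    "USB3 Vision": 400,
    "Camera Link (Full)": 680,
    "CoaXPress (CXP-6, 1 lane)": 600,
}

print(f"Required: {required_mb_s:.0f} MB/s")
for name, cap in interfaces_mb_s.items():
    verdict = "OK" if cap >= required_mb_s else "insufficient"
    print(f"{name:28s} ~{cap:4d} MB/s -> {verdict}")
```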
Question:
Is machine vision able to work with transparent or translucent objects?
Answer:
The broad answer is "yes." However, this is an imaging and lighting question. The issue will be: "Can the required features be illuminated and imaged so that there is enough contrast to use machine vision tools on those features?" I don't know the specifics of your application, but I have done a lot of imaging of transparent objects for defect detection or measurement with good success.
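As a simple illustration of the contrast question, here is a small Python sketch that computes Michelson contrast between a feature region and its background. The pixel values are hypothetical samples, not from a real application:

```python
import numpy as np

# Quick feasibility check: is there enough contrast between a feature and its background?
# Uses Michelson contrast on two sampled image regions (illustrative values).

def michelson_contrast(feature_pixels, background_pixels):
    """Contrast between mean feature and mean background intensity, 0..1."""
    f = float(np.mean(feature_pixels))
    b = float(np.mean(background_pixels))
    return abs(f - b) / (f + b)

# Hypothetical 8-bit gray levels sampled from a backlit image of a transparent part.
feature = np.array([60, 55, 62, 58])         # edge of the part, darkened by refraction
background = np.array([230, 228, 232, 229])  # bright backlight

c = michelson_contrast(feature, background)
print(f"Michelson contrast: {c:.2f}")  # higher is easier for vision tools to work with
```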
Question:
How does deep learning work for machine vision? What's the difference between traditional machine vision and deep learning add-on vision systems?
Answer:
Deep learning in imaging is very effective in comparison, classification, and differentiation. (Consider the classic "Google" environment where one wishes to get all pictures of a "cat." This is an over-simplification, but essentially a deep learning system has processed the millions of available images and classified the appropriate ones as "cat" images.)
In industrial machine vision, deep learning can be very successful at the same things. A deep learning system can be trained with a large sample of images of good parts and a large sample of images with differences (defects, wrong features, etc.), and based on that learning, the system will be able to distinguish good parts from bad parts, even when the feature or defect on the bad part is new and has not previously been explicitly trained.
Deep learning will be a useful tool for these types of applications, but it is not well suited to all cases. The many applications that require discrete object analysis (guidance, location, metrology, etc.) are not good candidates for deep learning.
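To make the good/bad classification idea concrete, here is a minimal sketch of a two-class part classifier, assuming PyTorch and stand-in data. It illustrates the general approach described above, not FANUC's or any vendor's implementation:

```python
import torch
import torch.nn as nn

# Minimal sketch of a good/bad part classifier (hypothetical setup, stand-in data).

class PartClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes: good / bad

    def forward(self, x):                  # x: batch of 1x64x64 grayscale images
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PartClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data; a real system would
# iterate over large labeled sets of good-part and bad-part images.
images = torch.randn(8, 1, 64, 64)         # stand-in for a batch of part images
labels = torch.randint(0, 2, (8,))         # 0 = good, 1 = bad
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```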
James Carroll
Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.