Software adds upgraded edge detection
Andrew Wilson, Editor, [email protected]
One of the highlights of the Vision Summit held in conjunction with NIWeek 2007 was a presentation by Dinesh Nair, principal architect of the Research and Development Group at National Instruments (NI; Austin, TX, USA; www.ni.com). In his presentation, Nair explained the principles behind the latest edge-detection algorithms included in the newly released NI Vision 8.5 Development Module.
“Edge detection is one of the most commonly used machine-vision tools,” says Nair, “because it is fast, simple to use, and applicable to applications that include detection, alignment, and gauging.” In implementing one-dimensional edge detection, pixel values along a line are digitized and the gradient information computed. The peaks along this gradient then represent the locations of edges within the image.
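As a rough sketch of the 1-D process described above (not NI's implementation; the function name and threshold parameter are illustrative), the pixel values along the line can be differentiated and the gradient peaks reported as edge locations:

```python
# Illustrative sketch of 1-D edge detection along a line profile:
# differentiate the sampled pixel values and treat gradient peaks
# above a threshold as edge locations.
def detect_edges_1d(profile, threshold=10):
    """Return indices where the absolute gradient peaks above threshold."""
    # Central-difference gradient of the sampled pixel values.
    grad = [abs(profile[i + 1] - profile[i - 1]) / 2.0
            for i in range(1, len(profile) - 1)]
    edges = []
    for i in range(1, len(grad) - 1):
        # A local maximum of the gradient marks an edge.
        if grad[i] >= threshold and grad[i] > grad[i - 1] and grad[i] >= grad[i + 1]:
            edges.append(i + 1)  # shift back to profile coordinates
    return edges

# A dark-to-bright step between indices 4 and 5 yields one edge.
line = [10, 10, 10, 10, 10, 200, 200, 200, 200, 200]
print(detect_edges_1d(line))
```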
“However,” says Nair, “edge detection along a single line is sensitive to noise. This can be improved by averaging pixels perpendicular to the search direction (see Fig. 1).” In this process, the 2-D region is first extracted using bilinear interpolation and the average or median values along the columns computed. After applying an edge-detection kernel to these data, the gradient can then be recomputed, returning the peak along a user-specified line within the 2-D image.
Figure 1. Edge detection along a single line is sensitive to noise. This can be improved by averaging pixels perpendicular to the search direction. The 2-D region is first extracted using bi-linear interpolation and the average or median values along the columns computed. After applying an edge detection kernel to this data, the gradient can then be recomputed, returning the peak along a user-specified line within the 2-D image.
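The averaging step can be sketched as follows (an illustrative toy example, not NI's code; the kernel and region values are assumptions). Pixels perpendicular to the search direction are averaged into a single profile, an edge kernel is applied, and the gradient peak is returned:

```python
# Illustrative sketch: average a 2-D region's columns to suppress noise,
# then apply an edge-detection kernel and locate the gradient peak
# along the search direction.
def column_average(region):
    """Average pixels perpendicular to the search direction."""
    rows, cols = len(region), len(region[0])
    return [sum(region[r][c] for r in range(rows)) / rows for c in range(cols)]

def gradient_peak(profile, kernel=(-1, 0, 1)):
    """Convolve with an edge kernel and return the index of the strongest response."""
    k = len(kernel)
    responses = [abs(sum(kernel[j] * profile[i + j] for j in range(k)))
                 for i in range(len(profile) - k + 1)]
    peak = max(range(len(responses)), key=responses.__getitem__)
    return peak + k // 2  # centre the kernel on the profile

# Three noisy rows of the same step edge; averaging suppresses the noise.
region = [[10, 12,  9, 150, 200, 198],
          [11,  9, 11, 148, 202, 201],
          [ 9, 11, 10, 152, 199, 200]]
print(gradient_peak(column_average(region)))
```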
Averaging along the columns is recommended when the image contains uniform or Gaussian noise. However, the median is a better option when the image contains salt-and-pepper noise. “By computing the median pixel values along the columns instead of a simple average,” says Nair, “the edge detector returns a unique peak as opposed to multiple false peaks.” To improve the accuracy even further, parabolic interpolation is used to compute the subpixel location of these edges.
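The parabolic-interpolation step can be sketched with the standard three-point formula (a common technique consistent with, but not confirmed to be, NI's exact implementation): fit a parabola through the gradient peak and its two neighbors and take the vertex as the subpixel edge location.

```python
# Sketch of parabolic (three-point) subpixel interpolation: fit a parabola
# through the gradient peak and its two neighbours and return the vertex.
def subpixel_peak(grad, i):
    """Refine integer peak index i using the parabola through (i-1, i, i+1)."""
    a, b, c = grad[i - 1], grad[i], grad[i + 1]
    denom = a - 2 * b + c
    if denom == 0:
        return float(i)  # flat neighbourhood: no refinement possible
    # Vertex of the fitted parabola, offset from i by 0.5*(a-c)/(a-2b+c).
    return i + 0.5 * (a - c) / denom

# A symmetric peak stays at 2.0; skewed samples shift the vertex.
print(subpixel_peak([0, 50, 100, 50, 0], 2))
print(subpixel_peak([0, 40, 100, 60, 0], 2))
```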
To study the accuracy and repeatability of this edge detection, Nair and his colleagues performed a set of experiments that involved detecting edges of targets moved on high-precision motion stages. In these experiments, targets were moved in 6-µm intervals (or approximately 0.024 pixels) over 50 steps, and multiple images were acquired at each step to study the repeatability of the edge detector.
“Using images that exhibited a 15-dB signal-to-noise ratio, a 2-D region with a width of 7 pixels, and an edge-detection kernel of size 3, the accuracy of the edge detector is to within 1/10 of a pixel,” says Nair. “However, if the width of the 2-D region is increased to 15 pixels, then the accuracy improves to approximately 6/100 of a pixel.” By incorporating this edge-detection tool within the NI Vision 8.5 Development Module, the user can quickly analyze and display the results of such edge analysis.
“Of course,” says Nair, “in many applications it is necessary to incorporate calibration information associated with the image within the edge-detection process. In many of these implementations, the calibration information is computed offline using a calibration grid and then attached to the image.” The image is then corrected to remove any distortion before the edge-detection process. Unfortunately, this is not computationally efficient because the entire image needs to be corrected. Rather, it is better to extract only the corrected 2-D region of the image where edge detection is required, perform the edge detection, and then return calibrated edge locations to the original image. This more computationally efficient process has been implemented in NI Edge Tool.
To evaluate the stability and reliability of the edge detector over a sequence of real images, system integrators also need to understand the noise information associated with edges within the image. “Because the noise magnitude associated with the best edge is the value of the strongest peak after the edge,” says Nair, “these data can be used to compute the SNR of the pixel data associated with the edge (see Fig. 2).” Thus, although the strength of the edges may be very large, the noise associated with the second-best edge may result in a very poor signal-to-noise ratio. The noise associated with each detected edge point and the signal-to-noise ratio information returned by the edge detector can be easily visualized and analyzed using NI Vision Assistant 8.5.
Figure 2. Because noise magnitude associated with the best edge is the value of the strongest peak after the edge, this data can be used to compute the signal-to-noise ratio of the pixel data associated with the edge. Thus, although the strength of the edges may be very large, the noise associated with the second best edge may result in a very poor signal-to-noise ratio.
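Under the definition above, the SNR estimate reduces to the ratio of the strongest gradient peak (the best edge) to the next-strongest peak (the noise). A minimal sketch of that ratio in decibels, with illustrative function and parameter names:

```python
import math

# Sketch of the SNR estimate described in the text: treat the strongest
# gradient peak as signal and the next-strongest peak as noise.
def edge_snr_db(peak_strengths):
    """Return 20*log10(best / second-best) over the detected gradient peaks."""
    ordered = sorted(peak_strengths, reverse=True)
    signal, noise = ordered[0], ordered[1]
    return 20 * math.log10(signal / noise)

# A strong edge with weak residual peaks yields a high SNR ...
print(round(edge_snr_db([100, 5, 3]), 1))
# ... while a comparable second peak means the best edge is unreliable.
print(round(edge_snr_db([100, 80, 60]), 1))
```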
To extend this edge-detection process, NI software also incorporates 1-D edge detection along multiple lines in a rectangular search region. This so-called rake edge detector is a very fast method used to detect distinct straight edges within a specific angle range. In the latest version of the Vision Development Module, NI has extended this concept to include a Hough-based rake detector and a projection-based edge detector. Based on Paul Hough's 1962 patent, the Hough transform is designed to extract straight lines, circles, and ellipses from image data.
In NI’s implementation, a 2-D rake is first used to find all the edge points within an image. Each of these points is then transformed into Hough space and the intersection of these points in Hough space computed. “These intersection points,” says Nair, “then represent a line within the original image.” By using statistical analysis the best fit of these points can then be used to compute and display the edges within the original image (see Fig. 3).
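A minimal Hough-transform sketch illustrates the voting-and-intersection idea (this is a textbook accumulator implementation, not NI's code; the resolution parameters are assumptions). Each edge point (x, y) votes for every (θ, ρ) line passing through it, and the accumulator cell where the votes coincide identifies the dominant line:

```python
import math

# Minimal Hough-transform sketch: each edge point (x, y) votes for the
# (theta, rho) lines through it; the cell where votes intersect
# identifies the dominant straight line in the original image.
def hough_line(points, n_theta=180, rho_res=0.1):
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta).
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (t, round(rho / rho_res))
            votes[cell] = votes.get(cell, 0) + 1
    (t, r), _ = max(votes.items(), key=lambda kv: kv[1])
    return math.pi * t / n_theta, r * rho_res  # (theta, rho) of best line

# Collinear edge points on the vertical line x = 3 (theta = 0, rho = 3).
theta, rho = hough_line([(3, 0), (3, 1), (3, 2), (3, 3), (3, 4)])
print(theta, rho)
```

A least-squares fit over the points assigned to the winning cell would then refine the line, in the spirit of the statistical best fit described above.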
With images that contain low-contrast objects against the background, however, the edge-detection transforms may not be as useful as projection-based methods. In these methods, pixels perpendicular to the search direction are averaged and the best edge on the averaged profile computed. The position of this edge is used to find the resulting straight edge, and the process is repeated over a user-specified angle range to find the best edge.
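A rough sketch of that angle search (illustrative only; NI's implementation is not public, and the shear-based projection here is a simplification): for each candidate angle, average the region's pixels along that direction and keep the angle whose averaged profile contains the strongest step.

```python
import math

# Rough sketch of a projection-based search: for each candidate angle,
# average the region's pixels along that direction (approximated here by
# shearing rows) and keep the angle whose averaged profile has the
# strongest gradient step.
def best_projection_angle(region, angles_deg):
    rows, cols = len(region), len(region[0])
    best = None
    for a in angles_deg:
        shift = math.tan(math.radians(a))
        profile = []
        for c in range(cols):
            # Average pixels along the candidate direction by offsetting
            # each row's sample in proportion to its distance.
            vals = []
            for r in range(rows):
                cc = c + round(r * shift)
                if 0 <= cc < cols:
                    vals.append(region[r][cc])
            profile.append(sum(vals) / len(vals))
        strength = max(abs(profile[i + 1] - profile[i])
                       for i in range(cols - 1))
        if best is None or strength > best[1]:
            best = (a, strength)
    return best[0]

# A step edge slanted by one column per row is sharpest near 45 degrees.
region = [[0, 0, 200, 200, 200],
          [0, 0, 0, 200, 200],
          [0, 0, 0, 0, 200]]
print(best_projection_angle(region, [-45, 0, 45]))
```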
“First and best rake methods are useful for finding well-defined lines,” summarizes Nair, “and first and best projection-based methods are best at finding very low-contrast lines. Although slower than other methods, Hough-based transforms are the most useful method for analyzing images with intersecting or overlapping lines.”