Camera features programmable tilt, pan, and zoom
Andrew Wilson, Editor, [email protected]
Many machine-vision systems use multiple cameras to image large objects. In barcode inspection, for example, large packages may need to be imaged to locate a single barcode or postal-inspection sticker somewhere within the scene. While multiple cameras can be used to image such packages, a more effective and lower-cost approach may be a single programmable camera capable of pan, tilt, and zoom. “In this way,” says Christian Demant, managing director of NeuroCheck (Stuttgart, Germany; www.neurocheck.com), “multiple cameras can be eliminated, dramatically reducing the cost of these systems.”
Student Axel Springhoff, working with NeuroCheck software, has developed such a camera based on the EVI-D100 pan/tilt/zoom video camera from Sony (Park Ridge, NJ, USA; www.sony.com/videocameras). To attain maximum resolution, objects that need to be inspected within the camera's field of view must be brought to the center of the image frame. After the region of interest (ROI) has been located within the image, the pan-and-tilt functions are used to move the camera so that the ROI falls at the center of the image. “Of course,” says Demant, “calibration of the camera is required to calculate the correct pan-and-tilt data that are sent to the camera.”
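The geometry behind this centering step is straightforward once that calibration is in hand. The following Python sketch shows one way the pan-and-tilt offsets might be computed from the ROI position; the degrees-per-pixel factors and the function itself are illustrative assumptions, not values drawn from Springhoff's work or the Sony documentation.

# Hedged sketch: the degrees-per-pixel factors below stand in for the
# results of the calibration step Demant describes; they are not Sony specs.

def pan_tilt_to_center(roi_center_px, image_size_px,
                       deg_per_px_x=0.05, deg_per_px_y=0.05):
    """Return (pan, tilt) offsets in degrees that bring the ROI center
    to the center of the image frame."""
    cx, cy = image_size_px[0] / 2.0, image_size_px[1] / 2.0
    dx = roi_center_px[0] - cx           # horizontal pixel offset of the ROI
    dy = roi_center_px[1] - cy           # vertical pixel offset of the ROI
    pan = dx * deg_per_px_x              # positive pan moves the view right
    tilt = -dy * deg_per_px_y            # image y grows downward, so invert
    return pan, tilt

# Example: an ROI found at (520, 130) in a 640 x 480 frame
print(pan_tilt_to_center((520, 130), (640, 480)))    # -> (10.0, 5.5)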
Using a pan, tilt, and zoom camera, graduate student Axel Springhoff has demonstrated how high-resolution regions of interest can be captured in a postal sorting application by locating and centering the ROI and then zooming.
After the selected portion of the image is centered in the frame, the camera zooms in to increase the resolution at which the ROI is captured. Because the zoom function of the Sony camera is not linear, a calibration chart is used to apply the correct zoom factor to the camera. After the image is zoomed, the camera must be properly focused to ensure that a sharp image is captured.
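One way to handle the nonlinear zoom response is to interpolate the calibration chart at run time. The sketch below assumes a small table of (magnification, zoom-command) pairs; the numbers are placeholders for illustration and not measurements from the EVI-D100.

from bisect import bisect_left

# Hypothetical calibration chart: (optical magnification, zoom command value).
# A real chart would be measured for the specific camera.
ZOOM_CHART = [(1.0, 0), (2.0, 5000), (4.0, 9000), (7.0, 12000), (10.0, 16384)]

def zoom_command(magnification):
    """Linearly interpolate the chart to find the zoom setting that
    produces the requested magnification."""
    mags = [m for m, _ in ZOOM_CHART]
    i = bisect_left(mags, magnification)
    if i == 0:
        return ZOOM_CHART[0][1]
    if i == len(ZOOM_CHART):
        return ZOOM_CHART[-1][1]
    (m0, z0), (m1, z1) = ZOOM_CHART[i - 1], ZOOM_CHART[i]
    t = (magnification - m0) / (m1 - m0)
    return round(z0 + t * (z1 - z0))

print(zoom_command(3.0))    # -> 7000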
A characteristic curve was developed that plots the required focus setting as a function of the camera's zoom factor at a given distance. To estimate that distance, two methods are used. In the first, the object must lie in a known plane, and characteristic curves are measured for several points on that plane. The curve measured nearest the position of the current object is then used to adjust the camera's focus for the zoom factor in use.
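A minimal sketch of this first method might look like the following, assuming each calibration point on the object plane stores its own curve of (zoom factor, focus setting) samples; the curve data, the nearest-point lookup, and the interpolation are illustrative, not taken from the NeuroCheck module.

import math

# Hypothetical data: each calibration point on the object plane carries its
# own measured curve of (zoom factor, focus setting) samples.
FOCUS_CURVES = {
    (100, 100): [(1.0, 4000), (5.0, 4600), (10.0, 5200)],
    (500, 100): [(1.0, 4100), (5.0, 4750), (10.0, 5400)],
    (300, 400): [(1.0, 4050), (5.0, 4680), (10.0, 5300)],
}

def focus_for(position, zoom_factor):
    """Pick the curve measured nearest the object's position and
    interpolate it at the current zoom factor."""
    nearest = min(FOCUS_CURVES, key=lambda p: math.dist(p, position))
    curve = FOCUS_CURVES[nearest]
    if zoom_factor <= curve[0][0]:
        return curve[0][1]
    for (z0, f0), (z1, f1) in zip(curve, curve[1:]):
        if z0 <= zoom_factor <= z1:
            t = (zoom_factor - z0) / (z1 - z0)
            return round(f0 + t * (f1 - f0))
    return curve[-1][1]    # beyond the measured range: clamp to last sample

print(focus_for((450, 150), 7.5))    # nearest curve is (500, 100) -> 5075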
The second method exploits the fact that objects located farther away appear smaller in the image than closer ones. By comparing a length measured in the image with the known length of the object, the distance to the object can be estimated. Using calibration tables built from several distance measurements, the camera's focus can then be adjusted for the zoom factor in use.
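This second method reduces to the pinhole-camera relation that apparent size scales inversely with distance. The sketch below estimates distance from a known object length and then interpolates a distance-indexed focus table; in practice such a table would also be indexed by zoom factor, and all constants shown here are assumptions for illustration.

# Hypothetical constants; in practice they come from the calibration step.
FOCAL_LENGTH_PX = 800.0    # effective focal length in pixels at the reference zoom

# (distance in mm, focus setting) pairs measured during calibration
FOCUS_BY_DISTANCE = [(500, 5200), (1000, 4800), (2000, 4500), (4000, 4300)]

def estimate_distance_mm(known_length_mm, measured_length_px):
    """Pinhole-model estimate: apparent size shrinks with distance."""
    return known_length_mm * FOCAL_LENGTH_PX / measured_length_px

def focus_from_distance(distance_mm):
    """Interpolate the calibration table at the estimated distance."""
    if distance_mm <= FOCUS_BY_DISTANCE[0][0]:
        return FOCUS_BY_DISTANCE[0][1]
    for (d0, f0), (d1, f1) in zip(FOCUS_BY_DISTANCE, FOCUS_BY_DISTANCE[1:]):
        if d0 <= distance_mm <= d1:
            t = (distance_mm - d0) / (d1 - d0)
            return round(f0 + t * (f1 - f0))
    return FOCUS_BY_DISTANCE[-1][1]

d = estimate_distance_mm(known_length_mm=120.0, measured_length_px=64.0)
print(d, focus_from_distance(d))    # -> 1500.0 4650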
“Research provides NeuroCheck with additional software expertise,” says Demant, “and, after the work is complete, it will be offered as an optional module for the company’s NeuroCheck software package. Whether it will be incorporated into later releases of the software will depend on customer demand.”