Frame grabbers were developed in the early days of machine vision to connect analog cameras, which output NTSC and PAL signals, to minicomputers that required digital data placed directly on their buses for storage in memory.
Even after the switchover to digital cameras, there was still a disconnect between digital video outputs and the computer's bus requirements. Something was needed to feed the video data streaming from the camera into the computer's memory. That need was met by a piece of hardware called a frame grabber, which plugged directly into the computer's motherboard and provided a physical port for connecting the machine vision camera.
At one time, many experts thought frame grabbers would be replaced by cameras that connect directly to the PC. That has not been the case, however. The most significant reason is that image sensors continue to produce higher-resolution images at higher frame and line rates, yielding data rates that far exceed the 120 MB/s limit of serial camera interfaces.
Cameras with 4-million-pixel sensors capable of running at 60 or 120 frames per second are widely available. As data rates climb, images must be buffered for processing and less time remains to process each one; buffering image data and coping with that shrinking processing window are the two tasks that frame grabbers are best adapted to performing.
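As a rough illustration of that gap, the back-of-the-envelope arithmetic below works out the raw bandwidth of a 4-megapixel camera at those frame rates. It assumes 8-bit monochrome pixels (one byte each) and ignores interface overhead; the function and variable names are illustrative, not taken from any particular SDK.

```python
# Quick data-rate check: raw camera bandwidth vs. the 120 MB/s serial-interface
# figure cited above. Assumes 1 byte per pixel (8-bit mono), no overhead.
SERIAL_INTERFACE_LIMIT_MBPS = 120

def camera_data_rate_mbps(megapixels: float, fps: float, bytes_per_pixel: int = 1) -> float:
    """Raw video bandwidth in MB/s for a given sensor size and frame rate."""
    return megapixels * bytes_per_pixel * fps

for fps in (60, 120):
    rate = camera_data_rate_mbps(4, fps)
    verdict = "exceeds" if rate > SERIAL_INTERFACE_LIMIT_MBPS else "fits within"
    print(f"4 MP @ {fps} fps -> {rate:.0f} MB/s "
          f"({verdict} the {SERIAL_INTERFACE_LIMIT_MBPS} MB/s limit)")
```

Even at 60 fps the raw stream works out to roughly 240 MB/s, and at 120 fps to roughly 480 MB/s, so a frame grabber on a high-bandwidth bus is needed to keep up.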
In addition to buffering images, frame grabbers also offload image-reconstruction and image-enhancement tasks from the host CPU. Frame grabbers can pre-process images for data reduction or add detail to the image data to help reduce processing time.
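To make the idea concrete, here is a minimal host-side sketch of the kind of pre-processing a frame grabber's onboard logic might offload: a per-pixel flat-field correction (image enhancement) followed by region-of-interest cropping (data reduction). The array names, correction model, and ROI are illustrative assumptions, not a vendor API.

```python
# Illustrative sketch only: the sort of enhancement + data-reduction step
# that frame grabber hardware can perform before data reaches the host CPU.
import numpy as np

def preprocess(frame: np.ndarray,
               dark: np.ndarray,
               gain: np.ndarray,
               roi: tuple) -> np.ndarray:
    """Apply flat-field correction, then crop to a region of interest."""
    corrected = (frame.astype(np.float32) - dark) * gain      # enhancement
    corrected = np.clip(corrected, 0, 255).astype(np.uint8)
    return corrected[roi]                                      # data reduction

# Example: an 8-bit 2048 x 2048 frame reduced to a 512 x 512 window.
frame = np.random.randint(0, 256, (2048, 2048), dtype=np.uint8)
dark = np.full(frame.shape, 2.0, dtype=np.float32)   # hypothetical dark frame
gain = np.ones(frame.shape, dtype=np.float32)        # hypothetical gain map
roi = (slice(768, 1280), slice(768, 1280))
small = preprocess(frame, dark, gain, roi)
print(small.shape, small.dtype)  # (512, 512) uint8
```

In a real system this work is done in the frame grabber's FPGA or onboard processor, so the host receives a smaller, corrected image and spends its cycles on inspection rather than housekeeping.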