
SYSTEM PERFORMANCE: Exposing jitter and latency myths in Camera Link and GigE Vision systems

The importance of jitter and latency to those deploying standard cameras and frame grabbers is application dependent, according to Eric Carey, R&D director of DALSA.
Jan. 1, 2011
4 min read

Many camera and frame grabber manufacturers use specifications such as jitter and latency in an attempt to convince prospective customers of the benefits of one standard interface over another. But how important are jitter and latency when deciding whether to choose a system based on the Camera Link or GigE Vision standard? For those deploying standard cameras and frame grabbers, the answer is application dependent, according to Eric Carey, R&D director of DALSA (Waterloo, ON, Canada; www.dalsa.com).

At the VISION 2010 trade fair in Stuttgart, Germany, Carey showed test results obtained from Camera Link and GigE Vision-based systems that demonstrated the latency and jitter obtained using a camera and a camera interface.

“In real-time machine-vision applications,” says Carey, “system latency is defined as the time taken between the start and completion of a task.” For vision system integrators, this latency figure takes into account the delay between the camera being triggered and the image being exposed, the readout and transfer time of the image data, the time taken to process the image, and the time required to actuate a response.

Because camera readout times, transfer rates, processing times, and system response times all vary, many camera and frame grabber manufacturers simply define this latency as the time from the camera being triggered to the image being received by the network card or frame grabber in the host computer.
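As a rough illustration of how these stages combine, the short Python sketch below sums per-stage figures into one end-to-end number. The stage names follow the breakdown above, but the values are placeholders rather than measured data.

```python
# Hypothetical end-to-end latency budget for a triggered vision system.
# Stage names follow the breakdown described above; the numbers are
# placeholders, not measured values.

LATENCY_BUDGET_US = {
    "trigger_delay": 5,             # trigger signal to start of exposure
    "exposure": 1000,               # sensor integration time
    "readout_and_transfer": 10000,  # sensor readout plus link transfer
    "processing": 3000,             # image-processing algorithm
    "actuation": 500,               # I/O response (e.g., a reject signal)
}

def total_latency_us(budget: dict[str, float]) -> float:
    """Sum the per-stage latencies into one end-to-end figure (microseconds)."""
    return sum(budget.values())

if __name__ == "__main__":
    total = total_latency_us(LATENCY_BUDGET_US)
    print(f"End-to-end latency: {total / 1000:.2f} ms")
```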

“In Camera Link-based systems, this latency is extremely low since the frame grabber is used to trigger the camera on a dedicated camera control line,” says Carey. “Similarly, if a hardwired trigger is used in a GigE system, the latency will be the same.” However, many GigE-based systems use a software trigger command that is sent to the camera as a GigE Vision packet; in such cases, triggering adds a network latency. “For a typical GigE Vision system, this can range from 100 to 500 μs depending on the quality of the implementation of the host software and camera,” Carey says.

In GigE systems, this latency can be approximated as one-half the round-trip time of a command/acknowledge packet pair, measured with a packet sniffer such as Wireshark (www.wireshark.org).
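A minimal sketch of that approximation, assuming the command and acknowledge packet timestamps have already been read off a Wireshark capture; the function name and example timestamps are hypothetical:

```python
# Estimate software-trigger latency as half the command/acknowledge
# round-trip time, using packet timestamps (in seconds) taken from a
# Wireshark capture. The example timestamps below are invented.

def one_way_latency_us(cmd_ts: float, ack_ts: float) -> float:
    """Approximate one-way latency as half the round-trip time, in microseconds."""
    round_trip_s = ack_ts - cmd_ts
    return (round_trip_s / 2.0) * 1e6

# Example: command packet seen at t = 0.000120 s, acknowledge at t = 0.000480 s.
print(f"Estimated trigger latency: {one_way_latency_us(0.000120, 0.000480):.0f} us")
```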

Like latency, jitter—the variation in timing when a repeated task, such as a clock pulse, is executed multiple times—is also used by camera and frame grabber manufacturers to present system performance data to their customers. In tests performed at DALSA, Carey attached oscilloscope test points to Camera Link cameras and frame grabbers and to GigE Vision cameras and network cards to measure the jitter from the trigger to the time at which the image starts to be processed. Transmitting jumbo data packets with a 1400 × 1024-pixel GigE camera running at 64 frames/s, the worst-case jitter measured was 2.7 ms. For a 1400 × 1024-pixel Camera Link system running at 100 frames/s, the jitter was 1.2 ms (see figure).

Hardware trigger signals sent to GigE Vision and Camera Link cameras were used by DALSA to measure the worst-case jitter before images are ready for processing. The results show (a) a worst-case jitter of 2.7 ms for GigE Vision transmitting jumbo packets and (b) a worst-case jitter of 1.2 ms for Camera Link.
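The worst-case figure is essentially the spread of the trigger-to-image-ready interval across many repetitions. A minimal sketch of that reduction, using invented sample values rather than DALSA's oscilloscope data:

```python
# Reduce repeated trigger-to-image-ready measurements to a worst-case
# jitter figure (the spread between the fastest and slowest repetition).
# The sample values are illustrative only.

from statistics import mean

def worst_case_jitter_ms(intervals_ms: list[float]) -> float:
    """Worst-case jitter: largest difference between any two measured intervals."""
    return max(intervals_ms) - min(intervals_ms)

samples_ms = [12.1, 12.4, 11.9, 13.0, 12.2, 12.6]  # placeholder measurements
print(f"Mean interval: {mean(samples_ms):.2f} ms")
print(f"Worst-case jitter: {worst_case_jitter_ms(samples_ms):.2f} ms")
```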

“Overall latency and jitter are important considerations for developers of machine-vision systems as they dictate the rate at which parts can be inspected,” says Carey. “This is how system integrators can determine the speed at which the system can run (i.e., the part inspection rate).”

“A camera running at 100 frames/s may have a frame readout time of 10 ms, which will add 10 ms to the latency of the system—and this is before any overhead due to operating systems such as Windows is taken into account,” says Carey. “Such overhead typically adds up to 100 µs to the overall system latency, without counting any image-processing functions that may need to be performed. But the worst-case overhead can easily shift into the millisecond range when a non-real-time operating system is used.”
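Plugging the figures quoted above into the same kind of back-of-the-envelope sum gives a sense of scale; the processing time is left as a placeholder because it is application specific:

```python
# Rough latency arithmetic using the figures quoted above: a 10 ms frame
# readout plus roughly 100 us of typical OS overhead. Processing time
# depends on the application and is left as a placeholder.

readout_ms = 10.0        # 100 frames/s camera: one frame readout
os_overhead_ms = 0.1     # typical overhead quoted above (~100 us)
processing_ms = 0.0      # placeholder: application specific

total_ms = readout_ms + os_overhead_ms + processing_ms
print(f"Minimum latency before image processing: {total_ms:.1f} ms")
```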

Similarly, for GigE Vision systems, the worst-case jitter of 2.7 ms is well within what can be tolerated in systems where the data transfer time of a single image is 10 ms. A more detailed explanation of the tests performed by DALSA can be found in the company’s white paper “GigE Vision for Real-Time Machine Vision,” which can be downloaded at www.dalsa.com.
