Latency and determinism in GigE Vision systems

March 1, 2007
PC interface boards can decrease jitter.

By Dwayne Crawford

Two of the most important factors in determining whether a GigE-based camera system should be used in a machine-vision application are latency and determinism. In this context, latency is the time it takes to complete an event. When occurrences of a specific event are repeated, the time to complete the event varies; this variation is known as jitter.

When a maximum guaranteed response time for the event is required, as in most machine-vision applications (that is, processing must be completed before the part passes the ejector mechanism), the system is deterministic. Determinism is a measure of confidence that the event will be completed within a prescribed amount of time.
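To make these definitions concrete, the following Python sketch times a repeated event, computes its jitter, and checks the worst case against a deadline. The `process_frame` placeholder and the 20-ms deadline are assumptions chosen for illustration, not values from a real system.

```python
import statistics
import time

def process_frame():
    # Placeholder for the real event (for example, grab and inspect one
    # frame); the sleep stands in for variable processing time.
    time.sleep(0.010)

DEADLINE_S = 0.020  # hypothetical maximum guaranteed response time

latencies = []
for _ in range(100):
    start = time.perf_counter()
    process_frame()
    latencies.append(time.perf_counter() - start)

# Latency is the time to complete one occurrence of the event; jitter is
# the spread in that time across repeated occurrences.
jitter = max(latencies) - min(latencies)
print(f"mean latency: {statistics.mean(latencies) * 1e3:.2f} ms")
print(f"jitter (max - min): {jitter * 1e3:.2f} ms")

# The system is deterministic for this deadline only if the worst case
# stays under it.
print("deterministic:", max(latencies) <= DEADLINE_S)
```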

In GigE Vision applications, four major categories contribute to latency and determinism: the network architecture, the protocol architecture, the host computer and its resource sharing, and the I/O mechanism.

Most applications use a simple point-to-point connection from camera to NIC port and are therefore not subject to the effects of switches, hubs, and routers; these point-to-point architectures have relatively low latency and low jitter. In contrast, networks that rely on switches, hubs, and routers to interconnect several devices introduce collision domains, which exist at any point in the network where multiple devices must share a common resource such as a single Ethernet cable.

When one device uses the resource, the other devices must wait for the resource to be freed. This waiting period increases the jitter in the latency of the event. The magnitude of the increase is a function of the Ethernet architecture and is directly proportional to the number of devices on the network. As the jitter increases, the maximum guaranteed response time for the system increases (or, equivalently, the determinism of the system decreases).
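This trend can be illustrated with a toy Python simulation; it is not a real CSMA/CD or switch-queuing model, and the 30% busy probability and unit transmit time are arbitrary assumptions, but it shows completion-time jitter growing with the number of devices sharing a resource.

```python
import random

def completion_jitter(num_devices, trials=2000, tx_time=1.0):
    """Crude model of a shared segment: before transmitting, our device
    waits out every other device that happens to be mid-transmission."""
    times = []
    for _ in range(trials):
        wait = sum(random.uniform(0, tx_time)       # residual transmit time
                   for _ in range(num_devices - 1)
                   if random.random() < 0.3)        # assumed busy probability
        times.append(wait + tx_time)
    return max(times) - min(times)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} devices -> jitter ~ {completion_jitter(n):.2f} time units")
```

With a single device (the point-to-point case) the jitter is zero; it grows as devices are added.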

Different protocol architectures also affect latency and determinism. For standard Internet traffic, TCP is commonly implemented to ensure that data are not corrupted or lost during transmission between devices. When the stack detects corruption or loss, it initiates a retransmission, which significantly increases the time required to complete the original transfer and results in much higher jitter. Because GigE Vision streams over UDP, this recovery process can be made optional, which keeps jitter lower and permits better determinism for short maximum guaranteed response times.
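The receive-side consequence can be sketched as follows; this assumes a simplified packet layout with a 4-byte packet identifier and is not the actual GVSP wire format. The point is that with UDP the application can bound its wait and treat recovery as optional, whereas TCP would stall until retransmission succeeds.

```python
import socket

RESEND_ENABLED = False  # recovery is optional under GigE Vision's UDP streaming

def receive_block(sock, expected_packets):
    """Receive datagrams for one image block, tracking gaps instead of
    blocking on them. Simplified illustration, not the real GVSP format."""
    seen = set()
    sock.settimeout(0.005)  # a bounded wait keeps worst-case latency bounded
    while len(seen) < expected_packets:
        try:
            data, _addr = sock.recvfrom(9000)  # jumbo-frame-sized buffer
        except socket.timeout:
            break  # choose to give up rather than stall the application
        seen.add(int.from_bytes(data[:4], "big"))  # assumed 4-byte packet id
    missing = set(range(expected_packets)) - seen
    if missing and RESEND_ENABLED:
        pass  # a resend request could be issued here, at a cost in latency
    return seen, missing
```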

The host computer and its resource sharing have the greatest impact on latency and determinism in any GigE Vision system. The host contains several shared resources, including buses, memory, CPUs, the operating system, and the image-processing application. Each of these shared resources can be viewed as a collision domain, where other processes or events significantly affect the jitter of any given event through interrupts and task switching.

READY FOR PROCESSING

The ultimate goal of GigE Vision is to get the constructed image into host memory, ready for processing. This goal can be subdivided into several smaller tasks, including processing the IP, UDP, and image-reconstruction portions of the protocol. Good-quality standard NICs have excellent resources dedicated to processing the IP and UDP portions of the protocol. They reduce the interrupt load on the host system but are incapable of reconstructing the data into an entire image; the image-reconstruction task is left to the host CPU, and it is here that jitter threatens the application, because the CPU is constantly interrupted.
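The reconstruction task itself amounts to scattering packet payloads into a contiguous image buffer, as in this simplified Python sketch; real GVSP also carries leader and trailer packets and block identifiers, which are omitted here.

```python
def reconstruct(frame_packets, payload_size, image_size):
    """Scatter UDP payloads into an image buffer by packet id.
    Simplified: assumes fixed-size payloads and a single image block."""
    image = bytearray(image_size)
    for packet_id, payload in frame_packets:
        offset = packet_id * payload_size
        image[offset:offset + len(payload)] = payload
    return bytes(image)

# Example: a 12-byte "image" split into three packets arriving out of order.
packets = [(2, b"eeff"), (0, b"aabb"), (1, b"ccdd")]
print(reconstruct(packets, payload_size=4, image_size=12))  # b'aabbccddeeff'
```

On a standard NIC, every one of these per-packet steps runs on the host CPU, typically behind an interrupt.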

For a CPU that is not heavily loaded, the interrupt process has a minimal effect on overall system jitter, but as CPU usage increases, the resulting jitter grows, creating longer maximum response times and/or decreased determinism. Interrupt throttling is one method that can reduce the interrupt load, and thus the jitter seen by the image-processing application, but only at the cost of added latency while the interrupt queue builds; it can even increase the jitter in completing the image-reconstruction process.
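A small Python model of that trade-off: packets are assumed to arrive every 0.1 ms, and the NIC raises one interrupt per batch of `coalesce_n` packets, so early packets in each batch sit in the queue until the batch completes. Both figures are illustrative assumptions.

```python
def delivery_latencies(arrivals, coalesce_n):
    """Time each packet spends queued before its batch's interrupt fires."""
    latencies = []
    for i, t in enumerate(arrivals):
        last_in_batch = min((i // coalesce_n + 1) * coalesce_n - 1,
                            len(arrivals) - 1)
        latencies.append(arrivals[last_in_batch] - t)
    return latencies

arrivals = [i * 0.1 for i in range(16)]  # assumed: one packet every 0.1 ms
for n in (1, 4, 8):
    lat = delivery_latencies(arrivals, n)
    print(f"coalesce {n}: {len(arrivals) // n} interrupts, "
          f"max queue latency {max(lat):.1f} ms")
```

Throttling cuts the interrupt count (good for the loaded CPU) while stretching the worst-case delivery latency, exactly the trade described above.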

For this reason, PC interface boards, such as the Matrox Solios GigE, extend the standard NIC functions to include image reconstruction, relieving the CPU of heavy interrupt loads. Performing the image reconstruction on the Solios GigE allows the CPU to spend more cycles on the image-processing application, decreasing both application and image-reconstruction jitter. Decreasing the jitter reduces the maximum guaranteed response time and/or improves the determinism of the system (see figure).

In packet-based protocols such as GigE Vision, the image is subdivided into a large number of smaller packets. On a standard network adapter, the host CPU is responsible for handling much of the protocol stack, and packet and protocol management tax host CPU resources, ultimately interfering with the image-processing tasks. With extensive onboard resources, PC interface boards such as the Matrox Solios GigE can perform protocol management and image reconstruction without interrupting the host CPU, allowing it to complete the image processing in less time and with less jitter.

The fourth mechanism that impacts system jitter and determinism is the location of the auxiliary I/O. On most standard GigE Vision cameras, the I/O is included on the camera and is accessible through the GigE Vision protocol. To communicate the result of the image-processing application running on the host, the application must first share host resources to create the GigE Vision packet, which increases jitter for all processes sharing those resources. The resultant output is then passed through the GigE Vision protocol to the camera and is subjected to the latency and jitter of the network and protocol architecture before the result appears on the output pin. This process increases total system jitter and increases the maximum guaranteed response time (and/or decreases determinism). In the Solios GigE, I/O is provided onboard, eliminating the additional latency and jitter that would otherwise be added by the network and protocol architecture.
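The two signaling paths can be compared with a back-of-the-envelope latency budget. Every figure below is an invented placeholder rather than a measurement, but the structure shows why removing the packet-build and network stages removes their jitter contributions as well.

```python
# Hypothetical (min, max) stage times in ms; the values are assumptions only.
packet_build = (0.05, 0.40)  # host forms the GigE Vision control packet
network      = (0.10, 1.20)  # traversal of cable/switch to the camera
camera_io    = (0.02, 0.05)  # camera drives its output pin
onboard_io   = (0.01, 0.03)  # board-resident output, no network hop

def worst_case(*stages):
    return sum(hi for _lo, hi in stages)

def jitter(*stages):
    return sum(hi - lo for lo, hi in stages)

print(f"camera I/O : worst {worst_case(packet_build, network, camera_io):.2f} ms, "
      f"jitter {jitter(packet_build, network, camera_io):.2f} ms")
print(f"onboard I/O: worst {worst_case(onboard_io):.2f} ms, "
      f"jitter {jitter(onboard_io):.2f} ms")
```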

Today, many companies use standard NICs to transfer image data from the camera into host memory, or directly to a display controller, with low latency and jitter. However, when image processing is added and must share system resources such as the buses, CPU, and memory, the overall jitter for every process in the system increases significantly, with a detrimental effect on maximum guaranteed response time and/or determinism. To realistically determine GigE Vision's capabilities, unbiased comparisons of GigE Vision drivers must be performed, with the jitter and/or determinism examined under a typical system load on the desired system architecture.
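One way to approximate such a comparison is sketched below: event latencies are measured with and without a background thread generating CPU load. The load generator and the `grab_frame` placeholder are assumptions standing in for a real application and the driver under test.

```python
import statistics
import threading
import time

def cpu_load(stop):
    # Busy work standing in for the image-processing application's load.
    while not stop.is_set():
        sum(i * i for i in range(10_000))

def grab_frame():
    time.sleep(0.005)  # placeholder for one acquisition via the driver under test

def measure(n=200):
    lat = []
    for _ in range(n):
        t0 = time.perf_counter()
        grab_frame()
        lat.append(time.perf_counter() - t0)
    return statistics.mean(lat), max(lat) - min(lat)

for loaded in (False, True):
    stop = threading.Event()
    if loaded:
        threading.Thread(target=cpu_load, args=(stop,), daemon=True).start()
    mean, jit = measure()
    stop.set()
    print(f"loaded={loaded}: mean {mean * 1e3:.2f} ms, jitter {jit * 1e3:.2f} ms")
```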

DWAYNE CRAWFORD is product manager at Matrox Imaging, Dorval, QC, Canada; www.matrox.com/imaging.
