Smart cameras look for smarter uses
The market is fragmented among processors, available software, and time required to develop a system.
By Andrew Wilson, Editor
With embedded CCD or CMOS imagers, processors, and software, smart cameras offer system developers a way to rapidly implement simple machine-vision systems in a single unit. Networked to industrial manufacturing systems, these cameras lower the cost of integration and relieve the design burden of choosing individual cameras, frame grabbers, and software.
While the price of these sophisticated smart cameras remains higher than that of cameras without general-purpose programming capability, the trend is clear. Vendors can, or soon will be able to, offer cameras that, because of their programmability, can be tailored to a range of machine-vision, traffic-surveillance, security, and military applications. Currently, the price of the cameras may be out of reach for low-cost applications such as traffic surveillance, but the continuing drive to integrate sensors, processors, and I/O on a single device will eventually lower the price to a point at which they are competitive for such applications.
Vendors of these cameras may be ahead of the curve compared with companies that offer less-complex products. By gaining an understanding of the design of what is essentially a complete vision system embedded in a camera, these vendors will be better able to tailor their products to meet the needs of multiple applications.
In their designs, many vendors have taken a modular approach that enables them to offer a number of different camera modules based on a variety of image sensors. Indeed, while this table lists many manufacturers that now offer smart cameras, it does not represent all of the products available. Imperx, for example, offers many different camera models based on sensors that range from VGA to 11M-pixel imagers.
CCD vs. CMOS
Several trends are clear in the smart cameras that are now available. To address the machine-vision market, very few vendors have adopted CMOS imagers in their designs. Those that do tend to address niche applications within this market. Intevac’s CMOS-based E1100, for example, is firmly positioned as a camera for night-vision applications, while Neuricam’s NC-5300 PCCam, which is also CMOS-based, targets automatic license-plate recognition, traffic monitoring, and security and surveillance markets.
“Cognex uses both CMOS and CCD imagers in its vision sensors,” says Blake DeFrance, Cognex product marketing specialist for In-Sight vision sensors. “While the majority of these products use CCD imagers to provide the sensitivity and uniform pixelization needed, in some cases the accuracy and sensitivity of a CCD is not needed, and CMOS imagers can be used. For example, when a simple presence/absence inspection is being performed under very consistent light conditions, a CMOS-based sensor may be acceptable.”
“In many cases CMOS imagers offer significant benefits in reduced integration time, faster acquisition rates due to random pixel access, and lower power consumption and generally cost 15%-25% less than comparable CCD imagers,” says Conner Henry, Cognex product marketing specialist for DVT vision sensors. “The DVT line of vision sensors, for example, has used a CMOS imager for more than two years and has just recently released the DVT 515, integrated with a CMOS imager from Micron Technology (see Fig. 1). Comparison tests of the image quality of this new CMOS imager and the current Sony CCD show similar dynamic ranges.”
FIGURE 1. DVT line of vision sensors has used a CMOS imager for more than two years. Now, Cognex has released the latest version of the sensor, the DVT 515, which uses a CMOS imager from Micron Technology.
Others remain skeptical of the performance of CMOS imagers. “CMOS sensors will certainly gain their market share,” says Michael Engel, founder of Vision Components, “but for professional applications in the machine-vision industry, it may be wiser to choose CCD-based systems for the time being.”
Embedded processors
Embedded microcontroller-like x86-compatible devices such as the AMD Geode SC2200 include an on-board 32-bit x86-compatible processor, a display processor, serial and parallel ports, and a real-time clock. These devices give smart cameras x86 compatibility along with peripheral functions such as display control and I/O. Designed for thin-client applications, the Geode consumes less than 1 W, a characteristic Neuricam has exploited in its NC-5300.
For the sophisticated OEM, the choice of operating system and the development tools supplied by the camera vendor can have an impact on system performance. Microsoft, for example, offers two embedded operating systems, Windows CE .NET and Windows XP Embedded (XPe), that are targeted toward developers of embedded systems. While XPe is based on XP, Windows CE .NET is designed for developers of small-footprint devices. By using x86-based CPUs in their designs, camera vendors can support both Windows CE .NET and Windows XPe.
Interestingly, Cognex draws a distinction between the software used to communicate with the vision sensor and the operating system (OS) residing on the device. “Our primary concern is selecting the most efficient platform relative to handling and transferring inputs from devices such as trigger sensors,” says DeFrance. “The Cognex In-Sight product family, for example, uses a real-time operating system that achieves the same performance as XPe or CE,” he says.
See sharply
Windows-based operating systems may provide other benefits, including easy-to-use PC-based development tools that make the transition from PC-based systems to smart-camera implementations much faster. To program machine-vision cameras such as the Matrox Iris P-Series under Windows CE .NET, for example, development tools installed on a PC workstation are used for coding and compiling an application (see Fig. 2). Once generated, the application is downloaded to the camera through an Ethernet link, where it can then be executed and remotely debugged from the PC workstation.
FIGURE 2. To program machine-vision cameras such as the Matrox Iris P-Series under Windows CE .NET, development tools installed on a PC workstation are used for coding and compiling an application. Once generated, the application is downloaded to the camera through an Ethernet link, where it can be executed and remotely debugged from the PC.
Microsoft supplies a variety of application programming interfaces (APIs) for CE .NET that include Win32, Microsoft Foundation Classes (MFC), and .NET Compact Framework. “To use the Matrox series of smart cameras,” says Fabio Perelli, product marketing manager at Matrox, “system developers need to program their applications using the Win32 API, since this produces the smallest and fastest applications (EXE and DLL) for a Windows CE .NET platform.”
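As a rough illustration of what Perelli describes, the skeleton below shows the kind of bare Win32 entry point and message loop a CE .NET camera application might be built around. It is a minimal sketch only: the window-class name is invented for the example, and the image-acquisition and Matrox Imaging Library calls that a real application would make are deliberately omitted.

/* Minimal Win32 skeleton of the kind a Windows CE .NET smart-camera
 * application might start from. Sketch only; class name "InspectApp"
 * is hypothetical and no vendor library calls are shown. */
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_DESTROY:
        PostQuitMessage(0);            /* end the message loop */
        return 0;
    default:
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPTSTR lpCmdLine, int nShow)
{
    WNDCLASS wc = {0};
    MSG msg;

    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("InspectApp");
    RegisterClass(&wc);

    CreateWindow(TEXT("InspectApp"), TEXT("Inspection"), WS_VISIBLE,
                 0, 0, CW_USEDEFAULT, CW_USEDEFAULT,
                 NULL, NULL, hInst, NULL);

    /* Standard Win32 message loop; image acquisition and processing
     * would normally run in a separate worker thread. */
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}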
According to Matrox, however, Microsoft’s MFC and .NET Compact Framework APIs cannot be used to program the Matrox Iris P-Series, since these APIs are not yet supported by the Matrox Imaging Library. “To overcome this,” says Perelli, “Matrox is using C# internally for developing its Design Assistant graphical IDE for the Matrox Iris E-Series.”
Says Cognex’s DeFrance, “Although some vendors are rewriting their code in C# for CE .NET, many of the improvements benefit the PC, which configures or communicates with the smart camera, not the camera itself.” He says attempting to program in C# within a smart camera presents challenges not found in a PC environment. One hurdle inherent to embedded systems is memory management. Because real-time systems such as smart cameras need large blocks of memory for images, it becomes difficult to cope efficiently with the speed penalty that the Common Language Runtime imposes in a C# environment. “For this reason we have avoided using C# for CE .NET as a programming language for our smart cameras,” he adds.
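The memory-management concern DeFrance raises is typically handled in native code by reserving all image memory once at start-up and recycling it, so that no allocations, and no garbage-collector pauses, occur in the real-time path. The C sketch below illustrates that pattern; the buffer dimensions and the simple round-robin scheme are hypothetical and do not represent any vendor's API.

/* Illustrative only: preallocate a small fixed ring of image buffers
 * at start-up so the real-time inspection path never allocates memory. */
#include <stdlib.h>
#include <string.h>

#define IMG_WIDTH   640            /* hypothetical sensor resolution */
#define IMG_HEIGHT  480
#define NUM_BUFFERS 4              /* small fixed ring of acquisition buffers */

static unsigned char *g_buffers[NUM_BUFFERS];

int init_image_buffers(void)
{
    size_t bytes = (size_t)IMG_WIDTH * IMG_HEIGHT;   /* 8-bit monochrome */
    int i;

    for (i = 0; i < NUM_BUFFERS; i++) {
        g_buffers[i] = (unsigned char *)malloc(bytes);
        if (g_buffers[i] == NULL)
            return -1;             /* fail at start-up, not mid-inspection */
        memset(g_buffers[i], 0, bytes);
    }
    return 0;
}

/* Acquisition and processing reuse the fixed buffers in round-robin
 * fashion, so no allocations occur once inspection is running. */
unsigned char *next_buffer(void)
{
    static int idx = 0;
    unsigned char *buf = g_buffers[idx];
    idx = (idx + 1) % NUM_BUFFERS;
    return buf;
}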
PC-based software
While some camera vendors support just one operating system, a number have chosen to support more than one in their products. Sony, for example, incorporates an x86 processor in its SCI-SX1 that allows the camera to run either a built-in Linux or a loadable Windows XPe operating system. The reason for such multiple-operating-system support is clear. It allows camera vendors to rapidly port existing PC-based software such as MVTec’s Halcon, Euresys’ e-Vision, and FDS’ Imaging Software to their products. At the same time, developers of lower-cost Linux-based systems can take advantage of the many free software packages and development tools currently available.
In this manner, companies can offer their cameras with a range of application software while reducing the developer’s time to market. Third-party software developers can also add their own functionality to the cameras, offering unique products that serve specific markets. One company, General Vision, for example, has used the Matrox Iris camera to develop its own product known as the CogniSight Iris.
By embedding General Vision’s CogniSight neural-network-based image-recognition engine, the system maps information such as contrast, shape, and motion and recalls it for real-time image recognition. Already, CogniSight sensors are being used to sort fish with high accuracy (see p. 29).
For embedded, low-cost applications, companies such as Cognex and Vision Components offer cameras based on digital signal processors (DSPs). Standard development tools such as Texas Instruments (TI) Code Composer Studio allow application software to be written, compiled, and debugged. Many cameras that use DSPs run the DSP vendor’s recommended operating system; in the case of TI, Code Composer Studio includes a DSP/BIOS configuration tool to configure DSP/BIOS functions.
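A DSP-based camera application is typically structured around tasks and semaphores of the kind DSP/BIOS provides. The fragment below is a rough sketch of that structure, assuming a semaphore named frameReadySem and the processing task have been created with the DSP/BIOS configuration tool; get_frame(), inspect_frame(), and report_result() are hypothetical application functions, not part of DSP/BIOS or any vendor library.

/* Sketch of a DSP/BIOS-style processing task for a TI DSP-based smart
 * camera. Assumes frameReadySem and this task were created statically
 * in the DSP/BIOS configuration (assumption for this example). */
#include <std.h>
#include <sys.h>
#include <sem.h>

extern SEM_Obj frameReadySem;                   /* posted by the capture ISR   */

extern unsigned char *get_frame(void);          /* hypothetical: latest image  */
extern int inspect_frame(unsigned char *img);   /* hypothetical: vision tool   */
extern void report_result(int pass);            /* hypothetical: I/O, Ethernet */

void processTask(void)
{
    unsigned char *img;
    int pass;

    for (;;) {
        /* Block until the acquisition interrupt signals a new image */
        SEM_pend(&frameReadySem, SYS_FOREVER);

        img  = get_frame();
        pass = inspect_frame(img);
        report_result(pass);
    }
}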
Of course, with added programming complexity comes the need to offer the integrator a set of easy-to-configure machine-vision tools that require little or no programming expertise. For its part, Cognex accomplishes this by allowing the developer access to DVT Intellect Software. Vision Components offers its own VC Lib image-processing library and RTOS, the VC/RT 2.3.
Input/output
Today, the smart-camera market is fragmented among the processors used to perform image processing, the software that is available, and the amount of time required to develop a functioning machine-vision system. Because the applications in which these cameras are used vary considerably, care must be taken when benchmarking them for a particular task. In choosing a camera, one of the most important considerations may be whether it can be triggered by external events and whether it can control external devices such as programmable logic controllers.
To accommodate these requirements, both the PPT Vision Impact T23 and Siemens VS710 incorporate camera restart and reset modes, as well as a number of digital I/O interfaces (see Fig. 3). PPT’s T23, for example, allows developers to trigger the camera, interface to external peripherals using I/O lines, and transfer data over Ethernet or serial interfaces. For its part, Siemens offers similar capabilities, as well as allowing captured images to be displayed on an SVGA display.
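In software, the trigger-and-I/O pattern these cameras support reduces to a simple loop: wait for the external trigger, acquire and inspect an image, and drive a digital output that the PLC reads. The C sketch below illustrates the idea; all four functions it calls are hypothetical stand-ins rather than the actual PPT or Siemens APIs.

/* Purely illustrative: wait_for_trigger(), acquire_image(),
 * run_inspection(), and set_digital_output() are hypothetical
 * stand-ins for whatever API a given smart camera exposes. */
#include <stdbool.h>

extern bool wait_for_trigger(int timeout_ms);           /* external trigger line */
extern void acquire_image(unsigned char *buf);          /* capture on trigger    */
extern bool run_inspection(const unsigned char *buf);   /* pass/fail decision    */
extern void set_digital_output(int line, bool state);   /* drive a PLC input     */

#define REJECT_LINE 0    /* hypothetical output wired to the PLC */

void inspection_loop(unsigned char *image_buffer)
{
    bool pass;

    for (;;) {
        if (!wait_for_trigger(1000))     /* part-present sensor fires   */
            continue;                    /* timeout: keep waiting       */

        acquire_image(image_buffer);
        pass = run_inspection(image_buffer);

        /* Signal the PLC so a downstream actuator can reject the part */
        set_digital_output(REJECT_LINE, !pass);
    }
}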
In the future, solutions to the challenges of deploying such smart cameras in niche applications may not emerge from the camera vendors themselves but from third-party developers. Already, for example, General Vision is offering a system using Matrox’s Iris.
Operating-system vendors such as Microsoft are already touting .NET as an integral Windows component for building and running software that shares data and functionality over a network through protocols such as XML, Simple Object Access Protocol (SOAP), and HTTP. And these software applications will not run only on Windows CE. The Mono Project (www.mono-project.com), an open development initiative sponsored by Novell, aims to provide software for developing and running .NET client and server applications on Linux, Solaris, Mac OS X, Windows, and Unix.
While these developments will, no doubt, lead to the proliferation of machine-vision software that runs across multiple PCs and operating systems, smart-camera developers will still need to tailor machine-vision packages for specific applications. Although running software packages on stripped-down versions of PC-like cameras may provide the functionality needed to address many machine-vision markets, leaner, more sophisticated hardware will be required to reduce the cost of future smart-camera designs.