SECURITY AND SURVEILLANCE - Detection systems target scene awareness applications

June 1, 2009

Several different methods currently exist to perform effective surveillance across a wide field of view (FOV). Perhaps the simplest of these uses a fish-eye lens to provide an extremely wide, hemispherical image. After the image is captured, image-processing software deconstructs it so that a panoramic image of the complete scene can be displayed. Although this can be computationally intensive, the system requires only a single camera and lens to image the panoramic scene.
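
As a rough illustration of the dewarping step described above, the sketch below unwraps a centered, circular fish-eye image into a rectangular panorama using a simple polar-to-rectangular mapping. It assumes an equidistant fish-eye projection with the optical center at the middle of the frame; the function name and parameters are illustrative and do not correspond to any particular vendor's software.

```python
# Minimal sketch of unwrapping a centered circular fish-eye image into a
# panoramic strip. Assumes an equidistant fish-eye with the optical center
# at the middle of the frame; names and defaults are illustrative only.
import cv2
import numpy as np

def unwrap_fisheye(img, out_w=1440, out_h=360):
    """Map a centered circular fish-eye image to a rectangular panorama."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0           # assume optical center = image center
    max_r = min(cx, cy)                  # radius of the fish-eye footprint

    # Destination pixel (u, v): u -> azimuth 0..360 deg, v -> radius 0..max_r
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    theta = 2.0 * np.pi * u / out_w
    r = max_r * v / out_h

    # Source coordinates in the fish-eye image for each panorama pixel
    map_x = (cx + r * np.cos(theta)).astype(np.float32)
    map_y = (cy + r * np.sin(theta)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# Usage: panorama = unwrap_fisheye(cv2.imread("fisheye_frame.png"))
```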

One example of this type of vision system has been developed by ImmerVision (Montreal, QC, Canada; www.immervision.com). The company's IMV1-1/3 wide-angle anamorphic lens, known as a panamorphic lens, allows images with a 180° FOV to be captured with a standard CS-mount CCTV video camera. According to Simon Thibault, ImmerVision's chief optical engineer, existing fish-eye lenses produce blind zones and circular or annular light areas called footprints that limit their resolution. When the lens/camera combination is used in conjunction with the company's panoramic video viewing library, image distortion is considerably reduced.

Of course, there are other methods to achieve similar results. Rather than use a single camera, multiple low-cost cameras positioned at different angles can be used to capture a wide FOV. Image-processing software is then used to stitch the images together to provide a panoramic view. At the April 2009 SPIE Defense, Security & Sensing conference in Orlando, FL, Scallop Imaging (Boston, MA, USA; www.scallopimaging.com) debuted a multiple-camera system called the Digital Window D7 camera.
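
For readers who want to experiment with the stitching step on a host PC, the sketch below uses OpenCV's general-purpose stitcher to merge a handful of overlapping frames into a panorama. It is a conceptual illustration with assumed file names; the D7 itself performs its stitching on an on-board FPGA, as described below.

```python
# Host-side sketch of stitching frames from several fixed, overlapping
# cameras into one panoramic view using OpenCV's general-purpose stitcher.
# File names are placeholders; this only illustrates the concept.
import cv2

frames = [cv2.imread(f"cam{i}.png") for i in range(5)]   # five overlapping views
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.png", panorama)
else:
    print("Stitching failed, status code:", status)
```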

Using five modular 1.3-Mpixel imagers positioned equi-angularly across the 180° FOV, the camera can stream a 1280 × 320-pixel sampled image of the complete scene at 15 frames/s, a full-resolution image of the same scene at 1 frame/s, and a repositionable 640 × 480-pixel zoom window within the scene over the camera’s Ethernet interface. To achieve this, the camera’s on-board FPGA is programmed to stitch these images together before image transmission, lowering the CPU’s processing overhead.
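
As a minimal sketch of that data flow, assume a stitched panorama is already available in memory: a downsampled 1280 × 320 overview, the full-resolution frame, and a repositionable 640 × 480 zoom window cropped at a caller-supplied position. On the D7 these streams are generated on the camera before transmission; the Python below only illustrates the idea on a host.

```python
# Sketch of deriving the three described outputs from a stitched panorama:
# a downsampled overview, the full-resolution frame (sent at a lower rate),
# and a repositionable 640 x 480 zoom window. Illustrative only.
import cv2

def overview(panorama, size=(1280, 320)):
    """Downsampled view of the complete scene."""
    return cv2.resize(panorama, size, interpolation=cv2.INTER_AREA)

def zoom_window(panorama, x, y, w=640, h=480):
    """Crop a VGA window at (x, y); caller keeps (x, y) inside the frame."""
    return panorama[y:y + h, x:x + w]

# Usage: small = overview(panorama); roi = zoom_window(panorama, 2000, 100)
```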

Taking this concept further, Lucid Dimensions (Louisville, CO, USA; www.luciddimensions.com) is now developing a spherical sensor configuration that will combine multiple positional IR sensors with image data to provide tracking and identification of objects within a 360° × 360° FOV of the sensor. Before embarking on the final design, Lucid Dimensions has developed a prototype that uses thirty 2M thermopile sensors from Dexter Research (Dexter, MI, USA; www.dexterresearch.com), separated angularly by 12° and mounted on a 16-in.-diameter ring (see figure).

Lucid Dimensions' 2-D sensor prototype uses 30 thermopile sensors mounted on a ring to detect IR energy. By calculating the positional angle of the source, the target tracking information generated can be used to automatically cue the pan, tilt, and zoom functions of countermeasure or imaging systems.

These sensors detect 8–12-μm IR energy across a 360° radial FOV. Data from each sensor is digitized using two 16-channel PC/104 data acquisition cards from Diamond Systems (Mountain View, CA, USA; www.diamondsystems.com) under the control of an MSM800 Geode LX 800 500-MHz PC/104 CPU, also from Diamond Systems.

Because sensors facing an IR source produce the largest response, sensor outputs can be plotted as a histogram of sensor angle on the ring versus sensor response. By calculating the maximum of the resulting peak, the positional angle of the source is computed and output over the CPU's Ethernet or serial interface. This target tracking information can be used to automatically pan, tilt, and zoom imaging or countermeasure systems.
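
A minimal sketch of this kind of bearing estimate, assuming 30 readings spaced 12° apart around the ring: find the strongest sensor, then refine the angle by fitting a parabola through it and its two neighbors. This is a generic peak-interpolation approach and is not presented as Lucid Dimensions' actual algorithm.

```python
# Sketch of estimating a source bearing from 30 ring-mounted sensor readings
# spaced 12 degrees apart, using parabolic interpolation around the peak.
# Generic technique, not Lucid Dimensions' actual algorithm.
import numpy as np

SPACING_DEG = 12.0            # assumed angular spacing between adjacent sensors

def source_bearing(readings):
    """readings: 30 thermopile outputs, index i assumed at angle i * 12 deg."""
    r = np.asarray(readings, dtype=float)
    i = int(np.argmax(r))                              # strongest sensor
    left, center, right = r[i - 1], r[i], r[(i + 1) % len(r)]

    # Parabolic interpolation of the peak position (in units of one sensor step)
    denom = left - 2.0 * center + right
    offset = 0.0 if denom == 0 else 0.5 * (left - right) / denom
    return ((i + offset) * SPACING_DEG) % 360.0

# Example: bearing_deg = source_bearing(adc_samples)  # samples from the DAQ cards
```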

To extend this idea to a full 360° × 360° FOV, Lucid Dimensions is also developing a 3-D spherical prototype that uses 500 separate IR detectors and lenses. Because of the large number of analog channels required, the company is studying the use of fiber optics to bring the incoming radiation into a single bundle that can then be relayed onto a single CCD or focal plane array. To accommodate the 500 analog channels, a custom Xilinx FPGA design is being developed to sample and transfer the data over an Ethernet connection to a PC for analysis.
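
For a spherical configuration of this kind, one simple way to turn many detector readings into a 3-D bearing is to weight each detector's unit pointing vector by its background-subtracted response and normalize the sum. The sketch below illustrates that idea only, under assumed geometry; it is not presented as the company's design.

```python
# Sketch of estimating a 3-D bearing from a spherical array of detectors:
# weight each detector's unit pointing vector by its response and normalize
# the sum. Illustrative only; geometry and processing are assumptions.
import numpy as np

def bearing_3d(directions, readings):
    """directions: (N, 3) unit vectors, one per detector; readings: (N,) outputs."""
    d = np.asarray(directions, dtype=float)
    w = np.asarray(readings, dtype=float)
    w = np.clip(w - w.mean(), 0.0, None)        # suppress the background level
    v = (w[:, None] * d).sum(axis=0)            # response-weighted direction sum
    n = np.linalg.norm(v)
    return v / n if n > 0 else None             # None if no detector stands out
```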
