Image sensors

IMAGE PROCESSING: Lightfield cameras enable image refocusing and 3-D imaging

To overcome the limitations of conventional cameras based on fixed-focus lenses, a few manufacturers have developed lightfield cameras that, like traditional cameras, gather light using a single lens but place an array of lens elements at the image plane.
Jan. 5, 2011

Leveraging the increasing resolution and speed of today’s commercial image sensors, manufacturers offer a variety of megapixel cameras with a range of interfaces for machine-vision and image-processing applications. In most of these applications, cameras use single fixed-focus lenses that focus an image onto the image sensor plane of the camera.

By placing an array of spherical lens elements at the image plane, arbitrary focal planes can be refocused and the captured view recreated to render the imaged scene in 3-D.

While useful, such cameras have limited depth of field and no stereo capability. To overcome these limitations, a few manufacturers have developed so-called lightfield cameras that, like traditional cameras, gather light using a single lens. However, by placing an array of lens elements at the image plane, the structure of light impinging on different sub-regions of the lens aperture can be captured.

By capturing data from these multiple sub-regions, software-based image-processing techniques can be used to select image data from any sub-region within the aperture. In this way, images can be recreated that represent views of the object from different positions, recreating a stereo effect. Better still, if the scene depth is calculated from the raw image, each pixel can be refocused individually to give an image that is in focus everywhere.
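As a concrete illustration of the sub-region idea, the sketch below refocuses a lightfield by shifting and averaging sub-aperture views. It is a minimal sketch only: the 4-D array layout lightfield[u, v, s, t], the function names, and the integer-pixel shifts are assumptions made for illustration, not Raytrix's software or data format.

```python
# Minimal shift-and-add refocusing sketch (assumed data layout, not a
# vendor API): lightfield[u, v, s, t] holds the image seen through
# aperture sub-region (u, v) at spatial pixel (s, t).
import numpy as np

def view(lightfield, u, v):
    """One sub-aperture view: the scene as seen through a single
    sub-region (u, v) of the main-lens aperture."""
    return lightfield[u, v]

def refocus(lightfield, alpha):
    """Shift each sub-aperture view in proportion to its offset from the
    aperture center, then average; alpha selects the synthetic focal plane."""
    U, V, S, T = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shifts for brevity; a real implementation
            # would interpolate sub-pixel shifts.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            acc += np.roll(view(lightfield, u, v), (du, dv), axis=(0, 1))
    return acc / (U * V)
```

Picking two views from opposite edges of the aperture, for example view(L, 0, 2) and view(L, 4, 2) on a 5 × 5 aperture grid, yields the stereo pair described above.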

“Although the first lightfield-based capturing devices were described in 1908 by Nobel Prize winner Gabriel Lippmann,” says Christian Perwaß, technical director of Raytrix (Kiel, Germany; www.raytrix.de), “the technology has only recently been commercialized.” Indeed, commercial implementations of such devices have been developed only within the last few years (see “Speeding up the Bus,” Vision Systems Design, January 2008).

While the devices developed at Stanford University and Point Grey Research’s ProFUSION25 use arrays of discrete cameras rather than a microlens array in front of a photosensor, a lightfield camera from Adobe achieves an effective 1-Mpixel resolution using a 16-Mpixel CCD imager and a hexagonal array of lenses (see “New Light Field Camera Designs,” Todor G. Georgiev).

Like the lightfield camera designs from Stanford and Adobe, the R5 4D lightfield camera from Raytrix, on display during VISION 2010 in Stuttgart, is also based on an array of lens elements placed in front of the image plane of the camera. By bonding a 200 × 200 array of these spherical lens elements to a 2560 × 1920-pixel CMOS imager from Aptina (San Jose, CA, USA; www.aptina.com), multiple image sub-regions within the image plane can be captured by the image sensor.
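A quick back-of-envelope check of this geometry, assuming the 200 × 200 array spans the full sensor (the article does not state the exact coverage), shows that each micro-image occupies only a dozen or so pixels on a side:

```python
# Rough micro-image size, assuming the lens array covers the whole sensor.
pixels_x, pixels_y = 2560, 1920
lenses_x, lenses_y = 200, 200
print(pixels_x / lenses_x, pixels_y / lenses_y)  # -> 12.8 9.6 pixels per micro-image
```

This small per-lens footprint is consistent with the effective output resolution of about 1 Mpixel quoted below, well under the raw 5-Mpixel sensor resolution.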

“In this design,” says Perwaß, “the aperture of the main lens has to equal the aperture of each of the microlenses. Otherwise, either the micro-images overlap or there are gaps between neighboring micro-images.” While all of the microlenses share the same aperture, their focal lengths differ.
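Perwaß’s aperture-matching requirement is commonly expressed in terms of image-side f-numbers; in the notation below (the symbols are ours, added for illustration):

```latex
\frac{f_{\mathrm{main}}}{D_{\mathrm{main}}} = \frac{f_{\mathrm{micro}}}{d_{\mathrm{micro}}}
```

where f_main and D_main are the focal length and aperture diameter of the main lens, and f_micro and d_micro those of each microlens. If the main-lens f-number is larger, the light cones narrow and gaps open between micro-images; if it is smaller, neighboring micro-images overlap.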

By using an array in which the spherical lenses are arranged in triads, each lens of a triad having a different focal length, the camera design differs from both the Stanford and Adobe versions and provides six times the depth of field of a standard CCD or CMOS camera. Recreating both 3-D images and images focused at different points in the scene requires ray-tracing techniques that generate images by tracing the path of light through the pixels in the image plane.
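One simple way to see how depth falls out of such data is block matching between neighboring micro-images: a scene point appears in several adjacent micro-images, and its shift between them encodes distance. The sketch below is an illustrative stand-in for the ray-tracing reconstruction described above, with all names and the one-dimensional search assumed for illustration:

```python
# Disparity between two horizontally adjacent micro-images by block
# matching (illustrative stand-in for the ray-tracing reconstruction).
import numpy as np

def disparity(micro_a, micro_b, max_shift=4):
    """Return the horizontal shift that best aligns micro_b with micro_a,
    scored by mean absolute difference over the overlapping columns."""
    best_shift, best_cost = 0, np.inf
    for d in range(max_shift + 1):
        a = micro_a[:, d:]
        b = micro_b[:, :micro_b.shape[1] - d]
        cost = np.mean(np.abs(a - b))
        if cost < best_cost:
            best_shift, best_cost = d, cost
    # Larger disparity corresponds to a scene point closer to the camera;
    # converting to metric depth requires the calibrated microlens pitch
    # and lens-to-sensor spacing.
    return best_shift
```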

To accomplish this at rates as fast as 5 frames/s with an effective image resolution of 1 Mpixel, Raytrix offers a software package that can refocus the image at arbitrary focal planes, transform the captured view from the camera to simulate a stereo image, and render the objects captured by the camera in 3-D (see figure). “In applications such as rendering 3-D views of microscopic images, this is particularly useful since only one camera is required,” says Perwaß.
