
Laser backpack creates indoor 3-D building models

A portable laser backpack developed at the University of California, Berkeley, produces fast, automatic, and realistic 3-D maps of difficult interior environments by fusing data from cameras, laser range finders, and inertial measurement units.
Sept. 13, 2010

A portable laser backpack for 3-D mapping has been developed at the University of California, Berkeley, to produce fast, automatic, and realistic maps of difficult interior environments. Under the direction of Avideh Zakhor, professor of electrical engineering, the researchers have developed novel sensor fusion algorithms that use cameras, laser range finders, and inertial measurement units to generate textured, photorealistic 3-D models. The system operates without GPS input, which has been a major hurdle for indoor mapping.
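The paper's fusion pipeline is not spelled out here, but the role of the inertial measurement unit can be illustrated with a simple planar dead-reckoning step: integrating gyro and accelerometer readings predicts the backpack's pose with no GPS, and the drift in that prediction is what the laser and camera data must correct. The sketch below is illustrative only, not the Berkeley implementation; the function name and the planar simplification are assumptions.

```python
import numpy as np

# Hypothetical planar dead-reckoning step: integrate one IMU sample to
# predict the next pose. Drift accumulates quickly, which is why the
# backpack also relies on laser scan matching and imagery.
def integrate_imu(pose, velocity, gyro_z, accel_body, dt):
    """pose = (x, y, theta) in the world frame; accel_body is the 2-D
    acceleration measured in the sensor frame; gyro_z is the yaw rate."""
    x, y, theta = pose
    theta += gyro_z * dt                      # heading from the gyro rate
    c, s = np.cos(theta), np.sin(theta)
    accel_world = np.array([c * accel_body[0] - s * accel_body[1],
                            s * accel_body[0] + c * accel_body[1]])
    velocity = velocity + accel_world * dt    # world-frame velocity update
    x += velocity[0] * dt
    y += velocity[1] * dt
    return (x, y, theta), velocity
```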

The backpack is the first of a series of similar systems to work without being strapped to a robot or attached to a cart. Its fast data acquisition enables it to collect data while the human operator is walking, in contrast with most existing systems, which collect data in a stop-and-go fashion.

To advance the backpack system, the researchers developed four scan-matching-based algorithms to localize the backpack and compared their performance and tradeoffs. They also showed how the scan-matching-based localization algorithms, combined with an extra stage of image-based alignment, can be used to generate textured, photorealistic 3-D models. A technical paper on indoor localization algorithms describes their work.
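The paper compares four scan-matching formulations, none of which is reproduced here. As a rough illustration of what scan matching does, the sketch below runs a basic iterative-closest-point (ICP) alignment between two 2-D laser scans and returns the rigid transform, i.e., the sensor's relative motion between the scans. The function name, the brute-force nearest-neighbor search, and the fixed iteration count are simplifications assumed for this example.

```python
import numpy as np

# Minimal 2-D ICP scan-matching sketch (illustrative, not the paper's
# algorithm): align `scan` to `reference` and return the rigid transform.
def icp_2d(reference, scan, iterations=20):
    """reference, scan: (N, 2) and (M, 2) arrays of laser points."""
    R_total, t_total = np.eye(2), np.zeros(2)
    current = scan.copy()
    for _ in range(iterations):
        # 1. Correspondences: nearest reference point for each scan point.
        dists = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
        matched = reference[np.argmin(dists, axis=1)]
        # 2. Closed-form rigid transform (SVD of the cross-covariance).
        mu_s, mu_m = current.mean(axis=0), matched.mean(axis=0)
        H = (current - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        current = current @ R.T + t
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total
```

Chaining the relative transforms from successive scans yields the backpack's trajectory, and that localization is what the image-based alignment and texturing stages build on.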

The researchers have generated 3-D models of two stories of the electrical engineering building at UC Berkeley, including the stairway, and say this modeling effort could be a good complement to Google Maps, which photographs and models the exteriors of buildings. Such models are important in a variety of civilian and military applications, including gaming, training and simulation, counterterrorism, virtual heritage conservation, and mapping of hazardous sites. The research leading to the development of a reconnoitering backpack was funded by the Air Force Office of Scientific Research (AFOSR) and the Army Research Office.

There have been many basic research issues to address in developing a working system, including calibration, sensor registration, and localization. Using multiple sensors facilitates the modeling process, although the data from the various sensors need to be registered and precisely fused with each other to produce coherent, aligned, and textured 3-D models. Localization has been a particular challenge: without it, the scans from the laser scanners cannot be lined up to build the 3-D point cloud, which is the first step in the modeling process.
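Once the backpack has been localized, assembling the point cloud amounts to transforming each scan from the sensor frame into a common global frame using its estimated pose, which is why localization must come first. The snippet below is a minimal sketch under the assumption of 2-D scans and planar (x, y, heading) poses; the real system deals with full 3-D geometry.

```python
import numpy as np

# Sketch: fuse localized 2-D laser scans into a single global point cloud.
# Each scan must come with an estimated pose from the localization step;
# without those poses the scans cannot be lined up.
def scans_to_point_cloud(scans, poses):
    """scans: list of (N_i, 2) arrays in the sensor frame.
    poses: list of (x, y, theta) sensor poses in the global frame."""
    cloud = []
    for points, (x, y, theta) in zip(scans, poses):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])       # rotate into the global frame
        cloud.append(points @ R.T + np.array([x, y]))
    return np.vstack(cloud)
```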

Program manager Jon Sjogren of the AFOSR noted that the next step for others is to examine the approach taken and extend the techniques to a wider context. "We are gratified to see how technology can drive science in a domain of critical relevance to practical defense implementations," he said.

SOURCE: Video and Image Processing Lab at UC Berkeley

Posted by Conard Holton
Vision Systems Design
