Engineers at Microsoft Research (Cambridge, UK) have created a system that takes live depth data from a moving Kinect camera and produces geometrically accurate 3-D models.
The system, called KinectFusion, lets a user holding a Kinect camera move freely within any indoor space and build a fused 3-D model of the whole room and its contents within seconds.
Small camera movements due to hand shake provide new views of the scene and refine the 3-D model, an effect similar to image super-resolution, in which a high-resolution image is created by fusing information from multiple low-resolution images.
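The article does not detail how the fusion works, but the core idea of combining many noisy depth observations of the same surface can be sketched as a weighted running average per pixel or voxel (the function name, weight cap, and noise figures below are illustrative assumptions, not the published method):

```python
import numpy as np

def fuse_depth(prev_depth, prev_weight, new_depth, new_weight=1.0, max_weight=64.0):
    """Weighted running average of per-pixel (or per-voxel) depth samples.

    Repeated noisy observations pull the estimate toward the true surface
    depth, much as super-resolution fuses several low-resolution images.
    Capping the accumulated weight keeps the model able to adapt to change.
    """
    w = prev_weight + new_weight
    fused = (prev_depth * prev_weight + new_depth * new_weight) / w
    return fused, np.minimum(w, max_weight)

# Simulate 100 noisy readings of a surface 2.0 m away (sigma = 2 cm).
rng = np.random.default_rng(0)
depth, weight = 0.0, 0.0
for _ in range(100):
    depth, weight = fuse_depth(depth, weight, 2.0 + rng.normal(0.0, 0.02))
```

After a hundred samples the fused estimate sits well within the noise level of any single reading, which is why shaky, repeated views of the same surface sharpen the model rather than blur it.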
As the camera moves closer to objects in the scene, more detail is added to the acquired 3-D model. To achieve this, the system continually tracks the six degrees of freedom of the camera, that is, its 3-D position and orientation.
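The article does not say how the camera pose is estimated, but a standard building block for this kind of six-degree-of-freedom tracking is the least-squares rigid alignment of two point sets (the Kabsch/Umeyama solution, shown here as a hedged sketch rather than the system's actual tracker):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid motion (R, t) such that dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3-D points.
    Solved in closed form via SVD (Kabsch/Umeyama).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Check: recover a known 30-degree yaw plus a small translation.
rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 3))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.1, -0.2, 0.3])
R_est, t_est = estimate_rigid_transform(pts, pts @ R_true.T + t_true)
```

Trackers of this family alternate between matching points across frames and solving this alignment step, updating the camera's rotation and translation every frame.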
The researchers say they have developed novel GPU-based software for both camera tracking and surface reconstruction that allows the system to run at interactive real-time rates.
-- Posted by Vision Systems Design