Robotics

MIT robot navigates using Microsoft’s Kinect

Feb. 22, 2012
2 min read

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL; Cambridge, MA, USA) have developed a robot that uses Microsoft’s Kinect to navigate through its surroundings.

While a large amount of research has been devoted to developing one-off maps that robots can use to navigate around an area, such systems cannot adjust to changes in the surroundings over time.

The MIT approach, based on a technique called Simultaneous Localization and Mapping (SLAM), will allow robots to constantly update a map as they learn new information over time.

The MIT team had previously tested the approach on robots equipped with expensive laser scanners, but in a paper to be presented this May at the International Conference on Robotics and Automation (St. Paul, MN, USA), the researchers show how a robot can instead localize itself within such a map using a Kinect camera.

As the robot travels through an unexplored area, the Kinect sensor’s visible-light video camera and infrared depth sensor scan the surroundings, building up a 3-D model of the walls of the room and the objects within it. Then, when the robot passes through the same area again, the system compares the features of the new image it has created, such as the edges of walls, with all the previous images it has taken until it finds a match.
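
Conceptually, that matching step compares a compact description of the current view against descriptions stored for previously visited places. The Python sketch below illustrates the idea with a crude depth-gradient histogram standing in for the wall-edge features; the descriptor, the distance threshold, and the function names are illustrative assumptions, not the features the MIT system actually extracts.

import numpy as np

def edge_descriptor(depth_image, bins=32):
    # Summarize a depth image by the distribution of its depth gradients,
    # a rough stand-in for the wall edges and object boundaries mentioned above.
    gy, gx = np.gradient(depth_image.astype(float))
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(magnitude, bins=bins,
                           range=(0.0, float(magnitude.max()) + 1e-6))
    return hist / (hist.sum() + 1e-9)

def find_revisited_place(current_desc, stored_descs, threshold=0.1):
    # Compare the current descriptor with every previously stored one and return
    # the index of the closest match, or None if nothing is similar enough.
    if not stored_descs:
        return None
    distances = [np.linalg.norm(current_desc - d) for d in stored_descs]
    best = int(np.argmin(distances))
    return best if distances[best] < threshold else None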

At the same time, the system constantly estimates the robot’s motion using on-board sensors that measure how far its wheels have rotated. By combining this motion data with the visual information, it can determine where within the building the robot is positioned. Drawing on both sources also eliminates errors that would creep in if the system relied on the robot’s on-board sensors alone.
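
A very simple way to picture that combination (not the estimator used in the MIT work, which the article does not detail) is a weighted blend of the pose predicted from wheel rotation and the pose recovered from the visual match, with the weight standing in for how much each source is trusted:

import numpy as np

def fuse_pose(odometry_pose, visual_pose, visual_weight=0.7):
    # Blend a wheel-odometry estimate (x, y, heading) with the pose implied by
    # matching the current view to the map. The fixed weight is illustrative;
    # a real estimator would weight each source by its estimated uncertainty.
    odometry_pose = np.asarray(odometry_pose, dtype=float)
    visual_pose = np.asarray(visual_pose, dtype=float)
    return (1.0 - visual_weight) * odometry_pose + visual_weight * visual_pose

# Example: wheel odometry has drifted about 0.3 m; the visual match pulls it back.
print(fuse_pose([2.3, 1.0, 0.05], [2.0, 1.0, 0.0]))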

Once the system is certain of its location, any new features that have appeared since the previous picture was taken can be incorporated into the map by combining the old and new images of the scene.
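
Under the simplifying assumption that the map is just a growing set of 3-D points (the real system maintains richer structure), incorporating new features can be sketched as adding only those points the existing map does not already contain:

import numpy as np

def merge_scan_into_map(map_points, scan_points, min_separation=0.05):
    # Append points from the new scan that lie farther than `min_separation`
    # metres from every point already in the map; points the map already
    # explains are discarded so the map does not grow without bound.
    if map_points.size == 0:
        return scan_points.copy()
    new_points = [p for p in scan_points
                  if np.min(np.linalg.norm(map_points - p, axis=1)) > min_separation]
    return np.vstack([map_points] + new_points) if new_points else map_points

# Example: two points already mapped, one genuinely new point in the scan.
room = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
scan = np.array([[1.0, 0.0, 0.01], [2.0, 0.0, 0.0]])
print(merge_scan_into_map(room, scan))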

-- by Dave Wilson, Senior Editor, Vision Systems Design
