Kinect system tracks household objects
Two researchers from the Department of Computer Science at the University of Virginia (Charlottesville, VA, USA) have developed algorithms that can be used with Microsoft Kinect hardware to locate and track household objects.
Tracking algorithms are computationally intensive, and as the number of objects in a scene grows, the per-frame computation time approaches the frame interval, making it infeasible to track many objects at once.
Shahriar Nirjon and John Stankovic's Kinsight system, however, assumes that objects change location only as a result of human action. By tracking the individuals in a scene and detecting and recognizing objects from the way those individuals interact with them, the system keeps its computational burden low.
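The article gives no implementation details, but the core assumption can be illustrated with a minimal Python sketch. Every name below (the `InteractionEvent` type, the `update_on_interaction` helper, the coordinates) is hypothetical rather than taken from Kinsight; the point is simply that object state is updated on human interaction events instead of being tracked frame by frame.

```python
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    """A detected hand-object contact: what was touched and where."""
    object_id: str
    position: tuple  # (x, y, z) in metres, in camera coordinates (assumed)

# Last known location of each object. Because objects are assumed to move
# only when a person moves them, nothing needs to be tracked between
# interactions, so per-frame cost scales with people, not with objects.
object_locations: dict = {}

def update_on_interaction(event: InteractionEvent) -> None:
    """A single interaction event fully determines an object's new location."""
    object_locations[event.object_id] = event.position

# Toy usage: a mug is picked up at the counter and set down by the sink.
update_on_interaction(InteractionEvent("mug", (0.4, 0.9, 1.2)))
update_on_interaction(InteractionEvent("mug", (0.1, 0.9, 2.0)))
print(object_locations)  # {'mug': (0.1, 0.9, 2.0)}
```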
To help further, the system also assumes that the objects a person interacts with during a specific activity, such as cooking, will be confined to a limited time window. It also exploits the fact that objects are likely to remain in a small number of locations.
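Purely as an illustration, and with the window length, the grid of location cells, and all helper names being assumptions rather than Kinsight's actual design, the two locality heuristics might look something like this:

```python
import time
from collections import Counter, deque

WINDOW_SECONDS = 15 * 60  # assumed length of one activity, e.g. a cooking session

recent: deque = deque()      # (timestamp, object_id) log of interactions
location_history: dict = {}  # object_id -> Counter of coarse location cells

def note_interaction(object_id: str, cell: tuple, now: float) -> None:
    """Record an interaction for both the temporal and spatial heuristics."""
    recent.append((now, object_id))
    location_history.setdefault(object_id, Counter())[cell] += 1

def active_objects(now: float) -> set:
    """Temporal locality: only objects touched within the current activity
    window need to be considered, keeping the candidate set small."""
    while recent and now - recent[0][0] > WINDOW_SECONDS:
        recent.popleft()
    return {obj for _, obj in recent}

def likely_cells(object_id: str, k: int = 3) -> list:
    """Spatial locality: an object is most likely in one of the few places
    it has historically been left."""
    return [c for c, _ in location_history.get(object_id, Counter()).most_common(k)]

# Toy usage:
t = time.time()
note_interaction("kettle", cell=(2, 1), now=t)
note_interaction("kettle", cell=(2, 1), now=t + 60)
note_interaction("mug", cell=(0, 3), now=t + 120)
print(active_objects(t + 180))  # {'kettle', 'mug'}
print(likely_cells("kettle"))   # [(2, 1)]
```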
The system partitions its processing tasks to make the best use of the available computational resources. Tracking individuals and detecting how they interact with objects are performed in real time, while image processing, classification, and data mining are deferred until the system is idle.
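A minimal sketch of that split, again with hypothetical names: a producer-consumer queue hands expensive work from the real-time capture path to a worker thread that drains it later.

```python
import queue
import threading
import time

deferred: queue.Queue = queue.Queue()

def classify(patch) -> str:
    """Placeholder for the heavyweight work (image processing,
    classification, data mining) that gets deferred."""
    time.sleep(0.05)  # simulate an expensive computation
    return f"label-for-{patch}"

def capture_loop(frames) -> None:
    """Real-time path: only cheap person tracking and interaction
    detection would run here; expensive work is merely enqueued."""
    for frame in frames:
        deferred.put(frame)

def idle_worker() -> None:
    """Idle path: drains the backlog of deferred work."""
    while True:
        patch = deferred.get()
        print(classify(patch))
        deferred.task_done()

threading.Thread(target=idle_worker, daemon=True).start()
capture_loop(["patch-1", "patch-2"])
deferred.join()  # block until all deferred work has completed
```

In the real system the deferred work would presumably run only when no activity is detected in the scene; the always-on worker thread above simply keeps the sketch short.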
The researchers have conducted a number of experiments to characterize the performance of their Kinect-based system. So far they have used only one Kinect sensor, but they believe that using several Kinect-based systems might be beneficial in environments where parts of a scene are subject to occlusion.
A technical article detailing the system's design, the algorithms that were developed, and the results of the experiments can be found here.
-- by Dave Wilson, Senior Editor, Vision Systems Design