Robotics

Security software automates image analysis tasks

A team of US researchers has developed a system that could enhance the security of airports by automating the analysis of video feeds from security cameras.
June 6, 2012

Security teams guarding airports, docks and border crossings from terrorist attack or illegal entry need to know immediately when someone enters a prohibited area, and who they are.

A network of surveillance cameras is typically used to monitor such locations 24 hours a day, but these can generate too many images for human eyes to analyze. Although computer vision systems have been developed to automate these tasks, they can be fairly slow.

Now a system being developed by Christopher Amato, a postdoc at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), and colleagues Komal Kapoor, Nisheeth Srivastava and Paul Schrater at the University of Minnesota uses artificial intelligence to perform image analysis more accurately than a human operator, and in a fraction of the time.

For camera-based surveillance systems, operators typically have a range of computer vision algorithms they could use to analyze the video feed. These include skin-hue detection, face detection, and background detection algorithms that flag unusual objects or anything moving through a scene.

To decide which of these algorithms to use in a given situation, Amato's system first carries out a learning phase in which it assesses how each piece of software works in the type of setting in which it is being applied, such as an airport. To do this, it runs each of the algorithms on the scene to determine how long each takes to perform an analysis and how certain it is of the answer it comes up with. Then, for any given situation, the system decides which of the available algorithms to run on the image, and in which sequence, to extract the most information in the least amount of time.
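
As a rough illustration of that learning phase, the sketch below profiles each candidate vision algorithm on sample frames from the target setting and ranks the algorithms by how much confidence they deliver per second of processing time. The algorithm interface and the profile() and rank_algorithms() helpers are assumptions made for illustration, not the researchers' code.

```python
import time
import statistics

def profile(algorithm, sample_frames):
    """Time one vision algorithm over sample frames from the target setting
    and record how confident it is in its answers."""
    times, confidences = [], []
    for frame in sample_frames:
        start = time.perf_counter()
        label, confidence = algorithm(frame)  # assumed to return (label, confidence)
        times.append(time.perf_counter() - start)
        confidences.append(confidence)
    return statistics.mean(times), statistics.mean(confidences)

def rank_algorithms(algorithms, sample_frames):
    """Order algorithms (a dict of name -> callable) by a rough measure of
    information gained per second: mean confidence divided by mean runtime."""
    profiles = {name: profile(fn, sample_frames) for name, fn in algorithms.items()}
    return sorted(profiles, key=lambda n: profiles[n][1] / profiles[n][0], reverse=True)
```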

In practice, the process has been automated using what are known as Partially Observable Markov Decision Process (POMDP) models, which optimize decision-making while taking into account the uncertainty about how each computer vision algorithm performs in different situations.
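
The core of a POMDP is a belief, a probability distribution over hidden states that is updated each time an action is taken and an observation received. The sketch below shows that machinery in generic form; it is not the authors' implementation, and the transition, observation and reward tables, as well as the one-step greedy policy, are placeholders standing in for a full POMDP solver.

```python
import numpy as np

class SurveillancePOMDP:
    """Toy discrete POMDP: the probability tables would come from the
    learning phase; here they are placeholders for the belief-update step."""
    def __init__(self, transition, observation, reward):
        self.T = transition    # T[a][s, s'] = P(s' | s, a)
        self.O = observation   # O[a][s', o] = P(o | s', a)
        self.R = reward        # R[a][s]     = expected reward of action a in state s

    def update_belief(self, belief, action, obs):
        """Bayes update of the belief over hidden states after running one
        vision algorithm (the action) and seeing its output (the observation)."""
        predicted = belief @ self.T[action]            # sum_s b(s) P(s' | s, a)
        weighted = predicted * self.O[action][:, obs]  # times P(obs | s', a)
        return weighted / weighted.sum()

    def greedy_action(self, belief):
        """One-step lookahead: pick the action with the highest expected
        reward under the current belief (a stand-in for a full solver)."""
        return max(self.R, key=lambda a: float(belief @ self.R[a]))
```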

The system can also take context into account when analyzing a set of images. If the system is being used at an airport, it could be programmed to identify and track particular people of interest, and to recognize objects that are strange or in unusual locations. It could also be programmed to sound an alarm whenever any objects or people appear in the scene, when there are too many objects, or when objects are moving in ways that give cause for concern.
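
In code, such context-dependent triggers might reduce to simple rules over whatever the vision algorithms report. The rule below is purely illustrative; the detection format, field names and thresholds are assumptions, not part of the researchers' system.

```python
def should_alarm(detections, restricted=True, max_objects=10, speed_limit=5.0):
    """Alarm if anything appears in a restricted scene, if the scene is
    overcrowded, or if any tracked object is moving suspiciously fast.
    Each detection is assumed to be a dict such as {"label": "person", "speed": 1.2}."""
    if restricted and detections:
        return True
    if len(detections) > max_objects:
        return True
    return any(d.get("speed", 0.0) > speed_limit for d in detections)
```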

In addition to airport security applications, the system could monitor video gathered by a fleet of unmanned aircraft. It could also be used to analyze data from weather-monitoring sensors to determine where tornadoes are likely to appear, or information from water samples taken by autonomous underwater vehicles, Amato says. The system would determine how to obtain the information it needs in the least amount of time and with the fewest possible sensors.

Amato and his colleagues will present their system in a paper entitled "Using POMDPs to Control an Accuracy-Processing Time Trade-off in Video Surveillance" at the 24th Conference on Innovative Applications of Artificial Intelligence (IAAI-12) in Toronto in July.


-- by Dave Wilson, Senior Editor, Vision Systems Design
