Neural networks optimize traffic-flow efficiency and safety
Transportation planners and engineers need more-powerful traffic-monitoring tools to manage traffic and improve roadway utilization. Accurate data on vehicle volumes, speeds, and other traffic measures are required to better understand patterns of traffic flow and to design roadway and traffic-control strategies that reduce congestion, accidents, and travel time. Real-time collection and transmission of these data can enable automated traffic-incident-detection systems that reduce response time to traffic incidents, improve management of emergency resources, and quickly restore traffic flow.
To date, most traffic monitoring has been done with in-ground loops whose installation requires cutting roadway surfaces and closing lanes, disrupting traffic. The limited traffic information they provide is inadequate to support the more-sophisticated traffic monitoring and control envisioned by transportation planners and required for all but the simplest automated enforcement systems. An alternative to loops is a video-based traffic-monitoring system that involves mounting cameras along roadways to detect vehicles. The video vehicle-detection technology used to date has basically emulated traffic loops, using simple motion-detection algorithms to detect vehicle presence. Such algorithms do not provide continuous tracking of vehicles, are easily fooled by shadows, and miss vehicles that are traveling close together.
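The loop-emulation approach amounts to little more than watching a fixed patch of roadway for pixel change. The sketch below, assuming grayscale frames held as NumPy arrays, shows a hypothetical "virtual loop" of this kind; the zone coordinates and thresholds are illustrative, and nothing in it distinguishes a vehicle from a moving shadow, which is precisely the weakness described above.

```python
# Minimal sketch of loop-emulating video detection: a fixed "virtual loop"
# zone is flagged as occupied when enough of its pixels change between
# frames. Zone coordinates and thresholds are illustrative values.
import numpy as np

def loop_presence(prev_frame: np.ndarray,
                  curr_frame: np.ndarray,
                  zone: tuple,                 # (y0, y1, x0, x1) in pixels
                  diff_threshold: float = 25.0,
                  min_changed_fraction: float = 0.2) -> bool:
    """Return True if the virtual-loop zone appears occupied."""
    y0, y1, x0, x1 = zone
    # Absolute frame-to-frame difference, restricted to the detection zone.
    diff = np.abs(curr_frame[y0:y1, x0:x1].astype(float) -
                  prev_frame[y0:y1, x0:x1].astype(float))
    # Fraction of zone pixels whose intensity changed noticeably; any cause
    # of change (vehicle, shadow, headlights) trips the detector equally.
    changed_fraction = (diff > diff_threshold).mean()
    return changed_fraction > min_changed_fraction
```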
To overcome these limitations, Nestor Traffic Systems (Providence, RI) has developed a neural-network-based video traffic-monitoring technology and applied it to two products. TrafficVision provides real-time video-based traffic monitoring and automated incident detection. CrossingGuard is a video-based automated enforcement system that records red-light violations and provides a collision-avoidance feature to help reduce the risk of crashes at intersections.
TrafficVision and CrossingGuard process video images from standard, off-the-shelf NTSC cameras such as those provided by Sony Electronics (Montvale, NJ). Video images are captured and digitized by Nestor's own PCI-based image processor. In operation, images are preprocessed using frame subtraction and other operations to localize regions of interest within the image. Once these regions have been determined, their image data are passed to the host CPU, where features such as image-intensity profiles and edges are extracted. These features are then transferred over the PCI bus to the Nestor PCI4000 recognition accelerator, a PCI add-in card that uses up to four Ni1000 neural-network processors, jointly developed by Nestor and Intel (Sunnyvale, CA). Capable of performing 12.4 billion operations/s, the PCI4000 can classify tens of thousands of patterns per second in the video data.
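As a rough illustration of the front end of such a pipeline, the sketch below locates changed regions by frame subtraction and reduces each one to a fixed-length feature vector of intensity profiles and an edge measure. The thresholds, profile lengths, and feature choices are assumptions for the sketch, not Nestor's implementation, and the classification stage that the real system offloads to the PCI4000 is omitted.

```python
# Sketch of the described front end: frame subtraction localizes regions of
# interest, and each region is reduced to intensity-profile and edge features
# for downstream classification. Thresholds and feature sizes are assumptions.
import numpy as np
from scipy import ndimage

def find_regions(prev: np.ndarray, curr: np.ndarray, threshold: float = 30.0):
    """Return bounding slices of connected areas of frame-to-frame change."""
    mask = np.abs(curr.astype(float) - prev.astype(float)) > threshold
    labels, _ = ndimage.label(mask)
    return ndimage.find_objects(labels)

def resample(profile: np.ndarray, bins: int = 16) -> np.ndarray:
    """Resample a variable-length profile to a fixed-length feature vector."""
    return np.interp(np.linspace(0, 1, bins),
                     np.linspace(0, 1, len(profile)), profile)

def region_features(frame: np.ndarray, region, bins: int = 16) -> np.ndarray:
    """Intensity profiles plus a simple edge-strength measure for one region."""
    patch = frame[region].astype(float)
    row_profile = resample(patch.mean(axis=1), bins)   # vertical profile
    col_profile = resample(patch.mean(axis=0), bins)   # horizontal profile
    gy, gx = np.gradient(patch)                        # simple edge detector
    edge_strength = np.hypot(gy, gx).mean()
    return np.concatenate([row_profile, col_profile, [edge_strength]])
```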
Both systems use Nestor's neural-network technology to detect and track vehicles in video images based on characteristics of motion and shape in the image. Part of developing the products was training the neural network on real-world video data. Sequences of video images were presented to the system, and video regions of interest were classified using Nestor's PC-based neural-network technology. To track and identify vehicles effectively, the system is trained on images of cars, trucks, and other vehicles. Additionally, images of multiple clustered objects such as groups of cars, roadways, and distant vehicles are presented to the network so that it can avoid false-positive detections.
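The training setup below is purely illustrative of that idea: region feature vectors (such as the 33-element vectors sketched above) are labeled with both vehicle and non-vehicle classes so the classifier learns to reject roadway and clustered-vehicle regions. The data are synthetic, and a nearest-neighbor model stands in for Nestor's neural network.

```python
# Illustrative training setup: labeled region features include non-vehicle
# classes so the classifier can reject them. Synthetic data; a nearest-
# neighbor model stands in for Nestor's neural network.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

CLASSES = ["car", "truck", "roadway", "vehicle_cluster", "distant_vehicle"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 33))                  # 16 + 16 profile bins + edge
y_train = rng.integers(0, len(CLASSES), size=500)     # hypothetical labels

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def classify_region(features: np.ndarray) -> str:
    """Map a region's feature vector to one of the trained class labels."""
    return CLASSES[model.predict(features.reshape(1, -1))[0]]
```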
Using a probabilistic version of the Restricted Coulomb Energy neural-network model originally developed by researchers at Nestor, including Leon Cooper and Charles Elbaum of Brown University (Providence, RI) and Douglas Reilly, the neural network classifies objects within the images as cars, trucks, shadows, or roadways. Objects identified as roadways, or objects that the system fails to recognize, are not tracked further. Objects that are tracked but contain insufficient data are tracked further and reclassified. This is done by determining the trajectory of the tracked object through a number of video frames and predicting the most likely position of the object in the upcoming frame. To classify the object again, the CPU and neural network reprocess video data of these objects.
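For readers unfamiliar with the model, the sketch below renders a basic (non-probabilistic) Restricted Coulomb Energy classifier: prototype cells with influence radii are committed during training, radii shrink when a cell wrongly covers a pattern of another class, and an input is classified only when exactly one class covers it. This is a textbook rendering under assumed parameters, not Nestor's probabilistic implementation; an ambiguous or unrecognized result (None) corresponds to the objects the article says are either dropped or tracked further and reclassified.

```python
# Textbook-style Restricted Coulomb Energy (RCE) classifier sketch.
# Prototype "cells" carry an influence radius; training commits new cells
# and shrinks wrong-class radii. Parameters are assumptions.
import numpy as np

class RCEClassifier:
    def __init__(self, initial_radius: float = 5.0):
        self.r0 = initial_radius
        self.protos, self.radii, self.labels = [], [], []

    def train(self, X, y, epochs: int = 3):
        for _ in range(epochs):
            for x, label in zip(X, y):
                x = np.asarray(x, dtype=float)
                # Shrink any wrong-class cell whose field covers this pattern.
                for i, (p, lab) in enumerate(zip(self.protos, self.labels)):
                    d = float(np.linalg.norm(x - p))
                    if lab != label and d < self.radii[i]:
                        self.radii[i] = d
                # Commit a new cell if no same-class cell covers the pattern.
                covered = any(lab == label and np.linalg.norm(x - p) < r
                              for p, r, lab in zip(self.protos, self.radii,
                                                   self.labels))
                if not covered:
                    self.protos.append(x)
                    self.radii.append(self.r0)
                    self.labels.append(label)

    def classify(self, x):
        """Return a label if exactly one class covers x, else None."""
        x = np.asarray(x, dtype=float)
        hits = {lab for p, r, lab in zip(self.protos, self.radii, self.labels)
                if np.linalg.norm(x - p) < r}
        return hits.pop() if len(hits) == 1 else None
```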
TrafficVision detects and tracks vehicles from a user-defined "entry zone" in the camera field of view until they leave through an "exit zone." The system can report a range of traffic data in real time, including vehicle counts in each lane, vehicle classifications, average vehicle speeds, average spatial distance between vehicles, and lane occupancy. A graphical user interface simplifies setup and lets users define alarms based on one or more traffic-measurement criteria, which can be used to automatically detect traffic incidents.
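The sketch below shows one hypothetical way such per-lane measures and a threshold-based incident alarm might be rolled up from tracked vehicles; the field names, units, and alarm rule are assumptions, not TrafficVision's interface.

```python
# Hypothetical roll-up of per-lane traffic measures from tracked vehicles,
# plus a simple user-defined incident alarm. Names and units are assumptions.
from dataclasses import dataclass

@dataclass
class TrackedVehicle:
    lane: int
    speed_mph: float
    gap_ft: float          # spatial distance to the vehicle ahead
    vehicle_class: str     # e.g. "car" or "truck"

def lane_summary(vehicles: list, lane: int) -> dict:
    """Count, average speed, average gap, and truck count for one lane."""
    in_lane = [v for v in vehicles if v.lane == lane]
    if not in_lane:
        return {"lane": lane, "count": 0}
    return {
        "lane": lane,
        "count": len(in_lane),
        "avg_speed_mph": sum(v.speed_mph for v in in_lane) / len(in_lane),
        "avg_gap_ft": sum(v.gap_ft for v in in_lane) / len(in_lane),
        "trucks": sum(v.vehicle_class == "truck" for v in in_lane),
    }

def congestion_alarm(summary: dict, speed_floor: float = 20.0,
                     min_count: int = 5) -> bool:
    """Example alarm criterion: many slow vehicles occupying one lane."""
    return (summary.get("count", 0) >= min_count
            and summary.get("avg_speed_mph", float("inf")) < speed_floor)
```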
In operation, the CrossingGuard system can generate an accurate velocity/acceleration profile of vehicles as they approach an intersection. With this information, the system can identify vehicles that are at risk of entering the intersection after the red-light phase has begun. The system uses this information to generate a violation-alert signal that a traffic-controller system can use to briefly extend the red phase for cross traffic, reducing the chance of a collision with the red-light-running vehicle. In addition, the system can log the vehicle responsible for the violation.
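A simple kinematic check captures the idea of predicting a violation from such a profile: compare the vehicle's stopping distance at an assumed maximum comfortable braking rate with its remaining distance to the stop line. The braking limit and decision rule below are assumptions for the sketch, not CrossingGuard's actual logic.

```python
# Kinematic sketch of violation prediction: can the vehicle stop before the
# stop line at an assumed comfortable braking rate? Values are assumptions.

def will_run_red(distance_ft: float, speed_fps: float, accel_fps2: float,
                 max_brake_fps2: float = 11.2) -> bool:
    """Return True if the vehicle is predicted to enter the intersection on red."""
    if speed_fps <= 0:
        return False
    # Distance needed to stop: v^2 / (2 * a_brake).
    stopping_distance = speed_fps ** 2 / (2.0 * max_brake_fps2)
    # A vehicle still accelerating toward the line gets a tighter margin.
    margin = 0.9 if accel_fps2 > 0 else 1.0
    return stopping_distance > distance_ft * margin

# Example: 60 ft from the stop bar at 44 ft/s (about 30 mph), still accelerating.
if will_run_red(60.0, 44.0, 2.0):
    print("raise violation alert: extend all-red phase for cross traffic")
```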