Neural networks help identify license plates for traffic control

Feb. 14, 2018
Combining off-the-shelf cameras with a PC running neural network software allows Singapore authorities to perform traffic monitoring and enforcement in an intelligent transportation system application.

Andrew Wilson, European Editor

Automatic number plate recognition (ANPR) or license plate recognition (LPR) is a challenging task to perform in real-time. This is due to a number of reasons including the different types of license plates that need to be recognized, the varying lighting conditions encountered, and the need to capture fast-moving objects at night with high-enough contrast.

"Typically," says Richard Goh, Founder of Optasia Systems (Tembeling, Singapore; www.optasia.com.sg), "low-cost image sensors are used in IP cameras, resulting in long exposure times and often blurred images. Such systems also tend to encompass standards such as H.264/5 encoding that uses a hybrid of motion-compensated inter-picture prediction and spatial transform coding using the discrete cosine transform (DCT). Because this technique is not lossless, it results in compression artifacts appearing in transmitted images which makes it more difficult to interpret the alphanumeric characters associated with license plates."

Figure 1: Optasia Systems ANPR systems include the IMPS ANPR Model AIO that employs a GigE camera interfaced to an off-the-shelf embedded computer housed in a single unit.

To overcome this, Optasia Systems offers a number of ANPR systems based on high-resolution GigE cameras and neural networks that do not rely on lossy digital compression. These include the Integrated Multi-Pass System (IMPS) ANPR Model AIO, which incorporates a GigE camera interfaced to an off-the-shelf embedded computer housed in a single unit (Figure 1), and the company's IMPS AOI-M3 system, which adds visible or near-infrared (NIR) 850nm LED-based illumination. Both units feature an acA1920-40gm camera from Basler (Ahrensburg, Germany; www.baslerweb.com) interfaced to an embedded host PC over a GigE interface, although other cameras that employ Sony's IMX249 Pregius CMOS image sensor can be used.

While the Basler camera is capable of capturing images as fast as 42 fps at full resolution, the Optasia systems strobe the illumination for 1/1000s every 50ms, freezing motion so that images are captured at approximately 40 fps with no motion blur. Furthermore, since these images are not compressed, they retain enough contrast and clarity to extract alphanumeric license plate information.

"Cameras, such as the Basler acA 1920-40gm camera use the IMX249 Pregius CMOS image sensor from Sony Semiconductor Solutions (Tokyo, Japan; www.sony-semicon.co.jp)," says Goh, "feature a 1/1.2in CMOS area sensor, a pixel size of 5.86μm and an array of 1936 x 1216 pixels. With a relatively-high quantum efficiency (QE) of 65% at 530 nm and a good QE in the near-infrared range of approximately 850nm, they have high dynamic range, are very tolerant of exposure variations and are especially suited for low-light applications such as ANPR."

Recognizing characters

"One of the major design challenges in developing these systems," says Goh, "was that they needed to be capable of recognizing alphanumeric characters on both retro-reflective and non-retro-reflective license plates. This needs to be accomplished either while the system is mounted to a moving vehicle to detect, for example, unauthorized cars in parking lots or while vehicles are moving past fixed-positioned systems such as those found at tunnel entrances or airport parking lots."

Figure 2: (a) An image taken in bright daylight may appear low in contrast, so smart predictive exposure bracketing is used to extract the salient features of the license plate. (b) Images taken at night can be enhanced by illuminating the subject with NIR illumination at a wavelength of 850nm. In this case, the white-colored alphanumeric data on the license plate appears dark, making the process of feature extraction easier.

Retro-reflective license plates return a large portion of the system's illumination to the camera; most are also highly reflective to NIR and, when strobed at 850nm, produce a high-contrast image. Under similar lighting conditions, non-retro-reflective license plates return less visible light to the camera, although under NIR illumination they may still return images of high enough contrast. While systems such as the IMPS AOI-M3, with its camera-synchronized NIR illuminator, can capture non-reflective and reflective plates at distances of 35m and 50m respectively, using the same camera exposure time for both types of plate results in either the reflective plates being properly exposed while the non-reflective plates are under-exposed, or vice versa.

One way to overcome this is high dynamic range (HDR) imaging. In this method, the same scene is captured at several exposure times: regions that appear bright are captured with a short (fast) exposure and those that appear dark with a longer (slow) exposure. Algorithms such as the MergeMertens exposure-fusion function of the Open Source Computer Vision Library (OpenCV) can then be used to merge the exposure sequence into a single image (see "High Dynamic Range (HDR)"; http://bit.ly/VSD-OPCV). Indeed, this is the method used by Reflex Technologies (Burbank, CA, USA; www.reflextechnologies.com) in the design of its film scanner (see "Film scanning: Preserving the past for posterity," by Luc Nocente, President of Norpix (Montreal, Quebec, Canada; www.norpix.com) at http://bit.ly/VSD-RFLX).
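For reference, the fusion step can be sketched in a few lines of Python using OpenCV's Mertens exposure-fusion operator; the file names below are placeholders and this is only an illustration, not code from either company.

```python
# Minimal sketch of exposure fusion with OpenCV's MergeMertens operator.
# Three bracketed captures of the same scene are merged into one image
# that retains detail in both bright and dark regions.
import cv2

# Placeholder file names for the under-, normally- and over-exposed frames.
frames = [cv2.imread(f) for f in ("ev_minus1.png", "ev_0.png", "ev_plus1.png")]

merge = cv2.createMergeMertens()      # exposure-fusion operator
fused = merge.process(frames)         # float32 result scaled to roughly 0..1

# Convert back to 8-bit for display or further processing.
cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```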

Exposure bracketing

"While this can be accomplished at approximately 2-3 fps on such systems, the data rate and processing power required to perform such merging algorithms at rates as fast as 20 fps would be prohibitively expensive for ANPR systems," says Goh. "Also, because automobiles within the camera's field of view are moving so quickly, performing such HDR fusion would require a very high-speed (and expensive) camera. Instead, both the IMPS ANPR Model AIO and AIO-M3 systems use a photographic technique known as smart predictive exposure bracketing to highlight the plates hidden in dark shadows or bright sunlight.

In this method, explains David Peterson in his article "What is Exposure Bracketing?" on the Digital Photo Secrets website (www.digital-photo-secrets.com; http://bit.ly/VSD-EXBR), exposure bracketing is the process of taking one image at an exposure value higher and another at an exposure value lower than the one the camera is set for. Each capture then consists of three frames, each of which can be individually processed to extract relevant data.

In the system designed by Optasia Systems, the gain and the shutter speed of the camera are adjusted in a cyclic, sequential manner to attain these three images in real-time. A series of images may then consist of a standard exposed image at 1/1000s (an exposure value, or eV, of eV 0), followed by an image at eV+1, or 1/500s, and one at eV-1, or 1/2000s. "This," says Goh, "can be accomplished since the Basler acA1920-40gm camera can change its exposure on a frame-by-frame basis."
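The bracketing schedule described above can be sketched as follows; the camera object and its set_exposure_us()/grab_frame() methods are hypothetical stand-ins for a GigE camera interface, not Basler's actual API.

```python
# Sketch of cycling the exposure frame-by-frame for predictive bracketing.
# The camera wrapper (set_exposure_us, grab_frame) is hypothetical.
from itertools import cycle

# Exposure times from the article: eV 0 = 1/1000s, eV+1 = 1/500s, eV-1 = 1/2000s
EXPOSURES_US = {"eV0": 1000, "eV+1": 2000, "eV-1": 500}

def bracketed_frames(camera):
    """Yield (label, frame) pairs, stepping the exposure before each grab."""
    for label in cycle(("eV0", "eV+1", "eV-1")):
        camera.set_exposure_us(EXPOSURES_US[label])   # new exposure for this frame
        yield label, camera.grab_frame()
```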

Before each of these three images is processed by the host computer, the data within each frame is reduced. To accomplish this, a select region or regions of interest (ROI) are predefined. Where a system is mounted on a motorbike to inspect license plates in parking lots, for example, areas only 0.8m above ground level may be selected. For systems positioned on multi-lane highways, only selected regions within those lanes need to be examined. To reduce processing overhead further, the ROIs are processed on the host Intel Core i7 (4-core/8-thread)-based PC running Linux such that any spurious edge transitions due to, for example, automobile grills and railings are extracted and then disregarded.
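The ROI preselection step amounts to cropping each frame to a few predefined rectangles before any further processing; the coordinates in the sketch below are arbitrary examples, not values used by Optasia.

```python
# Sketch of reducing each frame to predefined regions of interest (ROIs)
# before plate detection and OCR. Coordinates are arbitrary examples.
import numpy as np

# One (x, y, width, height) rectangle per lane or mounting scenario.
ROIS = [(200, 700, 600, 200), (900, 700, 600, 200)]

def extract_rois(frame: np.ndarray):
    """Return cropped ROI sub-images of a full 1936 x 1216 frame."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in ROIS]
```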

Processing images

At the heart of the IMPS ANPR Model AIO and the company's IMPS AOI-M3 system is a proprietary multilayer perceptron (MLP). This software-based, feed-forward artificial neural network consists of a number of layers and uses a supervised learning technique called backpropagation for training, allowing it to distinguish data that is not linearly separable. Before ROIs within multiple images can be applied to the MLP, the neural network classifier is trained with thousands of images of individual alphanumeric characters from known, good license plates.
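Optasia's MLP itself is proprietary, but the general idea of training such a character classifier with backpropagation can be illustrated with scikit-learn; the data files and layer sizes below are assumptions for illustration only.

```python
# Illustrative MLP character classifier trained with backpropagation.
# This is a generic stand-in, not Optasia's proprietary engine.
import numpy as np
from sklearn.neural_network import MLPClassifier

# X: flattened, normalized character glyphs (e.g. 20 x 30 pixels -> 600 values)
# y: the corresponding characters ("A"-"Z", "0"-"9") from known, good plates
X = np.load("glyphs.npy")        # placeholder training data
y = np.load("labels.npy")        # placeholder labels

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)
clf.fit(X, y)                    # supervised training via backpropagation

# At run time, each segmented character image is flattened and classified:
# predicted = clf.predict(candidate_glyphs)
```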

Information from OCR

However, the system is more sophisticated than one that simply captures image data, processes it with a neural network to perform optical character recognition (OCR) and returns a result. Rather, the system is dynamic: the OCR/neural network engine feeds back pixel data such as the contrast, background value and foreground value of the pixels in the ROI containing the license plate. This data is then used to adjust the exposure value of the camera in real-time so that the contrast of subsequently captured images increases. Once individual alphanumeric characters are captured, they are compared to those in a database.
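A simple way to picture this feedback loop is a small exposure adjustment driven by the contrast measured on the plate ROI; the thresholds and the camera interface in the sketch below are illustrative assumptions rather than Optasia's actual control logic.

```python
# Sketch of exposure feedback driven by the OCR engine's plate measurements.
# Thresholds and the camera wrapper are assumptions for illustration only.
import numpy as np

def adjust_exposure(camera, plate_roi: np.ndarray, target_contrast=80, step_us=100):
    """Nudge the exposure for the next frames based on the plate ROI."""
    foreground = float(plate_roi.max())          # brightest plate pixels
    background = float(plate_roi.min())          # darkest plate pixels
    contrast = foreground - background
    if contrast >= target_contrast:
        return                                   # plate already readable
    if np.mean(plate_roi) < 128:
        camera.set_exposure_us(camera.exposure_us + step_us)  # too dark: longer exposure
    else:
        camera.set_exposure_us(camera.exposure_us - step_us)  # washed out: shorter exposure
```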

Figure 3: The system's graphical user interface (GUI) shows the results of license plate recognition. Green lines in the top left image represent the bounding boxes of the ROIs from which image data is extracted, while the red boxes show the location in which the car license plate appeared.

While an image taken in bright daylight may initially appear low in contrast (Figure 2a), by using smart predictive exposure bracketing in conjunction with feedback from the OCR engine, the salient features of the license plate can be extracted. Similarly, images taken at night (Figure 2b) can be enhanced by illuminating the subject with NIR illumination at a wavelength of 850nm. In this case, the white-colored alphanumeric data on the license plate appears dark, making the process of feature extraction easier.

More information

To provide government bodies with additional information about the automobiles in the scene, speed radar systems can be interfaced to the system over a TCP/IP network. Devices such as the IControl System from ATT Systems (Singapore; www.attsystems.com.sg), for example, can be used to track the speed of a vehicle, and this data can then be associated with a specific license plate number. A graphical user interface (GUI) displays the results of the OCR and the automobile license plates (Figure 3). Here, the green lines in the top left image represent the bounding boxes of the ROIs from which image data is extracted, while the red boxes show the location in which the car license plate appeared.

Smaller images (shown under the large image at top left) represent, from right to left: a scaled version of the full 1936 x 1216-pixel image captured by the camera, a 700 x 300-pixel image that is processed by the OCR/neural network, a cropped image of the license plate and an alphanumeric image generated by the system. "By time-stamping these images along with speed data from the ATT Systems radar system using the Unix time (epoch time) format, authorities are provided with the exact time and location at which any violations may have occurred."
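Conceptually, associating a plate read with a radar measurement is a matter of matching the nearest epoch timestamps from the two feeds; the record layout and tolerance in this sketch are assumptions, not ATT Systems' or Optasia's actual interface.

```python
# Sketch of pairing an OCR result with the nearest radar speed record by
# Unix epoch time. Record layout and tolerance are illustrative assumptions.
import time

def nearest_speed(plate_time_s, radar_records, tolerance_s=0.5):
    """radar_records: list of (epoch_seconds, speed_kmh) tuples from the radar feed."""
    best = min(radar_records, key=lambda rec: abs(rec[0] - plate_time_s), default=None)
    if best is not None and abs(best[0] - plate_time_s) <= tolerance_s:
        return best[1]
    return None

# Example: tag a plate read captured "now" with the matching speed reading.
# speed_kmh = nearest_speed(time.time(), radar_records)
```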

Costing approximately $10,000 per system, both the IMPS ANPR Model AIO and IMPS AOI-M3 systems can be networked to provide simultaneous coverage of numerous locations. At present, Optasia's systems are in operation at Singapore's Marina Coastal Expressway (MCE) Tunnel and Macau International Airport.

Companies mentioned

ATT Systems
Singapore
www.attsystems.com.sg

Basler
Ahrensburg, Germany
www.baslerweb.com

Norpix
Montreal, QC, Canada
www.norpix.com

Optasia Systems
Geylang, Singapore
www.optasia.com.sg

Reflex Technologies
Burbank, CA, USA
www.reflextechnologies.com

Sony Semiconductor Solutions
Tokyo, Japan
www.sony-semicon.co.jp

The Open Source Computer Vision Library (OpenCV)
https://opencv.org
