GPU Inference Analysis: Handling Automotive Big Data for AI-Powered Machine Vision

March 23, 2022

Automotive big data? It’s here, and it spans the five levels of autonomous driving, each offering progressively more automation and decision making by the vehicle itself. Taking in massive amounts of data from systems like high-resolution cameras, radar, lidar, GPS, ultrasonic sensors, and other sensor types, vehicles “see” and safely navigate their paths with either full automation or advanced driver-assistance systems (ADAS). Teaching a vehicle to see is a data-intensive process: massive amounts of data must be created and transmitted to use deep learning effectively. The data gathered are used to train deep learning or machine learning models that are then deployed on-vehicle as artificial intelligence (AI).

This branch of AI is fueled by a vast constellation of Internet of Things (IoT) devices and new data inputs. Here, rugged edge computers play a critical role: durable and high performance, they can be deployed close to the sources of data to alleviate latency and bandwidth restrictions. This hardware intervention preserves system responsiveness and enables deep learning at the edge. Developers can train their deep neural networks through repeated exposure to varied inputs, producing a cost-efficient deep learning model capable of learning independently and making accurate predictions from limited information.

Refining AI with Smarter Hardware Strategies

Running a trained deep neural network against new data to produce predictions is called machine learning inference. It is a critical process for in-vehicle computers responsible for safely operating a motor vehicle semi- or fully-autonomously. Robust computations run in parallel to process video data from roadways to determine road conditions, assess hazards, and anticipate maneuvers by other drivers. The low latency of these parallel computations allows vehicle control systems to predict future outcomes that facilitate passenger safety. Machine learning continually refines system operability and responsiveness, preparing it to negotiate situations beyond its specific training. Inbound data from Intelligent Transportation Systems (ITS) regarding weather, travel, and traffic conditions can be processed concurrently with GPS to optimize fuel consumption. And speech recognition allows voice-activated control of vehicle systems.

The AI algorithms used in these operations pose a growing challenge for system developers: a vast amount of real-time information must be gathered, stored, and managed effectively. AI edge inference computers are the hardened computing solutions developed for this task, able to withstand exposure to dirt and dust, shock and vibration, and extreme temperatures. These systems are specifically designed for in-vehicle deployment because they tolerate a variety of power input scenarios, including vehicle batteries. Outfitted with high-performance storage to house data from cameras and sensors, edge computing solutions also feature processing power beyond the CPU cores found in traditional systems. Their GPU acceleration effectively processes incoming sensor data and speeds up the inference analysis at the heart of the vehicle’s AI algorithms. It is a development process that demands a blended software and hardware strategy, tapping into both data and compute for more intelligent, more powerful AI.

Consider that autonomous driving is ranked by five levels, each with increased automation beyond level zero, where no automation is deployed and a human driver is fully in charge of the vehicle.

- Level one adds driver assistance features such as adaptive cruise control, which integrates sensors and cameras to maintain safe following distances. These options assist but do not automate driving.
- Level two is recognized as partial automation. Blended acceleration and steering assist that lets drivers take their feet off the pedals and hands off the wheel during long commutes is an example. Automation provides support, and the driver remains responsible for all driving tasks.
- Level three is where it gets interesting. The driver must be in the vehicle but is not required to monitor it at all times. This is known as conditional automation. For instance, a driver could command the vehicle to go to a specific destination and then browse a smartphone, but must be ready to take control should conditions warrant intervention.
- Level four is high automation, where the vehicle can perform all driving functions under certain conditions. A driver is still required, although the vehicle can travel from point A to point B without intervention.
- Level five is full automation: a driver is no longer required, and the vehicle performs all driving tasks, eliminating even the need for a steering wheel or gas and brake pedals. At this highest level, the vehicle drives itself without anyone in the driver’s seat.

Creating Deep Learning Models with Deep Data

For successful vehicle automation, deep learning models are formed by inputting batches of specialized data. The goal is to train the artificial neural network to classify and process certain aspects or properties. Deep learning models process inputs in a way loosely modeled on human learning. Information is transformed at each neural layer before passing to the next, where it is transformed further. The accumulation of these transformations lets the network build up many layers of non-linear characteristics, such as edges and shapes. Connections between layers carry weights applied to each neuron’s summed inputs, and tuning those weights improves the accuracy of the intended outputs. Feedback for incorrect outcomes is then propagated back through the layers (backpropagation), adjusting the connection weights to reduce the probability of repeat errors and further hone the deep learning model.
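
To make that cycle concrete, here is a minimal training sketch in PyTorch, assuming an image-classification task: random tensors stand in for labeled camera frames, and the network shape and class count are illustrative choices, not details of any production system.

```python
# Minimal sketch of the train-and-adjust cycle described above.
# Assumptions: PyTorch is installed; random tensors stand in for
# labeled camera frames from a real automotive dataset.
import torch
import torch.nn as nn

# A tiny convolutional classifier: each layer transforms its input
# before passing it to the next, accumulating non-linear features.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 4),  # 4 hypothetical road-hazard classes
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.randn(8, 3, 32, 32)   # stand-in camera frames
    labels = torch.randint(0, 4, (8,))   # stand-in ground truth
    logits = model(images)               # forward pass: layer-by-layer transforms
    loss = loss_fn(logits, labels)       # measure incorrect outcomes
    optimizer.zero_grad()
    loss.backward()                      # backpropagate the error feedback
    optimizer.step()                     # adjust the weighted connections
```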

Training is highly costly in compute terms but is well handled by GPUs, which carry out parallel computations far more efficiently than CPUs. GPUs break computations into smaller pieces that are performed simultaneously and then recombined into the final output. With this advantage, deep learning models can more effectively break down input data and compute individual aspects, allowing them to quickly recognize visual or audio characteristics.
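
As an illustration (not a benchmark), the sketch below runs the same large matrix multiplication on the CPU and, when a CUDA device is available, on the GPU, where the work is decomposed across thousands of cores and recombined into one result.

```python
# Illustrative only: compares one large matrix multiply on CPU vs. GPU.
# Assumes PyTorch; skips the GPU path if no CUDA device is present.
import time
import torch

def timed_matmul(device: str) -> float:
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # ensure a clean start time
    start = time.perf_counter()
    _ = a @ b                      # the GPU splits this across many cores
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the asynchronous kernel
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f} s")
```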

Training the neural network by adjusting the connection weights streamlines the deep learning model, in turn reducing the compute cost for the GPU. Deployed at the hardware level, the trained model can act independently, performing inference analysis based on its finely tuned training. Machine learning inference can further improve deep learning models deployed at the edge, delivering better accuracy and efficiency to ease system workload. Optimization can be customized by pruning unneeded parameters and reducing network complexity, so the system performs according to the resource availability of each edge node, maximizing functionality at every endpoint.
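
Pruning is one concrete way to trade a sliver of accuracy for a lighter edge workload. As a minimal sketch using PyTorch’s built-in pruning utility, with the layer size and 30% sparsity target chosen arbitrarily for illustration, the smallest-magnitude weights of a layer can be zeroed out:

```python
# Illustrative magnitude pruning with PyTorch's pruning utilities.
# The layer size and 30% sparsity target are assumptions for the sketch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 64)   # stand-in for a trained layer

# Zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Make the pruning permanent by removing the re-parameterization hooks.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Fraction of zeroed weights: {sparsity:.2f}")   # ~0.30
```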

Inference Analysis Informs the Intelligent Rugged Edge

Once trained, a deep learning model can be integrated into rugged edge computers to perform inference analysis of input data. By performing inference analysis at the source of data generation, rugged edge systems deliver robust processing in challenging, mobile, volatile, or otherwise unstable environments. Their GPUs power the deployed deep learning models, leveraging training to produce accurate outputs quickly and efficiently while requiring fewer resources. And by running machine learning inference on rugged edge devices instead of looping in cloud resources, the system avoids the latency and cloud-availability issues that can hamper critical goals. Depending on the type of deep neural network integrated, these goals can vary greatly.
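
In code, edge deployment often reduces to running a trained network in inference mode on each incoming frame. The sketch below assumes PyTorch; the stand-in network, frame shape, and class count are hypothetical placeholders for a model that would really be loaded from a trained artifact.

```python
# Hedged sketch of on-device inference; the network here is a stand-in
# for a trained model that would normally be loaded from an exported
# artifact (e.g., a TorchScript file) on the edge computer.
import torch
import torch.nn as nn

model = nn.Sequential(                   # placeholder for trained weights
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 4),                     # 4 hypothetical output classes
)
model.eval()                             # disable training-only behavior

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

frame = torch.randn(1, 3, 224, 224, device=device)  # stand-in camera frame

with torch.inference_mode():             # skip gradient bookkeeping
    scores = model(frame)
    prediction = scores.argmax(dim=1)

print(f"Predicted class index: {prediction.item()}")
```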

A convolutional neural network (CNN) delivers the speed and accuracy to recognize shapes, features, and textures in video input. While this advantage is streamlining and accelerating vehicle automation, it also has vast implications for everything from machine vision quality inspections to biometric processing to barcode scanning. Long short-term memory (LSTM) networks are improved forms of recurrent neural networks, with processing agility that can be applied to speech recognition, robot controls, and predictive forecasting. Such networks can also support unsupervised learning, allowing systems to generate new knowledge autonomously and appropriately process scenarios and inputs outside the realm of their training. These models and others can process the same data inputs and work in conjunction to enable an intelligent rugged edge system.
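
To make the distinction concrete, a minimal PyTorch sketch of an LSTM mapping a short sensor sequence to a next-step forecast might look like the following; every dimension here is an arbitrary assumption.

```python
# Illustrative LSTM for sequence forecasting; all sizes are assumptions.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
head = nn.Linear(32, 4)            # predict the next 4-channel reading

sequence = torch.randn(1, 50, 4)   # stand-in: 50 timesteps of sensor data

outputs, (hidden, cell) = lstm(sequence)   # the LSTM carries state across steps
forecast = head(outputs[:, -1, :])         # use the final timestep's features

print(forecast.shape)              # torch.Size([1, 4])
```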

A Race to the Finish

The competition around designing driver-assist and more sophisticated autonomous driving systems has become fierce, and in a fast-moving industry, those who can’t keep pace will surely be left behind. ADAS developers, once focused solely on improving the algorithms behind features and performance, have come to realize the critical role hardware plays in effective data handling. Specialized hardware, including inference computers designed to manage AI at the edge, makes it possible to gather, process, and store the immense amount of data that comes from multiple sources. Without the hardware, the algorithms can’t move forward; together, intelligent software and hardened, rugged edge hardware provide the elements necessary for the smarter, safer, and increasingly innovative designs that fuel ADAS growth and excitement for future advancements.

About the Author

Dustin Seetoo

Dustin Seetoo is the director of product marketing at Premio (City of Industry, CA, USA). Seetoo crafts technical product marketing initiatives for industries focused on the hardware engineering, manufacturing, and deployment of industrial Internet of Things (IIoT) devices and x86 embedded and edge computing solutions.
