Embedded Vision

February 2018 snapshots: Autonomous vision, machine learning, and artificial intelligence

In the February 2018 snapshots, learn about a company developing machine learning software for autonomous vision, a partnership between Intel and Warner Bros. for in-cabin autonomous vehicle entertainment, a microscope using artificial intelligence algorithms for malaria detection, and a 5K panoramic video camera.
Feb. 14, 2018

Machine learning enables autonomous vision

Machine learning startup Algolux has its eye on the next wave of vision. This past November, I visited with Algolux CEO Allan Benchetrit, who told me that many of today's vision systems are sub-optimal for several applications his company sees in the market.

"Typical architecture may include a lens, camera, and image signal processor (ISP). The ISP gets hand tuned to provide pleasing images for humans to be able to see. The problem is that the ISP takes the raw data off the sensor and strips away much of the data, to create a visually pleasing image."

He continued, "Another problem exists because in a vision architecture, it is the same process being applied. If you are taking these visually-pleasing images and feeding them into a neural network, some of the critical data may be missing from a computer vision standpoint. Algolux aims to address this opportunity by two means, the first of which is its CRISP-ML desktop tool that uses machine learning to automatically optimize a full imaging and visionsystem.

CRISP-ML offers an optimized way to tune existing architectures, according to Benchetrit. The tool "effectively combines large real-world datasets with standards-based metrics and chart-driven key performance indicators to holistically improve the performance of camera and vision systems." This, according to Algolux, can be done across combinations of components and operating conditions previously deemed unfeasible.
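To make the idea concrete, camera tuning of this kind can be framed as black-box optimization: score each candidate parameter set against a quality metric measured over a real-world dataset and keep the best one. The Python sketch below illustrates that loop in outline only; the function names, parameter space, and random-search strategy are illustrative assumptions, not details of CRISP-ML.

import random

def run_isp(raw_frame, params):
    # Stand-in for a vendor ISP pipeline applied with a given parameter set;
    # a real system would return the processed image.
    return raw_frame

def quality_metric(processed_frame, reference):
    # Stand-in for a standards-based quality or detection metric (higher is better).
    return 0.0

def tune(dataset, param_space, iterations=200):
    # Randomly sample parameter sets and keep the best-scoring one.
    # (A production tool would use a far smarter optimizer than random search.)
    best_params, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = {name: random.uniform(lo, hi) for name, (lo, hi) in param_space.items()}
        scores = [quality_metric(run_isp(raw, candidate), ref) for raw, ref in dataset]
        score = sum(scores) / len(scores)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

# Example call with a hypothetical two-parameter space:
# tune(dataset, {"denoise_strength": (0.0, 1.0), "sharpen": (0.0, 2.0)})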

While CRISP-ML is used for camera preparation, Algolux's CANA product is far more disruptive because it is embedded in the camera itself, according to the company. Benchetrit explained that there is currently a big race among automotive companies to reach level 3 autonomy by 2020. The problem, he said, is that the baseline for level 3 is fairly low: for example, driving in sunshine, on dry roads, during the day, with no weather distortions. When these systems encounter difficult conditions, they cannot produce results with the desired level of confidence and consistency.

"What we are doing is focusing on the hard cases, by supporting low light, darkness, distortion caused by weather like rain or fog, and so on; which is the reason that we are breaking away from the legacy architectures of vision systems," said Benchetrit. "We are doing this by replacing the most problematic component (image processor) with a deep learning framework. We are able to discard the image processor and use the data off of the image sensor to train our model to be able to perform in sub-optimalconditions."

He added, "We are doing something that everyone in the automotive industry needs and cannot necessarily work on right now because they are focused on achieving baseline standards of level 3. We are addressing the evolution of vision systems using machinelearning."

Together, CRISP-ML and CANA enable the design and implementation of a full software stack for image processing in a component-agnostic manner; this provides better results while reducing cost and enabling a faster time to market, according to the company. The company has several CRISP-ML customers, while the CANA technology is in pilots, as the company focuses on customers' short-term needs.

"We are telling our customers to take an existing example of something challenging, show us the baseline, and let us prove to you that we can do better in the exact same conditions. Once we have proven this, let's take on some more use cases, and from there, run it on your platform and go from there. Eventually, we will take on more and more cases, until we have proven that it can work on the clear majority of difficult cases," saidBenchetrit.

For now, the ultimate goal of the company is to deliver on the promise of multi-sensor fusion, assimilating multiple data points from various sensors in real time.

"We are not encumbered by legacy architecture or components. We believe we will be the first to be able to support multi-sensor fusion in level 3 autonomous driving and beyond," hesaid.

Looking toward possible applications outside of autonomous vehicles, Benchetrit indicated the software packages could be used in security and surveillance, border patrol, or in "any embedded camera that is going to be put through a variety of unpredictable use cases in changing conditions," including mobile devices, drones, and augmented reality/virtual reality.

Intel and Warner Bros. announce partnership on in-cabin autonomous vehicle entertainment

While much of the recent coverage of autonomous vehicles tends to focus on the enabling technology, safety, investments, and product developments, one aspect of a truly driverless society that is not often discussed is what people may be able to do with their time without having to drive.

Intel, a company with a massive, ambitious vision for autonomous vehicles (AVs), has announced a partnership with Warner Bros. to develop in-cabin, immersive experiences in AV settings. The partnership was announced on November 29 during Automobility LA in Los Angeles, where Intel CEO Brian Krzanich shed further light on the project.

"Called the AV Entertainment Experience, we are creating a first-of-its-kind proof-of-concept car to demonstrate what entertainment in the vehicle could look like in the future," he wrote. "As a member of the Intel 100-car test fleet, the vehicle will showcase the potential for entertainment in an autonomous driving world."

He added, "The rise of the AV industry will create one of the greatest expansions of consumer time available for entertainment we've seen in a long time. As passengers shift from being drivers to riders, their connected-device time, including video-viewing time, will increase. In fact, recent transportation surveys indicate the average American spends more than 300 hours per year behind thewheel."

With all of this time available, Intel and Warner Bros. imagine "significant possibilities inside the AV space," including content such as movies and television programming, but also new, immersive experiences courtesy of in-cabin virtual reality (VR) and augmented reality (AR) innovations.

"For example, a fan of the superhero Batman could enjoy riding in the Batmobile through the streets of Gotham City, while AR capabilities render the car a literal lens to the outside world, enabling passengers to view advertising and other discovery experiences," explainedKrzanich.

These possibilities, he explained, are fun to imagine, but the ultimate test for the future of autonomous cars is going to be winning over passengers. The technology will not matter if there are no riders who trust it and feel comfortable using it. Intel is working toward this, as its Mobileye ADAS (advanced driver assistance system) technology on the road has proven to reduce accidents by 30%, save 1,400 lives, prevent 450,000 crashes, and save $10 billion in economic losses, according to the company.

The long-term goal, of course, has to be zero driving-related fatalities, wrote Krzanich, who said that, "to reach this goal, we need standards and solutions that will enable mass production and adoption of autonomous vehicles. For the long period when autonomous vehicles share the road with human drivers, the industry will need standards that definitively assign fault when collisions occur."

To this end, Intel is collaborating with the industry and policymakers on how safety performance is measured and interpreted for autonomous cars. Intel and Mobileye have already proposed a formal mathematical model called Responsibility-Sensitive Safety (RSS) to ensure, from a planning and decision-making perspective, that the autonomous vehicle system will not issue a command leading to an accident.
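RSS is described in a public paper by Mobileye researchers, and its best-known element is a closed-form minimum safe longitudinal distance that a following vehicle must maintain. The sketch below restates that rule in Python as understood from the published paper; the parameter names and the numbers in the example are illustrative, not values specified by Intel.

def rss_safe_longitudinal_distance(v_rear, v_front, response_time,
                                   a_max_accel, a_min_brake, a_max_brake):
    # Minimum gap the rear vehicle must keep so that, even if the front vehicle
    # brakes as hard as possible, the rear vehicle can still stop in time.
    d = (v_rear * response_time
         + 0.5 * a_max_accel * response_time ** 2
         + (v_rear + response_time * a_max_accel) ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Example: both cars at 20 m/s with a 0.5 s response time -> roughly 45 m.
print(rss_safe_longitudinal_distance(20.0, 20.0, 0.5, 3.5, 4.0, 8.0))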

"From entertainment to safety systems, we view the autonomous vehicle as one the most exciting platforms today and just the beginning of a renaissance for the automotive industry," saidKrzanich.

View the Intel autonomous driving press kit at http://bit.ly/VSD-INT.

Intelligent vision platform provides 180° field of view

Altia Systems' PanaCast 2s is a software-defined panoramic 5K video camera system that produces 78% higher resolution than panoramic 4K, provides a 180° field of view with real-time stitching, and is designed for video collaboration applications in medium-to-large conference rooms.

PanaCast 2s, which was recognized as a CES 2017 Innovation Awards Honoree, implements the PanaCast video processor as the PanaCast Computer Vision Engine (PCVE), which the company calls an industry-first, software-scalable implementation with an OpenCL pipeline. The PCVE uses CPU and GPU processing to achieve high-performance, low-latency video and reportedly delivers 4x lossless digital zoom up to 16 ft. at 720p and 30 fps. Additionally, the platform enables the integration of artificial intelligence technologies, such as machine vision using convolutional neural networks, for video collaboration and distance learning applications.
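For readers unfamiliar with OpenCL, the appeal of such a pipeline is that the same kernel code can be dispatched to a CPU or a GPU at runtime. The toy example below, written with the pyopencl bindings and unrelated to Altia Systems' actual PCVE code, applies a simple per-pixel gain to a frame to show the basic dispatch pattern.

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # picks an available CPU or GPU device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

frame = np.random.rand(720, 1280).astype(np.float32)   # placeholder camera frame
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, frame.nbytes)

program = cl.Program(ctx, """
__kernel void gain(__global const float *src, __global float *dst, const float g) {
    int i = get_global_id(0);
    dst[i] = src[i] * g;    // trivial per-pixel operation standing in for real processing
}
""").build()

program.gain(queue, (frame.size,), None, src_buf, dst_buf, np.float32(1.2))

out = np.empty_like(frame)
cl.enqueue_copy(queue, out, dst_buf)    # read the processed frame back to the host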

The camera itself features a 7.4 MPixel CMOS image sensor and a frame rate of up to 30 fps, and can be used for "intelligent vision capabilities," including people counting and facial recognition, according to Altia Systems. PanaCast 2s is recommended for use with an Intel Core i7 processor-based NUC mini-PC. With Intel's initial guidance on the OpenCL pipeline, Altia Systems noted that it developed the PCVE software for PanaCast 2s on such a device.

"Intel and Altia Systems have collaborated to optimize and deliver an industry leading OpenCL based real-time, scalable, ultra-high definition panoramic camera pipeline for PanaCast 2s on the Intel Core i-series processor family to enable more immersive collaboration and computer vision usages," said Praveen Vishakantaiah, VP and GM of Intel Client Computing Group R&Ddivision.

PanaCast 2s supports Altia Systems' software products, including Intelligent Zoom and PanaCast Vivid. Soon, the device will support PanaCast Whiteboard, people recognition, and an API for artificial intelligence and machine learning-based applications. The platform is available as a bundled offering through the company's Authorized Intelligent Vision Partners.

Artificial intelligence software helps microscope detect malaria in blood samples

Advanced microscope designer and manufacturer Motic has partnered with the Global Good Fund to create and distribute the EasyScan GO, a microscope equipped with artificial intelligence technology that will be used to identify and count malaria parasites.

As part of the collaboration, software from Global Good (a collaboration between Intellectual Ventures and Bill Gates to develop technologies for humanitarian impact) is integrated into an existing Motic microscope to be used for disease scanning.

Malaria kills almost half a million people each year, and researchers estimate that nearly half the world's population is at risk of contracting it, according to Motic. Accurate detection of severe and drug-resistant cases requires analysis of a blood smear by a World Health Organization-certified expert microscopist, which takes roughly 20 minutes per slide. Automating the process can help alleviate the shortfall of trained personnel in under-resourced countries.

Based on machine learning and neural networks, EasyScan GO's software module is trained by feeding it thousands of blood smear slides annotated by experts. The microscope works through a combination of digital slide scanning and the software module, which runs captured images through a machine learning algorithm for detection and counting. Field tests of an early prototype of the microscope, presented at the International Conference on Computer Vision (ICCV), showed that the machine learning algorithm developed by Global Good is as reliable as an expert microscopist, according to Motic.
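The general approach the article describes, training a model on expert-annotated blood-smear images, can be sketched as a small patch classifier. The code below is purely illustrative: the model, data loader, and 64x64 patch size are assumptions for the sketch, not details of Global Good's software.

import torch
import torch.nn as nn

# Tiny convolutional classifier for 64x64 RGB patches, labeled parasite / no parasite.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch(loader):
    # loader yields (patch batch, label batch) cut from expert-annotated slide images.
    for patches, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(patches), labels)
        loss.backward()
        optimizer.step()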

"Our goal in integrating Global Good's advanced software into Motic's high-quality, affordable digital slide scanner is to simplify and standardize malaria detection," said Richard Yeung, Vice President of Motic China. "Success with the most difficult-to-identify disease paves the way for the EasyScan product line to excel at almost any microscopy task and to detect other major diseases that affect developed and emerging marketsalike."

Now, the EasyScan GO is being trained to recognize all species of malaria and other parasites and traits commonly found on a blood film, including Chagas disease, microfilaria, and sickle cell. Additionally, the team will reportedly explore applications on other sample types, such as sputum, feces, and tissue, as well as some forms of cancer.
