Live from VISION 2018: What I saw on day two of machine vision's largest trade show
On November 7—day two of VISION 2018—I packed my day full of appointments with machine vision companies specializing in a range of technologies, from deep learning and industrial cameras to FPGAs and polarization imaging. Rather than recount the fine details of every meeting, which would be laborious to write and to read, I will provide an overview of the products, technologies, and trends I saw today.
Machine vision software and deep learning
Let’s start with my first appointment of the day, which was with MVTec Software. Version 18.11 of HALCON machine vision software, to be released at the end of November, will bring two major deep learning features: the classification of images, and the identification and localization of objects within images for object detection and semantic segmentation. This is what Johannes Hiltner, HALCON Product Manager, explained to me at the company’s booth.
"We see deep learning as an additional technology in a wide range of existing machine vision software tools," said Hiltner. "With our new deep learning functions, we now have a full range of tools, as we cover classification, detection, and segmentation."
"However," he added, "Deep learning alone is not enough to solve a complex application. For this, we provide a large toolset within our machine vision software solutions to enable the deployment of full imaging applications."
Also new and important for MVTec is that its HALCON software is now available as HALCON Progress, which is released on a six-month cycle, according to Hiltner.
Dr. Olaf Munkelt, Managing Director, commented: "Development periods usually run approximately two years. During this time, engineers like to have the latest tools with which to work. This release cycle will enable them to do so, with minor adjustments instead of major ones."
The upcoming release of MERLIC version 4.2 (February 2019) was also a topic of discussion, as were some of the other new features in HALCON, which include improved data code reading, expanded support for communication protocols, and support for additional interfaces.
Later in the day, I met with some of the team from Matrox Imaging, another company that has released tools for deep learning. Deep learning features had previously been available in the Matrox Imaging Library (MIL); now they also appear in the latest version of Matrox Design Assistant, Design Assistant X. The code was adapted from MIL into Design Assistant, providing users with deep learning image classification tools in Matrox Imaging’s flowchart-based software.
Pierantonio Boriero, Director, Product Management, commented on the deep learning tools as well as some additional new features.
"For Design Assistant, the biggest challenge was to implement something that customers have waited a long time for, which is the ability to run multiple independent projects simultaneously. Additionally, we’re excited to offer native support for 3D imaging in Design Assistant, making the technology more accessible, which is our ultimate goal."
"Regarding deep learning," he continued, "we’ve employed the strategy of first getting the image classification portion down pat before moving onto other technologies within deep learning. The challenge here is reconciling user expectations and what the technology can do. Our job is to bridge those two."
Moving from software to hardware, FLIR Systems showcased its brand-new Firefly camera for deep learning, which features an Intel Movidius Myriad 2 vision processing unit (VPU) that enables users to deploy trained neural networks directly onto the camera and conduct inference on the edge. The camera features a monochrome 1.58 MPixel Sony IMX296 CMOS image sensor, which offers a 3.45 µm pixel size and frame rates up to 60 fps through a USB 3.1 Gen 1 interface.
Additionally, at less than $300, the camera weighs just 20g and is less than half of the volume of the standard "ice cube" camera. It also consumes just 1.5W of power while imaging and performing inference.
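For readers curious what deploying a trained network to a Myriad-class VPU looks like in practice, below is a minimal sketch using Intel's OpenVINO toolkit and its MYRIAD device plugin. This is a generic illustration of the compile-and-infer pattern only, not FLIR's actual Firefly deployment workflow; the model file name and input shape are placeholders.

```python
# Minimal sketch: compiling a trained network for a Myriad-class VPU and running
# inference with Intel's OpenVINO toolkit. Illustrative only; this is NOT FLIR's
# Firefly workflow, and "classifier.xml" is a placeholder for a model already
# converted to OpenVINO IR format.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("classifier.xml")        # IR file exported from a trained network
compiled = core.compile_model(model, "MYRIAD")   # target the Myriad VPU plugin

# Dummy monochrome frame shaped to the model's expected input (assumed NCHW, 1x1x224x224)
frame = np.zeros((1, 1, 224, 224), dtype=np.float32)
results = compiled([frame])[compiled.output(0)]  # forward pass on the VPU
print("Predicted class:", int(np.argmax(results)))
```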
Damian Nesbitt, Vice President of Engineering, commented: "Deep Learning is the biggest change in our lifetime. It is only a matter of time before it disrupts machine vision. The Firefly camera is our way of enabling deep learning for our customers. We chose the Movidius VPU for its unbeatable price performance, small size, and low power consumption."
Polarization imaging and other hot imaging topics
As many now know, polarization imaging has become quite the topic in machine vision as of late. This is reflected heavily on the show floor, as many companies have now released polarization cameras. One such company, and likely the company most responsible for the current trend, is Sony. At the Sony Europe Image Sensing Solutions booth, the first technology we looked at was polarization imaging. Thanks to Sony’s development of the IMX250MZR/MYR global shutter CMOS image sensor, a 5.1 MPixel monochrome/color sensor with an on-pixel polarizer array in four orientations (0°, 45°, 90°, and 135°), many camera companies now offer polarization cameras, including Sony itself.
"This is a new machine vision technology to be explored," suggested Stéphane Clauss, Senior Sales and Business Development Manager Europe, Sony Image Sensing Solutions. "Having released the sensor itself, it was on us to differentiate from some of the other new polarization cameras that have been newly introduced. To do so, we took this further on the software side."
With Sony’s software development kit (SDK), users can quickly develop new applications. Features include the calculation of polarization values, stress measurement, reflection management, and surface inspection.
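As an illustration of what such polarization values are, the short sketch below computes the degree and angle of linear polarization from the four polarizer orientations (0°, 45°, 90°, 135°) that a sensor like the IMX250MZR captures. It uses the standard Stokes-parameter formulas and is not code from Sony's SDK.

```python
# Sketch: degree and angle of linear polarization (DoLP, AoLP) from the four
# polarizer orientations of a four-directional polarization sensor.
# Standard Stokes-parameter math; not taken from Sony's SDK.
import numpy as np

def polarization_values(i0, i45, i90, i135):
    """Each input is a 2D array of intensities for one polarizer orientation."""
    s0 = i0.astype(np.float64) + i90     # total intensity
    s1 = i0.astype(np.float64) - i90     # 0 deg vs. 90 deg component
    s2 = i45.astype(np.float64) - i135   # 45 deg vs. 135 deg component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization, 0..1
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear polarization, radians
    return dolp, aolp

# Example with random 8-bit test data standing in for the four sub-images
rng = np.random.default_rng(0)
channels = [rng.integers(0, 256, (480, 640)).astype(np.float64) for _ in range(4)]
dolp, aolp = polarization_values(*channels)
```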
Also shown at the Sony booth were a number of the company’s latest releases from its machine vision camera and block camera lines, including an embedded vision setup utilizing a 12 MPixel camera with a Camera Link interface that can acquire images at 13 fps. Sony also had a fun augmented reality setup on display at the show. Its 4K Mitene product—which is fully available and deployed in many retail shops in Japan—features a depth sensor and a 4K camera that enable the development of interactive customer engagement setups.
"This is not just for amusement, however, as it also collects data on customers for marketing purposes" said Clauss. "This is currently sold in Japan, but we are developing sales channels in Europe and North America."
LUCID Vision Labs—the first within the machine vision market to deploy Sony’s polarization image sensor in its camera—was another company I visited on Wednesday. In addition to its Phoenix polarization camera, the company demonstrated a prototype of its new 3D time-of-flight camera, the Helios. This camera—reportedly also the first to deploy Sony's new DepthSense IMX556PLR back-illuminated time-of-flight image sensor—will become available in the summer of 2019.
Rod Barman, President, LUCID Vision Labs, commented on the company's booth setup and foot traffic at the show, the company's first:
"It is humbling to see that customers recognize the products we’ve developed in 22 months since we were founded," he said. "VISION is a fantastic venue for us to showcase our new camera technologies and we are extremely happy with the interest and attention we’ve received."
3D, embedded vision, and an emerging image sensor company
IDS Imaging Development Systems—a company that has developed machine vision cameras for more than 20 years—recently launched its IDS NXT camera line, which was on display at the show. This is a brand-new platform for future cameras, which the company hopes "redefines the industry camera," according to Heiko Seitz, Technical Writer, IDS.
These app-based cameras enable customers to execute their own applications on the camera itself.
"These cameras are versatile and give users the ability to get more creative for their solution," said Seitz. "They can be used as a standard machine vision camera, or it can be used on its own, via the image processing engine."
One new feature of the camera line is the ability to run pretrained neural networks for deep learning applications through the camera’s System on Chip (SoC); the cameras can also run several applications at once. New models, the Rome and Rio, have twice the processing power of the previously launched Vegas model. (The Rome is the same as the Rio but offers IP65 protection.) Future plans for the NXT series include models with new image sensors, as well as a focus on the software side, enabling customers to develop their own apps for the cameras via the SDK, according to Jeremy Bergh, Sales Director, North America.
Also on display at the IDS booth were the company’s 10GigE cameras and its Ensenso XR 3D camera, which offers onboard processing that enables users to export 3D point clouds, as well as the ability to communicate via Wi-Fi.
3D imaging specialist LMI Technologies was another company I had the chance to visit on Wednesday. Here, the company highlighted a number of its latest products, including its Gocator 2510 and 2520 3D laser line profilers and its Gocator 3504 smart 3D snapshot sensor, a 3D camera based on stereo vision. All of these products, according to Kassandra Sison, Marketing Manager, were designed for consumer electronics and small parts inspection applications.
"There is an overwhelming need for high-speed and high-resolution options for inline production and inspection," she said. "When you are inspecting parts and products such at these, inspecting for shape is necessary, and 3D imaging is required for this."
What Sison was most excited about, if I had to guess, was the company’s new GoMax Smart Vision Accelerator product. Featuring an NVIDIA Jetson TX2 module, the GoMax is designed to accelerate any Gocator 3D smart sensor to meet inline production speeds, without the need for an industrial PC. This product, explained Sison, was developed to handle very large data bandwidths and can crunch that data without PCs or external controllers.
"People are running into data problems and solutions today aren’t scalable," she said. "This can accelerate multiple sensors at one unit and can almost be looked at as an edge computing device, though we see it as a ‘smart vision accelerator,’" she said.
Perhaps best known within the machine vision market for its FPGAs, Xilinx is a Silicon Valley-based company that I had the chance to meet with as well. Here, I learned about the company’s involvement in machine vision, as well as its investment in artificial intelligence technologies. Dale Hitt, Director, Strategic Market Development, explained that the company has invested $1.5 billion in AI, looking to make a big leap. One step in this process was the release of the Versal adaptive compute acceleration platform (ACAP). This product—which was released in early October—combines scalar processing engines, adaptable hardware engines, and intelligent engines with memory and interfacing technology to deliver heterogeneous acceleration for any platform, according to the company.
Additionally, Xilinx recently announced the acquisition of DeePhi Tech, a Beijing-based startup specializing in machine learning, and specifically in deep compression, pruning, and system-level optimization for neural networks. DeePhi Tech had been developing its machine learning solutions on Xilinx platforms, and the two companies had been working closely since DeePhi Tech was founded in 2017.
"DeePhi Tech offers neural network optimization. It automatically optimizes the precision of networks for minimal cost and maximum power," according to Hitt.
Xilinx also showcased its Alveo PCI Express accelerator cards, which help to accelerate time to market for customers, according to Hitt.
Lastly, I had the chance to visit a company that many more people may know about soon—if they don’t already. Gpixel, a Chinese image sensor company, recently announced the establishment of Gpixel NV, a subsidiary in Antwerp, Belgium that—according to Tim Baeyens, Co-Founder and CEO, and previously a Co-Founder of CMOSIS—will act as a gateway from China to other countries for the company’s image sensor products.
"Our goals with Gpixel Inc. in China are aligned, which is to be a portal to the west for our existing products and for custom image sensor solutions."
A new product line of standard image sensors will soon be released, and these will complement the existing line, he noted.
"Six months ago, we weren’t as well known," he said. "Now, sensor after sensor is being released to the market. People will start to become much more aware."
James Carroll
Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.