Integrating multiple vision modes onto one integrated circuit
In the next 20 years, micro-sized cameras will evolve to capture every angle, depth, mode, or change in a scene, reporting all relevant information to the viewer or application in real time. Surgeons' operating theaters will be enhanced by systems that visualize critical tissue masses, imaged from inside the patient's body. Cars will move through traffic based on an understanding of their environment. Driving this innovation will be the integration of many vision and sensing modalities into small, affordable systems with the computational power both to capture image data and to understand it.
Today's digital cameras mostly collect information that can be seen and understood by the human eye. Industrial cameras may extend the spectral range somewhat, for example into the near-UV or IR. Hyperspectral imaging used to be the domain of expensive lab equipment, but it is now emerging as a way to capture and analyze far richer data sets. The first hyperspectral systems are available that use a single IC sensor, integrated with image-processing capability, in cameras that can capture and process images in real time. Other such complex systems coming to market employ depth-sensing techniques for indoor navigation, X-ray sensors for food sorting, short-range radar imaging for collision prevention in vehicles, or lens-free holographic microscopy.
Here, the acquired data sets are no longer images that can be directly viewed or interpreted. This raises the question: what is machine vision really, and where is it headed? The answer: towards applications with full perception.
In many applications, such as food inspection, medical examination or smart agriculture, developers require a contactless way of monitoring surfaces, material composition and temperature, and they need to perform such tasks continuously and without delay. Each of these parameters can already be measured separately, but doing so requires the developer to install and operate separate applications and systems, each with its own complexities and incompatible data streams.
In the next 20 years, vision and measurement systems will provide a single environment to meet the needs of such applications. Smart cameras will integrate multiple viewing and sensing modalities into inexpensive systems that recognize and scan objects through radar, X-ray, visible and hyperspectral sensors. These systems will have the intelligence to acquire and merge the various data streams, extracting exactly the information the application requires; a software sketch of such merging follows below.
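To give a flavor of what such merging might look like in software, here is a minimal sketch in Python. The modality names, data structures, and the simple nearest-timestamp alignment are illustrative assumptions, not a description of any actual on-chip pipeline, which would also need spatial registration and calibration between sensors:

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical illustration: each modality delivers timestamped frames;
# fusion pairs every reference-stream frame with the nearest sample
# from each other modality. Spatial registration is omitted for brevity.

@dataclass
class Frame:
    timestamp: float   # capture time in seconds
    modality: str      # e.g. "visible", "hyperspectral", "radar"
    data: object       # raw sensor payload

def nearest(frames: List[Frame], t: float) -> Frame:
    """Return the frame whose timestamp is closest to t."""
    return min(frames, key=lambda f: abs(f.timestamp - t))

def fuse(streams: Dict[str, List[Frame]],
         reference: str = "visible") -> List[Dict[str, Frame]]:
    """Align every stream to the reference stream by timestamp."""
    fused = []
    for ref_frame in streams[reference]:
        record = {reference: ref_frame}
        for name, frames in streams.items():
            if name != reference:
                record[name] = nearest(frames, ref_frame.timestamp)
        fused.append(record)
    return fused
```

The design point this sketch makes is that fusion is organized around one reference modality: every other stream is resampled to its clock, so downstream perception code sees a single, time-coherent record per frame.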
In medical applications, for example, there is a need for surgeons to combine live visual information with pre-operation X-ray or CT scans. In the future, such live visual information will be enriched with spectral and ultrasound information to visualize parameters such as oxygen levels, blood circulation, 3D surface information, depth and tissue type.
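As one hypothetical example of such enrichment, the sketch below (Python/NumPy) blends a per-pixel oxygen-saturation map, assumed to come from a spectral sensor and to be already registered to the visible-light frame, onto the live image as a red-to-blue tint for the surgeon's display. The value range and the blending scheme are assumptions for illustration only:

```python
import numpy as np

def overlay_oxygenation(rgb: np.ndarray, spo2: np.ndarray,
                        alpha: float = 0.4) -> np.ndarray:
    """rgb: (H, W, 3) uint8 frame; spo2: (H, W) float map in [0, 1]."""
    # Map low saturation to red, high saturation to blue.
    tint = np.zeros_like(rgb, dtype=np.float32)
    tint[..., 0] = (1.0 - spo2) * 255.0   # red channel
    tint[..., 2] = spo2 * 255.0           # blue channel
    blended = (1.0 - alpha) * rgb.astype(np.float32) + alpha * tint
    return blended.astype(np.uint8)

# Example with synthetic data in place of real sensor streams:
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
oxygen = np.random.rand(480, 640).astype(np.float32)
display = overlay_oxygenation(frame, oxygen)
```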
At imec's labs, hyperspectral ICs have already been integrated into cameras, and radar ICs are also in development. Once each of these vision modes can be integrated on-chip separately, engineers can start to combine them in heterogeneous packages.