Stereo camera system flags potentially contaminated surfaces

With Deep Clean, software engineer, electronics hobbyist, and inventor Nick Bild offers an imaging method for automatically flagging potentially contaminated surfaces that require additional cleaning.
June 15, 2020
4 min read

By now, the world knows how easily COVID-19 spreads. Individuals infected with contagious diseases like the novel coronavirus can pass the illness to others in several ways, including by touching objects that others later come into contact with, even several days afterward. Deep Clean, Bild's answer to that problem, uses stereo imaging to automatically flag potentially contaminated surfaces that require additional cleaning.

Deep Clean (https://bit.ly/2VSD-DCL) watches a room and flags every surface a person touches so that it receives special attention during the next cleaning. The idea is to give a cleaning crew, in a hospital room for example, information about which areas need extra attention and thereby help prevent the spread of disease.

A Jetson AGX Xavier embedded System on Module (SoM) from NVIDIA (Santa Clara, CA, USA; www.nvidia.com) serves as the main processing unit. It triggers two Raspberry Pi 3 Model B+ single-board computers that form a custom stereo vision camera; the pair captures two simultaneous images, which transfer to the Jetson for processing.
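The article does not include source code, but on the Jetson side the trigger mechanism might look something like the sketch below: a single GPIO line, wired to both Raspberry Pi boards, is pulsed so the two cameras fire at the same instant. The pin number, pulse width, and capture interval are assumptions for illustration; the sketch uses NVIDIA's Jetson.GPIO library.

```python
# Hypothetical Jetson-side trigger: pulse one GPIO line wired to both
# Raspberry Pi boards so they capture their frames at the same instant.
# Pin number, pulse width, and capture interval are illustrative only.
import time

import Jetson.GPIO as GPIO  # NVIDIA's RPi.GPIO-compatible library

TRIGGER_PIN = 18  # assumed BOARD-mode pin wired to both Pis

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

def trigger_capture():
    """Raise the shared line briefly; each Pi captures on the rising edge."""
    GPIO.output(TRIGGER_PIN, GPIO.HIGH)
    time.sleep(0.01)  # 10 ms pulse, long enough for the Pis to detect
    GPIO.output(TRIGGER_PIN, GPIO.LOW)

if __name__ == "__main__":
    try:
        while True:
            trigger_capture()
            time.sleep(1.0)  # capture roughly once per second
    finally:
        GPIO.cleanup()
```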

Camera options for the Xavier tend to be on the expensive side for hobbyists, so the system uses a pair of Raspberry Pi cameras, says Bild. Because the Xavier lacks the camera serial interface (CSI) connections this type of camera requires, the Raspberry Pi boards capture the images when triggered via general-purpose input/output (GPIO) from the SoM.
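On the Raspberry Pi side, a capture script in the same spirit might wait for the rising edge of that trigger line and then grab a still frame, as in the hedged sketch below. The pin number, resolution, and output path are assumptions, and the step that transfers the file to the Jetson is only indicated.

```python
# Hypothetical Pi-side capture: block until the Jetson raises the shared
# trigger line, then grab a still frame with the Raspberry Pi camera.
# Pin number, resolution, and output path are illustrative only.
import time

import RPi.GPIO as GPIO
from picamera import PiCamera

TRIGGER_PIN = 18  # assumed BOARD-mode pin driven by the Jetson

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIGGER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

camera = PiCamera(resolution=(1280, 720))
time.sleep(2)  # let the sensor's auto-exposure settle

try:
    while True:
        GPIO.wait_for_edge(TRIGGER_PIN, GPIO.RISING)
        filename = "/tmp/frame_%d.jpg" % int(time.time())
        camera.capture(filename)
        # The captured file would then be copied to the Jetson for
        # processing, e.g. over the network; that step is omitted here.
finally:
    camera.close()
    GPIO.cleanup()
```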

To keep the cameras in the proper relative positions, Bild 3D printed a base and stabilizer matched exactly to the camera modules' dimensions. Without a means of locking the image sensors into fixed relative alignment and spacing, the depth readings would not be accurate. Bild then calibrated the cameras, both individually and as a pair, with OpenCV using a printed chessboard pattern.
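The calibration code itself is not shown in the article; the following sketch illustrates the general OpenCV workflow described, calibrating each camera individually with cv2.calibrateCamera and then the pair with cv2.stereoCalibrate. The chessboard dimensions, square size, and image file names are placeholders.

```python
# Illustrative OpenCV stereo calibration with a printed chessboard.
# Board size, square size, and file names are assumptions.
import glob

import cv2
import numpy as np

BOARD = (9, 6)        # inner corners per chessboard row/column (assumed)
SQUARE_SIZE = 0.025   # square edge in meters (assumed)

# 3D coordinates of the chessboard corners in the board's own frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_pts, left_pts, right_pts = [], [], []
img_size = None
for lf, rf in zip(sorted(glob.glob("left_*.jpg")), sorted(glob.glob("right_*.jpg"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, BOARD)
    okr, cr = cv2.findChessboardCorners(gr, BOARD)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)
        img_size = gl.shape[::-1]  # (width, height)

# Calibrate each camera individually...
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, img_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, img_size, None, None)

# ...then calibrate the pair, keeping the intrinsics fixed, to recover the
# rotation R and translation T between the two sensors.
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, img_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

print("Baseline (m):", np.linalg.norm(T))
```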

The OpenPose (https://bit.ly/VSD-OPP) library detects hand location (x and y coordinates), while the stereo camera provides depth (the z coordinate). Researchers at Carnegie Mellon University’s Robotics Institute (Pittsburgh, PA, USA; www.ri.cmu.edu) developed OpenPose and describe it as the first real-time, multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images. (View more information here: https://bit.ly/VSD-OPP2).
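Combining the two sources of information could look like the sketch below: the (x, y) of an OpenPose hand keypoint indexes into a disparity map computed from the rectified stereo pair, and depth follows from z = f·B/d. The focal length, baseline, and matcher parameters are placeholder values, not the project's.

```python
# Illustrative depth lookup: z = f * B / disparity at an OpenPose hand keypoint.
# Focal length, baseline, and matcher parameters are assumptions.
import cv2
import numpy as np

FOCAL_PX = 1250.0   # focal length in pixels (assumed, from calibration)
BASELINE_M = 0.06   # distance between the two sensors in meters (assumed)

def depth_at(left_rect, right_rect, hand_xy):
    """Return the estimated z (meters) at a hand keypoint (x, y) in pixels."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,
                                    blockSize=5)
    # compute() returns fixed-point disparities scaled by 16
    disp = matcher.compute(left_rect, right_rect).astype(np.float32) / 16.0
    x, y = int(hand_xy[0]), int(hand_xy[1])
    d = disp[y, x]
    if d <= 0:          # no valid match at this pixel
        return None
    return FOCAL_PX * BASELINE_M / d

# Usage: hand_xy would come from the OpenPose hand keypoints, while
# left_rect and right_rect are the rectified grayscale frames from the two Pis.
```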

With the x, y, and z position of the hand tracked in real time, Deep Clean compares that data to the x, y, and z coordinates of objects in the room. When an overlap is detected, the system stores the x and y coordinates in memory and can use them to construct an image of the room with all touched surfaces visually highlighted. From a privacy standpoint, Deep Clean does not need to store any image data; it retains only the coordinates of touched objects, which can be used to annotate an image of the unoccupied room on demand.
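A minimal sketch of that comparison, under the assumption that the room's objects are described as hand-entered, axis-aligned 3D boxes, might look like the following. In keeping with the privacy approach above, only touch coordinates are retained, and the annotation step draws markers onto a stored photo of the empty room. Object names, coordinates, and the pixel-to-meter mapping are illustrative.

```python
# Illustrative touch logging: compare a tracked hand position against known
# object volumes and keep only the coordinates of contacts, never image data.
import cv2

# Axis-aligned boxes (x_min, y_min, z_min, x_max, y_max, z_max) in meters,
# entered by hand for the fixed objects in the room (assumed representation).
OBJECTS = {
    "bed_rail":   (0.5, 1.0, 1.8, 1.6, 1.2, 2.4),
    "tray_table": (1.8, 0.9, 2.0, 2.4, 1.1, 2.5),
}

touched = []  # only coordinates are retained; no camera frames are stored

def check_touch(hand_xyz):
    """Record the hand's (x, y) if it falls inside any object's volume."""
    hx, hy, hz = hand_xyz
    for name, (x0, y0, z0, x1, y1, z1) in OBJECTS.items():
        if x0 <= hx <= x1 and y0 <= hy <= y1 and z0 <= hz <= z1:
            touched.append((name, hx, hy))

def annotate(room_image_path="empty_room.jpg", pixels_per_meter=200):
    """Highlight touched spots on a photo of the unoccupied room."""
    img = cv2.imread(room_image_path)
    for _, hx, hy in touched:
        px, py = int(hx * pixels_per_meter), int(hy * pixels_per_meter)
        cv2.circle(img, (px, py), 12, (0, 0, 255), thickness=-1)  # red dot
    return img
```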

With a higher budget, one could build a better version using an off-the-shelf stereo camera interfaced directly with the processing unit, suggests Bild, who says that the custom-built stereo camera makes the system slightly buggy and that capturing images on separate computers substantially slows the frame rate. There is also room to add further detection techniques, he says.

“While the current system is quite useful as is, it currently cannot detect a cough or sneeze, for example, which also contaminate surfaces. I’m presently exploring options for further development of the technology, but with its 32 trillion operations/second of processing power, the Xavier SoM is well suited for the machine learning inference algorithms that would be required for such tasks.”

He adds, “The core device itself need not change; it is only a matter of development time required to achieve the goal.” 

About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
