Researchers Develop 3D Model System for Underwater Mapping Applications
Scientists at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB (Karlsruhe, Germany) have developed a system that could revolutionize the process of underwater mapping.
Precisely measuring the depth of a body of water such as a lake, river, or harbor is difficult and expensive because such surveys are currently done manually. That makes them labor-intensive, and they are generally conducted at long intervals, at most once a year, so the results may not be fully comprehensive or accurate.
Nonetheless, this information is important, even vital, not only to business sectors such as the shipping industry but also to government and other regulatory entities, which are required to provide up-to-date, accurate maps on a regular basis and can be held liable for accidents caused by loose armour stones, infrastructure remnants, shifted riverbed loads, and other such hazards.
The Fraunhofer team therefore set out to develop an automated waterway monitoring system using autonomous platforms with obstacle avoidance and traffic awareness, providing both underwater and above-water mapping technology, says Janko Petereit, Group Leader of Autonomous Robot Systems at Fraunhofer IOSB.
Machine Vision Components
They equipped a commercial unmanned surface vessel (USV) with a system that allows it to move autonomously; this included a GPS receiver, acceleration and yaw sensors, and a Doppler Velocity Log (DVL) that allows the boat to “feel” its way along the bottom of a body of water.
The team used two ruggedized GV-5200FA-C-HQ GigE cameras from IDS Imaging Development Systems GmbH (Obersulm, Germany) with 8 mm lenses for the image-based 3D mapping of the coastline, resulting in a field of view of 85.7° horizontally and 67.5° vertically. The cameras were equipped with 1.1-in. Sony (Tokyo, Japan) IMX304 CMOS color sensors with global shutters and 4,096 x 3,000-pixel image resolution, which allowed for a small sampling distance, necessary for a high-quality inspection of the coastal infrastructure, Petereit says.
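The reported field of view follows from the sensor geometry and focal length. The back-of-the-envelope Python check below assumes the IMX304's nominal 3.45 µm pixel pitch and a simple pinhole model; it lands a few degrees narrower than the stated figures, a difference that likely comes down to the exact active sensor area and the lens's effective focal length.

```python
import math

# Rough field-of-view check for the shoreline cameras (a back-of-the-envelope
# sketch, not the vendor's specification). Assumes the nominal 3.45 um pixel
# pitch of the Sony IMX304 and the pinhole formula
# FOV = 2 * atan(sensor_extent / (2 * focal_length)).

PIXEL_PITCH_MM = 0.00345        # assumed IMX304 pixel pitch (3.45 um)
WIDTH_PX, HEIGHT_PX = 4096, 3000
FOCAL_MM = 8.0

def fov_deg(extent_px: int) -> float:
    extent_mm = extent_px * PIXEL_PITCH_MM
    return math.degrees(2 * math.atan(extent_mm / (2 * FOCAL_MM)))

print(f"horizontal FOV ~ {fov_deg(WIDTH_PX):.1f} deg")   # roughly 83 deg
print(f"vertical FOV   ~ {fov_deg(HEIGHT_PX):.1f} deg")  # roughly 66 deg
```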
“The cameras are mounted on a mast, one facing port and the other starboard,” Petereit says. “The cameras do not have an overlapping field of view, as they are designed to capture the shoreline above the water as they move past it.”
In addition to the cameras, the system had a NORBIT (Trondheim, Norway) iWBMSe multibeam sonar for underwater mapping and an Ouster (San Francisco, CA, USA) OS1 64-beam LiDAR sensor for real-time obstacle detection and collision avoidance, Petereit says.
How it Works
The team used an image-capture program that triggers one or both cameras as soon as they are facing a predefined area of interest set by the operator. The program uses the position and orientation of the platform, derived from GNSS and IMU data, to check whether the area of interest is seen by one or both cameras. In addition, to reduce the amount of data while still providing enough for photogrammetric reconstruction, a new image is captured only if the footprints of the new and previous images overlap by 80% or less. As the images are acquired, they are fused with the current GNSS position data needed to later geo-reference the resulting 3D model.
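A minimal sketch of such an overlap-based trigger, assuming simplified rectangular ground footprints, is shown below; the names and the geometry are illustrative assumptions, not the team's implementation.

```python
from dataclasses import dataclass

# Illustrative capture trigger: take a new image only when the camera sees the
# operator-defined area of interest and the projected footprint of the current
# view overlaps the previously captured footprint by 80% or less, keeping
# enough redundancy for photogrammetry while limiting the data volume.

@dataclass
class Footprint:
    """Axis-aligned ground footprint of one image, in map coordinates (meters)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def area(self) -> float:
        return (self.x_max - self.x_min) * (self.y_max - self.y_min)

def overlap_fraction(a: Footprint, b: Footprint) -> float:
    """Intersection area divided by the smaller footprint's area."""
    w = min(a.x_max, b.x_max) - max(a.x_min, b.x_min)
    h = min(a.y_max, b.y_max) - max(a.y_min, b.y_min)
    if w <= 0 or h <= 0:
        return 0.0
    return (w * h) / min(a.area(), b.area())

def should_capture(current: Footprint, previous: Footprint | None,
                   sees_area_of_interest: bool, max_overlap: float = 0.8) -> bool:
    """Trigger a capture when the area of interest is in view and the new
    footprint differs sufficiently from the last captured one."""
    if not sees_area_of_interest:
        return False
    if previous is None:
        return True
    return overlap_fraction(current, previous) <= max_overlap
```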
The image data is collected and stored on a computer onboard the vessel while it operates on the water. During a mission, this computer is already processing the data the boat needs to detect and avoid obstacles, so the data used for underwater and surface mapping, i.e., sonar data and camera imagery, is only stored onboard during acquisition. It is then transmitted via WiFi to a standard notebook computer and processed at the ground control station on-site after the platform returns from the mission.
“We rely on the photogrammetric toolbox COLMAP (colmap.github.io),” Petereit explains. “It uses discriminative image features to first align the input images, compute their relative poses, and compute a sparse 3D model. It then performs dense image matching to compute a dense 3D point cloud, which is then geo-referenced using the GNSS positions of the images.”
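The article does not spell out the exact commands, but a COLMAP run of the kind Petereit describes might look like the following Python sketch, which shells out to the COLMAP command-line tools. The paths and the choice of exhaustive matching are placeholders, and the final geo-referencing step against the GNSS image positions is not shown.

```python
import subprocess
from pathlib import Path

# Generic COLMAP pipeline: sparse alignment (features, matching, mapping)
# followed by dense matching and fusion into a dense point cloud.

work = Path("reconstruction")
images = work / "images"        # images captured during the mission
db = work / "database.db"
sparse = work / "sparse"
dense = work / "dense"
sparse.mkdir(parents=True, exist_ok=True)
dense.mkdir(parents=True, exist_ok=True)

def run(*args: str) -> None:
    subprocess.run(["colmap", *args], check=True)

run("feature_extractor", "--database_path", str(db), "--image_path", str(images))
run("exhaustive_matcher", "--database_path", str(db))
run("mapper", "--database_path", str(db), "--image_path", str(images),
    "--output_path", str(sparse))                          # sparse model + camera poses
run("image_undistorter", "--image_path", str(images),
    "--input_path", str(sparse / "0"), "--output_path", str(dense))
run("patch_match_stereo", "--workspace_path", str(dense))  # dense image matching
run("stereo_fusion", "--workspace_path", str(dense),
    "--output_path", str(dense / "fused.ply"))             # dense 3D point cloud
```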
Testing and Results
The team conducted tests, including data acquisition, mapping of the lake bottom and part of the shoreline, and testing of the autonomous functions, on an excavation lake near Karlsruhe, Germany, in 2022. Overall, the tests were successful, Petereit says.
“The tests proved that the platform can autonomously perform a mapping task given by the operator on shore by selecting an area to be mapped using an intuitive human-machine interface,” Petereit says. “In autonomous operation, the platform successfully navigated around detected obstacles and efficiently covered the area of interest by calculating the shortest path through the area while ensuring complete coverage.”
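The article does not detail the planner itself, but complete coverage of a user-defined area is commonly achieved with a boustrophedon ("lawnmower") sweep. The sketch below illustrates that general idea for a rectangular area; the lane spacing (for example, tied to the sonar swath width) and the rectangular shape are assumptions, not details of Fraunhofer's planner.

```python
# Generic boustrophedon sweep over a rectangular area of interest: parallel
# survey lines spaced so that adjacent passes cover the whole area.

def lawnmower_waypoints(x_min: float, y_min: float, x_max: float, y_max: float,
                        lane_spacing: float) -> list[tuple[float, float]]:
    """Return waypoints that sweep the rectangle in alternating parallel lanes."""
    waypoints: list[tuple[float, float]] = []
    x = x_min
    going_up = True
    while x <= x_max:
        if going_up:
            waypoints += [(x, y_min), (x, y_max)]
        else:
            waypoints += [(x, y_max), (x, y_min)]
        going_up = not going_up
        x += lane_spacing
    return waypoints

# Example: cover a 100 m x 60 m patch with 10 m between survey lines.
route = lawnmower_waypoints(0.0, 0.0, 100.0, 60.0, lane_spacing=10.0)
```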
The resulting underwater 3D map has a computed resolution of up to 30 cm in the horizontal direction and 10 cm in the vertical direction, and the above-water photogrammetric reconstruction can produce 3D point clouds of the shoreline with an average of about 580 points per square meter, he says.
Next Steps
The team will continue to work to refine the vessel’s autonomous navigation capabilities and integrate advanced data analytics into the post-processing pipeline. This will not only create maps, but also help generate actionable insights such as detecting changes in waterway environments over time, predicting potential obstructions or risks, and offering recommendations for safe navigation routes, Petereit says. The team also hopes to expand its work to coastal waters.
About the Author
Jim Tatum
Senior Editor
VSD Senior Editor Jim Tatum has more than 25 years of experience in print and digital journalism, covering business, industry, and economic development issues, regional and local government and regulatory issues, and more. In 2019, he transitioned from newspapers to business media full time, joining VSD in 2023.