
Vision-guided robot helps make new discoveries under the ice in Antarctica

A team of researchers funded by the National Science Foundation and NASA has developed and deployed a vision-guided robotic vehicle to study polar environments under the ice in Antarctica.
Jan. 28, 2015


The robot was built by Bob Zook, an ROV engineer recruited by the University of Nebraska-Lincoln (UNL), and Justin Burnett, a UNL mechanical engineering graduate student and ANDRILL team member, along with Frank Rack, executive director of the ANDRILL (Antarctic Drilling Project) Science Management Office and UNL's principal investigator for the project.

Known as "Deep SCINI" (submersible capable of under-ice navigation and imaging), the remotely-operated robot was deployed after the team used a hot water drill to bore through the ice. Aboard Deep SCINI are three cameras (upward, downward, and forward-looking), a conductivity-temperature sensor, LEDs from VideoRay, a gripper that can grasp objects, and a syringe sampler used for collecting water samples. A 300-foot tether includes a powerline that provides Ethernet for data communications while also keeping the remotely-operated vehicle (ROV) from getting lost.

The cameras used on the robot are Elphel NC353L-369 cameras, which are open-source hardware and software IP cameras. Featuring a 5-MPixel CMOS image sensor, each camera is housed inside a custom 2.5"-diameter x 7"-long pressure housing tested to 3,000 psi. The cameras suit the vehicle well in terms of what the team is looking for, according to Zook, who said that when the robot was built, the team decided to use streaming JPEGs instead of video.

"We ended up with a folder of more than 700,000 jpegs in it at the end of the dive, and there are a lot of advantages to that," he said.

With this method, the team can store large amounts of data in the JPEGs' excess header space (the image metadata). That way, if the team ever distributes a photograph taken by the robot, the sensor data collected travels with that photograph. With other approaches, someone might keep an Excel spreadsheet of timestamps and look up what a particular sensor read at a particular time, but that is a lot of extra work, Zook suggested.
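
To illustrate the idea, here is a minimal sketch (not the team's actual code) of how sensor readings might be packed into a JPEG's header using the third-party Python library piexif; the EXIF UserComment field, the JSON encoding, and the sample values are all illustrative assumptions.

```python
# A minimal sketch of the data-in-metadata idea, not the team's actual code.
# Each sensor reading is packed into the JPEG's EXIF UserComment field so the
# measurement travels with the image. Uses the third-party piexif library;
# the field choice, JSON encoding, and sample values are assumptions.
import json
import piexif

def embed_sensor_data(jpeg_path: str, readings: dict) -> None:
    """Write a dict of sensor readings into the image's EXIF UserComment."""
    exif_dict = piexif.load(jpeg_path)
    # EXIF UserComment begins with an 8-byte character-code prefix.
    payload = b"ASCII\x00\x00\x00" + json.dumps(readings).encode("ascii")
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = payload
    piexif.insert(piexif.dump(exif_dict), jpeg_path)

# Hypothetical example: attach conductivity-temperature readings to a frame.
embed_sensor_data("frame_000123.jpg", {
    "timestamp": "2014-12-16T03:21:09Z",
    "conductivity_mS_cm": 27.4,
    "temperature_C": -1.9,
})
```

The payoff is the one Zook describes: any copy of the photograph carries its own measurements, with no side lookup table to maintain.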

In addition, this method lets the cameras capture as many images as possible.

"We also do not collect at a fixed frame rate; we collect images as fast as we can With the cameras that we are using, that can be anywhere from 2 fps to 300 fps, but this is mostly based on the amount of light we have, which is a continuously varying commodity on the underwater vehicle," he said.


About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
