Robot with artificial bee brain navigates via external stimuli

A team of researchers from Freie Universität Berlin, Bernstein Fokus Neuronal Basis of Learning, and the Bernstein Center Berlin has developed a small vision-enabled robot that perceives environmental stimuli and “learns” to react to them.
Feb. 10, 2014

The robot models its working principles on the nervous system of a honeybee: it links certain external stimuli with behavioral rules. To "see," the robot is equipped with a CMOS camera and an ATMEGA8 microcontroller that performs image processing. The vision system is connected to a computer, and a program running on the computer replicates the sensorimotor network of the insect brain. According to the press release, the data captured by the camera enables the robot to learn by simple principles.
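
The press release does not include source code, so the following is only a minimal sketch of the architecture it describes: a camera frame is reduced to a color percept, which drives a small rate-based sensory-to-motor network. All names here (classify_color, SensorimotorNetwork, the two-by-two weight matrix) are hypothetical illustrations, not the team's implementation.

```python
# Hypothetical sketch of the camera-to-network loop described above.
# None of these names come from the published work; they only
# illustrate the described architecture, with Python standing in
# for whatever the team actually ran on the host computer.

import numpy as np

def classify_color(frame: np.ndarray) -> str:
    """Crude stand-in for the ATMEGA8's on-board image processing:
    report which color channel dominates the captured frame."""
    mean_rgb = frame.reshape(-1, 3).mean(axis=0)
    return ("red", "green", "blue")[int(np.argmax(mean_rgb))]

class SensorimotorNetwork:
    """Toy rate-based network: two sensory units (red, blue) drive
    two motor units (forward, backward) through a weight matrix
    that a learning rule can later modify."""

    def __init__(self) -> None:
        # Rows: sensory units (red, blue); columns: motor units
        # (forward, backward). All associations start unlearned.
        self.weights = np.zeros((2, 2))

    def step(self, color: str) -> str:
        sensory = np.array([color == "red", color == "blue"], dtype=float)
        motor = sensory @ self.weights
        if motor.max() <= 0.0:
            return "stop"  # no learned association yet
        return ("forward", "backward")[int(np.argmax(motor))]

net = SensorimotorNetwork()
frame = np.random.rand(64, 64, 3)   # placeholder for a camera frame
command = net.step(classify_color(frame))  # "stop" until trained
```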

"The network-controlled robot is able to link certain external stimuli with behavioral rules," said Professor Martin Paul Nawrot, head of the research team and professor of neuroscience at Freie Universität Berlin. "Much like honeybees learn to associate certain flower colors with tasty nectar, the robot learns to approach certain colored objects and to avoid others."

In an experiment conducted by the team, the robot was placed in the center of a small room with red and blue objects on the walls. When the robot’s camera focused on a red object, the scientists would trigger a light flash, and the signal would activate a "reward sensor nerve cell" in the artificial network. The simultaneous processing of red and the reward would cause changes in the parts of the network that control the robot’s wheels. As a result, whenever the robot “saw” red, it would move toward it; when it saw blue objects, it would move backward.
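
The team's actual plasticity model is specified in their paper rather than in the article, so the update rule below is only a hedged illustration of the principle: a reward signal delivered while a color is being processed strengthens that color's pathway to a motor action. apply_reward and LEARNING_RATE are invented for this sketch, which reuses the toy network from the previous example.

```python
# Hypothetical reward-modulated update for the toy network above.
# The team's actual plasticity model is described in their NER
# paper; this only illustrates the principle of pairing a color
# stimulus with an experimenter-triggered reward signal.

COLORS = ("red", "blue")
MOTORS = ("forward", "backward")
LEARNING_RATE = 0.5  # invented value for the sketch

def apply_reward(net: SensorimotorNetwork, color: str,
                 motor: str, reward: float) -> None:
    """Strengthen the stimulus-to-action weight when the light
    flash activates the artificial "reward sensor nerve cell"."""
    i, j = COLORS.index(color), MOTORS.index(motor)
    net.weights[i, j] += LEARNING_RATE * reward

# Training episode: the camera sees red, the flash delivers a
# reward, and the red -> forward pathway is potentiated.
apply_reward(net, "red", "forward", reward=1.0)

# Afterward the network approaches red objects on its own.
assert net.step("red") == "forward"
```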

The team's work was published in a paper presented at the 6th International IEEE/EMBS Conference on Neural Engineering (NER). The researchers plan to expand the neural network with additional learning principles, which would make the robot more autonomous.

About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
