Robots and Vision: two disparate technologies?
Andy Wilson
Editor at Large
Last month's International Robots & Vision Show provided attendees and journalists with a glimpse of what's new in robot and vision technologies and products. One innovation was demonstrated in the Barrett Technology (Cambridge, MA) booth. There, researchers showed a development model of the company's BA7-310 robotic arm, the result of work led by Professor Robert Howe at the Harvard University Division of Engineering and Applied Sciences (Cambridge, MA), in collaboration with other universities.
This model lets the robotic arm apply varying degrees of force to objects through feedback, emulating a human arm. Howe and his colleagues attached a force sensor with a hemispherical cap to the end of the arm to probe the environment. The arm can thus sense whether it is in contact with a soft or hard object and then apply an appropriate force.
Howe says, "Current industrial robotic arms are position-controlled. They go where they've been told to go, even if there's something in the way. We are developing robots with the BA7-310 arm that can interact safely with unstructured environments. If the arm finds an unexpected object in the environment and its initial contact is soft, the robot can explore the object without damaging it."
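The contact behavior Howe describes can be framed as a simple force-limited probing loop. The sketch below is purely illustrative, not the BA7-310's actual control law: the force threshold, step size, and sensor/actuator callables are all assumed values and interfaces.

```python
# Illustrative sketch of force-limited probing, not the BA7-310's
# actual controller. Threshold and step size are assumed values.

SOFT_CONTACT_LIMIT_N = 2.0   # assumed force (newtons) separating "soft" from "hard"
STEP_MM = 0.5                # assumed probe step size

def probe(read_force_n, advance_mm, max_travel_mm=50.0):
    """Advance until the sensor reports contact, then classify it.

    read_force_n: callable returning the sensed normal force in newtons.
    advance_mm:   callable moving the arm forward by a given distance.
    """
    travelled = 0.0
    while travelled < max_travel_mm:
        force = read_force_n()
        if force > 0.0:
            # Contact made: decide whether further exploration is safe.
            return "soft" if force < SOFT_CONTACT_LIMIT_N else "hard"
        advance_mm(STEP_MM)
        travelled += STEP_MM
    return "no contact"
```

The point of the structure is that motion is gated by the sensed force at every step, unlike a position-controlled arm that moves to its commanded pose regardless of what it touches.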
Several companies exhibited industrial robots that, through computer programming and some imaging data, could perform functions such as pick and place, food inspection, and automated parts sorting. However, the current state of industrial robot technology owes more to the programming models of the last three decades than to forward-looking interdisciplinary approaches.
As a supporter of forward-looking approaches, Maja Mataric, neuroscience program director at the Robotics Research Laboratory of the University of Southern California (Los Angeles, CA), says, "The study of intelligent behavior requires such an approach. These systems must integrate perception, representation, and learning and span such topics as artificial intelligence, machine learning, and robotics while drawing from cognitive science and neuroscience." While Mataric studies how complex adaptive group behaviors can emerge from simple local interactions among individual machines, other researchers are developing systems that learn by modeling how human beings perform tasks.
At the University of Electro-Communications (Tokyo, Japan), for example, researchers have developed a vision-based robot that autonomously recognizes human motions in real time and then emulates them without traditional computer programming. Such a system must perform complex three-dimensional analysis of human motion, interpret the observed actions, and produce task models that can then control a robot. After this learning process, the system could model many manual manufacturing tasks without explicit programming.
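One minimal way to frame the recognize-then-emulate idea is template matching: compare an observed motion trace against stored, labeled reference traces and pick the closest task. The templates, labels, and distance metric below are illustrative assumptions, not details of the University of Electro-Communications system.

```python
# Toy nearest-template motion recognizer. An observed joint-angle trace
# is matched against labeled reference traces; the best match names the
# task to emulate. All data and the metric here are assumed for illustration.

def trace_distance(a, b):
    """Sum of squared differences between two equal-length angle traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(observed, templates):
    """Return the label of the stored template closest to the observation.

    templates: dict mapping task label -> reference joint-angle trace.
    """
    return min(templates, key=lambda label: trace_distance(observed, templates[label]))

# Assumed reference traces for two hypothetical task primitives.
TEMPLATES = {
    "reach": [0.0, 0.2, 0.4, 0.6],
    "retract": [0.6, 0.4, 0.2, 0.0],
}
```

A real system would work on full 3-D pose sequences and learned task models rather than fixed templates, but the same observe-classify-reproduce structure applies.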
Financing the endeavor
Unfortunately, novel robotic research requires a large investment, one that many OEM companies cannot afford on their own. But through collaborative funding of organizations such as the University of Electro-Communications and clever licensing agreements, such systems could be developed within a few years.
To aid in this endeavor, robotic- and vision-related companies should join forces and support advanced robotic research. Otherwise, the two technologies could each take decades of development at great expense, and the words "Robots" and "Vision" will remain isolated rather than united.