
Team building humanoid robot


About 20 reporters covering the National Innovation Summit visited the Artificial Intelligence Laboratory last Friday to see the lab's humanoid robot as well as demonstrations and commercial spinoffs of several other research projects. See the March 11 issue of MIT Tech Talk for stories about the AI Lab's intelligent room, machine-vision image browsers, a catching/throwing robot, and virtual-reality touch devices.

Rodney Brooks's ultimate goal is to build Commander Data.

When pressed, he admits that achieving a robot akin to the human-like Star Trek character "probably won't happen in my lifetime... [but] that's what drives me -- the dream of having a thinking robot."

And along the way Professor Brooks, director of the Artificial Intelligence Laboratory, believes his work will help us understand human intelligence. "The act of creating a thinking robot forces us to ask the right questions with respect to how intelligence works, since we have to have all the parts there to make it happen," he said. The resulting robot could also be used to test various theories about human mental development.

Meet Cog, the humanoid robot that's kicking off this grand adventure and the only machine of its kind in the United States. Now four years old, it is a testbed for the ideas of Professor Brooks and his colleagues about how to build an intelligent robot.

Cog, which physically approximates the upper torso of a large man, has already "learned" some basic skills. For example, it can reach its hand to a visual target -- something that children learn very early on -- and it can shake its head back and forth or nod up and down in imitation of a researcher.

In getting this far, the researchers have tackled and solved some serious engineering problems. For example, Cog's arms are safe to interact with, unlike those of conventional robots. As a result, some of the robot's components are patented and available for licensing through the Technology Licensing Office.

A HUMAN FORM

Cog's creators base its design on "embodiment": the theory that people -- and robots -- need the form of the human body to shape human thoughts. "Our experience of the world is very constrained by our bodies," said Dr. Brooks, the Fujitsu Professor of Computer Science and Engineering.

A robot that looks like a human is also easier for people to interact with, and the researchers believe that such interactions are critical to Cog's development. "A child doesn't learn by sitting in a black box -- he learns by interacting with parents, caregivers and other people," said Brian M. Scassellati, a graduate student in the Department of Electrical Engineering and Computer Science (EECS).

"People walk up to Cog, talk to it, look it in the eye," Professor Brooks said. That provides "a rich source of data for it to learn from."

In keeping with embodiment, Cog was built with perceptual abilities such as vision and some basic motor skills (it can move its limbs). This architecture allows the robot to interact with its environment in human-like ways. The idea is that simple interactions will lead to simple behaviors, which in turn build on each other so that more complicated behaviors become easier to acquire.

"That's what nature does. Children first learn to look at objects, then to reach for them," said Mr. Scassellati, whose work on Cog focuses on social interactions.

The researchers have already "taught" Cog some skills related to hand-eye coordination, such as how to shake its head and reach for an object. "It's not clear what other tasks will be made easier for Cog now that it knows how to reach for something, but we have some data that say this overall approach could be successful," Mr. Scassellati said. For example, in earlier work Professor Brooks and colleagues built insect robots that learned to walk using this "behavior-based" approach to intelligence.
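
To give a flavor of that behavior-based approach, here is a minimal sketch (illustrative only; the behavior names and the arbitration rule are assumptions, not code from Cog or the insect robots) of how simple behaviors can be layered so that more capable behavior emerges from the interplay of basic ones:

    # Illustrative sketch only -- not code from Cog or the insect robots.
    # Simple behaviors run independently; a higher layer acts only when
    # nothing more basic needs to fire.

    class Behavior:
        def active(self, sensors):      # should this behavior fire?
            return False
        def command(self, sensors):     # motor command if it fires
            return None

    class AvoidObstacle(Behavior):      # lowest layer: a basic reflex
        def active(self, sensors):
            return sensors["range"] < 0.2
        def command(self, sensors):
            return "back_away"

    class Wander(Behavior):             # higher layer: default exploration
        def active(self, sensors):
            return True
        def command(self, sensors):
            return "step_forward"

    def arbitrate(layers, sensors):
        # Layers are ordered from most to least basic; the first active
        # behavior wins.
        for behavior in layers:
            if behavior.active(sensors):
                return behavior.command(sensors)

    layers = [AvoidObstacle(), Wander()]
    print(arbitrate(layers, {"range": 0.5}))   # -> "step_forward"
    print(arbitrate(layers, {"range": 0.1}))   # -> "back_away"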

HOW COG LEARNS

Cog learns via software programs that process the information flowing in from the robot's interactions with its environment. Cog is not, however, preprogrammed for a given task. "That's fine for robots used in things like factory automation, where a car door appears in exactly the same location and there is exactly the same welding sequence," Mr. Scassellati said. "But it's not as useful if you're in an environment that changes, because once it changes, all the mathematics you went through for a specific task won't work."

The Cog team does in fact want the robot to work in many different situations, so the researchers use a variety of standard "learning" programs that mimic development. These programs essentially allow Cog to learn by trial and error: when it first tries a given task, it isn't very good at it, but each time it fails it learns from its mistakes and improves incrementally. The robot can then apply what it has learned to completing the same task under different conditions.

For example, about two years ago, the researchers gave Cog a new head. With its old head, however, they had taught the robot to saccade its eyes, or move them very rapidly to center an object in the field of view (humans do this two to three times per second). "When we switched to the new head, we just took the same piece of software and let it run again, and the robot basically relearned how to saccade even though the mechanics of the new head were completely different," Mr. Scassellati said.
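
A minimal sketch of what such trial-and-error relearning might look like (illustrative only; the simulated eye and learning rule below are assumptions, not the Cog software): the robot keeps a rough mapping from where a target appears to how far to move its eyes, and the same loop that calibrated the mapping on the old head simply recalibrates it on the new one.

    # Illustrative sketch only, not the Cog software: a saccade gain learned
    # by trial and error. The robot guesses a motor command for a target seen
    # off-center, observes how far off the gaze lands, and nudges its estimate.
    # Swap in a new head (a different true gain) and the same loop relearns it.

    def make_eye(true_gain):
        """Simulated eye: motor command -> how far the gaze actually moves."""
        return lambda command: true_gain * command

    def learn_saccade_gain(eye, trials=200, rate=0.1):
        gain = 1.0                          # initial guess: command == offset
        for _ in range(trials):
            target_offset = 10.0            # target appears 10 units off-center
            command = target_offset / gain  # current best guess at the command
            landed = eye(command)           # where the gaze actually ends up
            error = landed - target_offset  # residual error after the saccade
            gain += rate * error / command  # nudge the estimate to shrink it
        return gain

    old_head = make_eye(true_gain=0.8)
    new_head = make_eye(true_gain=1.5)      # completely different mechanics
    print(learn_saccade_gain(old_head))     # converges near 0.8
    print(learn_saccade_gain(new_head))     # same code converges near 1.5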

THE ROBOT'S BRAIN

Key to all these efforts is Cog's brain, which is composed of many sets of computers spread out around the robot's body and down its back. "Placing most of the computers offboard relieves us of many engineering problems," explained EECS graduate student Matthew M. Williamson.

Each set of computers represents a different part of Cog's "nervous system." For example, the computers in one refrigerator-sized rack handle motor control. A part of the brain under construction -- Cog's "associative cortex" -- will allow the robot to do more complex, interactive behaviors.

Cog's eyes and arms are also based on those of humans. The robot's visual system includes the same basic controls that we have (such as the ability of its eyes to track a moving object and saccade).

Recently, Mr. Williamson put a third generation of arms on the robot that "exploit the natural, inherent properties of arms themselves to make motor control tasks easier," he said. Picture putting a container of milk back in the fridge. "You use the pendulum properties of your arm to do that in a smooth motion," Mr. Williamson said. Compare that to "rigidly moving the arm, which is the conventional robot-like way to do the task."

Conventional robot arms are also dangerous for humans to work around. They're programmed to move a certain way with a certain force, and if they hit an object that's in the way, they'll keep pushing. That can break the arm. In addition, "if the object in the way is you -- a human -- it could get messy," Professor Brooks said.

Consequently, the arms developed by the Cog team are reactive to their environment. Among other things, if something gets in the way of an arm, it will stop moving. Depending on the task at hand, the arms can also be made rigid or floppy.

Central to this work are a physical spring system and "virtual spring" software developed by Mr. Williamson and Professor Gill A. Pratt of the AI Lab and EECS.

"With the physical spring the arms can be compliant, stable, and withstand collisions and shock loads without damage," Mr. Williamson said. The software allows the arms to work more like those of a human than those of a conventional robot. There is a patent on one part of the system, with a second patent pending.

Other parts of the robot that are currently under development include pressure-sensitive skin for the body and head, and the ability for the face to be expressive (that work is being conducted by EECS graduate student Cynthia Breazeal Ferrell).

Cog, however, will never have legs. "Just solving the engineering of legs -- and getting it to stand -- would take all of our time," Mr. Williamson said. "Besides, we have enough things to study even without legs. Lots of interesting behavior occurs in infants long before they walk."

So Cog continues to grow and develop. "We have no idea where this will go," said Mr. Scassellati, "but I think that when it's over, we'll all have earned associate degrees in philosophy!"

Other members of the Cog team are EECS graduate students Robert E. Irie, Matthew J. Marjanovic and Charles C. Kemp. The work is funded in part by the Office of Naval Research and in part by unrestricted gift funds.

A version of this article appeared in MIT Tech Talk on March 18, 1998.
