MERS tackles the human-robot divide

CSAIL Professor Brian Williams is on the cutting-edge of developing control algorithms that enable successful human-robot coordination.
Photo: Jason Dorfman

Thanks to work underway at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), you might not have to wait until 2062 to travel to work in an aerocar like George Jetson. An autonomous personal air taxi capable of ferrying you to Paris by 5 p.m. may sound futuristic, but it is a current project of CSAIL principal investigator Brian Williams and his Model-based Embedded and Robotic Systems (MERS) group.

Williams’ group is developing control algorithms that enable successful human-robot coordination. Williams strongly believes that robots and autonomous systems can play a major role in the coming years, assisting the elderly, leading search-and-rescue operations and even exploring outer space. Achieving successful integration of robots into the human workforce requires systems that can execute simple, low-level reasoning quickly and efficiently, Williams explains; in that sense, MERS’ methodology is a departure from the typical goal in artificial intelligence of creating systems that can tackle complicated tasks such as chess.

Williams has had a lifelong interest in space exploration, and after hearing of the loss of Mars Observer, which mysteriously vanished in the early 1990s, he became intrigued by the idea of creating an explorer that could not only operate in unknown environments, but also think, diagnose and repair itself under varying circumstances. Williams went on to co-invent Remote Agent, an autonomous system that could reason like an engineer, performing mission planning, diagnosis and repair from engineering models. Remote Agent controlled the NASA Deep Space 1 asteroid-encounter mission in 1999.

Through his work, Williams says, he grapples with the question, “How do we make explorers that can think, establish their own goals and also deal with all the things that go wrong along the way?”

The answer, for Williams, comes in the form of model-based autonomy, a new automated reasoning approach developed by MERS. Model-based autonomy allows humans to impart common-sense knowledge to autonomous systems through strategic guidance. Given knowledge of their hardware and surrounding environment, systems using model-based autonomy can plan and execute actions to achieve human-specified goals by reasoning from models of themselves and their environment.

“Part of the idea is to program autonomous systems using what looks like a traditional programming language, but the programs are really specifying a common-sense description of how we want the system to behave, not the actions that it should perform. This is often a description of what we want the system to do, assuming that things don’t go wrong,” Williams says. “Then the autonomous system needs to figure out what things could go wrong, by reasoning from common sense, and needs to figure out how to recover. This involves significant low-level reasoning about how things break and common ways to repair these breaks.”
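A minimal sketch of that idea, written in Python with invented component names (not MERS’ actual software), might look like this: the operator states only the goal, and a small executive consults a component model to diagnose a fault and choose a recovery action before resuming the goal.

```python
# Hypothetical sketch of model-based execution: the program states the
# goal; the executive reasons from a component model about what broke
# and how to recover, rather than following hand-written contingencies.

# Component model: known failure modes and the repairs believed to fix them.
COMPONENT_MODEL = {
    "thruster_a": {"valve_stuck": "cycle_valve", "no_thrust": "switch_to_backup"},
    "camera": {"no_image": "power_cycle_camera"},
}

def diagnose(observations):
    """Map anomalous observations to failure modes and repair actions."""
    for component, symptom in observations.items():
        repairs = COMPONENT_MODEL.get(component, {})
        if symptom in repairs:
            yield component, symptom, repairs[symptom]

def execute(goal, observations):
    """Pursue the stated goal; if something breaks, plan a recovery first."""
    recoveries = [f"recover {c} ({fault}) via {fix}"
                  for c, fault, fix in diagnose(observations)]
    return recoveries + [f"resume goal: {goal}"]

# The operator specifies *what* should happen, not how to handle each fault.
print(execute("point camera at target", {"thruster_a": "valve_stuck"}))
# ['recover thruster_a (valve_stuck) via cycle_valve',
#  'resume goal: point camera at target']
```

The point of the sketch is the division of labor: the goal statement stays simple, while fault diagnosis and recovery come from the model rather than from explicitly programmed contingency code.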

The autonomous personal air taxi, or PT, being simulated by MERS in collaboration with Boeing Research and Technology, is one example of Williams’ work with model-based autonomy. When traveling in the PT, a passenger would interact with the vehicle as if it were a cab driver, offering information on the destination, desired arrival time and route. PT would check the weather, plan a safe itinerary and select alternative landing sites in case of emergency. In the event of a weather disruption or other unforeseen event, PT would diagnose the situation, alert the passenger and present alternatives, such as a new landing site or skipping a desired landmark, while explaining its reasoning along the way.

Through his work with model-based autonomy, Williams also explores human-robot coordination under varying and uncertain circumstances. For instance, the group has taught Athlete, a large rover being developed to support human exploration of the lunar surface, to perform tasks such as grasping through human demonstration and interaction.

Additionally, MERS and Boeing are researching the potential for increased collaboration between humans and robots in the workplace. One joint effort focuses on how a human-robot team can work together fluidly using shared goals and plans, while taking each worker’s different capabilities into account. In one example, students act as visual sensors to support the work of a robot team consisting of the whole-body robot PR2 and two single-armed manipulators. Williams is also working on developing robots with a strong basic instinct for risk, in an effort to build greater human trust in using robots within the workforce.

Search and rescue is another area where Williams sees robots proving especially useful. “The idea is to have a set of robotic scouts that look at a mission plan, figure out areas where the plan is risky and then go off and take pictures of the area to make sure that it’s safe,” Williams explains. “The robots should be able to figure out where to go without anyone telling them. But if the robots assess that their actions are too risky, then the idea is that they will call humans and ask them for help.”

In dealing with risk, Williams and his group have developed risk-sensitive control algorithms, which allow a system to take on risk up to a user-specified level while maximizing the benefit gained from the risk it takes. Risk-sensitive control applications that the group has explored include deep-sea exploration using autonomous vehicles, specifically underwater mapping of a deep-sea canyon and control of a hovering deep-sea vehicle. Williams is also applying this work to controlling a grid of sustainable homes, as he feels that computer science can have a major impact in moving mankind towards greater energy efficiency.
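A toy sketch of that principle, with invented numbers and plan names rather than the group’s actual algorithms: among candidate plans, keep only those whose estimated failure probability stays within the user’s risk bound, then choose the one with the greatest expected benefit, or defer to a human operator if none qualifies.

```python
# Hypothetical risk-bounded plan selection: maximize expected benefit
# subject to a user-specified bound on the probability of failure.

candidate_plans = [
    # (name, expected benefit, estimated probability of failure)
    ("conservative survey", 10.0, 0.001),
    ("close canyon pass", 25.0, 0.02),
    ("aggressive descent", 40.0, 0.15),
]

def best_plan_within_risk(plans, risk_bound):
    """Return the highest-benefit plan whose failure risk fits the bound."""
    feasible = [p for p in plans if p[2] <= risk_bound]
    if not feasible:
        return None  # everything is too risky: ask a human operator for help
    return max(feasible, key=lambda p: p[1])

print(best_plan_within_risk(candidate_plans, risk_bound=0.05))
# ('close canyon pass', 25.0, 0.02): riskier than the conservative
# survey, but the extra benefit fits within the 5 percent risk bound.
```

The user decides how much risk is acceptable; the system then spends that risk budget where it buys the most benefit, which is the trade-off Williams describes.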

Along with CSAIL postdoc J. Zico Kolter and assistant professor Youssef Marzouk, Williams is co-organizing a seminar series for fall 2011 and designing a graduate course for spring 2012, both of which are focused on computational methods for sustainability. He is also involved in designing a new, flexible undergraduate engineering degree that allows students to create their own field of concentration. Possibilities include the environment, energy, transportation and computational sustainability.

“Many problems of societal need are really computational problems. Their solution often involves modeling the environment and involves making decisions about how to improve the environment or make better use of resources,” Williams says. “This very much involves machine learning, optimization and control, together with higher-level decision making, which is a lot of what CSAIL is about.”

For more on Williams’ work, check out http://groups.csail.mit.edu/mers/.
