MIT News - Control theory
https://news.mit.edu/topic/mitcontrol-theory-rss.xml
MIT news feed about: Control theory
Mon, 08 Jun 2020 13:55:01 -0400

Professor Emeritus Michael Athans, pioneer in control theory, dies at 83
https://news.mit.edu/2020/professor-emeritus-michael-athans-control-theory-pioneer-dies-0608
Longtime professor of electrical engineering was also a transformative director of the MIT Laboratory for Information and Decision Systems.
Mon, 08 Jun 2020 13:55:01 -0400 | Laboratory for Information and Decision Systems

<p>MIT electrical engineering and computer science Professor Emeritus Michael Athans died peacefully on May 26 at his home in Clearwater, Florida, at the age of 83.</p>
<p>Athans was born in Drama, Macedonia, Greece, in 1937. He came to the United States in 1954 for a one-year exchange visit under the auspices of the American Field Service (AFS), and attended Tamalpais High School in Mill Valley, California. His AFS year was a defining one: he fell in love with America and decided to stay in the United States when the exchange ended. He went on to attend the University of California at Berkeley from 1955 to 1961, where he received his BS in 1958 (with highest honors), MS in 1959, and PhD in control in 1961.</p>
<p>Athans had a remarkable career in academia. A pioneer in the field of control theory, he helped shape modern control theory and spearheaded, together with students and colleagues, the field of multivariable control system design and the field of robust control. These fundamental contributions were made in the course of Athans’s long and deeply accomplished tenure at MIT, as a member of the technical staff at Lincoln Laboratory from 1961 to 1964, and as a Department of Electrical Engineering and Computer Science faculty member from 1964 to 1998.</p>
<p>According to John Tsitsiklis, the Clarence J. Lebel Professor of Electrical Engineering and Computer Science, who was also a student of Athans: “It is hard to overstate the impact and influence Mike had on the field of systems control theory. He led the development of central methodologies. He broadened the scope of the field. And he amplified his impact by supporting and nurturing a whole generation of researchers, including myself.”</p>
<p>He further influenced the field as a transformative director of the Laboratory for Information and Decision Systems (LIDS), which was called the Electronic Systems Laboratory when Athans took the helm in 1974. He recognized the promise of the systems and control field in a vast array of domains, the need for new methodologies geared toward large-scale systems, and the confluence of control and communications. In this spirit, Athans changed the lab’s name to LIDS in 1978, which remains the lab’s name today. This forward-looking choice reflected the lab’s intellectual expansion and the embrace of new domains, ranging from transportation to energy to economics and more.</p>
<p>Key to this expansion was groundbreaking work by Athans and colleagues that made multivariable control design into a practical engineering methodology that could be applied to complex, large-scale, distributed systems — which Athans saw, correctly, as the future of system design. This adaptation of control methodology, combined with ideas from communications, networks, optimization, and control, helped chart the lab’s course, and was a central achievement of Athans’s intellectual and professional leadership during his tenure as director, which ended in 1981.</p>
<p>Athans was not only a highly accomplished researcher, but also an award-winning and dedicated educator. In the aspect of his work he cherished most, he mentored and supervised the theses of more than 100 graduate students over the course of his career; he developed a course on modern control theory, producing nearly 70 videotaped lessons that were critical to the training of hundreds of practicing engineers; and he co-authored three books, most notably “Optimal Control” (with Peter Falb), a foundational text that influenced generations of students. In addition, he brought his research into practice by co-founding, in 1978, ALPHATECH in Burlington, Massachusetts, where he served as chair of the board of directors and chief scientific consultant.</p>
<p>Described by friends as a vital force, Athans guided his students with care and often touched the lives of his friends and colleagues in profound ways. “Mike was immensely important to me, especially at the start of my time on the faculty at MIT,” says Alan Willsky, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science and a former LIDS director. “He was exceptionally generous, providing me with travel support and opportunities to lead research programs early in my career.”</p>
<p>Willsky’s remembrance of Athans as a strong, generous presence has been echoed by many colleagues and former students, including Nils Sandell PhD '74, who says “Michael was my PhD supervisor, ran LIDS when I was an MIT faculty member, and he was the founding chairman of ALPHATECH, where I was president. He was a great teacher with a highly entrepreneurial spirit and was a steadfast friend. I will miss him greatly.”</p>
<p>Upon retirement, Athans moved to Lisbon, Portugal, where he lived for 15 years and received an honoris causa doctorate from the Universidade Técnica de Lisboa in 2011. Upon returning from Portugal, he settled in Clearwater, Florida, to be near his sons Brett and Sean, as well as his grandson Michael.</p>
<p>Michael Athans was predeceased by his son John Athans Spodick. He is survived by his loving partner Lena Corsentino; sons Stephen Athans Spodick and his wife Kathleen of Holden, Massachusetts, Brett Athans of St. Petersburg, Florida, Sean Athans of St. Pete Beach, Florida, and Stavros Valavanis, of New York, New York; as well as four grandchildren — Ryan and Christopher Spodick of Burlington, Vermont, Nicholas Spodick of Holden, Massachusetts, and Michael Athans of St. Petersburg, Florida. He is also survived by his brother Sotiris Athanasiadis and wife Sofia of Thessaloniki, Greece, and their two children, Chrysa and Yannis.</p>
<p>He will be remembered for his leadership, kindness, sharp wit, strong will, and stories of growing up in Greece. Services are private.</p>
Michael Athans at MIT in the 1970s. Photo: Calvin Campbell/MIT
Topics: Laboratory for Information and Decision Systems (LIDS), Electrical engineering and computer science (EECS), Lincoln Laboratory, Obituaries, Control theory, Faculty, MIT Schwarzman College of Computing

Robot meets world
https://news.mit.edu/2013/robot-limb-collisions-0321
A new way of reasoning about what happens when a robot’s limb strikes an object could lead to more efficient and reliable robotic-control systems.
Thu, 21 Mar 2013 04:00:02 -0400 | Larry Hardesty, MIT News Office

When a robot is moving one of its limbs through free space, its behavior is well-described by a few simple equations. But as soon as it strikes something solid — when a walking robot’s foot hits the ground, or a grasping robot’s hand touches an object — those equations break down. Roboticists typically use ad hoc control strategies to negotiate collisions and then revert to their rigorous mathematical models when the robot begins to move again.<br /><br />Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory are hoping to change that, with a new mathematical framework that unifies the analysis of both collisions and movement through free space. The work could lead to more efficient controllers for a wide range of robotic tasks, but it could also help guarantee the stability of control algorithms developed through trial and error — or of untried, but promising, new algorithms. <br /><br />In a pair of recent papers, the researchers demonstrate both applications. At last year’s International Workshop on the Algorithmic Foundations of Robotics, they showed how their technique can improve trajectory planning in complex robots like the experimental Fast Runner, an ostrich-like bipedal robot being built at the Florida Institute for Human and Machine Cognition. 
<br /><br />And in a paper that has been short-listed for the best-paper award at this year’s Hybrid Systems: Computation and Control conference in April, they use their framework to establish stability conditions for some simple mechanical systems undergoing collisions.<br /><br />According to associate professor of computer science and engineering Russ Tedrake, whose group did the new research, Fast Runner offers a good illustration of the problems posed by collision. Ordinarily, Tedrake says, a roboticist trying to develop a controller for a bipedal robot would assume that the robot’s foot makes contact with the ground in some prescribed way: say, the heel strikes first; then the forefoot strikes; then the heel lifts. <br /><br />“That doesn’t work for Fast Runner, because there’s a compliant foot that could hit at any number of points, there’s joint limits in the leg, there’s all kinds of complexity,” Tedrake says. “If you look at all the possible contact configurations the robot could be in, there’s 4 million of them. And you can’t possibly analyze them all independently.”<br /><br /><strong>The mysterious table</strong><br /><br />Even that combinatorial explosion, however, doesn’t do justice to the complexity of the problem. “Not only do you have this immense number of potential contacts and trajectories, but you also have things like non-uniqueness of solutions,” says Michael Posa, a graduate student in Tedrake’s group and lead author on both new papers. “Given the laws that we would normally write down that describe the evolution of the system, there may be multiple trajectories that are going to satisfy that because of the oddities of friction laws.”<br /><br />To illustrate this idea, Tedrake uses the analogy of a four-legged table resting on the ground. “If you give the table a push, we don’t have any models that will predict what that table’s going to do,” Tedrake says. 
<br /><br />In Newtonian physics, Tedrake explains, the table would be modeled as an aggregate mass. But that model leaves open an infinite number of possibilities for the distribution of mass across the table’s legs. Since the effects of friction depend on the specifics of the distribution, the classical model underdetermines the behavior of the table when shoved.<br /><br />In order to prove the stability of a control system for a robot that’s colliding with the world, then, it’s necessary to evaluate not only every possible configuration of the point of the contact, but also every possible solution of the resulting equations. That’s precisely what Posa and Tedrake — together with Mark Tobenkin, another grad student in Tedrake’s group, and Cecilia Cantu, an undergraduate major in mechanical engineering — have found a way to do.<br /><br /><strong>Expression compression</strong><br /><br />The key to their approach is to describe opposed possibilities for the state of a robotic system using simple algebraic expressions. For instance, as the foot of a bipedal robot approaches the ground, either the force exerted by the ground — call it <i>F</i> — or the distance to the ground — call it <i>d</i> — is equal to zero. So the equation <i>Fd</i> = 0 holds whether the robot’s foot is moving through free space or touching the ground. Just a few such equations give the researchers enough mathematical purchase on the problem of collision that they can draw boundaries around the whole space of solutions.<br /><br />The result is not a precise description of how a robot will behave in any given instance, but it is enough to offer guarantees of stability. Again, Tedrake explains by invoking the table analogy. “Given all the things I know about the frictional forces on the legs, I can’t tell you where the table’s going to go,” Tedrake says. 
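<br /><br />The unifying equation <i>Fd</i> = 0 described above can be sketched in a few lines. This is a made-up illustration, not the researchers’ code: a hypothetical `contact_state` function returns the force and gap for a foot at a given height, and the complementarity condition holds in both regimes.

```python
# Sketch of the complementarity condition (a made-up illustration, not the
# researchers' code): at every instant either the contact force F or the
# gap d to the ground is zero, so F*d == 0 holds in both regimes.

def contact_state(gap: float, stiffness: float = 1000.0):
    """Return (F, d) for a foot at signed height `gap` above the ground."""
    if gap > 0.0:                       # free space: no contact force
        return 0.0, gap
    return -stiffness * gap, 0.0        # contact: gap closed, penalty force pushes back

for g in (0.5, 0.1, 0.0, -0.01, -0.2):
    F, d = contact_state(g)
    assert F * d == 0.0                 # the unifying equation F*d = 0
    assert F >= 0.0 and d >= 0.0        # complementarity: both quantities nonnegative
```

Because one factor is exactly zero in every state the system can visit, the same algebraic constraint covers flight and contact, which is what lets the analysis draw boundaries around the whole space of solutions.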
“But I can tell you that it won’t hit the wall.”<br /><br />“The hardest thing about robots, especially if you want to get them to do something very dynamic, is when these contact points with the world change,” says Aaron Ames, an assistant professor of mechanical engineering at Texas A&M University and head of the A&M Bipedal Experimental Robotics Lab. “If you’re trying to assess some stability notion with all those things changing, it’s this huge complexity explosion that most people just haven’t wanted to deal with. It’s too much to wrap your head around, so very few people have been brave enough to attack it.” <br /><br />Ames acknowledges that, so far, the MIT researchers have applied their analytic techniques only to simple systems. But “the way their stuff is framed is in a general context that would be applicable to more complex systems,” Ames says. “The pieces are there. At least the starting point is there. And it’s a very good one.”

Image: Allegra Boverman; Christine Daniloff/MIT
Topics: Computer Science and Artificial Intelligence Laboratory (CSAIL), Control theory, Robotics, Robots

Can control theory make software better?
https://news.mit.edu/2013/can-control-theory-make-software-better-0319
Techniques used to ensure that airplanes won’t stall out in flight could be adapted to prove that computer programs won’t divide by zero.
Tue, 19 Mar 2013 04:00:02 -0400 | Larry Hardesty, MIT News Office

“Formal verification” is a set of methods for mathematically proving that a computer program does what it’s supposed to do. It’s universal in hardware design and in the development of critical control software that can’t tolerate bugs; it’s common in academic research; and it’s beginning to make inroads in commercial software.<br /><br />In the latest issue of the journal <i>IEEE Transactions on Automatic Control</i>, researchers from MIT’s Laboratory for Information and Decision Systems (LIDS) and a colleague at Georgia Tech show how to apply principles from control theory — which analyzes dynamical systems ranging from robots to power grids — to formal verification. The result could help computer scientists expand their repertoire of formal-verification techniques, and it could be particularly useful in the area of <a href="/newsoffice/2012/loop-perforation-0522.html" target="_self">approximate computation</a>, in which designers of computer systems trade a little bit of computational accuracy for large gains in speed or power efficiency.<br /><br />In particular, the researchers adapted something called a Lyapunov function, which is a mainstay of control theory. The graph of a standard Lyapunov function slopes everywhere toward its minimum value: It can be thought of as looking kind of like a bowl. 
If the function characterizes the dynamics of a physical system, and the minimum value represents a stable state of the system, then the curve of the graph guarantees that the system will move toward greater stability.<br /><br />“The most basic example of a Lyapunov function is a pendulum swinging and its energy decaying,” says Mardavij Roozbehani, a principal research scientist in LIDS and lead author on the new paper. “This decay of energy becomes a certificate of stability, or ‘good behavior,’ of the pendulum system.”<br /><br />Of course, most dynamical systems are more complex than pendulums, and finding Lyapunov functions that characterize them can be difficult. But there’s a large literature on Lyapunov functions in control theory, and Roozbehani and his colleagues are hopeful that much of it will prove applicable to software verification.<br /><br /><strong>Skirting dangers</strong><br /><br />In their new paper, Roozbehani and his coauthors — MIT professor of electrical engineering Alexandre Megretski and Eric Feron, a professor of aerospace software engineering at Georgia Tech — envision a computer program as a set of rules for navigating a space defined by the variables in the program and the memory locations of the program instructions. Any state of the program — any values for the variables during execution of a particular instruction — constitutes a point in that space. Problems with a program’s execution, such as dividing by zero or overloading the memory, can be thought of as regions in the space.<br /><br />In this context, formal verification is a matter of demonstrating that the program will never steer its variables into any of these danger zones. To do that, the researchers introduce an analogue of Lyapunov functions that they call Lyapunov invariants. 
If the graph of a Lyapunov invariant is in some sense bowl shaped, then the task is to find a Lyapunov invariant such that the initial values of the program’s variables lie in the basin of the bowl, and all of the danger zones lie farther up the bowl’s walls. Veering toward the danger zones would then be analogous to a pendulum’s suddenly swinging out farther than it did on its previous swing.<br /><br />In practice, finding a Lyapunov invariant with the desired properties means systematically investigating different classes of functions. There’s no general way to predict in advance what type of function it will be — or even that it exists. But Roozbehani imagines that, if his and his colleagues’ approach catches on, researchers will begin to identify algorithms that lend themselves to particular types of Lyapunov invariants, as has happened with control problems and Lyapunov functions. <br /><br /><strong>Fuzzy thinking</strong><br /><br />Moreover, many of the critical software systems that require formal verification implement control systems designed using Lyapunov functions. “So there are intuitive reasons to believe that, at least for control-system software, these methods will work well,” Roozbehani says.<br /><br />Roozbehani is also enthusiastic about possible applications in approximate computation. As he explains, many control systems are based on mathematical models that can’t capture all of the complexity of real dynamical systems. So control theorists have developed analytic methods that can account for model inaccuracies and provide guarantees of stability even in the presence of uncertainty. Those techniques, Roozbehani argues, could be perfectly suited for verifying code that exploits approximate computation.<br /><br />“Computer scientists are not used to thinking about robustness of software,” says George Pappas, chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. 
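<br /><br />The “danger zone” picture above can be made concrete with a toy example of the style of argument. The program, its update rule, and the invariant here are all invented for illustration; they are not from the paper.

```python
# Toy example (invented; not from the paper). Program under scrutiny:
#     x = x0   (with x0 >= 1)
#     repeat:  x = 2*x - 1;  y = 1/x
# Danger zone: x == 0 (division by zero).
# Candidate invariant V: x >= 1. It holds initially and is preserved by the
# update (x >= 1 implies 2*x - 1 >= 1), so the danger zone is unreachable.

def step(x: int) -> int:
    return 2 * x - 1                # the loop body's update

def invariant(x: int) -> bool:
    return x >= 1                   # separates reachable states from x == 0

for x in range(1, 1000):            # spot-check inductiveness on sample states
    assert invariant(x)             # state satisfies the invariant...
    assert invariant(step(x))       # ...and so does its successor
    _ = 1 / step(x)                 # hence the division is always safe
```

A real verification, of course, proves the inductive step symbolically for all states rather than sampling them; the sketch only shows the shape of the argument, with the invariant playing the role of the bowl whose walls keep the program away from the danger zone.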
“This is the first work that is formalizing the notion of robustness for software. It’s a paradigm shift, from the more exhaustive, combinatorial view of checking for bugs in software, to a view where you try to see how robust your software is to changes in the input or the internal state of computation and so on.”<br /><br />“The idea may not apply in all possible kinds of software,” Pappas cautions. “But if you’re thinking about software that implements, say, controller or sensor functionality, I think there’s no question that these types of ideas will have a lot of impact.”

Caption: The oscillation of a pendulum offers the simplest example of a Lyapunov function, a central concept in control theory. The pendulum’s loss of energy with each swing guarantees that it won’t lurch into a less stable state.
Topics: Control theory, Laboratory for Information and Decision Systems (LIDS)

Moving past trial and error
https://news.mit.edu/2012/profile-braatz-0215
Richard Braatz applies math to design new materials and processes for drug manufacturing.
Wed, 15 Feb 2012 05:00:00 -0500 | Jennifer Chu, MIT News Office

<div class="video_captions"><img src="/sites/default/files/images/inline/newsofficeimages/braatz.jpg" border="0" alt="Richard Braatz" /><br /> <strong>Richard Braatz</strong><br /> <i>Photo: Dominick Reuter</i><br /><br /></div>
Trial-and-error experimentation underlies many biomedical innovations. This classic method — define a problem, test a proposed solution, learn from failure and try again — is the main route by which scientists discover new biomaterials and drugs today. This approach is also used to design ways of manufacturing these new materials, but the process is immensely time-consuming, producing a successful therapeutic product and its manufacturing process only after years of experiments, at considerable expense.<br /> <br />Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT, applies mathematics to streamline the development of pharmaceuticals. Trained as an applied mathematician, Braatz is developing mathematical models to help scientists quickly and accurately design processes for manufacturing drug compounds with desired characteristics. Through mathematical simulations, Braatz has designed a system that significantly speeds the design of drug-manufacturing processes; he is now looking to apply the same mathematical approach to designing new biomaterials and nanoscale devices. <br /> <br />“Nanotechnology is very heavily experimental,” Braatz says. “There are researchers who do computations to gain insights into the physics or chemistry of nanoscale systems, but do not apply these computations for their design or manufacture. I want to push systematic design methods to the nanoscale, and to other areas where such methods aren’t really developed yet, such as biomaterials.”<br /> <br /><strong>From farm to formulas</strong><br /> <br />Braatz’s own academic path was anything but systematic. He spent most of his childhood on an Oregon farm owned by his grandfather. Braatz says he absorbed an engineer’s way of thinking early on from his father, an electrician, by examining his father’s handiwork on the farm and reading his electrical manuals.<br /> <br />Braatz also developed a serious work ethic. 
From the age of 10, he awoke early every morning — even on school days — to work on the farm. In high school, he picked up a night job at the local newspaper, processing and delivering thousands of newspapers to stores and the post office, sometimes until just before dawn. <br /> <br />After graduating from high school in 1984, Braatz headed to Alaska for the summer. A neighbor had told him that work paid well up north, and Braatz took a job at a fish-processing facility, driving forklifts and hauling 100-pound bags of fishmeal 16 hours a day. He returned each summer for four years, eventually working his way up to plant operator, saving enough money each summer to pay for the next year’s tuition at Oregon State University.<br /> <br />As an undergraduate, Braatz first planned to major in electrical engineering. But finding the introductory coursework unstimulating — given the knowledge he’d absorbed from his father — he cast about for another major. <br /> <br />“There was no Internet back then, so you couldn’t Google; web searches didn’t exist,” Braatz says. “So I went to the library and opened an encyclopedia, and said, 'OK, what other engineering [is] there?'”<br /> <br />Chemical engineering caught his eye; he had always liked and excelled at chemistry in high school. While pursuing a degree in chemical engineering, Braatz filled the rest of his schedule with courses in mathematics.<br /> <br />
<div class="video_captions" style="float: right; width: 368px; margin: 0pt 0pt 10px 10px;"><img src="/sites/default/files/images/inline/newsofficeimages/braatz-small.jpg" border="0" alt="Richard Braatz" /><br /><i>Photo: Dominick Reuter<br /></i></div>
After graduation, Braatz went on to the California Institute of Technology, where he earned both a master’s and a PhD in chemical engineering. In addition to his research, Braatz took numerous math and math-heavy courses in electrical engineering, applied mechanics, chemical engineering and chemistry. The combination of real applications and mathematical theory revealed a field of study Braatz had not previously considered: applied mathematics.<br /> <br />“This training was a very good background for learning how to derive mathematical solutions to research problems,” Braatz says.<br /> <br /><strong>A systems approach</strong><br /> <br />Soon after receiving his PhD, Braatz accepted an assistant professorship at the University of Illinois at Urbana-Champaign (UIUC). There, as an applied mathematician, he worked with researchers to tackle problems in a variety of fields: computer science, materials science, and electrical, chemical and mechanical engineering. <br /> <br />He spent eight years on a project spurred by a talk he attended at UIUC. In that talk, a representative of Merck described a major challenge in the pharmaceutical industry: controlling the size of crystals in the manufacture of any given drug. (The size and consistency of crystals determine, in part, a drug’s properties and overall efficacy.) <br /> <br />Braatz learned that while drug-manufacturing machinery was often monitored by sensors, much of the resulting data went unanalyzed. He pored over the sensors’ data, and developed mathematical models to gain an understanding of what the sensors reveal about each aspect of the drug-crystallization process. Over the years, his team devised an integrated series of algorithms that combined efficiently designed experiments with mathematical models to yield a desired crystal size from a given drug solution. 
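<br /> <br />A caricature of such a size-targeting feedback loop might look like the following. The dynamics and numbers here are entirely invented, purely to illustrate the idea of a sensor-driven recipe rather than Braatz’s actual models.

```python
# Toy closed-loop crystal-size control (dynamics and numbers are invented,
# purely to illustrate the feedback idea): a sensor reports the mean crystal
# size, and a proportional controller adjusts the cooling rate, which in this
# toy model sets the growth rate, until the batch reaches the target size.

def run_batch(target_um: float, steps: int = 500) -> float:
    size = 10.0                                    # initial mean size (microns)
    for _ in range(steps):
        error = target_um - size
        cooling = max(0.0, min(2.0, 0.2 * error))  # clamped P-controller
        size += 0.2 * cooling                      # faster cooling -> faster growth
    return size

final = run_batch(target_um=80.0)
assert abs(final - 80.0) < 1.0    # the batch ends near the desired size
```

The point of the sketch is the structure, measure, compare to target, adjust a process setting, repeat, which replaces trial-and-error tuning with a recipe computed from the model.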
They worked the algorithms into a system that automatically adjusts settings at each phase of the manufacturing process to produce an optimal crystal size, based on a “recipe” given by the algorithms.<br /> <br />“Sometimes the recipes are very weird,” Braatz says. “It might be a strange path you have to follow to manufacture the right crystals.” <br /> <br />The automated system, which has since been adopted by Merck and other pharmaceutical companies, provides a big improvement in efficiency, Braatz says, avoiding the time-consuming trial-and-error approach many drug manufacturers had relied on to design a crystallization process for a new drug. <br /> <br />In 2010, Braatz moved to MIT, where he is exploring mathematical applications in nanotechnology and tissue engineering — in particular, models to help design new drug-releasing materials. Such materials have the potential to deliver controlled, continuous therapies, but designing them currently takes years of trial-and-error experiments. <br /> <br />Braatz’s group is designing mathematical models to give researchers instructions, for example, on how to design materials that locally release drugs into a body’s cells at a desired rate. Braatz says approaching such a problem from a systematic perspective could potentially save years of time in the development of a biomedical material of high efficacy. <br /> <br />“Anything is a win if you could reduce those experiments from 10 years to several years,” Braatz says. “We’re talking hundreds of millions, billions of dollars. And the effect on people’s lives, you can’t put a price tag on that.” <br /><br /> <iframe frameborder="0" height="315" src="http://www.youtube.com/embed/xG0NU97EO8k?rel=0" width="560"></iframe><br /> Video: Melanie Gonick

Richard Braatz. Photo: Dominick Reuter
Topics: Biotechnology, Chemistry and chemical engineering, Control theory, Faculty, Manufacturing, Mathematics, Nanoscience and nanotechnology, Pharmaceuticals

The too-smart-for-its-own-good grid
https://news.mit.edu/2011/too-smart-grid-0803
New technologies intended to boost reliance on renewable energy could destabilize the power grid if they’re not matched with careful pricing policies.
Wed, 03 Aug 2011 04:00:01 -0400 | Larry Hardesty, MIT News Office

<div class="video_captions" style="width: 368px; float: right; margin: 0px 0px 15px 15px;"><img src="/sites/default/files/images/inline/newsofficeimages/too-smart-grid.jpg" border="0" width="368" /><br /><strong>A 'heat map' depicting the rates charged by electricity producers on the Eastern seaboard and across the Midwest — in which colors at the red end of the spectrum represent high prices and colors at the blue end low prices — demonstrates how drastically the wholesale-energy market can change in as little as five minutes. | <a href="/sites/default/files/images/inline/newsofficeimages/too-smart-grid.jpg" target="_blank">Enlarge image</a></strong><br /><i>Image courtesy of Mardavij Roozbehani</i></div>
In the last few years, electrical utilities have begun equipping their customers’ homes with new meters that have Internet connections and increased computational capacity. One envisioned application of these “smart meters” is to give customers real-time information about fluctuations in the price of electricity, which might encourage them to defer some energy-intensive tasks until supply is high or demand is low. Less of the energy produced from erratic renewable sources such as wind and solar would thus be wasted, and utilities would less frequently fire up backup generators, which are not only more expensive to operate but tend to be more polluting, too.<br /><br />Recent work by researchers in MIT’s Laboratory for Information and Decision Systems, however, shows that this policy could backfire. If too many people set appliances to turn on, or devices to recharge, when the price of electricity crosses the same threshold, it could cause a huge spike in demand; in the worst case, that could bring down the power grid. Fortunately, in <a href="http://www.mit.edu/~mardavij/publications_files/Volatility.pdf">a paper presented</a> at the last IEEE Conference on Decision and Control, the researchers also show that some relatively simple types of price controls could prevent huge swings in demand. But that stability would come at the cost of some of the efficiencies that real-time pricing is intended to provide.<br /><br />Today, customers receive monthly electrical bills that indicate the cost of electricity as a three- to six-month average. In fact, however, the price that power producers charge utilities fluctuates every five minutes or so, according to market conditions. The electrical system is thus what control theorists call an open loop: Price varies according to demand, but demand doesn’t vary according to price. 
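Once demand does respond to price, the system becomes a feedback loop whose behavior depends on the strength of the response. A toy discrete-time sketch, with all numbers invented for illustration, shows the two regimes:

```python
# Toy closed-loop price/demand model (all numbers invented): consumers shift
# demand against the last posted price, and the market sets the next price in
# proportion to demand. The loop gain k*m decides stability: deviations shrink
# when |k*m| < 1 and grow into wild oscillations when |k*m| > 1.

def simulate(k: float, m: float, steps: int = 40):
    p_star, d_star = 50.0, 100.0        # nominal price and demand
    price, history = p_star + 5.0, []   # start from a small price disturbance
    for _ in range(steps):
        demand = d_star - k * (price - p_star)  # consumers defer usage
        price = p_star + m * (demand - d_star)  # market reacts to demand
        history.append(demand)
    return history

mild = simulate(k=1.0, m=0.5)   # loop gain 0.5: demand settles back to nominal
wild = simulate(k=4.0, m=0.5)   # loop gain 2.0: demand swings double each step

assert abs(mild[-1] - 100.0) < 1e-6
assert abs(wild[-1] - 100.0) > abs(wild[0] - 100.0)
```

The sketch is a caricature of the phenomenon the LIDS researchers analyze: the same price signal that is harmless when consumers respond mildly produces runaway swings in demand when the aggregate response is too aggressive.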
Smart meters could close that loop, drastically changing the dynamics of the system.<br /><br /><strong>Taking control</strong><br /><br />Research scientist Mardavij Roozbehani and professors Sanjoy Mitter and Munther Dahleh assumed that every consumer has a “utility function” describing how inconvenient it is for him or her to defer electricity usage. While that function will vary from person to person, individual utility functions can be pooled into a single collective function for an entire population. The researchers assumed that on average, consumers will seek to maximize the difference between the utility function and the cost of electricity: That is, they’ll try to get as much convenience for as little money as possible.<br /><br />What they found was that if consumer response to price fluctuation is large enough to significantly alter patterns of energy use — and if it’s not, there’s no point in installing smart meters — then price variations well within the normal range can cause dangerous oscillations in demand. “For the system to work, supply and demand must match almost perfectly at each instant of time,” Roozbehani says. “The generators have what are called ramp constraints: They cannot ramp up their production arbitrarily fast, and they cannot ramp it down arbitrarily fast. If these oscillations become very wild, they’ll have a hard time keeping track of the demand. And that’s bad for everyone.”<br /><br />The researchers’ model, however, also indicates that at least partially shielding consumers from the volatility of the market could tame those oscillations. For instance, Roozbehani explains, utilities could give consumers price updates every hour or so, instead of every five minutes. Or, he says, “if the prices in the wholesale market are varying very widely, I pass the consumer a price that reflects the wholesale market conditions but not to that extent. 
If the prices in the wholesale market just doubled, I don’t give the consumer a price that is double the previous time interval but a price that is slightly higher.” According to Roozbehani, the same theoretical framework that he and his colleagues adopt in their paper should enable the analysis and development of practical pricing models.<br /><br /><strong>The trade-off</strong><br /><br />But minimizing the risks of giving consumers real-time pricing information also diminishes the benefits. “Possibly, when you need an aggressive response from the consumers — say the wind drops — you’re not going to get it,” Roozbehani says.<br /><br />One way to improve that trade-off, Roozbehani explains, would be for customers to actually give utilities information about how they would respond to different prices at different times. Utilities could then tune the prices that they pass to consumers much more precisely, to maximize responsiveness to fluctuations in the market while minimizing the risk of instability. Collecting that information would be difficult, but Roozbehani’s hunch is that the benefits would outweigh the costs. He’s currently working on expanding his model so that it factors in the value of information, to see if his hunch is right.<br /><br />“As far as I know, very, very few people are analyzing the dynamics of electricity markets with experience from control theory,” says Eugene Litvinov, senior director of business architecture and technology at ISO New England, the organization that oversees the operation of the electrical grid in the six New England states. “I think we should encourage these kinds of studies, because regulatory bodies and government are pushing for certain things, and they don’t realize how far they can push. For example, they want to have 30 percent wind penetration by 2020, or something like this, but that could cause serious issues for the grid. 
Without that kind of analysis, the operators would be very uncomfortable just jumping over the cliff.” <br /><br />But, Litvinov adds, an accurate model of the dynamics of energy consumption would have to factor in consumers’ responses, not only to changing electricity prices, but also to each other’s responses. “It’s like a game,” Litvinov says. “People will have to start adopting more sophisticated strategies. That whole dynamic is itself a subject for study.” Roozbehani agrees, pointing out that he, Dahleh, Mitter, and colleagues have already <a href="http://www.mit.edu/~mardavij/publications_files/EEM2011.pdf">published research</a> that begins to examine exactly the questions that Litvinov raises.<br /><br />Control theory, Electrical engineering and electronics, Grid, Laboratory for Information and Decision Systems (LIDS)<br /><br />After almost 20 years, math problem falls
https://news.mit.edu/2011/convexity-0715
MIT researchers’ answer to a major question in the field of optimization brings disappointing news — but there’s a silver lining.<br /><br />Fri, 15 Jul 2011 04:00:00 -0400 | https://news.mit.edu/2011/convexity-0715 | Larry Hardesty, MIT News Office<br /><br />Mathematicians and engineers are often concerned with finding the minimum value of a particular mathematical function. That minimum could represent the optimal trade-off between competing criteria — between the surface area, weight and wind resistance of a car’s body design, for instance. In control theory, a minimum might represent a stable state of an electromechanical system, like an airplane in flight or a bipedal robot trying to keep itself balanced. There, the goal of a control algorithm might be to continuously steer the system back toward the minimum.<br /><br />For complex functions, finding global minima can be very hard. But it’s a lot easier if you know in advance that the function is convex, meaning that the graph of the function slopes everywhere toward the minimum. Convexity is such a useful property that, in 1992, when a major conference on optimization selected the seven most important outstanding problems in the field, one of them was whether the convexity of an arbitrary polynomial function could be efficiently determined.<br /><br />Almost 20 years later, researchers in MIT’s Laboratory for Information and Decision Systems have finally answered that question. Unfortunately, the answer, which they reported in May <a href="http://mit.edu/~a_a_a/Public/Publications/convexity_nphard.pdf">with one paper</a> at the Society for Industrial and Applied Mathematics (SIAM) Conference on Optimization, is no. For an arbitrary polynomial function — that is, a function in which variables are raised to integral exponents, such as 13x<sup>4</sup> + 7xy<sup>2</sup> + yz — determining whether it’s convex is what’s called <a href="/newsoffice/2009/explainer-pnp.html" target="_self">NP-hard</a>. 
That means that the most powerful computers in the world couldn’t provide an answer in a reasonable amount of time.<br /><br />At the same conference, however, MIT Professor of Electrical Engineering and Computer Science Pablo Parrilo and his graduate student Amir Ali Ahmadi, two of the first paper’s four authors, showed that in many cases, a property that can be determined efficiently, known as <a href="/newsoffice/2010/parrilo-convergence.html" target="_blank">sum-of-squares</a> convexity, is a viable substitute for convexity. Moreover, they provide an algorithm for determining whether an arbitrary function has that property. <br /><br /><strong>Downhill from here</strong><br /><br />On the first paper, Parrilo and Ahmadi were joined by John N. Tsitsiklis, the Clarence J. LeBel Professor of Electrical Engineering, and Alex Olshevsky, a former student of Tsitsiklis’ who’s now a postdoc at Princeton University. According to Etienne de Klerk, a professor in the Department of Econometrics and Operations Research at Tilburg University in the Netherlands, the revelation that determining convexity is NP-hard is “not only interesting but quite surprising.” <br /><br />"If you take any textbook of optimization that we use to teach undergrads, it will typically start, 'Let the convex optimization be given,'" de Klerk says. "All the functions are convex functions." The MIT researchers' work, he adds, will "make you view the world of optimization in a slightly different way."<br /><br />To get a sense of why convexity is so useful, imagine an airplane in flight, and suppose that you have a function that relates its altitude and speed to the amount of fuel it will consume — naturally, a quantity you’d want to minimize. If the function is convex, its graph — which is three-dimensional — looks like a big bowl, and at the bottom is the combination of altitude and speed that minimizes fuel consumption. 
All the plane’s control algorithm has to do is find a new altitude and speed that are down the slope of the bowl, and it knows it’s heading in the right direction. But if the function isn’t convex, the graph might look like a mountain range. The minimum value might lie across several peaks and basins, and locating it could be too time consuming to do on the fly.<br /><br /><strong>Squaring off</strong><br /><br />Of course, the types of functions that real control algorithms have to deal with are much more complex. But that only heightens the advantage of convexity: It guarantees that you can make an informed decision using only limited, local information. “In control theory, there’s been a big shift in recent years toward doing online optimizing,” Parrilo explains. “Now, because we have such big computational power, you can afford controllers that do things that are a lot more complicated. They actually solve optimization problems on the fly.” But, Parrilo says, “if you want to do this, you need some kind of guarantee that the decision that you’re going to take is going to be optimal, or close to optimal, and also that you can do this in a certain period of time.” Convexity provides that guarantee.<br /><br />Since the circumstances surrounding the operation of an electromechanical system are constantly changing, so are the functions that describe the system’s optimal state. It would be nice to be able to tell in advance whether those functions are convex, but alas, the MIT researchers have shown that that’s not always possible.<br /><br />But Parrilo and Ahmadi also proved that, for polynomial functions with few variables or small exponents, convexity is the same thing as sum-of-squares convexity, which is easy to check. 
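To make the decision problem concrete: a twice-differentiable function is convex exactly when its Hessian matrix is positive semidefinite at every point. Random sampling of the Hessian can refute convexity but can never certify it over all of space, which is one way to appreciate why an efficient exact test was sought. A brute-force sketch (entirely illustrative; the two example polynomials and the sampling approach are my own, not the researchers' methods):

```python
import random

# Brute-force numeric sketch (not the MIT algorithm): a twice-differentiable
# function is convex iff its Hessian is positive semidefinite (PSD) everywhere.
# Sampling can only *refute* convexity by finding a bad point; it illustrates
# what the decision problem asks, not how to answer it efficiently.

def hessian_psd(fxx, fyy, fxy, x, y):
    """A symmetric 2x2 matrix [[a, b], [b, d]] is PSD iff a >= 0, d >= 0, ad - b^2 >= 0."""
    a, d, b = fxx(x, y), fyy(x, y), fxy(x, y)
    return a >= 0 and d >= 0 and a * d - b * b >= -1e-9

def looks_convex(fxx, fyy, fxy, trials=2000):
    """True if the Hessian is PSD at every sampled point (no counterexample found)."""
    return all(hessian_psd(fxx, fyy, fxy,
                           random.uniform(-5, 5), random.uniform(-5, 5))
               for _ in range(trials))

# Hypothetical examples: f(x, y) = x^4 + x^2 + y^2 is convex;
# g(x, y) = x^4 - 3x^2 + y^2 is not (its Hessian fails near x = 0).
convex = looks_convex(lambda x, y: 12 * x**2 + 2, lambda x, y: 2.0, lambda x, y: 0.0)
nonconvex = looks_convex(lambda x, y: 12 * x**2 - 6, lambda x, y: 2.0, lambda x, y: 0.0)
print(convex, nonconvex)  # → True False
```

Sampling finds g's non-convex region almost surely, but a "no counterexample found" answer for f proves nothing; the NP-hardness result says no trick is known that closes that gap efficiently for arbitrary polynomials.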
(“Sum of squares” just means that a polynomial, like x<sup>2</sup> – 2xy + y<sup>2</sup> + z<sup>2</sup>, can be rewritten as the sum of expressions raised to the power of two — in this case, (x – y)<sup>2</sup> + z<sup>2</sup>.)<br /><br />Moreover, Ahmadi points out, in order to prove that sum-of-squares convexity is not <i>always</i> the same thing as convexity, he and Parrilo had to come up with some bizarre counterexamples that are unlikely to arise in most engineering contexts. “If you have few enough variables, say less than five or six, it’s not easy to find examples where it doesn’t work,” Ahmadi says. “So it’s pretty powerful, at least for reasonable problems.”<br /><br />A convex function (top) is one whose graph slopes everywhere toward its minimum value, whereas a nonconvex function (bottom) may have many basins, or local minima. Image: Amir Ali Ahmadi<br /><br />Control theory, Information systems<br /><br />How to control complex networks
https://news.mit.edu/2011/network-control-0512
New algorithm offers ability to influence systems such as living cells or social networks.<br /><br />Thu, 12 May 2011 04:00:00 -0400 | https://news.mit.edu/2011/network-control-0512 | Anne Trafton, MIT News Office<br /><br />At first glance, a diagram of the complex network of genes that regulate cellular metabolism might seem hopelessly complex, and efforts to control such a system futile.<br /><br />However, an MIT researcher has come up with a new computational model that can analyze any type of complex network — biological, social or electronic — and reveal the critical points that can be used to control the entire system. <br /><br />Potential applications of this work, which appears <a href="http://www.nature.com/nature/journal/v473/n7346/full/nature10" target="_blank">as the cover story</a> in the May 12 issue of <em>Nature</em>, include reprogramming adult cells and identifying new drug targets, says study author Jean-Jacques Slotine, an MIT professor of mechanical engineering and brain and cognitive sciences.<br /><br />
<div id="video_captions" style="padding-bottom: 15px; width: 560px; float: left;"><img src="/sites/default/files/images/inline/newsofficeimages/cactus.jpg" border="0" /><br /><strong>MIT and Northeastern University researchers devised a computer algorithm that can generate a controllability structure for any complex network. The red points are 'driver nodes,' which can control the rest of the nodes (green).</strong><br /><em>Image: Mauro Martino</em></div>
Slotine and his co-authors applied their model to dozens of real-life networks, including cell-phone networks, social networks, the networks that control gene expression in cells and the neuronal network of the C. elegans worm. For each, they calculated the percentage of points that need to be controlled in order to gain control of the entire system. <br /><br />For sparse networks such as gene regulatory networks, they found the number is high, around 80 percent. For dense networks — such as neuronal networks — it’s more like 10 percent. <br /><br />The paper, a collaboration with Albert-Laszlo Barabasi and Yang-Yu Liu of Northeastern University, builds on more than half a century of research in the field of control theory.<br /><br />Control theory — the study of how to govern the behavior of dynamic systems — has guided the development of airplanes, robots, cars and electronics. The principles of control theory allow engineers to design feedback loops that monitor input and output of a system and adjust accordingly. One example is the cruise control system in a car.<br /><br />However, while commonly used in engineering, control theory has been applied only intermittently to complex, self-assembling networks such as living cells or the Internet, Slotine says. Control research on large networks has been concerned mostly with questions of synchronization, he says.<br /><br />
<div id="video_captions" style="padding: 15px; width: 368px; float: right;"><img src="/sites/default/files/images/inline/newsofficeimages/slotine.jpg" border="0" /><br /><strong>Jean-Jacques Slotine</strong><br /><em>Photo: Patrick Gillooly</em></div>
In the past 10 years, researchers have learned a great deal about the organization of such networks, in particular their topology — the patterns of connections between different points, or nodes, in the network. Slotine and his colleagues applied traditional control theory to these recent advances, devising a new model for controlling complex, self-assembling networks.<br /> <br />“The area of control of networks is a very important one, and although much work has been done in this area, there are a number of open problems of outstanding practical significance,” says Adilson Motter, associate professor of physics at Northwestern University. The biggest contribution of the paper by Slotine and his colleagues is to identify the type of nodes that need to be targeted in order to control complex networks, says Motter, who was not involved with this research.<br /><br />The researchers started by devising a new computer algorithm to determine how many nodes in a particular network need to be controlled in order to gain control of the entire network. (Examples of nodes include members of a social network, or single neurons in the brain.) <br /><br />“The obvious answer is to put input to all of the nodes of the network, and you can, but that’s a silly answer,” Slotine says. “The question is how to find a much smaller set of nodes that allows you to do that.”<br /><br />There are other algorithms that can answer this question, but most of them take far too long — years, even. The new algorithm quickly tells you both how many points need to be controlled, and where those points — known as “driver nodes” — are located. <br /><br />Next, the researchers figured out what determines the number of driver nodes, which is unique to each network. They found that the number depends on a property called “degree distribution,” which describes the number of connections per node. 
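In the structural-controllability framework the paper builds on, the driver-node computation reduces to a graph-matching problem: split every node into an "out" copy and an "in" copy, find a maximum matching over the directed edges, and any node whose "in" copy is left unmatched must receive direct input. A toy implementation using simple augmenting paths (my own sketch, not the authors' code, and only sensible for small illustrative graphs):

```python
# Sketch of the maximum-matching view of structural controllability: nodes
# whose "in" copy is unmatched by a maximum bipartite matching over the
# directed edges are driver nodes. Toy augmenting-path implementation.

def driver_nodes(n, edges):
    """Return driver nodes of a directed graph on nodes 0..n-1."""
    adj = [[] for _ in range(n)]      # out-copy u -> list of in-copies v
    for u, v in edges:
        adj[u].append(v)
    match_in = [-1] * n               # in-copy v -> matched out-copy u, or -1

    def augment(u, seen):
        """Try to match out-copy u, rerouting earlier matches if needed."""
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_in[v] == -1 or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    for u in range(n):
        augment(u, set())
    unmatched = [v for v in range(n) if match_in[v] == -1]
    return unmatched or [0]           # perfectly matched: one driver still suffices

# A directed chain 0 -> 1 -> 2 can be driven from its head alone, while a
# star 0 -> {1, 2, 3} leaves two leaves (plus the hub) needing direct input.
print(driver_nodes(3, [(0, 1), (1, 2)]))          # → [0]
print(driver_nodes(4, [(0, 1), (0, 2), (0, 3)]))  # → [0, 2, 3]
```

The star example shows why dense, homogeneous networks need proportionally fewer drivers: a hub can pass control down only one edge of the matching at a time, so sparse, hub-dominated topologies leave many nodes unmatched.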
<br /><br />A higher average degree (meaning the points are densely connected) means fewer nodes are needed to control the entire network. Sparse networks, which have fewer connections, are more difficult to control, as are networks where the node degrees are highly variable. <br /><br />In future work, Slotine and his collaborators plan to delve further into biological networks, such as those governing metabolism. Figuring out how bacterial metabolic networks are controlled could help biologists identify new targets for antibiotics by determining which points in the network are the most vulnerable. <br /><br />Brain and cognitive sciences, Control theory, Mechanical engineering, Networks, Social networks<br /><br />Speeding swarms of sensor robots
https://news.mit.edu/2011/robot-algorithm-0503
A new algorithm ensures that robotic environmental sensors will be able to focus on areas of interest without giving other areas short shrift.<br /><br />Tue, 03 May 2011 04:00:00 -0400 | https://news.mit.edu/2011/robot-algorithm-0503 | Larry Hardesty, MIT News Office<br /><br />Concerns about the spread of radiation from damaged Japanese nuclear reactors — even as scientists are still trying to assess the consequences of the year-old Deepwater Horizon oil spill — have provided a painful reminder of just how important environmental monitoring can be. But collecting data on large expanses of land and sea can require massive deployments of resources.<br /><br />At the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in May, MIT researchers will present a new algorithm enabling sensor-laden robots to focus on the parts of their environments that change most frequently, without losing track of the regions that change more slowly. At the same conference, they’ll present a second paper describing a test run of the algorithm on underwater sensors that researchers at the University of Southern California (USC) are using to study algae blooms.<br /><br />The algorithm — the work of Daniela Rus, a professor of computer science and electrical engineering, and postdocs Mac Schwager and Stephen Smith (now an assistant professor at the University of Waterloo in Ontario) — is designed for robots that will be monitoring an environment for long periods of time, tracing the same routes over and over. It assumes that the data of interest — temperature, the concentration of chemicals, the presence of organisms — fluctuate at different rates in different parts of the environment. In ocean regions with strong currents, for instance, chemical concentrations might change more rapidly than they do in more sheltered areas. 
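The premise that different regions change at different rates already implies a trade-off in how a patrolling robot should budget its time. A toy calculation (my own illustration; the region count, rates, and the simple "accumulated unobserved change" model are assumptions, not the MIT algorithm) compares a uniform patrol with one that dwells longer where conditions change fastest:

```python
# Toy dwell-time allocation sketch (not the MIT algorithm): each region
# accumulates unobserved change at its own rate whenever the robot is
# elsewhere; change is cleared when the robot visits. We compare uniform
# dwell times against dwell times weighted by each region's rate of change.

RATES = [5.0, 1.0, 1.0, 1.0]   # change per hour; region 0 is five times as volatile
T = 8.0                        # fixed patrol period in hours

def peak_unobserved(dwell):
    """Worst-case accumulated change: a region gathers change at its own
    rate during the (T - dwell) hours the robot spends elsewhere."""
    return max(r * (T - d) for r, d in zip(RATES, dwell))

uniform = [T / len(RATES)] * len(RATES)             # equal time everywhere
weighted = [T * r / sum(RATES) for r in RATES]      # dwell proportional to rate

print(peak_unobserved(uniform))   # → 30.0
print(peak_unobserved(weighted))  # → 15.0
```

Weighting dwell time by volatility halves the worst-case backlog in this toy instance; the hard part the researchers tackle is proving such a bound holds for real routes and dynamics, not just a four-region caricature.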
<br /><br /><strong>Floor it</strong><br /><br />In its current version, the algorithm assumes that researchers already have a mathematical model of the rates at which conditions change in different parts of the environment. The algorithm simply determines how the robots should adjust their velocities as they trace their routes. For instance, given particular rates of change along a route, would it make more sense to make one pass in an hour, slowing down considerably in areas of frequent change, or to make four or five passes, collecting less detailed data but taking more regular samples?<br /><br />“From a practical point of view, it seems like an easy problem,” says Calin Belta, an assistant professor of mechanical engineering, systems engineering and bioinformatics at Boston University, who was not involved in the research. But it turns out to be a monstrously complex calculation. “It’s very hard to come up with a mathematical proof that you can really optimize the acquired knowledge,” he adds.<br /><br />The MIT researchers draw an analogy with dust accumulating on a floor — dust that’s cleared whenever a sensor passes nearby. Because environmental change occurs at different rates in different areas, the dust piles up unevenly. The researchers were able to show that, with their algorithm, the height of the piles of dust would never exceed some limit: Only so much change could occur in any area before the sensor would measure it.<br /><br /><strong>Ups and downs</strong><br /><br />Although the MIT researchers’ algorithm is designed to control robots’ velocity, the first robots on which it was tested don’t actually have velocity controllers. USC researchers have been studying harmful algae blooms using commercial robotic sensors designed by the Massachusetts company Webb Research. Because the sensors are intended to monitor ocean environments for weeks on end, they have to use power very sparingly, so they have no moving parts. 
Each sensor is shaped like an airplane, with an inflatable bladder on its nose. When the bladder fills, the sensor rises to the surface of the ocean; as the bladder empties, the sensor glides downward.<br /><br />The more rapidly the bladder fills and empties, the steeper the sensor’s trajectory up and down, and the longer it takes to traverse a given distance — so it’s possible to concentrate the sensor’s attention in a particular location. Working with colleagues in the USC computer science department, the MIT team developed an interface that allows ocean researchers to specify regions of interest by drawing polygons around them on a digital map and indicating their priority with a numerical rating. The new algorithm then determines a trajectory for the sensor that will maximize the amount of data it collects in high-priority regions, without neglecting lower-priority regions.<br /><br />At the moment, the algorithm depends on either some antecedent estimate of rates of change for an environment or researchers’ prioritization of regions. But in principle, a robotic sensor should be able to deduce rates of change from its own measurements, and the MIT researchers are currently working to modify the algorithm so that it can revise its own computations in light of new evidence. “That’s going to be a hard problem as well,” Belta says. “But they have the right background, and they’re strong, so I think they might be able to do it.”<br /><br />The researchers also envision that the algorithm could prove useful for fleets of robots performing tasks other than environmental monitoring, such as tending produce, or — in a more literal application of the vacuuming-dust metaphor — cleaning up environmental hazards, such as oil leaking from underwater wells.<br /><br />One of two Slocum gliders owned and operated by the USC Center for Integrated Networked
Aquatic PlatformS (CINAPS). Image: Smith et al.<br /><br />Algorithms, Computer Science and Artificial Intelligence Laboratory (CSAIL), Control theory, Electrical engineering and electronics, Artificial intelligence, Robots<br /><br />Prodigy of probability
https://news.mit.edu/2011/timeline-wiener-0119
Norbert Wiener gained fame as the father of cybernetics, but his earlier work on statistical descriptions of complex systems may prove more important.<br /><br />Wed, 19 Jan 2011 05:00:01 -0500 | https://news.mit.edu/2011/timeline-wiener-0119 | Larry Hardesty, MIT News Office<br /><br /><em>‘150 years of MIT’ is a series that looks at specific people and moments from <a href="http://mit150.mit.edu">MIT’s 150-year history</a> and explains their lasting effect on the Institute, the nation and the world. See the <a href="http://mit150.mit.edu/timeline" target="_blank">full interactive timeline</a> at the MIT150 site.</em><br /><br />Norbert Wiener, the mathematician and former child prodigy who won the National Medal of Science in 1963, figures prominently in MIT lore. After entering Tufts University at 11 and getting his PhD from Harvard at 18, he joined the MIT faculty at 23 and spent much of the next 40 years rambling the Institute’s halls, depositing the ashes of his signature cigar in the chalk trays of his colleagues’ blackboards, volubly holding forth on a bewildering range of topics, and, along the way, helping create the pop-culture archetype of the absent-minded professor.<br /><br />In his lifetime, Wiener was best known for <em>Cybernetics</em>, a book he published in 1948, when he was in his mid-50s, which attempted to unify the study of biological and electromechanical systems through common principles of feedback, communication and control. The book’s title — Wiener’s own coinage, from the Greek for “steersman” — lives on in words like “cyborg” and “cyberspace,” and researchers in a host of disciplines drew inspiration from Wiener’s syncretic vision.<br /><br />But in the United States and Western Europe, cybernetics as an autonomous discipline never really got off the ground. (Interestingly, departments of cybernetics did spring up in several Eastern-Bloc states, and some of them persist today.) 
Wiener’s ideas ended up blending together with those of a number of his contemporaries to help create the intellectual backdrop against which engineering is done today. But it’s difficult to isolate a single strain of thought in <em>Cybernetics</em> that had a lasting influence on subsequent scientific research.<br /><br />Much of Wiener’s earlier work, however, did have such an influence. In the early ’20s, as a newly minted MIT professor, Wiener became interested in Brownian motion, the tendency of a small particle suspended on the surface of a fluid to meander about, buffeted by the vibration of the surrounding molecules. Brownian motion is the paradigm of a so-called stochastic process — one whose outcome is totally random. Wiener devised the first mathematical description of Brownian motion that allowed it to be quantified probabilistically. You can’t mathematically predict where a particle wandering around a Petri dish will wind up, but you can calculate the probability that it will, say, end up in some region of the dish after a specified amount of time.<br /><br />Wiener’s probabilistic description applies to more than just specks of dust floating in Petri dishes. It’s been used to characterize the random electromagnetic noise that corrupts radio signals, the quantum behavior of particles, and the fluctuations of the stock market. “It’s a fundamental building block in stochastic models and stochastic control,” says Sanjoy Mitter, a professor of electrical engineering in MIT’s Laboratory for Information and Decision Systems. Take, for instance, the famous Black-Scholes equation used to price stock options. “Without the Wiener measure, there’s no Black-Scholes,” says Mitter. 
“That might be a slight exaggeration, but not much.”<br /><br /><strong>Weighty problems</strong><br /><br />During World War II, Wiener received a government contract to help build a system that improved the accuracy of antiaircraft guns by predicting the future locations of aerial targets. Wiener envisioned a target’s flight path as a series of discrete measurements. Since airplanes don’t leap about the sky randomly, each new measurement is in some way correlated with the one that immediately preceded it, and, to a somewhat lesser degree, with the one preceding that, and so on, until you reach so far back in time that you come to measurements that have nothing to do with the target’s current position. Previous measurements thus offer some clues to future measurements; the trick is determining how much weight to give each of the previous measurements in calculating the next one. “What you want to do is minimize, in Wiener’s case, the mean square error in the prediction,” says Alan Oppenheim, Ford Professor of Engineering and head of MIT’s Digital Signal Processing Group. “That starts to get into mathematics, and then that starts to give you optimum weights. Getting those weights correct is what Wiener was doing.”<br /><br />The same type of correlation between discrete time measurements can also be used to filter noise out of a signal, and indeed, Wiener’s wartime work (together with simultaneous but independent work by the Russian mathematician Andrey Kolmogorov) gave rise to the field of statistical filtering, which today plays a role in radio transmission, computer vision and vehicle navigation, among other applications.<br /><br />The recognition that the same statistical techniques applied to problems of control — predicting how a system will respond to control signals — and communications — extracting a signal from the surrounding noise — was the foundational insight of cybernetics. 
“But to be honest, I don’t think Wiener had really worked it out,” says Mitter — who adds that much of his own research for the last 10 or 15 years has concentrated on making rigorous the connection that Wiener sketched.<br /><br />“An important contribution of cybernetics was to introduce engineering principles to life-science people,” says Robert Fano, a professor emeritus of electrical engineering and computer science, referring in particular to a series of seminars on cybernetics that Wiener hosted in the late 1940s, which were as well-attended by life scientists as by electrical engineers. <br /><br /><strong>Peripatetic professor</strong><br /><br />Fano credits his early interest in information theory, the <a href="/newsoffice/2010/explained-shannon-0115.html" target="_self">discipline established</a> by MIT alum (and future professor) Claude Shannon’s 1948 paper “A Mathematical Theory of Communication,” to Wiener, who during one of his characteristic perambulations around MIT appeared in the doorway of Fano’s office and said, “You know, information is entropy.” Trying to make sense of that cryptic comment (it turns out that standard measures of entropy from thermodynamics can be adapted to describe the probability of accurately reconstructing a corrupted communications signal) led Fano to develop, independently, the first theorem of Shannon’s theory. Shannon asked him to publish the work quickly, so that he could cite it in his groundbreaking paper.<br /><br />Fano also gives credence to some of the famous anecdotes about Wiener’s absentmindedness: the time he reported the theft of his car to the police, only to discover that he had driven it to Providence for a talk and taken the train back to Boston; the conversation in an MIT hallway he concluded by asking his interlocutor which way he had been heading when he stopped to chat, greeting the answer by saying, “Good! 
That means I’ve already had lunch.” Fano recalls driving to the MIT campus one rainy morning and spotting Wiener on the bridge between Boston and Cambridge, strolling along with his raincoat unbuttoned. Fano stopped to give him a ride, but, he says, once they pulled into the MIT parking lot, “I had a hell of a time getting him out of the car,” so absorbed was Wiener in his disquisition on whatever topic had struck his fancy.<br /><br />Wiener may have frequently seemed oblivious to the world around him, and he may have lived in an intellectual bubble from an early age. But David Mindell, the Dibner Professor of the History of Engineering and Manufacturing and director of MIT’s Program in Science, Technology and Society, whose book <a href="http://web.mit.edu/mindell/www/books.html" target="_blank"><em>Between Human and Machine</em> </a>traces the intellectual history of cybernetics, points out that “for a mathematician — and he was quite an accomplished mathematician — he had an unusual interest in engineering and the engineering applications of what he was doing. And that to me is a very MIT thing.”<br /><br />Norbert Wiener, the MIT mathematician best known as the father of cybernetics, whose work had important implications for control theory and signal processing, among other disciplines. Image courtesy of the MIT Museum<br /><br />Control theory, Information theory, Timeline, MIT150<br /><br />A plane that lands like a bird
https://news.mit.edu/2010/perching-plane-0720
An innovative control system allows a foam glider to touch down on a perch or a wire like a pet parakeet.<br /><br />Tue, 20 Jul 2010 04:00:01 -0400 | https://news.mit.edu/2010/perching-plane-0720 | Larry Hardesty, MIT News Office<br /><br />Everyone knows what it's like for an airplane to land: the slow maneuvering into an approach pattern, the long descent, and the brakes slamming on as soon as the plane touches down, which seems to just barely bring it to a rest a mile later. Birds, however, can switch from barreling forward at full speed to lightly touching down on a target as narrow as a telephone wire. Why can't an airplane be more like a bird?<br /><br />MIT researchers have demonstrated a new control system that allows a foam glider with only a single motor on its tail to land on a perch, just like a pet parakeet. The work could have important implications for the design of robotic planes, greatly improving their maneuverability and potentially allowing them to recharge their batteries simply by alighting on power lines.<br /><br />Birds can land so precisely because they take advantage of a complicated physical phenomenon called "stall." Even when a commercial airplane is changing altitude or banking, its wings are never more than a few degrees away from level. Within that narrow range of angles, the airflow over the plane's wings is smooth and regular, like the flow of water around a small, smooth stone in a creek bed.<br /><br />A bird approaching its perch, however, will tilt its wings back at a much sharper angle. The airflow over the wings becomes turbulent, and large vortices — whirlwinds — form behind the wings. The effects of the vortices are hard to predict: If a plane tilts its wings back too far, it can fall out of the sky. 
Hence the name "stall."<br /><br />The smooth airflow over the wings of a normally operating plane is well-understood mathematically; as a consequence, engineers are highly confident that a commercial airliner will respond to the pilot's commands as intended. But stall is a much more complicated phenomenon: Even the best descriptions of it are time-consuming to compute.<br /><br /><strong>Reap the whirlwind</strong><br /><br />To design their control system, MIT Associate Professor Russ Tedrake, a member of the Computer Science and Artificial Intelligence Laboratory, and Rick Cory, a PhD student in Tedrake's lab who defended his dissertation this spring, first developed their own mathematical model of a glider in stall. For a range of launch conditions, they used the model to calculate sequences of instructions intended to guide the glider to its perch. "It gets this nominal trajectory," Cory explains. "It says, 'If this is a perfect model, this is how it should fly.'" But, he adds, "because the model is not perfect, if you play out that same solution, it completely misses."<br /><br />So Cory and Tedrake also developed a set of error-correction controls that could nudge the glider back onto its trajectory when location sensors determined that it had deviated from it. By using innovative techniques developed at MIT's Laboratory for Information and Decision Systems, they were able to precisely calculate the degree of deviation that the controls could compensate for. The addition of the error-correction controls makes a trajectory look like a tube snaking through space: The center of the tube is the trajectory calculated using Cory and Tedrake's model; the radius of the tube describes the tolerance of the error-correction controls.<br /><br />The control system ends up being, effectively, a bunch of tubes pressed together like a fistful of straws. If the glider goes so far off course that it leaves one tube, it will still find itself in another. 
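The fistful-of-straws picture suggests a simple lookup structure: each tube pairs a region around the nominal trajectory with the feedback command certified inside it. A minimal sketch (hypothetical names and a one-dimensional toy state; the real system verifies its tubes with far more sophisticated tools and issues continuous control commands, not labels):

```python
from dataclasses import dataclass

# Minimal sketch of the tube-lookup idea (hypothetical structure, not Cory
# and Tedrake's implementation): each tube pairs a region of state space
# with the control action guaranteed to work inside it.

@dataclass
class Tube:
    center: float   # nominal trajectory value at this stage (1-D toy state)
    radius: float   # deviation the tube's error-correction controls tolerate
    command: str    # control action associated with the tube

    def contains(self, state):
        return abs(state - self.center) <= self.radius

def pick_command(tubes, state):
    """Execute the command of whichever tube currently contains the state."""
    for tube in tubes:
        if tube.contains(state):
            return tube.command
    return "abort"   # outside every verified tube: no guarantee remains

tubes = [Tube(0.0, 1.0, "nominal glide"),
         Tube(2.0, 1.5, "pitch correction")]

print(pick_command(tubes, 0.4))  # → nominal glide   (inside the first tube)
print(pick_command(tubes, 2.8))  # → pitch correction (drifted into a neighbor)
print(pick_command(tubes, 9.0))  # → abort           (beyond all tubes)
```

The appeal of this structure for a small onboard computer is that flying reduces to a containment test and a table lookup; all the expensive verification of what each tube can tolerate happens offline.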
Once the glider is launched, it just keeps checking its position and executing the command that corresponds to the tube in which it finds itself. The design of the system earned Cory <a href="/newsoffice/2010/cory-award.html" target="_blank">Boeing’s 2010 Engineering Student of the Year Award</a>.<br /><br />The measure of air resistance against a body in flight is known as the "drag coefficient." A cruising plane tries to minimize its drag coefficient, but when it's trying to slow down, it tilts its wings back in order to increase drag. Ordinarily, it can't tilt back too far, for fear of stall. But because Cory and Tedrake's control system takes advantage of stall, the glider, when it's landing, has a drag coefficient that's four to five times that of other aerial vehicles. <br /><br /> <embed width="560" height="228" src="http://groups.csail.mit.edu/locomotion/shadowbox/libraries/mediaplayer/player.swf" bgcolor="0x000000" allowscriptaccess="always" allowfullscreen="true" flashvars="&autostart=false&backcolor=0x000000&bandwidth=5000&dock=false&file=http%3A%2F%2Fgroups.csail.mit.edu%2Flocomotion%2Fperching_media%2Fvideo%2Fltvqr_perching_title.mp4&frontcolor=0xCCCCCC&level=0&lightcolor=0x557722&plugins=viral-2d"></embed>
<div class="video_captions"><strong>A high-speed video of the researchers' computer-controlled glider landing on a suspended string perch</strong>.<br /><i>Video courtesy of Russ Tedrake and Rick Cory (<a href="http://groups.csail.mit.edu/locomotion/perching.html" target="_blank">view more videos and images</a>)</i></div>
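<p>The "fistful of straws" loop described above (check which tube contains the glider's current state, then execute the command for that tube) can be sketched in a few lines. This is a minimal illustration only: the class names, the distance-based containment test, and the linear error-correction term are assumptions made for the sketch, not details of the MIT implementation.</p>

```python
import numpy as np

# Hypothetical sketch of the tube-based controller described in the article.
# Each Tube pairs a nominal trajectory with an error-correcting feedback gain
# and a tolerance radius describing the deviation it can absorb.
class Tube:
    def __init__(self, nominal_states, nominal_commands, gain, radius):
        self.nominal_states = nominal_states      # planned states along the trajectory
        self.nominal_commands = nominal_commands  # open-loop commands for a perfect model
        self.gain = gain                          # feedback gain for error correction
        self.radius = radius                      # max deviation the tube tolerates

    def contains(self, state):
        # The glider is "inside" the tube if it lies within the tolerance
        # radius of some point on the nominal trajectory.
        dists = np.linalg.norm(self.nominal_states - state, axis=1)
        return dists.min() <= self.radius

    def command(self, state):
        # Nominal command plus a correction proportional to the deviation
        # from the nearest point on the nominal trajectory.
        i = int(np.argmin(np.linalg.norm(self.nominal_states - state, axis=1)))
        error = state - self.nominal_states[i]
        return self.nominal_commands[i] - self.gain @ error

def control_step(tubes, state):
    # Execute the command of whichever tube currently contains the glider.
    for tube in tubes:
        if tube.contains(state):
            return tube.command(state)
    raise RuntimeError("state outside every tube: no guarantee of recovery")
```

<p>In the real system the tube radii were not guessed: the LIDS techniques mentioned above let the researchers compute, in advance, exactly how large a deviation each tube's feedback controls could compensate for.</p>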
<br /> <strong>From spy planes to fairies</strong><br /><br />For some time, the U.S. Air Force has been interested in the possibility of unmanned aerial vehicles that could land in confined spaces and has been funding and monitoring research in the area. "What Russ and Rick and their team are doing is unique," says Gregory Reich of the Air Force Research Laboratory. "I don't think anyone else is addressing the flight control problem in nearly as much detail." Reich points out, however, that in their experiments, Cory and Tedrake used data from wall-mounted cameras to gauge the glider's position, and the control algorithms ran on a computer on the ground, which transmitted instructions to the glider. "The computational power that you may have on board a vehicle of this size is really, really limited," Reich says. Even though the MIT researchers' course-correction algorithms are simple, they may not be simple enough.<br /><br />Tedrake believes, however, that computer processors powerful enough to handle his and Cory's control algorithms are only a few years off. In the meantime, his lab has already begun to address the problem of moving the glider's location sensors onboard, and although Cory will be moving to California to take a job researching advanced robotics techniques for Disney, he hopes to continue collaborating with Tedrake. "I visited the Air Force, and I visited Disney, and they actually have a lot in common," Cory says. "The Air Force wants an airplane that can land on a power line, and Disney wants a flying Tinker Bell that can land on a lantern. 
But the technology's similar."<br /><br /><br />MIT researchers from the Computer Science and Artificial Intelligence Laboratory have developed a control system that lets a foam glider land on a perch like a pet parakeet. Photo: Jason Dorfman/CSAIL