MIT News - Computational complexity theory
https://news.mit.edu/topic/mitcomputational-complexity-theory-rss.xml
MIT news feed about: Computational complexity theory
Mon, 07 Apr 2014 00:00:02 -0400
The complexonaut
https://news.mit.edu/2014/scott-aaronson-shapes-conventional-and-quantum-computing-0407
Scott Aaronson travels the far reaches of computational complexity, shaping conventional and quantum computing.
Mon, 07 Apr 2014 00:00:02 -0400
Larry Hardesty | MIT News Office
<p>When he was in elementary school, Scott Aaronson, like many mathematically precocious kids of his generation, dreamed of making his own video games. He had only the foggiest notion of what that entailed, however.</p><p>“I could try to imagine making my own game — I could draw a picture of what it should look like — but how does it come to life?” Aaronson recalls. “Maybe there’s some factory where they do all kinds of complicated machining to make Mario move around in the right way. Then a friend showed me this spaceship game that he had on his computer, and he said, ‘Here’s the code.’ Well, what is this? Some kind of summary of the game? ‘No, no, this is the game. If you change the code, the spaceship will do something different.’”</p><p>“I like to say that for me, this was like learning where babies came from,” Aaronson adds. “It was a revelation. And I was incredibly upset at my parents that they hadn’t told me earlier that this exists. Because I was already 11, and other kids had known programming since they were 8, and how would I ever catch up to them?”</p><p>As that anecdote attests, Aaronson was a young man in a hurry. Also at 11, he taught himself calculus, because he was intrigued by the mysterious symbols in a babysitter’s calculus textbook. 
The next year, when Aaronson’s father — a science writer turned public-relations executive — was transferred from Philadelphia to Hong Kong to spearhead a new marketing push by AT&T, Aaronson enrolled in an English-language school that offered him the opportunity to skip a grade and leap several years ahead in math.</p><p>When he returned to the United States as a high-school freshman, however, Aaronson chafed at what he saw as the constricting dogmas of public education, getting poor grades and butting heads with teachers. So he enrolled in a yearlong program for gifted high-school students at Clarkson University, and, that winter, applied to colleges. In what would have been his junior year of high school — and over his mother’s objection that he’d have trouble fitting in socially — he entered Cornell University as a freshman.</p><p><strong>Theoretical attraction</strong></p><p>Despite this accelerated trajectory, however, he never lost the sense that, as a programmer, he still lagged behind his peers. At Cornell, he was part of a team of undergraduates who wrote control algorithms for robots competing in the RoboCup robotic-soccer tournament. “We won for two years, not thanks to me at all,” Aaronson says. “I loved the mathematical part, but when it comes to software development, when it comes to making your code work with other people’s code, and documenting code, and meeting deadlines, other people were just going to be so much better at this than I was.”</p><p>The summer before his year at Clarkson, Aaronson had attended a math camp in Seattle where he had learned about the <a href="http://web.mit.edu/newsoffice/2009/explainer-pnp.html">P = NP</a> problem — the central problem in computer science — from one of its most prominent theorists, Richard Karp. “P” is a set of problems that can be solved relatively quickly, and “NP” is a set of problems whose solutions can be verified relatively quickly. 
For many problems in NP, however — notably those known as “NP-complete” — finding solutions appears to be a prohibitively time-consuming task.</p><p>Most mathematicians believe that P does not equal NP — that being easy to verify doesn’t make a problem easy to solve. But nobody’s been able to prove it.</p>
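The asymmetry between solving and verifying can be made concrete with a small sketch (my own illustration, not from the article), using subset sum, a canonical NP problem:

```python
from itertools import combinations

# Illustrative sketch (not from the article): for subset sum -- an NP
# problem -- verifying a claimed solution takes linear time, while the
# naive search tries up to 2^n subsets. This gap between checking and
# finding is the asymmetry at the heart of P vs. NP.

def verify(nums, target, candidate):
    """Polynomial-time check of a claimed solution."""
    return all(x in nums for x in candidate) and sum(candidate) == target

def solve(nums, target):
    """Exponential-time brute-force search over all subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
witness = solve(nums, 9)         # slow: searches the subsets
print(witness)                   # [4, 5]
print(verify(nums, 9, witness))  # fast: True
```

If P = NP, the gap between these two functions would, in principle, collapse for every problem of this kind.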
<p><span style="line-height: 1.6em;">When he was working with the RoboCup team, Aaronson says, “someone would mention offhandedly that we want the goalie to be able to move this way, and I’d start thinking about whether that’s NP-complete. And maybe two weeks later, I’d be able to prove that it’s NP-complete, but by then no one cares, anyway. They’ve moved on to a different way of doing it.”</span></p><p>Already intrigued by theoretical questions of computational complexity, Aaronson learned from a fellow Cornell student about Shor’s algorithm, perhaps the most important theoretical result in <a href="http://www.technologyreview.com/article/424362/the-quantum-frontier/">quantum computing</a>. Quantum computers are devices, still largely hypothetical, that would harness the strange behavior of matter at extremely small scales to perform computations. Discovered by Peter Shor in 1994, Shor’s algorithm is a quantum algorithm for factoring large numbers, one of the canonical NP problems that is easy to verify but apparently very hard to solve. Shockingly, Shor was able to show that for a quantum computer, solving the problem would be almost as easy as verifying it is for a classical computer.</p><p>“My first reaction was, ‘OK, this is probably some obvious crap that is getting hyped by the media,’” Aaronson says. But he had to know for sure, and he threw himself into the study of quantum computing. He came away convinced that, indeed, quantum computers would rewrite the rules of computational efficiency.</p><p><strong>Quantum complex</strong></p><p>The relationship between complexity — the classification of algorithms according to their execution time — and quantum physics has remained at the center of Aaronson’s research since. He did his graduate work at the University of California at Berkeley so that he could study with Umesh Vazirani, one of the pioneers of quantum complexity theory. 
And now, as a tenured professor in the Department of Electrical Engineering and Computer Science at MIT, he finds himself a colleague of Shor, who, since the announcement of his algorithm, has joined the MIT mathematics faculty.</p><p>Aaronson believes that his own most important research includes his first paper on quantum complexity theory, written when he was a graduate student, which provided the first lower bound — minimum theoretically provable execution time — for a problem known as the “collision problem.” Closely connected to the cryptographically important technique of hashing, the collision problem asks whether a given mathematical function is one-to-one — every input produces a unique output — or two-to-one — every output can be produced by either of two inputs. Although subsequent researchers raised the bound, they used a variation on the same technique that Aaronson had developed.</p><p>Aaronson and Avi Wigderson of the Institute for Advanced Study in Princeton also proved that anyone hoping to answer the question of whether P = NP must first surmount an obstacle that they called “algebrization.” “If you want to advance the field to where it could address the [P = NP] question, then you have to look at our current proof techniques and the barriers that are preventing them from getting us where we want,” Aaronson says. “There were two previous times where we had to identify a barrier — the relativization and the natural-proofs barriers — in order to even start to think about what techniques were going to get around it.” Algebrization is another such barrier. 
“Once you’ve clearly identified what the barrier is,” Aaronson says, “then your mind is much freer to think about how to get around it.”</p><p>Most recently, Aaronson and his student Alex Arkhipov described an <a href="http://web.mit.edu/newsoffice/2011/quantum-experiment-0302.html">optical experiment</a> that, if performed successfully, could, for the first time, use quantum mechanics to execute a calculation that’s infeasible with conventional computers.</p><p>As for whether Aaronson is more of a quantum-computing researcher or a computational-complexity researcher, he finds the question impossible to answer. “Often, even when I’m working on a purely classical question, it’s a classical question inspired by something I’m trying to do in the quantum world,” Aaronson says. “But then, with quite a few of the quantum-computing problems that I’ve worked on, it’s ended up that the core of the difficulty was something in classical complexity theory. They’re very, very linked.”</p>
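The collision problem described above can be sketched classically (a hypothetical illustration of my own; Aaronson's lower bound concerns the number of *quantum* queries the problem requires, which this code does not model):

```python
import random

# Hypothetical sketch (mine, not Aaronson's): the collision problem
# asks whether a function on {0, ..., n-1} is one-to-one or two-to-one.
# Classically, sampling about sqrt(n) random inputs finds a collision
# with high probability in the two-to-one case (the birthday bound).

def looks_two_to_one(f, n, trials):
    seen = {}
    for _ in range(trials):
        x = random.randrange(n)
        y = f(x)
        if y in seen and seen[y] != x:
            return True              # collision found: two-to-one
        seen[y] = x
    return False                     # no collision: likely one-to-one

n = 1 << 12
one_to_one = lambda x: (5 * x + 3) % n    # a permutation of Z_n
two_to_one = lambda x: x // 2             # every output hit by two inputs
trials = 4 * int(n ** 0.5)                # ~4 * sqrt(n) samples suffice
print(looks_two_to_one(one_to_one, n, trials))  # False: no collisions exist
print(looks_two_to_one(two_to_one, n, trials))  # True, with overwhelming probability
```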
Scott Aaronson, an associate professor of electrical engineering and computer science. Photo: Bryce Vickmark

Research update: Multiple steps toward the ‘quantum singularity’
https://news.mit.edu/2013/research-update-quantum-singularity-0118
Over three days in December, four research groups announced progress on a quantum-computing proposal made two years ago by MIT researchers.
Fri, 18 Jan 2013 05:00:00 -0500
Larry Hardesty, MIT News Office
In early 2011, a pair of theoretical computer scientists at MIT proposed an <a href="/newsoffice/2011/quantum-experiment-0302.html" target="_self">optical experiment</a> that would harness the weird laws of quantum mechanics to perform a computation impossible on conventional computers. Commenting at the time, a quantum-computing researcher at Imperial College London said that the experiment “has the potential to take us past what I would like to call the ‘quantum singularity,’ where we do the first thing quantumly that we can’t do on a classical computer.”<br /><br />The experiment involves generating individual photons — particles of light — and synchronizing their passage through a maze of optical components so that they reach a battery of photon detectors at the same time. The MIT researchers — Scott Aaronson, an associate professor of electrical engineering and computer science, and his student, Alex Arkhipov — believed that, difficult as their experiment may be to perform, it could prove easier than building a fully functional quantum computer.<br /><br />In December, four different groups of experimental physicists, centered at the University of Queensland, the University of Vienna, the University of Oxford and Polytechnic University of Milan, reported the completion of rudimentary versions of Aaronson and Arkhipov’s experiment. Papers by two of the groups appeared back to back in the journal <i>Science</i>; the other two papers are as-yet unpublished.<br /><br />All four papers, however, appeared on <a href="http://arxiv.org/" target="_blank">arXiv</a>, an online compendium of research papers, within a span of three days. 
Aaronson is a co-author on the paper from Queensland, as is Justin Dove, a graduate student in the Department of Electrical Engineering and Computer Science and a member of MIT’s Optical and Quantum Communications Group.<br /><br /><strong>Changing channels</strong><br /><br />The original formulation of Aaronson and Arkhipov’s experiment proposed a network of beam splitters, optical devices that are ordinarily used to split an optical signal in half and route it down separate fibers. In practice, most of the groups to post papers on arXiv — those other than the Queensland group — built their networks on individual chips, using channels known as waveguides to route the photons. Where two waveguides come close enough together, a photon can spontaneously leap from one to the other, mimicking the behavior caused by a beam splitter.<br /><br />Performing a calculation impossible on a conventional computer would require a network of hundreds of beam splitters, with dozens of channels leading both in and out. A few dozen photons would be fired into the network over a random subset of the channels; photodetectors would record where they come out. That process would have to be repeated thousands of times.<br /><br />The groups posting papers on arXiv used networks of 10 or so beam splitters, with four or five channels leading in, and three or four photons. So their work constitutes a proof of principle — not yet the “quantum singularity.”<br /><br />The computation that Aaronson and Arkhipov’s experiment performs is obscure and not very useful: Technically, it samples from a probability distribution defined by permanents of large matrices. 
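To see why permanents are believed to be hard, compare the permanent's definition with the determinant's (a rough sketch of my own, not from the paper):

```python
from itertools import permutations

# A rough sketch (my own, not from the paper): the permanent of an
# n-by-n matrix is defined like the determinant but without the
# alternating signs. Determinants can be computed in O(n^3) time, but
# computing permanents exactly is #P-hard -- one reason sampling from
# a distribution defined by permanents is believed to be intractable
# for classical computers.

def permanent(m):
    n = len(m)
    total = 0
    for perm in permutations(range(n)):   # n! terms: exponential time
        prod = 1
        for i, j in enumerate(perm):
            prod *= m[i][j]
        total += prod
    return total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10 (the determinant is -2)
```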
There are, however, proposals to use optical signals to do general-purpose quantum computing, most prominently a scheme known as KLM, after its creators, Emanuel Knill, Raymond Laflamme and Gerard Milburn.<br /><br />According to Dove, some in the quantum-computing community have suggested that Aaronson and Arkhipov’s experiment may be difficult enough to perform with the requisite number of photons that researchers would be better off trying to build full-fledged KLM systems.<br /><br />But, Dove says, “One of the ways that Scott and I like to pitch this idea is as an intermediate step that we need to do KLM.” Building a KLM optical quantum computer would entail building everything necessary to perform the Aaronson-Arkhipov experiment — plus a bunch of other, perhaps even more challenging, technologies.<br /><br />“You can think of Scott and Alex’s result as saying, ‘Look, one of the steps to performing KLM is interesting in its own right,’” Dove says. “So I think it’s inevitable that we’re going to do these experiments, whether people label them that way or not.”<br />The Aaronson-Arkhipov sampling experiment can be thought of as the quantum-optical equivalent of a Galton board, a 19th-century device invented to illustrate some basic principles of probability theory. Image: Christine Daniloff/MIT

10-year-old problem in theoretical computer science falls
https://news.mit.edu/2012/interactive-proofs-work-even-if-quantum-information-is-used-0731
Interactive proofs — mathematical games that underlie much modern cryptography — work even if players try to use quantum information to cheat.
Tue, 31 Jul 2012 04:00:00 -0400
Larry Hardesty, MIT News Office
<div class="video_captions"><img src="/sites/default/files/images/inline/newsofficeimages/vidick.jpg" border="0" /><br /><strong>Thomas Vidick</strong><br /><i>Photo: M. Scott Brauer</i><br /><br /></div>
<div class="video_captions" style="float: left; padding: 10px 10px 10px 0px; width: 150px;"><img src="/sites/default/files/images/inline/newsofficeimages/best-of-2012.jpg" border="0" alt="Best of 2012" /></div>
Interactive proofs, which MIT researchers helped pioneer, have emerged as one of the major research topics in theoretical computer science. In the classic interactive proof, a questioner with limited computational power tries to extract reliable information from a computationally powerful but unreliable respondent. Interactive proofs are the basis of cryptographic systems now in wide use, but for computer scientists, they’re just as important for the insight they provide into the complexity of computational problems.<br /><br />Twenty years ago, researchers showed that if the questioner in an interactive proof is able to query multiple omniscient respondents — which are unable to communicate with each other — it can extract information much more efficiently than it could from a single respondent. As <a href="http://www.technologyreview.com/mitnews/424362/the-quantum-frontier/" target="_blank">quantum computing</a> became a more popular research topic, however, computer scientists began to wonder whether such multiple-respondent — or “multiprover” — systems would still work if the respondents were able to perform measurements on physical particles that were “entangled,” meaning that their quantum properties were dependent on each other.<br /><br />At the IEEE Symposium on Foundations of Computer Science in October, Thomas Vidick, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory, and Tsuyoshi Ito, a researcher at NEC Labs in Princeton, N.J., <a href="http://xxx.lanl.gov/abs/1207.0550" target="_blank">finally answer that question</a>: Yes, there are multiprover interactive proofs that hold up against entangled respondents. 
That answer is good news for cryptographers, but it’s bad news for quantum physicists, because it proves that there’s no easy way to devise experiments that illustrate the differences between classical and quantum physical systems.<br /><br />It’s also something of a surprise, because when the question was first posed, it was immediately clear that some multiprover proofs were not resilient against entanglement. Vidick and Ito didn’t devise the proof whose resilience they prove, but they did develop new tools for analyzing it.<br /><br /><strong>Boxed in</strong><br /><br />In an interactive proof, a questioner asks a series of questions, each of which constrains the range of possible answers to the next question. The questioner doesn’t have the power to compute valid answers itself, but it does have the power to determine whether each new answer meets the constraints imposed by the previous ones. After enough questions, the questioner will either expose a contradiction or reduce the probability that the respondent is cheating to near zero.<br /><br />Multiprover proofs are so much more efficient than single-respondent proofs because none of the respondents knows the constraints imposed by the others’ answers. Consequently, contradictions are much more likely if any respondent tries to cheat.<br /><br />But if the respondents have access to particles that are entangled with each other — say, electrons that were orbiting the same atom but were subsequently separated — they can perform measurements — of, say, the spins of select electrons — that will enable them to coordinate their answers. That’s enough to thwart some interactive proofs.<br /><br />The proof that Vidick and Ito analyzed is designed to make cheating difficult by disguising the questioner’s intent. 
To get a sense of how it works, imagine a graph that in some sense plots questions against answers, and suppose that the questioner is interested in two answers, which would be depicted on the graph as two points. Instead of asking the two questions of interest, however, the questioner asks at least three different questions. If the answers to those questions fall on a single line, then so do the answers that the questioner really cares about, which can now be calculated. If the answers don’t fall on a line, then at least one of the respondents is trying to cheat.<br /><br />“That’s basically the idea, except that you do it in a much more high-dimensional way,” Vidick says. “Instead of having two dimensions, you have ‘N’ dimensions, and you think of all the questions and answers as being a small, N-dimensional cube.”<br /><strong><br />Gaining perspective</strong><br /><br />This type of proof turns out to be immune to quantum entanglement. But demonstrating that required Vidick and Ito to develop a new analytic framework for multiprover proofs. <br /><br />According to the weird rules of quantum mechanics, until a measurement is performed on a quantum particle, the property being measured has no definite value; measuring snaps the particle into a definite state, but that state is drawn randomly from a probability distribution of possible states.<br /><br />The problem is that, when particles are entangled, their probability distributions can’t be treated separately: They’re really part of a single big distribution. But any mathematical description of that distribution supposes a bird’s-eye perspective that no respondent in a multiprover proof would have. Finding a way to do justice to both the connection between the measurements and the separation of the measurers proved enormously difficult. “It took Tsuyoshi and me about a year and a half,” Vidick says. “But in fact, one could say I’ve been working on this since 2006. 
My very first paper was on exactly the same topic.”<br /><br />Dorit Aharonov, a professor of computer science and engineering at Hebrew University in Jerusalem, says that Vidick and Ito’s paper is the quantum analogue of an earlier paper on multiprover interactive proofs that “basically led to the PCP theorem, and the PCP theorem is no doubt the most important result of complexity in the past 20 years.” Similarly, she says, the new paper “could be an important step toward proving the quantum analogue of the PCP theorem, which is a major open question in quantum complexity theory.”<br /><br />The paper could also have implications for physics, Aharonov adds. “This is a step toward deepening our understanding of the notion of entanglement, and of things that happen in quantum systems — correlations in quantum systems, and efficient descriptions of quantum systems, et cetera,” she says. “But it’s very indirect. This looks like an important step, but it’s a long journey.”<br />Thomas Vidick. Photo: M. Scott Brauer

Algorithmic incentives
https://news.mit.edu/2012/algorithmic-incentives-0425
A new twist on pioneering work done by MIT cryptographers almost 30 years ago could lead to better ways of structuring contracts.
Wed, 25 Apr 2012 04:00:00 -0400
Larry Hardesty, MIT News Office
<div class="video_captions"><img src="/sites/default/files/images/inline/newsofficeimages/algoincent.jpg" border="0" /><br /><strong>Interactive proofs are a type of mathematical game, pioneered at MIT, in which one player — often called Arthur — tries to extract reliable information from an unreliable interlocutor — Merlin. In a new variation known as a rational proof, Merlin is still untrustworthy, but he's a rational actor, in the economic sense.</strong><br /> <i>Image: Howard Pyle</i><br /><br /></div>
In 1993, MIT cryptography researchers Shafi Goldwasser and Silvio Micali shared in the first <a href="http://www.sigact.org/Prizes/Godel/" target="_blank">Gödel Prize</a> for theoretical computer science for their work on interactive proofs — a type of mathematical game in which a player attempts to extract reliable information from an unreliable interlocutor. <br /><br />In their groundbreaking 1985 paper on the topic, Goldwasser, Micali and the University of Toronto’s Charles Rackoff ’72, SM ’72, PhD ’74 proposed a particular kind of interactive proof, called a zero-knowledge proof, in which a player can establish that he or she knows some secret information without actually revealing it. Today, zero-knowledge proofs are used to secure transactions between financial institutions, and several startups have been founded to commercialize them.<br /><br />At the Association for Computing Machinery’s Symposium on Theory of Computing in May, Micali, the Ford Professor of Engineering at MIT, and graduate student Pablo Azar will present a new type of mathematical game that they’re calling a rational proof; it varies interactive proofs by giving them an economic component. Like interactive proofs, rational proofs may have implications for cryptography, but they could also suggest new ways to structure incentives in contracts.<br /><br />“What this work is about is asymmetry of information,” Micali says. “In computer science, we think that valuable information is the output of a long computation, a computation I cannot do myself.” But economists, Micali says, model knowledge as a probability distribution that accurately describes a state of nature. “It was very clear to me that both things had to converge,” he says.<br /><br />A classical interactive proof involves two players, sometimes designated Arthur and Merlin. 
Arthur has a complex problem he needs to solve, but his computational resources are limited; Merlin, on the other hand, has unlimited computational resources but is not trustworthy. An interactive proof is a procedure whereby Arthur asks Merlin a series of questions. At the end, even though Arthur can’t solve his problem himself, he can tell whether the solution Merlin has given him is valid.<br /><br />In a rational proof, Merlin is still untrustworthy, but he’s a rational actor in the economic sense: When faced with a decision, he will always choose the option that maximizes his economic reward. “In the classical interactive proof, if you cheat, you get caught,” Azar explains. “In this model, if you cheat, you get less money.”<br /><br /><strong>Complexity connection</strong><br /><br />Research on both interactive proofs and rational proofs falls under the rubric of computational-complexity theory, which classifies computational problems according to how hard they are to solve. The two best-known complexity classes are <a href="/newsoffice/2009/explainer-pnp.html" target="_blank">P and NP</a>. Roughly speaking, P is a set of relatively easy problems, while NP contains some problems that, as far as anyone can tell, are very, very hard. <br /><br />Problems in NP include the factoring of large numbers, the selection of an optimal route for a traveling salesman, and so-called satisfiability problems, in which one must find conditions that satisfy sets of logical restrictions. For instance, is it possible to contrive an attendance list for a party that satisfies the logical expression (Alice OR Bob AND Carol) AND (David AND Ernie AND NOT Alice)? (Yes: Bob, Carol, David and Ernie go to the party, but Alice doesn’t.) In fact, the vast majority of the hard problems in NP can be recast as satisfiability problems. 
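The party-list expression can be checked by brute force, and the same count is exactly what a rational Merlin's bid reveals in the lottery described below (a sketch of my own, on the five-guest version; it assumes AND binds tighter than OR, as in Python):

```python
from itertools import product

# A brute-force sketch (my own, not the paper's): evaluate the
# party-list formula over all 2^5 guest lists, count the satisfying
# ones, and compute the bid a rational Merlin would make if Arthur
# auctioned a lottery ticket paying $2^5 for a random satisfying list.

guests = ["Alice", "Bob", "Carol", "David", "Ernie"]

def satisfies(a):
    # (Alice OR Bob AND Carol) AND (David AND Ernie AND NOT Alice)
    return ((a["Alice"] or (a["Bob"] and a["Carol"]))
            and a["David"] and a["Ernie"] and not a["Alice"])

assignments = [dict(zip(guests, bits))
               for bits in product([False, True], repeat=len(guests))]
solutions = [a for a in assignments if satisfies(a)]
print(len(solutions))                  # 1: Bob, Carol, David and Ernie only

total, prize = len(assignments), 2 ** len(guests)
bid = len(solutions) / total * prize   # expected value of the ticket
print(bid)                             # 1.0 -- the count leaks through the bid
```

With the prize set to the total number of assignments, the ticket's expected value equals the number of solutions, so Arthur reads the answer straight off Merlin's bid.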
<br /><br />To get a sense of how rational proofs work, consider the question of how many solutions a satisfiability problem has — an even harder problem than finding a single solution. Suppose that the satisfiability problem is a more complicated version of the party-list problem, one involving 20 invitees. With 20 invitees, there are 1,048,576 possibilities for the final composition of the party. How many of those satisfy the logical expression? Arthur doesn’t have nearly enough time to test them all.<br /><br />But what if Arthur instead auctions off a ticket in a lottery? He’ll write down one perfectly random list of party attendees — Alice yes, Bob no, Carol yes and so on — and if it satisfies the expression, he’ll give the ticketholder $1,048,576. How much will Merlin bid for the ticket?<br /><br />Suppose that Merlin knows that there are exactly 300 solutions to the satisfiability problem. The chances that Arthur’s party list is one of them are thus 300 in 1,048,576. According to standard econometric analysis, a 300-in-1,048,576 shot at $1,048,576 is worth exactly $300. So if Merlin is a rational actor, he’ll bid $300 for the ticket. From that information, Arthur can deduce the number of solutions.<br /><br /><strong>First-round knockout</strong><br /><br />The details are more complicated than that, and of course, with <a href="http://www.claymath.org/millennium/" target="_blank">very few exceptions</a>, no one in the real world wants to be on the hook for a million dollars in order to learn the answer to a math problem. But the upshot of the researchers’ paper is that with rational proofs, they can establish in one round of questioning — “What do you bid?” — what might require millions of rounds using classical interactive proofs. “Interaction, in practice, is costly,” Azar says. “It’s costly to send messages over a network. 
Reducing the interaction from a million rounds to one provides a significant savings in time.”<strong></strong><br /><br />“I think it’s yet another case where we think we understand what’s a proof, and there is a twist, and we get some unexpected results,” says <a href="http://www.wisdom.weizmann.ac.il/~naor/" target="_blank">Moni Naor</a>, the Judith Kleeman Professorial Chair in the Department of Computer Science and Applied Mathematics at Israel’s Weizmann Institute of Science. “We’ve seen it in the past with interactive proofs, which turned out to be pretty powerful, much more powerful than you normally think of proofs that you write down and verify as being.” With rational proofs, Naor says, “we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn’t think of in the past.”<br /><br />Naor cautions that the work is “just at the beginning,” and that it’s hard to say when it will yield practical results, and what they might be. But “clearly, it’s worth looking into,” he says. “In general, the merging of the research in complexity, cryptography and game theory is a promising one.”<br /><br />Micali agrees. “I think of this as a good basis for further explorations,” he says. “Right now, we’ve developed it for problems that are very, very hard. But how about problems that are very, very simple?” Rational-proof systems that describe simple interactions could have an application in crowdsourcing, a technique whereby computational tasks that are easy for humans but hard for computers are farmed out over the Internet to armies of volunteers who receive small financial rewards for each task they complete. Micali imagines that they might even be used to characterize biological systems, in which individual organisms — or even cells — can be thought of as producers and consumers.<br />3 questions: P vs. NP
https://news.mit.edu/2010/3q-pnp
After glancing over a 100-page proof that claimed to solve the biggest problem in computer science, Scott Aaronson bet his house that it was wrong. Why?
Tue, 17 Aug 2010 04:00:01 -0400
Larry Hardesty, MIT News Office
<p>On Friday, Aug. 6, a mathematician at HP Labs named Vinay Deolalikar sent an e-mail to a host of other researchers with a <a href="http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_updated_1.pdf" target="_blank">103-page attachment</a> that purported to answer the most important outstanding question in computer science. That question is whether P = NP, and answering it will earn you $1 million from the Clay Mathematics Institute.<br />
<br />
Last fall, MIT News published a <a href="/newsoffice/2009/explainer-pnp.html">fairly detailed explanation</a> of what P = NP means. But roughly speaking, P is a set of relatively easy problems, NP includes a set of incredibly hard problems, and if they’re equal, then a large number of computer science problems that seem to be incredibly hard are actually relatively easy. Problems in NP include the factoring of large numbers, the selection of an optimal route for a traveling salesman, and the so-called 3-SAT problem, in which you must find values that satisfy triplets of logical conditions. For instance, is it possible to contrive an attendance list for a party that satisfies the triplet (Alice OR Bob AND Carol) AND (David AND Ernie AND NOT Alice)? (Yes: Bob, Carol, David, and Ernie were there, but Alice wasn’t.) Interestingly, the related 2-SAT problem — which instead uses pairs of conditions, like (Alice OR Ernie) — is in the set of easy problems, P, as is the closely related XOR-SAT problem.<br />
<br />
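Why is XOR-SAT easy? Each XOR clause is a linear equation over GF(2), so an entire instance can be solved by Gaussian elimination in polynomial time (a minimal sketch of my own, not from the article):

```python
# Minimal sketch (my own illustration, not from the article): XOR-SAT
# is easy because a clause like "x0 XOR x1 = 1" is a linear equation
# over GF(2), so a whole instance can be solved by Gaussian elimination
# in polynomial time. 3-SAT clauses have no such linear structure,
# which is one intuition for why they are so much harder.

def solve_xorsat(clauses, n):
    """clauses: list of (set_of_variable_indices, parity_bit).
    Returns a satisfying 0/1 assignment, or None if inconsistent."""
    pivots = {}                          # pivot variable -> reduced row
    for vs, b in clauses:
        vs = set(vs)
        for p, (pvs, pb) in pivots.items():
            if p in vs:                  # eliminate known pivot variables
                vs ^= pvs
                b ^= pb
        if vs:
            pivots[min(vs)] = (vs, b)    # new pivot row
        elif b:
            return None                  # reduced to 0 = 1: unsatisfiable
    assign = [0] * n                     # free variables default to 0
    for v in sorted(pivots, reverse=True):   # back-substitution
        vs, b = pivots[v]
        assign[v] = b ^ (sum(assign[u] for u in vs if u != v) % 2)
    return assign

# x0 XOR x1 = 1,  x1 XOR x2 = 0,  x0 XOR x2 = 1
print(solve_xorsat([({0, 1}, 1), ({1, 2}, 0), ({0, 2}, 1)], 3))  # [1, 0, 0]
```

The "proof proves too much" check Aaronson describes below amounts to asking why an argument that 3-SAT is hard would not apply equally to instances this solver dispatches in polynomial time.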
Most computer scientists believe that P doesn’t equal NP, and that’s what Deolalikar claimed to have proved. But by the following Monday, despite being on vacation in the Mediterranean and having time only to glance through the proof, MIT Associate Professor of Electrical Engineering and Computer Science Scott Aaronson had announced on his blog that he would mortgage his house and chip in another $200,000 if Deolalikar’s proof was correct. Last week, Aaronson took a few minutes to answer three questions about P and NP.<br />
<br />
<strong>Q. </strong>Has the proof now been shown conclusively to be wrong?<br />
<br />
<strong>A. </strong>I would say yeah. It was clear a couple days ago that there was a very serious gap in the statistical-physics part of the argument. It was not clear at all that the argument for showing why an NP-complete problem like 3-SAT was hard wouldn’t also show that problems like XOR-SAT are hard. Now, XOR-SAT is a variant of this satisfiability problem, which is known to be in P, which has an efficient solution. So if you’re proving that a problem is hard, but your proof could also be adapted to show that an easy problem is hard, then your proof must be fallacious: it proves too much. That’s the first check that people look for when someone announces a proof of P not equal to NP. Why doesn’t it also work for the easy problems?<br />
<br />
The problem with saying that the thing has been conclusively refuted is that whenever anyone points to a problem like this, Vinay Deolalikar has tended to respond, “Oh, yeah, sure, well, I’m going to address that in my next draft.” So it’s kind of a moving target. But I think it’s absolutely clear right now that at least the existing version does not solve the problem and furthermore wouldn’t solve the problem without some very, very major new ideas.<br />
<br />
<strong>Q. </strong>Why were you so certain that there was a flaw in the proof?<br />
<br />
<strong>A. </strong>P vs. NP is an absolutely enormous problem, and one way of seeing that is that there are already vastly, vastly easier statements that would be implied by P not equal to NP but that we already don’t know how to prove. So basically, if someone is claiming to prove P not equal to NP, then they’re sort of jumping 20 or 30 nontrivial steps beyond what we know today. So the first thing you look for is: What about steps one, two, and three? Can he explain how he’s answering even the easier questions? So I looked at the manuscript, and I didn’t see that.<br />
<br />
The other check is the one that I already mentioned, which is: Why does the proof fail for variants of NP-complete problems that are known to be easy? What Deolalikar was doing was trying to argue that 3-SAT is hard by looking at its statistical properties. The problem is that 2-SAT and XOR-SAT, the problems that are easy, have very, very similar statistical properties, so his argument did not look like something that could distinguish the hard problems from the easy ones.<br />
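The contrast Aaronson draws can be made concrete. An XOR-SAT instance is just a system of linear equations over GF(2) — a NOT folds into the right-hand side, since NOT x = x XOR 1 — so Gaussian elimination solves it in polynomial time, with no exponential search. A minimal sketch (the clause format, as lists of variable indices with a target bit, is an illustrative encoding):

```python
def solve_xorsat(n_vars, clauses):
    """Solve an XOR-SAT instance by Gaussian elimination over GF(2).

    Each clause is (list of variable indices, right-hand-side bit),
    meaning the XOR of those variables must equal the bit. Returns a
    0/1 assignment list, or None if the system is inconsistent.
    """
    pivots = {}  # pivot column -> reduced (mask, rhs) row
    for vars_, rhs in clauses:
        mask = 0
        for v in vars_:          # bit i of mask <=> variable i appears
            mask ^= 1 << v
        while mask:              # reduce against existing pivot rows
            col = mask.bit_length() - 1
            if col not in pivots:
                break
            pmask, prhs = pivots[col]
            mask ^= pmask
            rhs ^= prhs
        if mask == 0:
            if rhs:              # reduced to 0 = 1: unsatisfiable
                return None
            continue             # redundant clause
        pivots[mask.bit_length() - 1] = (mask, rhs)
    # Back-substitution, lowest pivot column first; free variables stay 0.
    assignment = [0] * n_vars
    for col in sorted(pivots):
        mask, rhs = pivots[col]
        val = rhs
        for other in range(col):
            if mask >> other & 1:
                val ^= assignment[other]
        assignment[col] = val
    return assignment

# x0 XOR x1 = 1 and x1 = 1  =>  x0 = 0, x1 = 1
print(solve_xorsat(2, [([0, 1], 1), ([1], 1)]))  # [0, 1]
```

No such linear structure is known for 3-SAT, which is exactly why a proof technique that can’t tell the two apart proves too much.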
<br />
We have very strong reasons to believe that these problems cannot be solved without major — enormous — advances in human knowledge. So you look at the paper and you don’t see that it’s commensurate with the scale of the problem that it’s claiming to solve. This is not a problem that’s going to be solved by just combining or pushing around the ideas that we already have.<br />
<br />
<strong>Q. </strong>Given that most people are pretty confident that P does not equal NP, what would the proof really do for us?<br />
<br />
<strong>A. </strong>Yes, almost all of us believe already that P is not equal to NP. But this is one of those things where it’s not so much the destination as the journey. It’s the massive amount of new understanding of computation that’s going to be needed to prove such a statement. What are we trying to prove? That for all these natural problems (optimization problems, search problems, finding the proof of a theorem, finding the best schedule for airlines, breaking cryptographic codes) there’s no algorithm, no matter how clever, that’s going to solve them feasibly. So in order to prove such a thing, a prerequisite is to understand the space of all possible efficient algorithms. That is an unbelievably tall order. So the expectation is that on the way to proving such a thing, we’re going to learn an enormous amount about efficient algorithms, beyond what we already know, and very likely discover new algorithms that will have applications we can’t even foresee right now.<br />
<br />
Often in the history of theoretical computer science, the same ideas that you use to prove that something’s impossible can then be turned around to show that something else is possible, and vice versa. The simplest example of that is in cryptography, where you show some problem is hard to solve, and that gives you a code that is useful. But there are many other examples.<br />
</p>
What computer science can teach economics
https://news.mit.edu/2009/game-theory
Constantinos Daskalakis applies the theory of computational complexity to game theory, with consequences in a range of disciplines.<br />Mon, 09 Nov 2009 03:01:00 -0500<br />https://news.mit.edu/2009/game-theory<br />Larry Hardesty, MIT News Office<br />Computer scientists have spent decades developing techniques for answering a single question: How long does a given calculation take to perform? <a href="http://people.csail.mit.edu/costis/">Constantinos Daskalakis</a>, an assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory, has exported those techniques to game theory, a branch of mathematics with applications in economics, traffic management — on both the Internet and the interstate — and biology, among other things. By showing that some common game-theoretical problems are so hard that they’d take the lifetime of the universe to solve, Daskalakis is suggesting that they can’t accurately represent what happens in the real world.<br /><br />Game theory is a way to mathematically describe strategic reasoning — of competitors in a market, drivers on a highway, or predators in a habitat. In the last five years alone, the Nobel Prize in economics has twice been awarded to game theorists for their analyses of multilateral treaty negotiations, price wars, public auctions and taxation strategies, among other topics. <br /><br />In game theory, a “game” is any mathematical model that correlates different player strategies with different outcomes. One of the simplest examples is the penalty-kick game: In soccer, a penalty kick gives the offensive player a shot on goal with only the goalie defending. The goalie has so little reaction time that she has to guess which half of the goal to protect just as the ball is struck; the shooter tries to go the opposite way. In the game-theory version, the goalie always wins if both players pick the same half of the goal, and the shooter wins if they pick different halves. 
So each player has two strategies — go left or go right — and there are two outcomes — kicker wins or goalie wins.<br /><br />It’s probably obvious that the best strategy for both players is to randomly go left or right with equal probability; that way, both will win about half the time. And indeed, that pair of strategies is what’s called the “Nash equilibrium” for the game. Named for John Nash — who taught at MIT and whose life was the basis for the movie <em>A Beautiful Mind</em> — the Nash equilibrium is the point in a game where the players have found strategies that none has the incentive to change unilaterally. In this case, for instance, neither player can improve her outcome by going one direction more often than the other.<br /><br />Of course, most games are more complicated than the penalty-kick game, and their Nash equilibria are more difficult to calculate. But the reason the Nash equilibrium is associated with Nash’s name — and not the names of other mathematicians who, over the preceding century, had described Nash equilibria for particular games — is that Nash was the first to prove that every game must have a Nash equilibrium. Many economists assume that, while the Nash equilibrium for a particular market may be hard to find, once found, it will accurately describe the market’s behavior.<br /><br />Daskalakis’s doctoral thesis — which won the Association for Computing Machinery’s 2008 dissertation prize — casts doubts on that assumption. Daskalakis, working with Christos Papadimitriou of the University of California, Berkeley, and the University of Liverpool’s Paul Goldberg, has shown that for some games, the Nash equilibrium is so hard to calculate that all the computers in the world couldn’t find it in the lifetime of the universe. 
And in those cases, Daskalakis believes, human beings playing the game probably haven’t found it either.<br /><br />In the real world, competitors in a market or drivers on a highway don’t (usually) calculate the Nash equilibria for their particular games and then adopt the resulting strategies. Rather, they tend to calculate the strategies that will maximize their own outcomes given the current state of play. But if one player shifts strategies, the other players will shift strategies in response, which will drive the first player to shift strategies again, and so on. This kind of feedback will eventually converge toward equilibrium: in the penalty-kick game, for example, if the goalie tries going in one direction more than half the time, the kicker can punish her by always going the opposite direction. But, Daskalakis argues, feedback won’t find the equilibrium more rapidly than computers could calculate it.<br /><br />The argument has some empirical support. Approximations of the Nash equilibrium for two-player poker have been calculated, and professional poker players tend to adhere to them — particularly if they’ve read any of the many books or articles on game theory’s implications for poker. The Nash equilibrium for three-player poker, however, is intractably hard to calculate, and professional poker players don’t seem to have found it.<br /><br />How can we tell? Daskalakis’s thesis showed that the Nash equilibrium belongs to a set of problems that is well studied in computer science: those whose solutions may be hard to find but are always relatively easy to verify. The canonical example of such a problem is the factoring of a large number: The solution seems to require trying out lots of different possibilities, but verifying an answer just requires multiplying a few numbers together. In the case of Nash equilibria, however, the solutions are much more complicated than a list of prime numbers. 
The Nash equilibrium for three-person Texas hold ’em, for instance, would consist of a huge set of strategies for any possible combination of players’ cards, dealers’ cards, and players’ bets. Exhaustively characterizing a given player’s set of strategies is complicated enough in itself, but to the extent that professional poker players’ strategies in three-player games can be characterized, they don’t appear to be in equilibrium. <br /><br />Anyone who’s into computer science — or who read <a href="/newsoffice/2009/explainer-pnp.html">“Explained: P vs. NP”</a> on the MIT News web site last week — will recognize the set of problems whose solutions can be verified efficiently: It’s the set that computer scientists call NP. Daskalakis proved that the Nash equilibrium belongs to a subset of NP consisting of hard problems with the property that a solution to one can be adapted to solve all the others. (The cognoscenti will infer that this subset is the set of NP-complete problems; but the fact that a Nash equilibrium always exists disqualifies the problem from NP-completeness. Instead, it is complete for a different class, called PPAD.)<br /><br />That result “is one of the biggest yet in the roughly 10-year-old field of algorithmic game theory,” says Tim Roughgarden, an assistant professor of computer science at Stanford University. It “formalizes the suspicion that the Nash equilibrium is not likely to be an accurate predictor of rational behavior in all strategic environments.”<br /><br />Given the Nash equilibrium’s unreliability, says Daskalakis, “there are three routes that one can go. 
One is to say, We know that there exist games that are hard, but maybe most of them are not hard.” In that case, Daskalakis says, “you can seek to identify classes of games that are easy, that are tractable.”<br /><br />The second route, Daskalakis says, is to find mathematical models other than Nash equilibria to characterize markets — models that describe transition states on the way to equilibrium, for example, or other types of equilibria that aren’t so hard to calculate. Finally, he says, it may be that where the Nash equilibrium is hard to calculate, some approximation of it — where the players’ strategies are almost the best responses to their opponents’ strategies — might not be. In those cases, the approximate equilibrium could turn out to describe the behavior of real-world systems.<br /><br />As for which of these three routes Daskalakis has chosen, “I’m pursuing all three,” he says.<br /><br />Constantinos Daskalakis, an assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory. Photo: Satyen Kale, Yahoo! Research<br /><br />Explained: P vs. NP
https://news.mit.edu/2009/explainer-pnp
The most notorious problem in theoretical computer science remains open, but the attempts to solve it have led to profound insights.<br />Thu, 29 Oct 2009 04:02:00 -0400<br />https://news.mit.edu/2009/explainer-pnp<br />Larry Hardesty, MIT News Office<p><i>Science and technology journalists pride themselves on the ability to explain complicated ideas in accessible ways, but there are some technical principles that we encounter so often in our reporting that paraphrasing them or writing around them begins to feel like missing a big part of the story. So in a new series of articles called "Explained," MIT News Office staff will explain some of the core ideas in the areas they cover, as reference points for future reporting on MIT research.</i><br />
<br />
In the 1995 Halloween episode of <i>The Simpsons</i>, Homer Simpson finds a portal to the mysterious Third Dimension behind a bookcase, and desperate to escape his in-laws, he plunges through. He finds himself wandering across a dark surface etched with green gridlines and strewn with geometric shapes, above which hover strange equations. One of these is the deceptively simple assertion that P = NP.<br />
<br />
In fact, in a 2002 poll, 61 mathematicians and computer scientists said that they thought P probably didn’t equal NP, to only nine who thought it did — and of those nine, several told the pollster that they took the position just to be contrary. But so far, no one’s been able to decisively answer the question one way or the other. Frequently called the most important outstanding question in theoretical computer science, the question of whether P equals NP is one of the seven problems for which the Clay Mathematics Institute will pay a million dollars, whether it’s settled by proof or by disproof. Roughly speaking, P is a set of relatively easy problems, and NP is a set that includes what seem to be very, very hard problems, so P = NP would imply that the apparently hard problems actually have relatively easy solutions. But the details are more complicated.<br />
<br />
Computer science is largely concerned with a single question: How long does it take to execute a given algorithm? But computer scientists don’t give the answer in minutes or milliseconds; they give it relative to the number of elements the algorithm has to manipulate.<br />
<br />
Imagine, for instance, that you have an unsorted list of numbers, and you want to write an algorithm to find the largest one. The algorithm has to look at all the numbers in the list: there’s no way around that. But if it simply keeps a record of the largest number it’s seen so far, it has to look at each entry only once. The algorithm’s execution time is thus directly proportional to the number of elements it’s handling — which computer scientists designate N. Of course, most algorithms are more complicated, and thus less efficient, than the one for finding the largest number in a list; but many common algorithms have execution times proportional to N<sup>2</sup>, or N times the logarithm of N, or the like.<br />
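That single-pass algorithm is short enough to write out in full; the comments mark the linear-time property the article describes:

```python
def largest(numbers):
    """Return the largest element of a non-empty list in one pass."""
    best = numbers[0]
    for x in numbers[1:]:  # each entry is examined exactly once,
        if x > best:       # so running time grows linearly with N
            best = x
    return best

print(largest([7, 3, 19, 4, 11]))  # 19
```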
<br />
A mathematical expression that involves N’s and N<sup>2</sup>s and N’s raised to other powers is called a polynomial, and that’s what the “P” in “P = NP” stands for: P is the set of problems whose solution times are at most proportional to some polynomial in N.<br />
<br />
Obviously, an algorithm whose execution time is proportional to N<sup>3</sup> is slower than one whose execution time is proportional to N. But such differences dwindle to insignificance compared to another distinction, between polynomial expressions — where N is the number being raised to a power — and expressions where a number is raised to the Nth power, like, say, 2<sup>N</sup>.<br />
<br />
If an algorithm whose execution time is proportional to N takes a second to perform a computation involving 100 elements, an algorithm whose execution time is proportional to N<sup>3</sup> takes almost three hours. But an algorithm whose execution time is proportional to 2<sup>N</sup> takes 300 quintillion years. And that discrepancy gets much, much worse the larger N grows.<br />
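Figures of this kind can be reproduced with a few lines of arithmetic. The sketch below assumes a machine that performs 100 algorithmic steps per second, so that the linear-time algorithm finishes 100 elements in exactly one second; a different assumed speed changes the constants but not the moral:

```python
# Assumed machine speed: 100 algorithmic steps per second, so a
# linear-time algorithm handles N = 100 elements in one second.
steps_per_second = 100
N = 100

linear_seconds = N / steps_per_second                    # 1 second
cubic_hours = N**3 / steps_per_second / 3600             # ~2.8 hours
exp_years = 2**N / steps_per_second / (3600 * 24 * 365)  # hundreds of
                                                         # quintillions of years

print(f"N: {linear_seconds:.0f} s, N^3: {cubic_hours:.1f} h, "
      f"2^N: {exp_years:.1e} years")
```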
<br />
NP (which stands for nondeterministic polynomial time) is the set of problems whose solutions can be verified in polynomial time. But as far as anyone can tell, many of those problems take exponential time to solve. Perhaps the most famous problem in NP that appears to require exponential time, for example, is finding the prime factors of a large number. Verifying a solution just requires multiplication, but solving the problem seems to require systematically trying out lots of candidates.<br />
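The asymmetry is easy to demonstrate: below, a naive trial-division factoring routine next to its verifier, which only multiplies. (The specific number 8051 is just an illustration.)

```python
def factor(n):
    """Factor n by trial division. The loop can run on the order of
    sqrt(n) times, which is exponential in the number of DIGITS of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify(n, claimed_factors):
    """Verifying a claimed factorization is just multiplication."""
    product = 1
    for f in claimed_factors:
        product *= f
    return product == n

n = 8051                    # an arbitrary illustrative number
print(factor(n))            # [83, 97]
print(verify(n, [83, 97]))  # True
```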
<br />
So the question “Does P equal NP?” means “If the solution to a problem can be verified in polynomial time, can it be found in polynomial time?” Part of the question’s allure is that the vast majority of NP problems whose solutions seem to require exponential time are what’s called NP-complete, meaning that a polynomial-time solution to one can be adapted to solve all the others. And in real life, NP-complete problems are fairly common, especially in large scheduling tasks. The most famous NP-complete problem, for instance, is the so-called traveling-salesman problem: given N cities and the distances between them, can you find a route that hits all of them but is shorter than … whatever limit you choose to set?<br />
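The decision version described above can be sketched by brute force. The 4-city distance matrix here is hypothetical, and the N! loop over orderings is exactly the blow-up that makes the problem intractable at scale:

```python
from itertools import permutations

def route_shorter_than(dist, limit):
    """Decision version of the traveling-salesman problem: is there a
    route visiting every city whose total length is under `limit`?
    Brute force checks all N! orderings -- feasible for 4 cities,
    hopeless for 40."""
    cities = range(len(dist))
    best = min(sum(dist[a][b] for a, b in zip(p, p[1:]))
               for p in permutations(cities))
    return best < limit

# A hypothetical symmetric distance matrix for 4 cities.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]

print(route_shorter_than(dist, 51))  # True: a route of length 50 exists
print(route_shorter_than(dist, 50))  # False: no route is shorter than 50
```

Verifying a proposed route, by contrast, just means adding up its legs and comparing against the limit, which takes time proportional to N.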
<br />
Given that P probably doesn’t equal NP, however — that efficient solutions to the hardest NP problems will probably never be found — what’s all the fuss about? Michael Sipser, the head of the MIT Department of Mathematics and a member of the Computer Science and Artificial Intelligence Lab’s Theory of Computation Group (TOC), says that the P-versus-NP problem is important for deepening our understanding of computational complexity.<br />
<br />
“A major application is in the cryptography area,” Sipser says, where the security of cryptographic codes is often ensured by the complexity of a computational task. The RSA cryptographic scheme, which is commonly used for secure Internet transactions — and was invented at MIT — “is really an outgrowth of the study of the complexity of doing certain number-theoretic computations,” Sipser says.<br />
<br />
Similarly, Sipser says, “the excitement around quantum computation really boiled over when Peter Shor” — another TOC member — “discovered a method for factoring numbers on a quantum computer. Peter's breakthrough inspired an enormous amount of research both in the computer science community and in the physics community.” Indeed, for a while, Shor’s discovery sparked the hope that quantum computers, which exploit the counterintuitive properties of extremely small particles of matter, could solve NP-complete problems in polynomial time. But that now seems unlikely: the factoring problem is actually one of the few hard NP problems that is not known to be NP-complete.<br />
<br />
Sipser also says that “the P-versus-NP problem has become broadly recognized in the mathematical community as a mathematical question that is fundamental and important and beautiful. I think it has helped bridge the mathematics and computer science communities.”<br />
<br />
But if, as Sipser says, “complexity adds a new wrinkle on old problems” in mathematics, it’s changed the questions that computer science asks. “When you’re faced with a new computational problem,” Sipser says, “what the theory of NP-completeness offers you is, instead of spending all of your time looking for a fast algorithm, you can spend half your time looking for a fast algorithm and the other half of your time looking for a proof of NP-completeness.”<br />
<br />
Sipser points out that some algorithms for NP-complete problems exhibit exponential complexity only in the worst-case scenario and that, in the average case, they can be more efficient than polynomial-time algorithms. But even there, NP-completeness “tells you something very specific,” Sipser says. “It tells you that if you’re going to look for an algorithm that’s going to work in every case and give you the best solution, you’re doomed: don’t even try. That’s useful information.”</p>