MIT News - Probability
https://news.mit.edu/topic/mitprobability-rss.xml
MIT news feed about: Probability (en). Fri, 27 Jun 2014 00:00:02 -0400

Mathematical patchwork
https://news.mit.edu/2014/profile-mathematician-alice-guionnet-0627
Alice Guionnet, an authority on random matrix theory, aims to make sense of huge data sets.Fri, 27 Jun 2014 00:00:02 -0400https://news.mit.edu/2014/profile-mathematician-alice-guionnet-0627Helen Knight | MIT News Office<p>From the increasing information transmitted through telecommunications systems to that analyzed by financial institutions or gathered by search engines and social networks, so-called “big data” is becoming a huge feature of modern life.</p><p>But to analyze all of this incoming data, we need to be able to separate the important information from the surrounding noise. This requires the use of increasingly sophisticated techniques.</p><p>Alice Guionnet, a professor of mathematics at MIT, investigates methods to make sense of huge data sets, to find the hidden correlations between apparently random pieces of information, their typical behavior, and random fluctuations. “I consider things called matrices, where you have an array of data,” Guionnet says. “So you take some data at random, put it in a big array, and then try to understand how to analyze it, for example to subtract the noise.”</p><p>The field of random matrix theory, as it is known, has grown rapidly over the last 10 years, thanks to the huge rise in the amount of data we produce. The theory is now used in statistics, finance, and telecommunications, as well as in biology to model connections between neurons in the brain, and in physics to simulate the radiation frequencies absorbed and emitted by heavy atoms.</p><p><strong>Mathematics as patchwork</strong></p><p>A world-leading researcher in probability, Guionnet has made important theoretical contributions to random matrix theory. 
In particular, she has made recent advances in understanding large deviations — the probability of finding unlikely events or unusual behavior within the array of data — and in connecting the theory with that of topological expansion, in which random matrices are used to help solve combinatorial questions.</p><p>“It’s a bit like when you make a patchwork quilt,” Guionnet says. “So you have all of your pieces of patchwork, and then you go to sew them together so that they make a nice pillow with no holes, and you have many possibilities for how to lay them out,” she says.</p><p>Random matrices can be used to calculate the number of ways in which this “patchwork” can be sewn together, Guionnet says. She also considers several of these random arrays simultaneously, to help solve problems in the field of operator algebra.</p><p>Guionnet was born in Paris. She completed her master’s degree at the Ecole Normale Superieure Paris in 1993, and then moved to the Universite Paris Sud to undertake her PhD. The focus of her PhD was the statistical mechanics of disordered systems, a branch of mathematical physics in which the world around us is modeled down to the level of microscopic particles. In this way, researchers attempt to determine how microscopic interactions affect activity at the macroscopic level.</p><p>In particular, Guionnet was interested in objects called spin glasses — disordered magnetic materials that are similar to real glass, in that they appear to be stationary, but which are actually moving, albeit at an incredibly slow rate. “If you looked at the windows of your house millions of years from now, they may be shifting downward as a result of gravity,” she says. 
“I was attempting to analyze the dynamics of these kinds of systems.”</p><p>Before she had completed her PhD, Guionnet was offered a position within the French National Center for Scientific Research (CNRS), and moved to Ecole Normale Superieure (ENS) Lyon, where she continued to focus on the spin glass model, before branching out into random matrices. “I initially wanted to work in applied mathematics,” Guionnet says. “But as I started to consider questions in random matrix theory, I moved into purer and purer mathematics.”</p><p>While at ENS Lyon, she was made a director of research for CNRS, and was given the opportunity to build her own team of top researchers in probability theory.</p><p><strong>Making connections</strong></p><p>She moved to MIT in 2012, where she continues her work in random matrix theory. In the same year, Guionnet was chosen as one of 21 mathematicians, theoretical physicists, and theoretical computer scientists named as Simons Investigators. Awarded by the Simons Foundation, a private organization that aims to advance research in math and the basic sciences, Simons Investigators each receive $100,000 annually to support their work.</p><p>“What I like about my work is that it crosses over into different fields — probability theory, operator algebra, and random matrices — and I’m trying to advance these three theories at the same time,” Guionnet says. “These different fields are all merging and connecting with each other, and that is what I try to understand in my work.”</p><p>The opportunity to work with people from different mathematical fields, and to learn new ideas from them, is one of the things Guionnet loves most about the subject. “When you work with people from different fields you begin to make new connections, and get a new point of view on the object you are studying, so it’s kind of exciting,” she says.</p><p>What’s more, the math itself is always evolving and progressing, she says: “Mathematics is beautiful.”</p>
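Guionnet's description of subtracting the noise from a big array of data can be illustrated with a standard random-matrix recipe: the eigenvalues of a pure-noise sample covariance matrix concentrate inside the Marchenko-Pastur bulk, so an eigenvalue that escapes the bulk's upper edge signals structure rather than noise. The sketch below is illustrative only; the dimensions, planted signal strength, and threshold margin are choices made here, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 100                      # samples x variables
gamma = p / n                        # aspect ratio of the data array

# Pure noise plus one planted "signal" direction along the first coordinate.
spike = 5.0                          # signal strength (illustrative)
Z = rng.standard_normal((n, p))
signal_dir = np.zeros(p)
signal_dir[0] = 1.0
X = Z + np.sqrt(spike) * rng.standard_normal((n, 1)) * signal_dir

# Eigenvalues of the sample covariance matrix, sorted ascending.
evals = np.linalg.eigvalsh(X.T @ X / n)

# Marchenko-Pastur upper edge: noise eigenvalues concentrate below this.
mp_edge = (1 + np.sqrt(gamma)) ** 2  # = 2.25 for gamma = 0.25

# Anything well above the edge is treated as signal, not noise.
outliers = evals[evals > mp_edge + 0.3]
```

With these numbers the planted direction produces a single eigenvalue near (1 + spike)(1 + gamma/spike) ≈ 6.3, well clear of the noise bulk, while the other 99 eigenvalues stay at or below the edge.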
Mathematics, Profile, Faculty, School of Science, Data, Big data, Probability

MITx course builds a systematic approach to understanding the uncertain
https://news.mit.edu/2014/mitx-course-builds-a-systematic-approach-to-understanding-the-uncertain
6.041x shows learners how to use probability to make scientifically sound predictions under uncertainty.Wed, 29 Jan 2014 16:00:00 -0500https://news.mit.edu/2014/mitx-course-builds-a-systematic-approach-to-understanding-the-uncertainSara Sezun and Steve Carson | Office of Digital LearningMany aspects of our personal and professional lives are fraught with uncertainty. Should we invest in the stock market now, or wait? How reliable are the GPS readings on a smartphone? What is the likelihood that a medical treatment will be effective? Through the new <i>MITx</i> course <a href="https://www.edx.org/course/mitx/mitx-6-041x-introduction-probability-1296" target="_blank">6.041x (Introduction to Probability – The Science of Uncertainty)</a>, MIT professors John Tsitsiklis and Patrick Jaillet share the probabilistic models used to analyze these and many other uncertain situations.<br /><br />While the course was developed in the Department of Electrical Engineering and Computer Science (EECS) over many decades, Tsitsiklis says the course is relevant to a much wider audience: "The class is targeted not just to EE (Electrical Engineering) students. For example, biologists need probability tools more and more."<br /><br />Tsitsiklis, the Clarence J Lebel Professor of Electrical Engineering, also describes his course’s approach as unique: “We’re more ambitious than the typical undergraduate probability class. We’re different from a class that gives an overview of problems and ideas ... We aim to provide the crispest way of explaining the concepts.”<br /><br />He explains that the ability to think probabilistically is a fundamental component of scientific literacy. Students in 6.041x will learn the models, skills, and tools that are the keys to analyzing data and making scientifically sound predictions under uncertainty.<br /><br />Tsitsiklis is intrigued by teaching through a new medium, which may appeal to students with various learning styles. 
“Some people prefer to learn by reading a textbook ... Some want the encouragement of chatting with an instructor. We hope this medium (<i>MITx</i>) will be perfect for some people.”<br /><br />The online class will offer the same content as the residential class. Tsitsiklis, who is also associate director of the Laboratory for Information and Decision Systems, has done most of the course development, while Jaillet, the Dugald C. Jackson Professor of EECS, will be responsible for managing the course once it begins. In addition, a teaching assistant, along with two undergraduates, will monitor the class forum, prepared to answer students’ questions as necessary.<br /><br />6.041x will be divided into 26 lectures, of which 23 will be given by Tsitsiklis, and three will be given by Jaillet. Each lecture is divided into eight to 10 short video clips, interspersed with concept questions and simple exercises. “It will be like doing mini-homework during class,” Tsitsiklis says. “Students will have to solve the problems on the spot,” which will provide them immediate feedback. In addition, students will have access to problem-solving videos, mostly recorded by MIT graduate students, that correspond to the recitations and tutorials in the residential course.<br /><br />Tsitsiklis emphasized that 6.041x will be as challenging as the residential course. “We have decided to keep it at exactly the same level,” he says, adding that the lectures will cover the same material, and students will be required to have a year of college-level calculus. “We have made the decision that this will be an ambitious, complete class, instead of a watered-down version.”<br /><br />The textbook for the course, "Introduction to Probability" (2008, Athena Scientific), was co-written by Tsitsiklis and Professor Dimitri Bertsekas, the McAfee Professor of Electrical Engineering. 
Students will be given free online access through the edX platform, and offered a discount for purchase of a hard copy of the text.<br /><br />Thus far, 18,000 people have registered to take 6.041x. Tsitsiklis expects that at least 20,000 will eventually take the course. Whether or not students complete all of the homework and tests, he still welcomes their participation. “Let everyone follow as much as they wish. It’s fine if people just want to listen and learn something,” he says.<br />

MITx, Electrical Engineering & Computer Science (eecs), Probability, EdX

High probability of success
https://news.mit.edu/2013/ben-vigoda-lyric-0501
MIT alumnus and entrepreneur Ben Vigoda took his probability-processing technology to market with help from the Institute.Wed, 01 May 2013 04:00:00 -0400https://news.mit.edu/2013/ben-vigoda-lyric-0501Rob Matheson, MIT News OfficeWhile at MIT, Ben Vigoda SM ’99, PhD ’03 patented technology that, in theory, allowed computer chips to calculate probabilities, enhancing computer-processing speed and capabilities while reducing power consumption. Starting a company to help bring this technology to market, however, was the real challenge. <br /><br />That’s where MIT came in: Using the Institute’s entrepreneurial resources, Vigoda co-founded Lyric Semiconductor Inc. and set up shop in Kendall Square’s startup haven, the Cambridge Innovation Center (CIC), located a few blocks from MIT. <br /><br />For years, Lyric worked quietly on its novel technology, dubbed probability processing, while raising more than $20 million in funding, primarily from the U.S. Defense Advanced Research Projects Agency and Stata Venture Partners. After officially announcing its technology in 2010, the company gained rapid notoriety in tech circles and, in 2011, was acquired by tech giant Analog Devices Inc. (ADI) for a substantial amount.<br /><br />However, the entire Lyric team — including Vigoda and his co-founder, 25-year semiconductor veteran David Reynolds — remains at the CIC as ADI employees, developing innovative technologies as Lyric Labs, a research group of ADI. <br /><br />Looking back, Vigoda says he owes much of his entrepreneurial success to MIT: He conceived of his technological direction in the MIT Media Lab, back in the late 1990s, with help from his thesis advisor, Neil Gershenfeld, and other mentors. But at MIT he also found a host of startup resources — such as an entrepreneurship prize competition and the Venture Mentoring Service (VMS) — that breathed life into his startup and helped him amass business expertise. 
<br /><br />“I owe much of my success to what MIT had to offer,” Vigoda says. <br /><br /><strong>Helping computers navigate ambiguity</strong><br /><br />Vigoda’s group is creating computer chips that perform inferences and machine learning on uncertain data — data that can be incomplete or contradictory — more efficiently than today’s chips. <br /><br />“If a normal computer program receives an unanticipated or noisy input, it will ordinarily either give the user an error message, crash the program or even, in some rare cases, crash the machine,” Vigoda says. “With probabilistic processing, the hope is to help the computer directly understand that the world is noisy, ambiguous, or even contradictory, and to be able to cope with that in a more native way.”<br /><br />With this technology, Vigoda says, computers can enable capabilities for processing signals and data with “orders-of-magnitude greater efficiencies” in cost, power and size. This has implications for a wide variety of applications, including consumer electronics, communications infrastructure, automotive electronics, mobile health, industrial automation and energy systems. The chips will show up soon in phones and tablets, Vigoda says.<br /><br />In 2010, Lyric was named to <i>EE Times</i>’ prestigious list of 60 emerging startups; in 2011, it made <i>Technology Review</i>’s annual list of the world’s 50 most innovative technology companies.<br /><br /><strong>Removing entrepreneurial barriers</strong><br /><br />Although Vigoda has been working for more than a decade on probability processing — work that is now based on more than 75 patents — he found at MIT the resources and early success needed to make his vision a reality.<br /><br />The MIT Media Lab, for instance, provided him with a free, nonexclusive license to patents he filed there as a PhD student — something it does for all graduate students who develop technology in the lab. 
“This is part of a message that says MIT is trying to encourage entrepreneurship,” Vigoda says.<br /><br />More importantly, perhaps, is that Vigoda joined a team that won a $10,000 runner-up prize in MIT’s $50K (now $100K) Entrepreneurship Competition in 2002. The prize sparked Vigoda’s entrepreneurial spirit and acclimated him to the business community.<br /><br />Vigoda says he learned a few key skills in the $50K competition: namely, a greater understanding of what venture capitalists are looking for, and how to structure a business plan. Before the competition, Vigoda was talking about how his technology could reduce the number of joules (a standard measurement of energy) computer chips would use; he learned to recast his pitch in units of dollars. This is much more effective in drawing investors, Vigoda says. <br /><br />“That was the main thing I learned: to translate your idea from technical units to economic units,” he says. “Venture capitalists weren’t as interested in us until we started talking business.”<br /><br /><strong>‘Think with your hands’</strong><br /><br />While still developing Lyric, Vigoda participated as a mentee in MIT’s Venture Mentoring Service (VMS), receiving advice from about 10 different mentors. Each had knowledge of different aspects of business, such as patents, manufacturing and raising funds, he says.<br /><br />Vigoda says his mentors helped him find and rectify weak spots in his business by continuously questioning his thinking. This process of questioning, he says, is vital in building a business. “Through their Socratic method, they would get you to understand what you can do better,” he says. 
“That’s not only helpful for improving the thought process about your business and technology, but it’s just a good skill to learn.”<br /><br />But it wasn’t just extracurricular activities that helped Vigoda, it was also a core philosophy, he says, that he learned from Gershenfeld, who is director of MIT’s Center for Bits and Atoms: “Always think with your hands,” Vigoda says — essentially, start building a prototype right away and learn as you go.<br /><br />It’s something architects do, Vigoda says, paraphrasing words he heard from Joi Ito, the director of the MIT Media Lab: “Architects build a model before they even know what they want. They begin messing with materials and they learn as they go and eventually develop a viable model. That ‘just start building’ mentality at MIT is awesome and is how we developed our technology and company early on.”<br /><br /><strong>Staying close to the ‘fire hose’</strong><br /><br />While working on novel computer-processing technology may draw some to set up shop in Silicon Valley, Vigoda and his Lyric team have stayed put in the CIC. The reason, Vigoda says, is to stay close to the intellectual capital at MIT and Harvard University, and be involved with the innovative community fed by Kendall Square’s tech sector. <br /><br />“From a business perspective, there’s a life of ideas that flows into Kendall. We’re always hearing about the coolest innovations here. We’ve met customers, employees, and discovered new technologies here. That’s had a substantial impact on the growth and development of this company,” he says. <br /><br />For emphasis, Vigoda evokes a popular metaphor associated with MIT: “drinking from the fire hose,” which refers not only to receiving an education at MIT — with its sometimes overwhelming course load — but also to the flood of ideas and intellectual growth at the Institute. “We’re not leaving. 
We want to stay close to the fire hose,” he says.

Ben Vigoda SM ’99, PhD ’03. Photo courtesy of Lyric Semiconductor/Lyric Labs

Entrepreneurship, Innovation and Entrepreneurship (I&E), Kendall Square, MIT $100K competition, Probability, Startups, Cambridge Innovation Center, Venture Mentoring Service

Research update: Multiple steps toward the ‘quantum singularity’
https://news.mit.edu/2013/research-update-quantum-singularity-0118
Over three days in December, four research groups announced progress on a quantum-computing proposal made two years ago by MIT researchers.Fri, 18 Jan 2013 05:00:00 -0500https://news.mit.edu/2013/research-update-quantum-singularity-0118Larry Hardesty, MIT News OfficeIn early 2011, a pair of theoretical computer scientists at MIT proposed an <a href="/newsoffice/2011/quantum-experiment-0302.html" target="_self">optical experiment</a> that would harness the weird laws of quantum mechanics to perform a computation impossible on conventional computers. Commenting at the time, a quantum-computing researcher at Imperial College London said that the experiment “has the potential to take us past what I would like to call the ‘quantum singularity,’ where we do the first thing quantumly that we can’t do on a classical computer.”<br /><br />The experiment involves generating individual photons — particles of light — and synchronizing their passage through a maze of optical components so that they reach a battery of photon detectors at the same time. The MIT researchers — Scott Aaronson, an associate professor of electrical engineering and computer science, and his student, Alex Arkhipov — believed that, difficult as their experiment may be to perform, it could prove easier than building a fully functional quantum computer.<br /><br />In December, four different groups of experimental physicists, centered at the University of Queensland, the University of Vienna, the University of Oxford and Polytechnic University of Milan, reported the completion of rudimentary versions of Aaronson and Arkhipov’s experiment. Papers by two of the groups appeared back to back in the journal <i>Science</i>; the other two papers are as-yet unpublished.<br /><br />All four papers, however, appeared on <a href="http://arxiv.org/" target="_blank">arXiv</a>, an online compendium of research papers, within a span of three days. 
Aaronson is a co-author on the paper from Queensland, as is Justin Dove, a graduate student in the Department of Electrical Engineering and Computer Science and a member of MIT’s Optical and Quantum Communications Group.<br /><br /><strong>Changing channels</strong><br /><br />The original formulation of Aaronson and Arkhipov’s experiment proposed a network of beam splitters, optical devices that are ordinarily used to split an optical signal in half and route it down separate fibers. In practice, most of the groups to post papers on arXiv — those other than the Queensland group — built their networks on individual chips, using channels known as waveguides to route the photons. Where two waveguides come close enough together, a photon can spontaneously leap from one to the other, mimicking the behavior caused by a beam splitter.<br /><br />Performing a calculation impossible on a conventional computer would require a network of hundreds of beam splitters, with dozens of channels leading both in and out. A few dozen photons would be fired into the network over a random subset of the channels; photodetectors would record where they come out. That process would have to be repeated thousands of times.<br /><br />The groups posting papers on arXiv used networks of 10 or so beam splitters, with four or five channels leading in, and three or four photons. So their work constitutes a proof of principle — not yet the “quantum singularity.”<br /><br />The computation that Aaronson and Arkhipov’s experiment performs is obscure and not very useful: Technically, it samples from a probability distribution defined by permanents of large matrices. 
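The permanents mentioned here are like determinants but with every term taken with a plus sign, and computing them exactly is famously hard (#P-hard), which is what makes the sampling task intractable for classical computers. For small matrices the permanent can be computed with Ryser's inclusion-exclusion formula; the sketch below is illustrative and runs in O(2^n · n^2) time, so it doubles in cost with every added row.

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula: perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum of row i restricted to S)."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For example, `permanent([[1, 2], [3, 4]])` is 1·4 + 2·3 = 10, and the permanent of the all-ones 3 × 3 matrix is 3! = 6. The exponential blow-up is the point: even modest photon counts define permanents no classical machine can evaluate quickly.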
There are, however, proposals to use optical signals to do general-purpose quantum computing, most prominently a scheme known as KLM, after its creators, Emanuel Knill, Raymond Laflamme and Gerard Milburn.<br /><br />According to Dove, some in the quantum-computing community have suggested that Aaronson and Arkhipov’s experiment may be difficult enough to perform with the requisite number of photons that researchers would be better off trying to build full-fledged KLM systems.<br /><br />But, Dove says, “One of the ways that Scott and I like to pitch this idea is as an intermediate step that we need to do KLM.” Building a KLM optical quantum computer would entail building everything necessary to perform the Aaronson-Arkhipov experiment — plus a bunch of other, perhaps even more challenging, technologies.<br /><br />“You can think of Scott and Alex’s result as saying, ‘Look, one of the steps to performing KLM is interesting in its own right,’” Dove says. “So I think it’s inevitable that we’re going to do these experiments, whether people label them that way or not.”<br />

The Aaronson-Arkhipov sampling experiment can be thought of as the quantum-optical equivalent of a Galton board, a 19th-century device invented to illustrate some basic principles of probability theory. Image: Christine Daniloff/MIT

Computational complexity theory, Quantum computing, Computer Science and Artificial Intelligence Laboratory (CSAIL), Computing, Optical computing, Faculty, Graduate, postdoctoral, Research, Probability, Collaboration

Proving quantum computers feasible
https://news.mit.edu/2012/proving-quantum-computers-feasible-1127
With a new contribution to probability theory, researchers show that relatively simple physical systems could yield powerful quantum computers.Tue, 27 Nov 2012 05:00:02 -0500https://news.mit.edu/2012/proving-quantum-computers-feasible-1127Larry Hardesty, MIT News OfficeQuantum computers are devices — still largely theoretical — that could perform certain types of computations much faster than classical computers; one way they might do that is by exploiting “spin,” a property of tiny particles of matter. A “spin chain,” in turn, is a standard model that physicists use to describe systems of quantum particles, including some that could be the basis for quantum computers.<br /><br />Many quantum algorithms require that particles’ spins be “entangled,” meaning that they’re all dependent on each other. The more entanglement a physical system offers, the greater its computational power. Until now, theoreticians have demonstrated the possibility of high entanglement only in a very complex spin chain, which would be difficult to realize experimentally. In simpler systems, the degree of entanglement appeared to be capped: Beyond a certain point, adding more particles to the chain didn’t seem to increase the entanglement.<br /><br />This month, however, in the journal <i>Physical Review Letters</i>, a group of researchers at MIT, IBM, Masaryk University in the Czech Republic, the Slovak Academy of Sciences and Northeastern University proved that even in simple spin chains, <a href="http://prl.aps.org/abstract/PRL/v109/i20/e207202" target="_blank">the degree of entanglement scales with the length of the chain</a>. The research thus offers strong evidence that relatively simple quantum systems could offer considerable computational resources.<br /><br />In quantum physics, the term “spin” describes the way that tiny particles of matter align in a magnetic field: A particle with spin up aligns in one direction, a particle with spin down in the opposite direction. 
But subjecting a particle to multiple fields at once can cause it to align in other directions, somewhere between up and down. In a complex enough system, a particle might have dozens of possible spin states.<br /><br />A spin chain is just what it sounds like: a bunch of particles in a row, analyzed according to their spin. A spin chain whose particles have only two spin states exhibits no entanglement. But in the new paper, MIT professor of mathematics Peter Shor, his former student Ramis Movassagh, who is now an instructor at Northeastern, and their colleagues showed that unbounded entanglement is possible in chains of particles with only three spin states — up, down and none. Systems of such particles should, in principle, be much easier to build than those whose particles have more spin states.<br /><br /><strong>Tangled up</strong><br /><br />The phenomenon of entanglement is related to the central mystery of quantum physics: the ability of a single particle to be in multiple mutually exclusive states at once. Electrons, photons and other fundamental particles can, in some sense, be in more than one place at the same time. Similarly, they can have more than one spin at once. If you try to measure the location, spin or some other quantum property of a particle, however, you’ll get a definite answer: The particle will snap into just one of its possible states.<br /><br />If two particles are entangled, then performing a measurement on one tells you something about the other. For instance, if you measure the spin of an electron orbiting a helium atom, and its spin is up, the spin of the other electron in the same orbit must be down, and vice versa. For a chain of particles to be useful for quantum computing, all of their spins need to be entangled. 
If, at some point, adding more particles to the chain ceases to increase entanglement, then it also ceases to increase computational capacity.<br /><br />To show that entanglement increases without bound in chains of three-spin particles, the researchers proved that any such chain with a net energy of zero could be converted into any other through a small number of energy-preserving substitutions. The proof is kind of like one of those puzzles where you have to convert one word into another of the same length, changing only one letter at a time.<br /><br />“Energy preserving” just means that changing the spins of two adjacent particles doesn’t change their total energy. For instance, if two adjacent particles have spin up and spin down, they have the same energy as two adjacent particles with no spin. Similarly, swapping the spins of two adjacent particles leaves their energy the same. Here, the “puzzle” is to convert one spin chain into another using only these and a couple of other substitutions.<br /><br /><strong>No bottlenecks</strong><br /><br />If you envision every set of definite spins for a chain of three-spin particles as a point in space, and draw lines only between those that are interchangeable using energy-preserving substitutions, then you end up with a well-connected network. <br /><br />“If you want to go from any state to another state, it has high conductivity,” Movassagh says.
“It’s like, if you have a town with a bunch of alleys, and you want to go from any neighborhood to any other, you can only go rapidly if there’s no one road that’s necessary to use and congested.” To prove that, in systems of three-spin particles, transitions between sets of spin were possible through these “back alleys,” Movassagh says, “we proved something that we think is new in probability theory.”<br /><br />“It’s been known that if the particles can have constant but rather high dimension” — that is, number of possible spin states — “the entanglement can be pretty high,” says Sandy Irani, a professor of computer science at the University of California at Irvine who specializes in quantum computation. “But the requirement is that these little particles have something like dimension 14, 15, 16. In terms of what people are actually looking at experimentally, they’re looking at very low-dimensional things. Having particles of dimension of 15, 16, is much more difficult to bring about in the lab.” <br /><br />Shor, Movassagh and their colleagues, Irani says, “have shown that if you just step up from two to three, the entanglement can actually grow with the number of particles.”<br /><br />Irani cautions, however, that the new paper shows only that entanglement scales logarithmically with the length of the spin chain. “If you go up to these larger-dimension particles, in the teens, you get entanglement that can scale with the number of particles instead of the log of the number of particles,” she says, “and that may be required for quantum computing.”The possible quantum states of a chain of particles can be represented as points in space, with lines connecting states that can be swapped with no change in the chain's total energy. MIT researchers and their colleagues
showed that such networks are densely interconnected, with heavily trafficked pathways between points. Graphic: Christine Daniloff

Quantum computing, Faculty, Mathematics, Particles, Probability, Research, Spin chains, Computing, Theoretical computer science, Physics, Computer science and technology

On the hunt for mathematical beauty
https://news.mit.edu/2012/profile-borodin-0323
Alexei Borodin uses sophisticated tools to extract information from large groups.Fri, 23 Mar 2012 04:00:00 -0400https://news.mit.edu/2012/profile-borodin-0323Helen Knight, MIT News correspondentFor anyone who has ever taken a commercial flight, it’s an all-too-familiar scene: Hundreds of passengers sit around waiting for boarding to begin, then rush to be at the front of the line as soon as it does. <br /><br />Boarding an aircraft can be a frustrating experience, with passengers often wondering if they will ever make it to their seats. But Alexei Borodin, a professor of mathematics at MIT, can predict how long it will take for you to board an airplane, no matter how long the line. That’s because Borodin studies difficult probability problems, using sophisticated mathematical tools to extract precise information from seemingly random groups. <br /><br />
<div class="video_captions"><img src="/sites/default/files/images/inline/newsofficeimages/borodin.jpg" border="0" alt="Alexei Borodin" /><br /> <strong>Alexei Borodin</strong><br /> <i>Photo: M. Scott Brauer</i><br /><br /></div>
“Imagine an airplane in which each row has one seat, and there are 100 seats,” Borodin says. “People line up in random order to fill the plane, and each person has a carry-on suitcase in their hand, which it takes them one minute to put into the overhead compartment.”<br /><br />If the passengers all board the plane in an orderly fashion, starting from the rear seats and working their way forwards, it would be a very quick process, Borodin says. But in reality, people queue up in a random order, significantly slowing things down.<br /><br />So how long would it take to board the aircraft? “It’s not an easy problem to solve, but it is possible,” Borodin says. “It turns out that it is approximately equal to twice the square root of the number of people in the queue.” So with a 100-seat airplane, boarding would take 20 minutes, he says.<br /><br />Borodin says he has enjoyed solving these kinds of tricky problems since he was a child growing up in the former Soviet Union. Born in the industrial city of Donetsk in eastern Ukraine, Borodin regularly took part in mathematical Olympiads in his home state. Held all over the world, these Olympiads set unusual problems for children to solve, requiring them to come up with imaginative solutions while working against the clock. <br /><br />It is perhaps no surprise that Borodin had an interest in math from an early age: His father, Mikhail Borodin, is a professor of mathematics at Donetsk State University. “He was heavily involved in research while I was growing up,” Borodin says. “I guess children always look up to their parents, and it gave me an understanding that mathematics could be an occupation.”<br /><br />In 1992, Borodin moved to Russia to study at Moscow State University. The dissolution of the USSR meant that, arriving in Moscow, Borodin found himself faced with a choice of whether to take Ukrainian citizenship, like his parents back in Donetsk, or Russian. 
It was a difficult decision, but for practical reasons Borodin opted for Russian citizenship.<br /><br />Times were tough while Borodin was studying in Moscow. Politically there was a great deal of unrest in the city, including a coup attempt in 1993. Many scientists began leaving Russia, in search of a more stable life elsewhere.<br /><br />Financially things were not easy for Borodin either, as he had just $15 each month to spend on food and accommodation. “But I still remember the times fondly,” he says. “I didn’t pay much attention to politics at the time; I was working too hard. And I had my friends, and my $15 per month to live on.”<br /><br />After Borodin graduated from Moscow State University in 1997, a former adviser who had moved to the United States invited Borodin over to join him. So he began splitting his time between Moscow and Philadelphia, where he studied for his PhD at the University of Pennsylvania.<br /><br />He then spent seven years at the California Institute of Technology before moving to MIT in 2010, where he has continued his research into probabilities in large random objects.<br /><br />Borodin says there are no big mathematical problems he is desperate to solve. Instead, his greatest motivation is the pursuit of what he calls the beauty of the subject. While it may seem strange to talk about finding beauty in abstract mathematical constructions, many mathematicians view their work as an artistic endeavor.<br /><br />“If one asks 100 mathematicians to describe this beauty, one is likely to get 100 different answers,” he says.<br /><br />And yet all mathematicians tend to agree that something is beautiful when they see it, he adds, saying, “It is this search for new instances of mathematical beauty that largely drives my research.”

Alexei Borodin. Photo: M. Scott Brauer

Data, Faculty, Mathematics, Global, Large sets, Probability, Russia
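Borodin's boarding estimate is easy to check empirically. Under one standard idealization of the model he describes (one seat per row, no spare aisle space, one minute per bag, so each stowing passenger blocks everyone behind), the total boarding time works out to the length of the longest increasing subsequence of the row numbers in queue order, which for a random queue grows like 2√n. The Monte Carlo sketch below assumes that idealization; the article itself does not spell out the model's fine print.

```python
import bisect
import random

def boarding_time(rows):
    """Rounds needed to board when each passenger blocks everyone behind
    while stowing: equals the length of the longest increasing subsequence
    of `rows`, computed by patience sorting in O(n log n)."""
    piles = []
    for r in rows:
        i = bisect.bisect_left(piles, r)
        if i == len(piles):
            piles.append(r)   # start a new pile: LIS grew by one
        else:
            piles[i] = r      # replace the smallest pile top >= r
    return len(piles)

def average_boarding_time(n=100, trials=500, seed=1):
    """Average boarding time over random queue orders of n passengers."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        rows = list(range(1, n + 1))
        rng.shuffle(rows)
        total += boarding_time(rows)
    return total / trials
```

Boarding rear-to-front (rows in decreasing order) takes a single round, front-to-back takes n rounds, and a random queue of 100 averages a bit under the asymptotic 2√100 = 20 minutes. Fittingly, the fluctuations of this quantity around 2√n follow the same Tracy-Widom distribution that governs the largest eigenvalues of random matrices, one of the bridges between combinatorics and random matrix theory that Borodin's field studies.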