On Tuesday, Dec. 1, members of the MIT Media Lab’s Human Dynamics Laboratory received an e-mail with a $40,000 proposition. The U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) was holding a competition that weekend: on Saturday morning, 10 large red weather balloons would be raised at undisclosed locations across the United States; the first team to use social media — like online social networks and communication systems — to determine the correct latitude and longitude of all 10 would receive $40,000.
On Wednesday, members of the lab began discussing possible approaches to the problem. By Thursday, they had built a demonstration version of the website they would use to aggregate data, and on Thursday evening the site went live. Within two days, 5,000 people had formally joined the team’s network, out of hundreds of thousands who had visited the site. On Saturday morning the balloons went up, and by the end of the day the MIT team — which consisted of postdocs Riley Crane and Manuel Cebrian and grad students Galen Pickard, Anmol Madan, and Wei Pan — had won the competition.
More than 4,000 teams had entered the competition, and some of them had been working for months. Some were made up of veteran “geocachers,” who spend their free time using GPS receivers to track down Tupperware containers filled with log books and trinkets; one team had been profiled on National Public Radio.
But the Human Dynamics Laboratory has a particular expertise in using digital media to gain perspective on and even alter the behavior of large groups of people. To some extent, the approach taken by the winning MIT team drew on that expertise; but the team’s experience in the competition also suggests new avenues of study, and the data it collected may contain important clues about how social media can aid in large-scale collective problem solving. For example, says Alex “Sandy” Pentland, who heads the Human Dynamics Lab, governments could use techniques similar to those employed in the DARPA challenge to mobilize resources after a disaster — to track down cranes for sifting rubble, say, or boats to rescue people stranded by a flood.
The crux of the MIT team’s approach was the incentive structure it designed: a way of splitting up the prize money among the people who helped find each balloon. Whoever provided a balloon’s correct coordinates got $2,000; whoever had invited that person to join the network got $1,000; whoever had invited the inviter got $500; and so on. No matter how long the chain grew, the total payout for a single balloon would never quite reach $4,000; whatever was left over went to charity.
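In other words, the payments form a halving geometric series whose sum converges to $4,000 per balloon ($40,000 spread over 10 balloons). A minimal sketch of the rule, assuming only what’s described above (the function name and defaults are ours):

```python
# A minimal sketch of the payout split described above: $2,000 to the
# finder, half as much to each successive inviter up the chain, and
# whatever remains of the $4,000 per-balloon pot goes to charity.

def split_prize(chain_length: int, finder_reward: float = 2000.0,
                pot: float = 4000.0):
    """Return (payments, charity) for an invitation chain of the given
    length; payments[0] is the finder, payments[1] the finder's inviter,
    and so on up the chain."""
    payments = [finder_reward / (2 ** i) for i in range(chain_length)]
    return payments, pot - sum(payments)

payments, charity = split_prize(4)
print(payments)  # [2000.0, 1000.0, 500.0, 250.0]
print(charity)   # 250.0 -- the halving series never quite reaches $4,000
```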
‘Long tail’
In principle, some people in the payment chain could end up getting only a few dollars, or even a few cents. But Pickard explains that the chain’s “long tail” gave people an incentive to spread the word about the MIT team’s offer. “If I tell somebody, and they tell at least two people, mathematically, I do better than if I hadn’t told them,” Pickard says. “It’s designed explicitly so that I actually am incentivized to tell you and then have you tell all your friends.” If the payment scheme had rewarded, say, only the first two people in the chain, Pickard says, a participant would have had an incentive to tell as many other people as possible about the contest, but then to try to prevent them from telling anyone else.
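Pickard’s condition can be made concrete with a toy expected-value model; this is our illustration, not the team’s published analysis. Suppose each recruit independently spots a balloon with probability p and goes on to invite k more people, while the reward flowing up to me halves at each level. Recruits d levels below me then number k^d, and each of their finds pays me $2,000 / 2^d, so with k ≥ 2 every deeper generation of recruits is worth at least as much to me in expectation as the one above it:

```python
# A toy expected-value model of the recursive incentive scheme (our
# illustration; p, k, and the depth limit are made-up parameters).

FINDER_REWARD = 2000.0  # paid for a balloon's correct coordinates

def expected_payout_from_subtree(p: float, k: int, depth: int) -> float:
    """Expected money I earn from recruits below me in the invitation
    tree: level-d descendants number k**d, each spots a balloon with
    probability p, and each such find pays me FINDER_REWARD / 2**d."""
    return sum((k ** d) * p * FINDER_REWARD / (2 ** d)
               for d in range(1, depth + 1))

p = 0.001  # chance any one recruit spots a balloon (invented number)
for k in (1, 2, 3):
    print(f"branching factor {k}: expected payout from 6 levels "
          f"of recruits = ${expected_payout_from_subtree(p, k, 6):.2f}")
# branching factor 1: contributions decay geometrically (~$1.97)
# branching factor 2: each level contributes equally ($12.00)
# branching factor 3: contributions grow with depth ($62.34)
```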
Of course, the MIT researchers’ design for the incentive scheme meant that they wouldn’t get to pocket any of the prize money. But Pentland says that the team’s real motivation in entering the contest was “to try out some of the ideas we have been playing with.” Some of those ideas had to do with how information flows through an ad hoc network with many different distribution mechanisms.
Pentland points out that the MIT team used what he describes as “broadcast” media to draw attention to its incentive scheme — posts on highly trafficked websites like slashdot.org, for instance. The news then diffused through a variety of social media, but claiming a share of the prize money required registering on the MIT team’s website, which Pentland describes as a “concentrating mechanism.” “This is one of the first examples of combining these different types of media,” Pentland says. “You can imagine doing that more in the future, where all sorts of government things, all sorts of societal functions, have many different types of channels — broadcast, social networking, point to point, peer to peer — and that those are fluidly interleaved to be able to find the right resources, validate them, concentrate them, and then do it again. And we don’t know anything about that, at the moment, I don’t think. I don’t think anybody’s ever done stuff like this by thinking of it as a computational problem.”
Understanding the competition as a computational problem requires analyzing data, but so far, the MIT team itself hasn’t had the chance to look at the results of its massive exercise in networking. The team immediately turned its data over to MIT’s auditing and human-subjects departments, which are reviewing them to confirm that they were properly collected and meet the criteria for public-interest research.
One of the questions that the lab hopes to get a quantitative handle on is how to separate reliable reports from unreliable ones. Pickard says that of the balloon sightings reported through the team’s website, about half contained inaccurate data, and some of those were intentionally faked. “There were other teams who were actively trying to deceive us,” Pickard says. “We talked to them afterwards, and they said they had fun spamming us with false information.” The researchers will look for patterns that provide a kind of statistical signature for false reports. As an example, Pickard points to one of the methods the team actually used to weed out fakes: if several balloon reports specified the same general geographic area but varied slightly in their GPS coordinates, they were likely to have come from people who had seen the balloon firsthand but hadn’t had a chance to pin down its precise location. When, on the other hand, several reports arrived with exactly the same GPS coordinates, they were likely to share a common source, such as a posting on the Internet, which may or may not have been reliable.
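A rough sketch of that duplicate-versus-cluster heuristic might look like the following; the team’s actual filtering code hasn’t been published, and the tolerance and labels here are invented for illustration:

```python
# A rough sketch of the reliability heuristic Pickard describes:
# reports repeating one exact coordinate pair likely share a single
# (possibly second-hand) source, while reports scattered tightly
# around a spot look like independent eyewitness sightings.

from collections import Counter

def classify_reports(reports: list[tuple[float, float]],
                     tolerance: float = 0.01) -> str:
    """reports: (latitude, longitude) pairs for one suspected balloon."""
    exact_counts = Counter(reports)
    if max(exact_counts.values()) > 1:
        return "identical coordinates: possible common source"
    lats, lons = zip(*reports)
    if (max(lats) - min(lats) < tolerance and
            max(lons) - min(lons) < tolerance):
        return "tight but varied cluster: consistent with firsthand sightings"
    return "scattered: inconclusive"

print(classify_reports([(47.6062, -122.3321)] * 3))
print(classify_reports([(47.606, -122.332), (47.607, -122.331),
                        (47.605, -122.333)]))
```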
David Lazer, the director of the Program on Networked Governance at Harvard’s Kennedy School of Government, says that the problem of spotting fakes is central to his own work, and he agrees that any system for collective problem solving will have to answer the question “If you had people who were trying to fake the system out, what kinds of processing would help filter out real information from misinformation?” The DARPA challenge, he adds, “was a neat contest because it really highlighted the general issue of collective problem solving, and my hat’s off to the MIT team, because I think they came up with an ingenious approach to tackling it.”