Automating big-data analysis

System that replaces human intuition with algorithms outperforms 615 of 906 human teams.

Press Contact

Abby Abazorius
Phone: 617-253-2709
MIT News Office


Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
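The sales-promotion example above can be made concrete with a short sketch. The data below is hypothetical (the article names no actual dataset); it shows how the predictive features — the span between the dates and the average profit across that span — are derived from the raw columns rather than read off them directly:

```python
from datetime import date

# Hypothetical promotion records: the raw columns are the start/end
# dates and the total profit, as in the article's example.
promotions = [
    {"start": date(2015, 1, 5), "end": date(2015, 1, 19), "total_profit": 4200.0},
    {"start": date(2015, 3, 2), "end": date(2015, 3, 9), "total_profit": 1400.0},
]

# Derived features: the span between the dates, and the average profit
# per day across that span -- often more predictive than the raw values.
for p in promotions:
    p["span_days"] = (p["end"] - p["start"]).days
    p["avg_daily_profit"] = p["total_profit"] / p["span_days"]

print(promotions[0]["span_days"])         # 14
print(promotions[0]["avg_daily_profit"])  # 300.0
```

Choosing to compute these two derived columns, rather than feeding the raw dates and totals to a model, is exactly the kind of intuition-driven step the Data Science Machine is meant to automate.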

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines

Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.

Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.

“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

Featured composition

Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses. One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction.

For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on.
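The retail example above can be sketched in a few lines. The tables and values below are hypothetical, and the paper's actual pipeline operates on relational databases rather than Python dicts, but the two steps — importing costs across the table relationship, then applying a suite of aggregations per order — mirror the description:

```python
from collections import defaultdict

# Hypothetical tables: item costs, and line items keyed by order number.
items = {"apple": 0.50, "bread": 2.00, "milk": 1.25}
purchases = [
    ("order1", "apple"), ("order1", "bread"),
    ("order2", "milk"), ("order2", "apple"), ("order2", "bread"),
]

# Step 1: import costs from the items table into the purchases table.
lines = [(order, items[item]) for order, item in purchases]

# Step 2: group by order number and apply a suite of aggregation
# operations -- each result is a candidate feature.
by_order = defaultdict(list)
for order, cost in lines:
    by_order[order].append(cost)

features = {
    order: {
        "total_cost": sum(costs),
        "avg_cost": sum(costs) / len(costs),
        "min_cost": min(costs),
    }
    for order, costs in by_order.items()
}

print(features["order1"])  # {'total_cost': 2.5, 'avg_cost': 1.25, 'min_cost': 0.5}
```

The layering the article mentions amounts to repeating step 2 across further relationships — for example, grouping these per-order totals by customer and averaging them, which yields "averages of sums."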

It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories.
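A minimal sketch of that categorical split, again with hypothetical data: an existing numeric feature (order cost) is divided across the values of a categorical column (day of the week), producing one new candidate feature per category:

```python
from collections import defaultdict

# Hypothetical order-level rows with a categorical column (day of week)
# and an existing numeric feature (total cost).
orders = [
    {"day": "Mon", "total_cost": 12.0},
    {"day": "Mon", "total_cost": 8.0},
    {"day": "Sat", "total_cost": 30.0},
]

# Dividing the existing feature across categories yields one new
# candidate feature per category value, e.g. "total cost on Mondays."
per_day = defaultdict(float)
for o in orders:
    per_day[o["day"]] += o["total_cost"]

print(per_day["Mon"])  # 20.0
print(per_day["Sat"])  # 30.0
```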

Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.
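The pruning step can be sketched as a greedy filter over correlated candidates. The threshold, the feature values, and the greedy strategy below are all illustrative assumptions — the article does not specify how the system measures correlation — but they show the idea of discarding a candidate whose values track one already kept:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical candidate features as columns of sample values.
features = {
    "total_cost": [2.5, 3.75, 10.0, 1.0],
    "avg_cost":   [1.25, 1.25, 5.0, 0.5],  # tracks total_cost closely
    "n_items":    [2, 3, 2, 1],
}

# Greedily keep a candidate only if it is not highly correlated
# (threshold 0.95, an assumption) with one already kept.
kept = []
for name, col in features.items():
    if all(abs(pearson(col, features[k])) < 0.95 for k in kept):
        kept.append(name)

print(kept)  # ['total_cost', 'n_items']
```

Here `avg_cost` is discarded because it is nearly a rescaling of `total_cost`; the surviving features would then go on to the testing-and-recombination stage the article describes.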

“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”

Topics: Research, School of Engineering, Artificial intelligence, Computer science and technology, Data, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical engineering and computer science (EECS)


Oracle killed analysis.
Instead, Oracle structures data, captures 100 percent of data patterns, and searches using synonyms.
That allows you to search for and find what you want. Nobody needs analysis anymore.

In future systems of this kind, what will matter most is the ability to structure the data into a uniform construct (e.g., a sparse matrix) and the ability to find relationships between specific pieces of data (and the variables). The human mind and its cognitive process work not so much as a big-compute engine but as a recall and association engine.

Ilya: Can you expand on your Oracle comment? Which Oracle software package are you talking about?

I am a regular competitor on Kaggle. On that site, finishing ahead of 615 of 906 teams basically means you did better than random guessing. They should publish which competitions they actually competed in. I am researching this topic myself, so I don't mean to sound too critical (I would love to see a real breakthrough in this area of autoML!), but I am skeptical, since humans are just so good at developing intuitions that computers simply cannot match.

Lower accuracy at far greater speed does not work for every application. Judiciously deciding when to use which method could have the greatest impact on outcomes. In the end, even transparency and accountability may be out the window when decisions are made entirely by machine.

No analysis; no compassion; no wisdom: Will we now be able to make critical errors in record time? I am thinking of the Cuban Missile Crisis and the value of restraint.

If anyone from MIT is reading: How does the study account for individuals who use tracking blockers such as Ghostery or Disconnect? Thanks much.

How much context can it understand? If you feed it in enough data, would it eventually say "there is a pattern between global temperatures and number of pirates in the world"?

Their problem is that their research has not gone far enough; they are stuck on a basic problem: you cannot just set a computer to randomly crunch numbers.

Smartest comment board I've ever seen ...