It’s been two years since a New York Times article declared 2012 the “year of the MOOC,” short for “massive open online course.” Now, for the first time, researchers have carried out a detailed study showing that these classes really can teach at least as effectively as traditional classroom courses, and that this holds true regardless of how much preparation and knowledge students start out with.
The findings have just been published in the International Review of Research in Open and Distance Learning, in a paper by David Pritchard, MIT’s Cecil and Ida Green Professor of Physics, along with three other researchers at MIT and one each from Harvard University and China’s Tsinghua University.
“It’s an issue that has been very controversial,” Pritchard says. “A number of well-known educators have said there isn’t going to be much learning in MOOCs, or if there is, it will be for people who are already well-educated.”
But after thorough before-and-after testing of students taking the MITx physics class 8.MReVx (Mechanics Review) online, and similar testing of those taking the same class in its traditional form, Pritchard and his team found quite the contrary: The study showed that in the MITx course, “the amount learned is somewhat greater than in the traditional lecture-based course,” Pritchard says.
Even the least-prepared learn
A second, more surprising finding, he says, is that those who were least prepared, as shown by their scores on pretests, “learn as well as everybody else.” That is, the amount of improvement seen “is no different for skillful people in the class” — including experienced physics teachers — “or students who were badly prepared. They all showed the same level of increase,” the study found.
Even if a student who starts out with a low score ends the online class with a test score that would still represent a failing grade, that person would nevertheless have made substantial gains in understanding, Pritchard says. “This actually is a case where a rising tide lifts all boats,” he says.
The study’s basic methodology has been widely used to study the effectiveness of conventional on-campus classes, Pritchard says. At least 65 traditional MIT classes have been studied using the same system of pre- and post-testing of basic concepts, he says, but this is the first time anyone has applied such detailed, systematic testing to the effectiveness of an online class. Pritchard says the results show improvement among online students that is equal to or better than in any of the previously studied traditional classes.
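In physics-education research, this kind of before-and-after comparison is commonly summarized by a “normalized gain”: the fraction of the possible improvement a student actually achieves between pretest and posttest. The sketch below is only a minimal illustration of that standard calculation; the cohort names and scores are invented, not data from the study.

```python
# Minimal sketch of the normalized-gain metric commonly used in
# physics-education research: g = (post - pre) / (max_score - pre).
# The scores below are invented for illustration, not study data.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the possible improvement actually realized."""
    if pre >= max_score:  # no room to improve; gain is undefined
        raise ValueError("pretest score already at maximum")
    return (post - pre) / (max_score - pre)

# Hypothetical cohorts grouped by level of preparation.
cohorts = {
    "least prepared": [(20, 55), (25, 60), (30, 62)],
    "well prepared":  [(70, 85), (75, 88), (80, 90)],
}

for name, scores in cohorts.items():
    gains = [normalized_gain(pre, post) for pre, post in scores]
    print(f"{name}: mean normalized gain = {sum(gains) / len(gains):.2f}")
```

Because the gain is expressed as a fraction of the improvement still available to each student, it allows cohorts that start from very different pretest scores to be compared on the same footing, which is the kind of comparison the study describes.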
In addition to the before-and-after testing, the study also analyzed in detail the homework and weekly test questions from each student, using an established technique called item-response theory, similar to the methodology used to ensure that results from standardized tests such as the SAT are consistent from one year to the next. The method uses a statistical analysis of each item in the test, Pritchard says, and includes a few of the same questions from other tests being compared, to ensure consistency.
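Item-response theory models each test question (“item”) individually, estimating a difficulty for the item and an ability for each student, so that scores from different tests can be placed on a common scale through shared anchor questions. As a rough illustration only, the sketch below fits the simplest such model, a one-parameter (Rasch) model, to simulated responses; it is not the analysis pipeline used in the paper.

```python
# Illustrative sketch of a one-parameter (Rasch) item-response model,
# the simplest member of the item-response-theory family.
# Each response is modeled as P(correct) = sigmoid(ability - difficulty);
# the responses here are simulated, not taken from the study.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)
n_students, n_items = 200, 15
true_ability = rng.normal(0, 1, n_students)
true_difficulty = rng.normal(0, 1, n_items)
p_correct = expit(true_ability[:, None] - true_difficulty[None, :])
responses = rng.binomial(1, p_correct)  # 0/1 matrix of answers

def neg_log_likelihood(params):
    ability = params[:n_students]
    difficulty = params[n_students:]
    p = expit(ability[:, None] - difficulty[None, :])
    eps = 1e-9
    return -np.sum(responses * np.log(p + eps)
                   + (1 - responses) * np.log(1 - p + eps))

fit = minimize(neg_log_likelihood, np.zeros(n_students + n_items),
               method="L-BFGS-B")
est_difficulty = fit.x[n_students:]
print("correlation of estimated vs. true item difficulty:",
      round(np.corrcoef(est_difficulty, true_difficulty)[0, 1], 2))
```

Anchoring two different tests would then amount to estimating both sets of item difficulties while constraining the shared questions to a single difficulty value, which is what keeps the scales consistent from one test, or one year, to the next.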
Consistent results
Both of these methods of analyzing the impact of the online class give consistent results, Pritchard says: “All cohorts learn equally,” he says, whether compared on the basis of level of education, degree of preparation in math and physics, or other measures.
The one type of class in which students learned even more effectively than in either online or traditional classes, the study found, was an approach called “interactive engagement pedagogy,” where students interact frequently in small groups to grapple with concepts and questions. Such “interactive engagement” in the classroom is something education reformers have long pushed for, Pritchard says, and is already used in many MIT classes.
While a similar analysis could be done for any of the other roughly 1,000 classes currently available as MOOCs, he says, it requires an upfront commitment from course instructors, who must prepare and administer extra tests, and evaluate the scoring of those tests. “It’s a lot of work,” Pritchard says.
Pritchard sees the new study as just the start of a process of mining the data that can be gained from these online classes, where every detail of students’ interactions — how long they spend watching lectures, how often they pause or repeat sections, how much of the textbook they read and when, and so on — is recorded and could be used for research aimed at finding what systems work best. “We can study what students do in a way that would otherwise require everyone to wear a headcam all the time,” Pritchard says.
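The interaction records Pritchard describes typically arrive as raw event logs: one timestamped row per click, pause, or page view. A first step in mining them is simply aggregating events per student, as in the sketch below; the field and event names are invented for illustration, since the courseware’s actual log format isn’t described here.

```python
# Hypothetical sketch: aggregate per-student activity from a clickstream log.
# The field names ("user", "event", "seconds") are invented for illustration;
# real courseware logs will differ.

from collections import defaultdict

log = [  # one record per logged interaction
    {"user": "s1", "event": "play_video",  "seconds": 300},
    {"user": "s1", "event": "pause_video", "seconds": 0},
    {"user": "s1", "event": "read_text",   "seconds": 120},
    {"user": "s2", "event": "play_video",  "seconds": 90},
    {"user": "s2", "event": "read_text",   "seconds": 600},
]

summary = defaultdict(lambda: defaultdict(int))
for record in log:
    summary[record["user"]][record["event"]] += record["seconds"]

for user, totals in summary.items():
    print(user, dict(totals))
```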
Fiona Hollands, a senior researcher at Teachers College, Columbia University, who was not involved in this study, says, “In my opinion, this study represents the most rigorous attempt to date to measure learning in a MOOC. This study provides an excellent demonstration of how learning in a MOOC, or in other types of courses, can be rigorously assessed. Applied to a broader population of students and a variety of educational settings, such investigations would provide valuable information about the relative effectiveness of different forms of educational delivery.”
In addition to Pritchard, the study was carried out by MIT postdocs Kimberly Colvin and John Champaign and physics undergraduate Alwina Liu; Qian Zhou of Tsinghua University; and Colin Fredericks of Harvard. The research was supported by Google, the National Science Foundation, and MIT.