
How J-PAL thinks globally and acts locally

Can an antipoverty program work in different settings? A new report presents a user’s guide to a tough issue.
Image caption: “One of the most frequent questions that we get at J-PAL is a version of, ‘So a program worked in one place. Is it likely to work in my context?’” says Mary Ann Bates, the deputy executive director of J-PAL North America. (Image: Kimrawicz/Shutterstock.com)

It is a huge question in development economics: If a program yields good results in one country, will it work in another? Does a vaccination policy in India translate to Africa? Does a teen-pregnancy prevention program in Kenya work in Rwanda?

And: Why or why not?

Leaders of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL), one of the world’s foremost centers for antipoverty research, have spent the last several years developing a formal framework for thinking about this vexing question. Now, in a new article, two J-PAL directors have unveiled the lab’s approach.

“At J-PAL, we spend a lot of time talking with policymakers and giving advice, but we’d never really written [this] down in a systematic way,” says Rachel Glennerster, the executive director of J-PAL and a co-author of the new article. “This is a framework that can be used by other people who want to do this kind of work.”

Co-author Mary Ann Bates, the deputy executive director of J-PAL North America, says the new paper is a response to years of queries: “One of the most frequent questions that we get at J-PAL is a version of, ‘So a program worked in one place. Is it likely to work in my context?’”

The J-PAL method of operation, it turns out, is less about replicating bottom-line results of programs down to the last decimal point than it is about understanding the mechanisms that make programs successful.

“If you completely replicated a program, you wouldn’t expect to have identical results in a different place,” Glennerster says.

But if the general conditions that make a program work in one place hold elsewhere, then a J-PAL-style antipoverty program can get traction more widely.   

The paper, “The Generalizability Puzzle,” appears in the summer 2017 issue of the Stanford Social Innovation Review, and sets out four basic steps that the lab’s researchers use when thinking about replicating or scaling up an antipoverty program in a new setting.

Four easy pieces

Founded in 2003 by MIT economists Esther Duflo, Abhijit Banerjee, and Sendhil Mullainathan (who is now at Harvard University), J-PAL has become the most high-profile academic enterprise of its kind. The lab incorporates a broad network of scholars dedicated to field experiments — randomized controlled trials, or RCTs — evaluating the effectiveness of antipoverty programs.
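To make the logic of a randomized evaluation concrete, here is a minimal illustrative sketch, in Python, of how an effect might be estimated as a difference in mean outcomes between randomly assigned treatment and control groups. The outcome, group sizes, and rates below are simulated assumptions for illustration only and are not drawn from any J-PAL study.

    # Minimal illustrative sketch of an RCT-style comparison (not J-PAL code).
    # The outcome, group sizes, and rates below are simulated assumptions.
    import random
    import statistics

    random.seed(0)

    # Hypothetical binary outcome: 1 if a household adopts the program's
    # target behavior, 0 otherwise, with an assumed rate for each group.
    control = [1 if random.random() < 0.06 else 0 for _ in range(1000)]
    treatment = [1 if random.random() < 0.39 else 0 for _ in range(1000)]

    # The estimated program effect is the difference in mean outcomes,
    # which random assignment makes a fair comparison.
    effect = statistics.mean(treatment) - statistics.mean(control)
    print(f"Estimated effect: {effect:.1%} increase in the outcome")

In practice, researchers would also report standard errors and confidence intervals; this stripped-down version only shows the core comparison that randomization makes credible.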

J-PAL works extensively with governments, NGOs, and international development groups to implement and evaluate programs. Past J-PAL experiments have demonstrated new methods of improving everything from vaccination rates to school attendance to safe water use. Based on J-PAL research, Glennerster helped establish Deworm the World, a nonprofit that provides deworming pills to 150 million children a year.

And as Glennerster notes, “There is huge interest in the policy world in trying to better use the results of research.” Here, then, are the four steps J-PAL recommends when considering if an antipoverty program would translate to a new setting:

Step one: What are the components of the theory behind the program?

In India, a local J-PAL program providing a small incentive to parents — a couple of pounds of lentils — led to a massive increase in child immunizations, from 6 percent to 39 percent. The theory behind the program rests on a few assumptions: that parents are not inherently opposed to immunizing their children; that people respond to modest incentives; that people will procrastinate on important tasks; and that in some parts of India, lentils are a good incentive mechanism.

To think about how well a program would translate to another location, break the larger program down into these kinds of smaller components and ask whether it would still be viable, even in modified form, the authors advise.

“If you use lentils to incentivize people to get immunized, you wouldn’t get much of an effect in Boston,” Glennerster observes. “They are a very desirable thing in this bit of India where we were working, though.”

Step two: Does that theory apply to local conditions?

In Kenya, one J-PAL experiment produced a successful program to prevent teenage pregnancies by informing adolescent girls about the risks of contracting HIV from older men — 28 percent of whom had HIV in the district where the original intervention took place. This turned out to be a considerable deterrent for the girls who participated in education programs about their own risks.

J-PAL researchers subsequently considered trying out the program in Rwanda, too. But then they conducted surveys and discovered something quite different about the local conditions. Female students estimated that over 20 percent of Rwandan men in their 20s were HIV-positive, whereas only 1.7 percent actually are. As a result, the J-PAL researchers recommended against replicating the program in Rwanda. Because the students were dramatically over-estimating local HIV rates, highlighting the actual rates in an information campaign might have led to an increase in risky behavior.

“That’s not just a matter of sitting and scratching our heads, wondering,” Bates says. “That was targeted information that could be gathered that got right at the heart of the question of whether this intervention would be likely to work in a new context.”

And as Bates emphasizes, it was not necessary to replicate the entire experiment to make an assessment about the program’s adaptability. 

Step three: How strong is the evidence that the desired behavioral change will occur?

Consider again programs trying to get people to invest time and money in preventive medicine, whether through vaccinations, additional visits to medical clinics, or other means. Researchers have gathered evidence that people in many countries ignore preventative health care, making this a good issue to tackle globally.

“People’s unwillingness to pay much for preventative health is something you find all around the world,” Glennerster says. “People are surprisingly unwilling to invest in preventative health, and small barriers can prevent them from taking up otherwise good options.”

Moreover, Glennerster emphasizes, the evidence for this does not have to be derived from RCTs performed by groups such as J-PAL. The weightier the evidence of a generalized problem, the more likely it is that some variation of a program will apply in new settings.

Step four: What is the evidence that the implementation process can be carried out well?

This last point requires very solid on-the-ground knowledge about the locale where an antipoverty program may be carried out: Are there functional institutions that can do the nuts-and-bolts program work?

“Even if a program may be based on a well-validated view of human behavior, you’ve got to know about the local context, about people’s ability to deliver it,” Glennerster says. “And that’s going to be very specific. Is the government or NGO good at implementing things?”

Don’t just replicate

Through all these points, at least one larger theme emerges: People are people, wherever they may live, and human nature is fairly consistent around the globe. Thus, as Bates and Glennerster write in the paper, “underlying human behaviors are more likely to generalize than specific programs.”

That means researchers and antipoverty leaders should think carefully about the core behavioral mechanisms within programs, and about how to adapt existing programs to novel settings. People need water and want good education, from continent to continent; the best way to deliver clean water and quality education may differ.

Bates and Glennerster say they have received a generally positive reception when presenting the J-PAL framework — Glennerster has presented it at the World Bank, among other places — and they hope the new article will gain traction in the community of antipoverty leaders.

“It’s not that nobody’s thought of this before,” Glennerster says. “I think what people have found useful is us providing a clear step-by-step process. It just gives people a clearer framework.”
