
Learning to evaluate social programs

Flagship J-PAL course teaches policy leaders how, why, and when to evaluate social programs ranging from antiviolence interventions to housing mobility initiatives.
Caption: J-PAL teaching assistants and program participants discuss how to design randomized evaluations in real-world settings.
Credits: Photo: Amanda Kohn/J-PAL

This summer, international development, government, nonprofit, and philanthropic leaders from two dozen countries gathered at MIT to gain practical evaluation skills as part of Evaluating Social Programs, an Executive Education course offered by the Abdul Latif Jameel Poverty Action Lab (J-PAL).

Nearly 50 of these leaders attended the week-long class to develop the skills necessary to design randomized evaluations of social programs, from antiviolence interventions in Colombia to housing mobility programs in the midwestern United States. The course is J-PAL’s flagship training, offered annually to researchers and policymakers around the world. Instructors, who included academic experts in impact evaluation, covered technical concepts such as sample size, data collection, and randomization, but also provided guidance on what makes a study generalizable.

Evaluating Social Programs’ curriculum reflects a global movement to advance evidence-based policy and programs. Sessions explored how randomized evaluations are designed in real-world settings and offered insights into best practices for producing and using evidence. Accordingly, attendees included government policymakers as well as foundation and nonprofit staff with varying levels of evaluation experience and wide-ranging interest areas, including public health, labor markets, political economy, and education.

Chinemelu Okafor, a research assistant at the International Finance Corporation and a George Washington University student, says the opportunity to interact with people across different levels of experience was highly valuable at this point in her career.

“J-PAL created a really incredible learning environment for people of all backgrounds, all skills and all levels of experience,” Okafor says. “The environment was super collegial. You were learning from your peers and your peers are learning from you. … I have this goal of being Nigeria’s foreign affairs minister, or [in] some setting where I would be able to implement and inform Nigerian policy. Networking and interacting with my peers during the course affirmed for me that this is the type of work I want to do in the future.”

A key feature of Evaluating Social Programs is its integrated teaching methodology, which mixes interactive lectures with daily small-group work. In this summer’s course at MIT, participants explored case studies based on J-PAL-affiliated research, including large-scale evaluations of a school-based deworming program in Kenya and a cognitive behavioral therapy program for youth in Chicago.

Throughout the week, participants met in small groups to create a preliminary evaluation plan for a real social program. The exercise helped solidify the theoretical concepts covered in lectures and gave participants the opportunity to present their evaluation plans to the larger group for feedback.

Kyle Clements, a learning experience designer at Western Governors University, and his group developed a preliminary evaluation outline for a program focused on alleviating math anxiety among two-year college students in the United States. During the week, Clements saw how feasible a randomized evaluation could be, not only for his particular program but potentially for other programs across his organization.

“In our specific education model, I think it will be really easy to do randomized control trials at the individual level,” Clements says. “We really should be doing this and there’s not a lot of barriers for us not to be.”

By learning alongside peers developing related program evaluations, participants could crowdsource evaluation strategies durable enough for complex real-world settings.

"The most beneficial [elements] for me were the practical parts: the people who presented on their governmental experiences from previous RCTs [randomized controlled trials],” says Nora Ghobrial, a project officer at the Sawiris Foundation for Social Development. “I could relate the most to them. The group discussions were also very interesting for me."

By the end of the week, participants had gained not only a practical set of tools for understanding randomized evaluations, but also confidence that they could conduct such evaluations in the context of their own work and use evidence to improve their programs.

“I feel like I have a more balanced and centered view of evaluations, and I think I have some good rails on where I need to be cautious and how to get more information to make those kinds of decisions,” says Julie Moreno, bureau director at the California Franchise Tax Board.  

Anyone interested in learning more about the topics covered in Evaluating Social Programs, or in receiving updates on next year’s courses, can visit J-PAL’s Training and Education page. Those interested in J-PAL’s MicroMasters in Data, Economics, and Development Policy program can register online by Sept. 11.
