
Rob Olsen

Title: Senior Fellow (Non-Resident)
Address: 805 21st Street NW, Office 619, Washington, DC 20052
Email: [email protected]

Background


Rob Olsen is an economist who specializes in rigorous impact evaluations of social programs. His research has primarily involved the design and analysis of randomized trials, with a substantive focus on educational programs and other interventions for children and youth. At GWIPP, he is conducting research on the external validity (generalizability) of findings from randomized trials in education and exploring research methods for improving generalizability.

Previously, Dr. Olsen held research positions at Abt Associates, the Urban Institute, and Mathematica Policy Research. He has conducted research for a variety of sponsors, including the U.S. Department of Education, the U.S. Department of Health and Human Services, the U.S. Department of Labor, and the Robin Hood Foundation. His research has been published in peer-reviewed journals (e.g., the Journal of Policy Analysis and Management), edited volumes, and reports to government agencies.


Current Research

Statistical Methods for Using Rigorous Evaluation Results to Improve Local Education Policy Decisions. Most education policy is local: schools and school districts make many of the key decisions that affect students in public schools. However, the most prominent evaluations in education are national in scope and include many schools and districts. Olsen, with colleagues at Johns Hopkins University and Abt Associates, is evaluating the generalizability of findings from national impact evaluations to inform local policy decisions in education.

External Validity of Randomized Trials with Non-random Self-Selection into the Study. In most randomized trials, potential study sites are not required to participate. If the sites that choose to participate differ from those that do not, the study may yield biased impact estimates for the population of potential study sites. Olsen, with colleagues at MDRC, is using data from a randomized trial of home visiting programs to estimate this bias by comparing two groups of local home visiting programs: those that were required to participate in the evaluation by virtue of receiving federal funding and those that were not.


Highlighted Research

Using Preferred Applicant Random Assignment (PARA) to Reduce Randomization Bias in Randomized Trials of Discretionary Programs. Journal of Policy Analysis and Management, forthcoming. With Stephen H. Bell and Austin Nichols.

Characteristics of School Districts that Participate in Rigorous National Educational Evaluations. Journal of Research on Educational Effectiveness, 10(1), 168-206, 2017. With Elizabeth A. Stuart, Stephen H. Bell, Cyrus Ebnesajjad, and Larry L. Orr.

On the “Where” of Social Experiments: Selecting More Representative Samples to Inform Policy. In L. R. Peck (Ed.), Social experiments in practice: The what, why, when, where, and how of experimental design & analysis. New Directions for Evaluation, 152, 61–71, 2016. With Larry L. Orr.

Estimates of Bias When Impact Evaluations Select Sites Purposively. Educational Evaluation and Policy Analysis, 38(2), 318-335, 2016. With Stephen H. Bell, Larry L. Orr, and Elizabeth A. Stuart.

External Validity in Policy Evaluations that Choose Sites Purposively. Journal of Policy Analysis and Management, Winter 2013. With Stephen H. Bell, Larry L. Orr, and Elizabeth A. Stuart.