Rob Olsen


Research Professor


Contact:

805 21st Street NW, Washington, DC 20052

Rob Olsen is an economist who specializes in rigorous impact evaluations of social programs, especially randomized trials of educational programs. He has played leadership roles on impact evaluations of the Upward Bound Program, college advising for high school students, and smartphone apps to help community college students persist in school. He has served as a senior technical advisor on impact evaluations across a wide range of substantive areas, including employment and training, housing assistance, food and nutrition, and teen pregnancy prevention. Prior to joining GWIPP, Dr. Olsen held research positions at Westat, Abt Associates, the Urban Institute, and Mathematica.

Over the past 12 years, his research has focused on methods for improving the external validity, or generalizability, of findings from randomized trials. Currently, Dr. Olsen is conducting research on sampling methods for improved generalizability in evaluations of education programs and on statistical analysis methods for improved generalizability in evaluations of job training programs. His research on generalizability has culminated in an evaluator's guide to generalizability (joint with Dr. Elizabeth Tipton), which was published by the Institute of Education Sciences (IES). In addition, he consults on rigorous impact evaluations, including evaluations of math technology interventions and English Learner reclassification for IES, to help ensure that their findings generalize to populations of policy interest.

Using Evidence from National Studies to Improve Local Policy Decisions that Affect Youth. Most education policy is local: schools and school districts make many of the key decisions that affect students in public schools. However, the most prominent evaluations in education are national in scope and include many schools and districts. Olsen, with colleagues at Johns Hopkins University and NORC, is evaluating the generalizability of study findings from national impact evaluations to inform local policy decisions in education.

Site Selection When Participation is Voluntary: Improving the External Validity of Randomized Trials. In most randomized trials, potential study sites are not required to participate. If the sites that choose to participate differ from those that do not, the study may yield biased impact estimates for the population of potential study sites. Olsen, with colleagues at Johns Hopkins University and Abt Associates, is conducting simulations to test the performance of different methods of selecting districts and schools for randomized trials of educational interventions—as well as different methods for selecting replacement districts and schools for those that decline to participate.

Selecting Districts and Schools for Impact Studies in Education: A Simulation Study of Different Strategies. With Daniel Litwok, Austin Nichols, Azim Shivji. Journal of Research on Educational Effectiveness.

Enhancing the Generalizability of Impact Studies in Education (NCEE 2022-003). With Elizabeth Tipton. U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.

Using the Results from Rigorous Multisite Evaluations to Inform Local Policy Decisions. With Larry L. Orr, Stephen H. Bell, Ian Schmid, Azim Shivji, and Elizabeth A. Stuart. Journal of Policy Analysis and Management.

A Review of Statistical Methods for Generalizing from Evaluations of Educational Interventions. With Elizabeth Tipton. Educational Researcher.

Using Preferred Applicant Random Assignment (PARA) to Reduce Randomization Bias in Randomized Trials of Discretionary Programs. With Stephen H. Bell and Austin Nichols. Journal of Policy Analysis and Management.

Characteristics of School Districts that Participate in Rigorous National Educational Evaluations. With Elizabeth A. Stuart, Stephen H. Bell, Cyrus Ebnesajjad, and Larry L. Orr. Journal of Research on Educational Effectiveness.

On the “Where” of Social Experiments: Selecting More Representative Samples to Inform Policy. With Larry L. Orr. New Directions for Evaluation.

Estimates of Bias When Impact Evaluations Select Sites Purposively. With Stephen H. Bell, Larry L. Orr, and Elizabeth A. Stuart. Educational Evaluation and Policy Analysis.

External Validity in Policy Evaluations that Choose Sites Purposively. With Larry L. Orr, Stephen H. Bell, and Elizabeth A. Stuart. Journal of Policy Analysis and Management.