JPAM Featured Article: Using Preferred Applicant Random Assignment (PARA) to Reduce Randomization Bias in Randomized Trials of Discretionary Programs

June 20, 2017 04:00 PM

As part of our ongoing effort to promote JPAM authors to the APPAM membership and the public policy world at large, we ask authors to answer a few questions about their research article for the APPAM website.

By: Robert Olsen, PhD; Stephen Bell, PhD; and Austin Nichols, PhD

What does your JPAM article address?
 
Our research addresses a common but oft-ignored problem in randomized trials. Randomized trials are used to estimate the impacts of government programs, and many of these programs have discretion over which applicants to serve when there are more eligible applicants than the program can serve. Policymakers often want to know the program’s impacts for program participants. However, to accommodate an unserved control group, randomized trials usually include a broader set of applicants—perhaps all eligible applicants. If the program chooses applicants based on factors that are associated with the program’s impact (e.g., motivation and other attributes), the randomized trial will produce biased impact estimates for the applicants who would have been selected to participate in the program under normal circumstances (i.e., if the program had not been part of a randomized trial). 
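To make the bias concrete, here is a minimal simulation sketch (purely illustrative, not an analysis from the paper; all numbers are hypothetical). When the applicants a program would normally select benefit more than other eligible applicants, a trial that randomizes the full applicant pool recovers the pool-wide average impact rather than the impact for typical participants.

```python
# Illustrative simulation of randomization bias (hypothetical numbers, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                        # eligible applicants enrolled in the trial
preferred = rng.random(n) < 0.4    # hypothetical: 40% would be chosen under normal operations

# Hypothetical true impacts: larger for the applicants the program would prefer.
true_impact = np.where(preferred, 5.0, 1.0)

# Conventional trial: every eligible applicant is randomized 50/50.
treated = rng.random(n) < 0.5
outcome = 10.0 + true_impact * treated + rng.normal(0.0, 2.0, n)

estimate_all_applicants = outcome[treated].mean() - outcome[~treated].mean()
impact_for_participants = true_impact[preferred].mean()

print(f"trial estimate (all eligible applicants): {estimate_all_applicants:.2f}")  # about 2.6
print(f"true impact for would-be participants:    {impact_for_participants:.2f}")  # 5.0
```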
 
Our paper offers an experimental design called Preferred Applicant Random Assignment (PARA). This design can reduce the bias described above; it can also inform policy decisions about the potential benefits of increasing the size of the program.  
 
What spurred your interest in research on the Preferred Applicant Random Assignment (PARA) approach?  
 
Over 10 years ago, we were conducting an evaluation of the Upward Bound program, a federal program to help disadvantaged high school students attend college. The previous evaluation of this program revealed that local Upward Bound programs tended to favor certain types of students in the admissions process—both to achieve distributional goals (e.g., equal numbers of boys and girls) and to focus on the students that they believed would benefit most. As a result, for our evaluation, we designed PARA to allow local programs some control over the mix of students that they served—and only later recognized the range of additional benefits the method offers.  
 
Were there any challenges to conducting your research on the PARA approach?
 
Yes, we faced substantial challenges in conducting the first field test of PARA. The Upward Bound study faced vocal opposition from the program’s advocacy community, which had also criticized the previous evaluation. The study was ultimately cancelled under pressure from Congress. By this point, we had demonstrated the viability of implementing PARA, but we missed the opportunity to conduct the distinctive analyses that PARA supports.   
 
What are the main conclusions of your research on the use of the PARA approach?
 
Our main conclusions are that:
  • PARA can reduce the bias that results when the impact of the program differs between program participants and other eligible applicants included in the randomized trial;
  • PARA accommodates the preferences of program operators over which applicants to serve, potentially making random assignment more palatable;
  • PARA provides information about the impact of expanding the program; and
  • PARA is easy to implement for both programs and researchers.

Do you have any recommendations based on your analysis?
 
In using PARA, the probability of assignment to the treatment group is set higher for “preferred applicants”—those who would have been selected to participate under normal program operations—than for other applicants. Our recommendation is to set these probabilities to balance bias and variance. As the paper shows, the probability must be set higher for preferred applicants than other applicants to reduce the bias—but doing so increases the variance or standard error of the impact estimate, increasing the study’s sample size requirements. Striking the right balance is key to benefiting from PARA while ensuring the cost of the study is not too large.
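Purely as an illustration of the variance side of this tradeoff (this is not the paper's estimator, and the sample size and outcome variability below are hypothetical), consider the standard error of a simple difference in means among preferred applicants: it is smallest when half of them are assigned to treatment and grows as their treatment probability approaches one.

```python
# Minimal sketch of the variance cost of raising the treatment probability
# for preferred applicants (hypothetical numbers, not the paper's estimator).
import numpy as np

n_preferred = 400   # hypothetical number of preferred applicants in the study
sigma = 2.0         # hypothetical standard deviation of the outcome

def se_diff_in_means(p_treat: float) -> float:
    """Standard error of a simple difference in means among preferred applicants
    when a share p_treat of them is assigned to the treatment group."""
    n_treatment = n_preferred * p_treat
    n_control = n_preferred * (1.0 - p_treat)
    return float(np.sqrt(sigma**2 / n_treatment + sigma**2 / n_control))

for p in (0.5, 0.7, 0.9):
    print(f"P(treatment | preferred) = {p:.1f} -> SE = {se_diff_in_means(p):.3f}")

# The standard error is smallest at 0.5 and rises as the probability approaches 1,
# so holding precision constant requires a larger sample -- the cost side of the
# tradeoff described above.
```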
 
In your article, you note that for many government programs, not all eligible applicants can participate. How might the government use the PARA approach to build better programs for constituents?
 
PARA allows the government to assess the potential impacts of canceling a program, expanding it to cover all eligible applicants, or revising its screening criteria. In addition, if PARA makes random assignment more acceptable to program operators (and early evidence from one of our own evaluations suggests that it can), it could increase both the quantity and quality of randomized trials, producing more and better evidence to inform policy decisions about government programs. If sites were more willing to participate in randomized trials, more trials would be conducted, and the sites that participate would be more typical of those affected by key policy decisions (e.g., canceling or expanding the program), so the findings would more accurately predict the consequences of those decisions.
 
What would be the ideal next step for your research findings? How would you like to see the PARA approach implemented?
 
The ideal next step would be to field-test PARA in one or more randomized trials of government programs that exercise discretion in choosing which eligible applicants to serve. However, since PARA requires larger samples than simple random assignment, we should initially test it in studies where the marginal cost of additional sample members is relatively low (e.g., studies that rely solely on administrative data). Early field tests can produce evidence on the magnitude of the bias that PARA is designed to address, and that evidence can be used to assess the value of using PARA to reduce bias in future studies.

 
Authors' Bios

Robert Olsen is a Research Professor at the George Washington Institute of Public Policy at George Washington University. With 20 years of experience conducting impact evaluations, Dr. Olsen specializes in randomized trials to evaluate the impacts of educational and other social programs. He has played leadership roles on randomized trials of the Upward Bound program, charter schools, education technology, and behavioral “nudge” interventions for high school and college students. Dr. Olsen is an expert on evaluation standards for impact evaluations, serving on multiple advisory panels for the What Works Clearinghouse (WWC) and leading the review of evidence produced by local evaluations for the Investing in Innovation (i3) Program. Previously, Dr. Olsen was a Principal Scientist at Abt Associates, a Senior Research Associate at the Urban Institute, and a Senior Researcher at Mathematica Policy Research. He received his Ph.D. in Labor Economics from Cornell University in 1999.

 
Stephen H. Bell is a Senior Fellow at Abt Associates whose three decades of research focus on measuring the effectiveness of social interventions for disadvantaged Americans. A specialist in econometric impact analysis, Dr. Bell has helped design many large-scale random assignment field evaluations. His current methodological research focuses on ways of making the findings from rigorous impact evaluations more generalizable to the nation and other inference populations, and on “stretching” experimental designs to answer more diverse policy questions and do so more rapidly. Dr. Bell’s contributions to the impact evaluation field have been published in numerous peer-reviewed statistics and evaluation journals, a U.S. Department of Labor (DOL) field guide on evaluation methods, and a book on quasi-experimental impact analysis techniques. He holds a Ph.D. in economics from the University of Wisconsin.

 


Dr. Austin Nichols (@AustnNchols) is a Principal Scientist at Abt’s Social and Economic Policy Division (SEP), where he is a senior methodologist with more than 15 years of experience in evaluation of policies and programs that address social insurance, labor market, and education issues. Dr. Nichols serves as principal investigator, project director, director of design and analysis, and internal reviewer on Abt evaluations. He is a founding research director of the Institute for Upward Mobility (funded by the DeBruce Foundation), using design thinking and principles of human-centered design to generate new solutions to the economic opportunity gap in America. His current research includes work on innovative approaches to promoting college persistence, early childhood education, disability insurance policy, housing for people with disabilities, apprenticeship, teen pregnancy prevention, financial skill and well-being, measuring economic insecurity and inequality, medical technology diffusion and regulation, and a variety of international development topics.

 

Check out this and other Journal of Policy Analysis and Management articles online.

 
