JPAM Featured Article: "On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial"
January 25, 2017 08:00 AM
As part of our ongoing effort to promote JPAM authors to the APPAM membership and the public policy world at large, we are asking JPAM authors to answer a few questions to promote their research article on the APPAM website.
By: Ted Joyce; Dahlia K. Remler; David A. Jaeger; Onur Altindag; Stephen D. O'Connell; and Sean Crockett, Ph.D.
What was the genesis of the idea for your research/paper?
After five of us ran a randomized field experiment about the impact of class time on student outcomes, Dahlia Remler, a co-author on this paper, showed us a doubly randomized preference trial (DRPT) published in the Journal of the American Statistical Association in 2008. It is easiest to understand how a DRPT works through an example. Imagine that 100 individuals are recruited for a study. Fifty are randomly assigned into an “experimental” arm and fifty are randomized into the “choice” arm. Within the experimental arm, subjects are randomized a second time into treatments A and B, while in the choice arm subjects choose between treatments A and B. In short, a DRPT runs an observational study alongside a randomized one, holding everything identical between them except that treatment is endogenous in the choice arm but exogenous in the randomized one.
Because we had already run the online field experiment, we decided to implement a quasi-DRPT. In the subsequent year, we ran an observational study that was identical to the experiment in every way, except that students chose their treatment (compressed or traditional lecture format) through the usual registration process. Because both designs took place in the same setting and at proximate points in time, there was almost perfect overlap in student characteristics. We therefore called our overall study a quasi-DRPT because it has all of the characteristics of an actual DRPT except that the allocation to the experimental and choice arms occurred at different points in time and was quasi-random.
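The logic of the design can be sketched in a small simulation. This is a hypothetical illustration of a generic DRPT, not the authors' data or code: all numbers (sample size, a 0.5 treatment effect, the way preference and aptitude are generated) are assumptions chosen for clarity. Selection bias shows up as the gap between the naive choice-arm estimate and the experimental estimate.

```python
import random

random.seed(0)

# Hypothetical DRPT simulation (not the authors' data or code).
# Each subject has a latent aptitude (drives outcomes) and a preference
# (drives treatment choice in the choice arm).

def simulate_subject():
    aptitude = random.gauss(0, 1)      # determines performance
    prefers_a = random.random() < 0.5  # determines choice; here unrelated to aptitude
    return aptitude, prefers_a

def outcome(aptitude, treatment_a):
    # Assume treatment A raises the outcome by 0.5 on average.
    return aptitude + (0.5 if treatment_a else 0.0) + random.gauss(0, 1)

n = 50000
experimental, choice = [], []
for i in range(2 * n):
    aptitude, prefers_a = simulate_subject()
    if i < n:                            # first randomization: arm assignment
        treat_a = random.random() < 0.5  # second randomization within the arm
        experimental.append((treat_a, outcome(aptitude, treat_a)))
    else:
        treat_a = prefers_a              # choice arm: subjects pick their treatment
        choice.append((treat_a, outcome(aptitude, treat_a)))

def naive_effect(arm):
    # Simple difference in mean outcomes between treatments A and B.
    a = [y for t, y in arm if t]
    b = [y for t, y in arm if not t]
    return sum(a) / len(a) - sum(b) / len(b)

# Selection bias = choice-arm estimate minus randomized estimate.
bias = naive_effect(choice) - naive_effect(experimental)
print(round(naive_effect(experimental), 2), round(naive_effect(choice), 2))
```

Because preference is generated independently of aptitude here, both arms recover roughly the same effect; making `prefers_a` depend on `aptitude` would open a gap between the two estimates, which is exactly what a DRPT is built to detect.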
What is the main conclusion that becomes evident from your research?
Quasi-DRPTs are practical. Adding a choice study to a completed randomized experiment is not that much more work. Quasi-DRPTs are also much less work than true DRPTs, which require recruiting twice as many subjects as a stand-alone randomized experiment. Recruitment can be a real bear.
Like true DRPTs, quasi-DRPTs leverage the internal validity of a randomized experiment for insights into selection bias and the effects of subject preferences for a treatment on outcomes.
We showed how a quasi-DRPT reveals the amount of selection bias in an observational study, which variables are needed to reduce it, and whether it can be eliminated. We thought about all possible variables that could determine students’ format choice and then obtained measures of each, either from institutional research data or from a survey we conducted at the start of the semester. The key to a successful quasi-DRPT is to measure, prospectively, all conceivable determinants of treatment choice identically across both designs.
Although we hadn’t initially planned to, we also showed another benefit of quasi-DRPTs: how unbiased treatment effects vary by subjects’ preferences. In our case, we examined whether the effect of the compressed vs. traditional format differs between students who prefer traditional and students who prefer compressed. In other words, do students know best what works for them? In real-world settings in which students choose, and when treatment effects are heterogeneous by preference, these are often the relevant unbiased estimates, not the average treatment effects estimated in randomized experiments.
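This use of the experimental arm can be illustrated with another hypothetical sketch (again, not the authors' analysis; the effect sizes are assumptions). Because treatment in the experimental arm is randomized independently of preference, splitting subjects by their stated preference still yields an unbiased effect within each preference group.

```python
import random

random.seed(1)

# Hypothetical sketch: heterogeneous effects by preference, estimated
# within an experimental arm where treatment ignores preference.
# Assumption: students who prefer the compressed format benefit more from it.

def simulate(n=40000):
    rows = []
    for _ in range(n):
        prefers_compressed = random.random() < 0.5
        compressed = random.random() < 0.5            # randomized assignment
        effect = 0.4 if prefers_compressed else 0.1   # assumed heterogeneity
        y = (effect if compressed else 0.0) + random.gauss(0, 1)
        rows.append((prefers_compressed, compressed, y))
    return rows

def effect_within(rows, pref):
    # Difference in mean outcomes, restricted to one preference group.
    treated = [y for p, t, y in rows if p == pref and t]
    control = [y for p, t, y in rows if p == pref and not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

rows = simulate()
print(round(effect_within(rows, True), 2), round(effect_within(rows, False), 2))
```

The two within-group estimates recover the two assumed effects, which is the sense in which a (quasi-)DRPT can answer "do students know best what works for them?" rather than reporting only a single average treatment effect.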
What are some of the more interesting or surprising findings/conclusions you discovered during this process?
The biggest surprise was that there was no statistically significant selection bias in the choice study and we could rule out differences between experimental and choice estimates of more than 0.2 standard deviations. The factors that determine student performance, like math SAT scores, are unrelated to format choice. The factors that determine format choice, like students’ perception of their learning style, don’t drive performance. We did not expect that.
Ted Joyce, Ph.D., is a Professor of Economics at Baruch College and the Graduate Center, the City University of New York and a Research Associate in the National Bureau of Economic Research’s program in Health Economics. He has published extensively in the area of economic demography and reproductive health policy. His work on abortion policy has appeared in the Journal of Political Economy, the New England Journal of Medicine, the Journal of the American Medical Association, the Journal of Human Resources, and the Review of Economics and Statistics. His recent research projects include a three-year study funded by the National Institute of Child Health and Human Development (NICHD) on the effect of state laws on teen fertility.
Professor Joyce is also interested in the econometrics of program evaluation and is involved in several projects evaluating the effectiveness of online learning formats relative to traditional lecture classes. He recently headed a randomized experiment of student performance in a hybrid versus a traditional lecture of introductory microeconomics that was published in the Economics of Education Review. He is also collaborating with other researchers on the effectiveness of honors college programs at public universities. Specifically, how well do students in an honors college program at a public university do in terms of graduation rates, graduate school admissions, and wages relative to students who were accepted to the honors college but chose to attend a private university?
Dahlia K. Remler, Ph.D. (@DahliaRemler), is Professor at the Marxe School of Public and International Affairs, Baruch College, City University of New York. She is also a Research Associate at the National Bureau of Economic Research and an affiliate of the CUNY Institute for Demographic Research. She has published widely in health care policy, including work on health care cost-containment, health care and insurance markets, cigarette tax regressivity, and health care information technology. Current research involves higher education policy, incorporating health insurance needs into poverty measurement, evaluating the effects of health reform on poverty, and the effects of pedagogical interventions. The second edition of her textbook, Research Methods in Practice: Strategies for Description and Causation, co-authored with Gregg Van Ryzin, was published by SAGE in 2014. She is about to start as a managing editor at JPAM.
David A. Jaeger, Ph.D., received his Ph.D. from the University of Michigan and his B.A. from Williams College. He is Professor of Economics at the City University of New York Graduate Center, a visiting Professor at the University of Cologne, a Research Fellow at the Institute of Labor Economics (IZA), and a Research Associate at the National Bureau of Economic Research, and has held regular and visiting positions at the College of William and Mary, Hunter College, the Bureau of Labor Statistics, the University of Bonn, and Princeton University. His current research interests include immigration, education, fertility, and quantitative medieval history. His research has been published in the American Economic Review, the Journal of the American Statistical Association, the Review of Economics and Statistics, the Journal of Public Economics, and the Journal of Labor Economics, among others.
Onur Altindag, Ph.D. (@Quoting_Marx), is a Bell postdoctoral fellow at the Harvard Center for Population and Development Studies. His main research interest is the application of econometric methods to investigate public health and population related issues, with a specific focus on policy evaluation, fertility behavior, and infant health. Onur received his bachelor’s degree from Galatasaray University in Istanbul, holds a master’s degree from University of Paris I, Pantheon-Sorbonne, and a Ph.D. in economics from the Graduate Center, City University of New York.
Stephen D. O'Connell, Ph.D., is a postdoctoral fellow in the Department of Economics at MIT and a member of the School Effectiveness and Inequality Initiative (SEII). He graduated from the Ph.D. program in Economics at the CUNY Graduate Center in 2016.
Sean Crockett, Ph.D., is an Associate Professor of Economics at Baruch College, City University of New York. His research primarily involves the use of experimental methods to investigate questions in general equilibrium theory, behavioral economics, and decision theory. His recent focus has been to characterize individual choice under uncertainty, to study the impact of these characterizations on market prices, and to study the relationship between individual choices made in isolation and analogous choices made in a market setting. Professor Crockett earned his Ph.D. in Economics from Carnegie Mellon University in 2004, and has been a professor at Baruch College since 2006.
Check out this and other Journal of Policy Analysis and Management articles online.