#2020APPAM Theme Series: What does it mean?

The 2020 Virtual Fall Research Conference will be held online from November 11 to 13 under the theme Research Across the Policy Lifecycle: Formulation, Implementation, Evaluation and Back Again. We've offered a brief explanation on the website, but context is more important than ever, so we're kicking off this series, in which leaders from the conference and APPAM will each tackle an aspect of how the theme connects to our reality.

Today, Dr. Sherry Glied, Dean of New York University's Robert F. Wagner Graduate School of Public Service, APPAM President-Elect, and Chair of #2020APPAM, goes into more depth on the theme.

The roots of APPAM lie in the outpouring of policy analysis research that accompanied the 1960s Civil Rights movement and LBJ's War on Poverty. As American policymakers expanded social welfare programs with the deliberate aim of reducing inequality and disparities, social scientists developed data and tools that could help decision-makers better understand the dimensions of these problems. Decision-makers, in turn, wanted social scientists to assess whether the policies they had put in place actually accomplished their goals. The National Longitudinal Survey series and the Panel Study of Income Dynamics, workhorses of our research, date back to the mid-1960s. The Office of the Assistant Secretary for Planning and Evaluation in the Department of Health, Education, and Welfare was established in 1966-67 and tasked with evaluating the many new programs under the Department's auspices. This process – from problem identification and data development, to policy initiative and implementation, to evaluation – describes a lifecycle of policymaking.

Several lifecycles of policymaking have occurred since the mid-1960s. These have generated learning not only about individual programs and policies, but also about the strengths, weaknesses, and biases of the policy analysis process itself. The development of the Census Bureau's Supplemental Poverty Measure, for example, sprang from a mid-1990s realization that existing methods were, by design, incapable of assessing the extent to which non-cash and tax credit programs affected poverty levels, skewing the evaluation of such programs. Concerns about the ability to derive causal estimates of the effects of programs with non-random assignment contributed both to the development and diffusion of new statistical approaches and to increased interest in randomized experiments, such as the RAND Health Insurance Experiment and HUD's Moving to Opportunity study.

The recent proliferation of randomized experiments and of quasi-experimental studies of natural experiments, in turn, has revealed that even evidence-based policies don't necessarily work the same way in every context. The next evolution of our toolbox may include a turn toward studying implementation: Why do some programs achieve results in some contexts but fail in others? Why do programs work in small demonstrations but fail when brought to scale? Why does a program improve outcomes among a subset of beneficiaries but not among others? Alongside the longstanding tradition of process evaluation, the new availability of very large data sets and aligned methodologies may enable another cycle of policy analysis, analogous to the genomic revolution in medicine, helping policymakers get beyond "what works?" to "what works, when, where, and for whom?"
