
Fighting for Reliable Evidence

April 22, 2014 10:00 AM

Random assignment experimentation, once used primarily in medical clinical trials, is now an accepted method among social scientists across a broad range of disciplines. The technique is used to evaluate a variety of programs, from microfinance and welfare reform to housing vouchers and teaching methods. So how did randomized experiments move beyond the realm of medicine and into the social sciences? Can these methods be used effectively to evaluate complex social problems and programs? In Fighting for Reliable Evidence from the Russell Sage Foundation, authors Judith M. Gueron and Howard Rolston look at the characters and controversies that have propelled the wider use of random assignment within social policy research over the last four decades.

Gueron is a Scholar-in-Residence and President Emerita at MDRC, which she joined at its founding in 1974. During her tenure, she directed many of the largest federal and state evaluations of interventions for low-income adults, youth, and families. She is also a past president of APPAM and was awarded the American Evaluation Association’s Myrdal Prize for Evaluation Practice in 1988. In 2005, she received the inaugural Richard E. Neustadt Award from the John F. Kennedy School of Government, Harvard University, in “recognition of individuals who have had a significant impact on public policy either through scholarship or practice.” In 2008, she received APPAM’s Peter H. Rossi Award for contributions to the theory or practice of program evaluation. She received her Ph.D. in economics from Harvard University in 1971.

Rolston has been involved in funding, promoting, designing, and implementing social policy experiments for more than 30 years. During that time, he served for more than two decades as the Director of Research and Evaluation at the Administration for Children and Families in the United States Department of Health and Human Services. Initially his responsibilities involved welfare policy and evaluation, but they later expanded into other areas, including early childhood education, child welfare, and child support enforcement. In 2006, he joined Abt Associates as a Principal Associate and is currently principal investigator for two large, multi-site social policy experiments.

“Over the years, my colleagues and I had been among the pioneers working to show that randomized, controlled trials could produce uniquely reliable and convincing results about the effectiveness of social programs,” says Gueron. One result of this work was that random assignment became the method of choice for evaluating such programs. Gueron recalls being asked the same questions about this transition for years: “How did this happen? Why do we know so much about welfare programs and so little in other areas? How much was due to luck, the people, the actions of funders, the institutional structure, or the evolving policy context? What explains these decades of rigorous research, and what are the lessons for other fields that want to emulate this achievement? On leaving MDRC, I decided to write a book that would tell about the fight to get reliable evidence: the heroes, the coalition of allies, and the lessons on methodology, policy, and communications.”

Rolston realized that both of them had similar plans after their retirement. “After deliberation, we decided that a single book would be more valuable than two, and we joined forces,” he says. “I believed that there were important lessons from the use of random assignment in welfare reform evaluations that, if published, could further the development of evidence-based policy in other areas. In welfare, an unusually rigorous and large body of evidence had been developed and applied to welfare policy and practice.” Rolston recounts that experiments were continuously conducted over many years in a coherent manner, so as evidence was brought to answer some questions, new ones were raised and addressed, resulting in multiple generations of studies that built off each other. “I believed I was in a unique position to tell the government side of the story, and, if I didn’t do so, this aspect of the opportunity for building on the welfare experience would be lost,” says Rolston.

“Our discussions convinced us that we should work together, since then we could tell the inside story from both the public and private perspectives of how and why key decisions were made and what it took to make this happen,” says Gueron.

The main thrust of the book, as Rolston points out, is that “broad, systematic use of random assignment to evaluate the effects of social policies and programs is feasible, produces evidence that the community of policy and practice accepts, and can lead to better informed government action.” In particular, Gueron noted that the fight for reliable evidence on the effectiveness of welfare reform programs—“waged in the face of skepticism and sometimes bitter name-calling—showed that randomized experiments are feasible, legal, ethical, and a uniquely credible method to answer important questions about the effectiveness of major social policy options.”

Both Gueron and Rolston arrived at some surprising findings as they wrote the book, which took eight years to complete. Says Gueron: “Confounding the fears of skeptics that social programs are ineffective, many state efforts to promote work and reduce welfare succeeded; some even produced savings that more than paid for the program.” Tradeoffs were apparent, as different approaches were more or less successful in reaching different goals, such as saving money, reducing poverty, or helping children. What surprised Rolston most was how fast the field was moving forward even as they wrote. “Both the Bush and Obama administrations advanced evidence-based policy and the use of random assignment in several key policy areas,” he says. Rolston points out that the Abdul Latif Jameel Poverty Action Lab at MIT exponentially increased experiments internationally. “The capacity to design and implement high quality random assignment studies grew greatly. We still have a long, long way to go, but if the period Judy and I describe was the childhood of social policy experimentation, we’re at least at the point of reaching its adolescence.”

Gueron also pointed out that “despite the controversial nature of welfare policy, high-quality evidence, combined with forceful but even-handed marketing of the results, produced uncontested findings and a perception that the results had an unusually strong impact on policy.”

What’s important, however, is that while making every individual study as strong as possible, “we always need to stay focused on the goal of building a body of evidence from multiple well-designed and well-implemented studies that address key questions of interest to policymakers and practitioners,” says Rolston. “The strength of the welfare-to-work studies is not in a single one-off study, but in the large number of overlapping evaluations that have been synthesized to produce evidence of substantial generality.”

Gueron says the message of the book is optimistic. “We tell a story of scientific and policy innovation, where allies and heroes emerge in unexpected places,” she says. “There are social innovators in foundations seeking to produce a legacy and leverage change; civil servants determined to protect the federal treasury and maintain momentum when a new administration takes over in Washington; state officials who joined a relentless and risky fight to find out whether their own reforms paid off; researchers who pioneered a new method, before it became fashionable; and people in advocacy organizations who saw rigorous evidence as ultimately serving their constituency.”

But she is quick to point out that the studies find no “quick fixes” to complex social problems and that many questions remain unanswered. “But they do provide a convincing antidote to cynicism,” she says. “Rigorous evidence, sustained by a coalition of people of good will, helped forge a consensus for change.”

“To continue to make progress, our primary goal needs to be to develop institutions and associated cultures that routinely produce large numbers of coherently organized, generationally linked experiments,” says Rolston.

Reliable evidence on what works and what doesn’t is critical to making wise policy. What Gueron and Rolston show through their work is that it is possible to acquire that evidence. The rigor of random assignment, development of a coherent multi-year research agenda, and coalition-building of people committed to evidence-based policy became the keys to success.

Gueron summarizes the 40-year transition succinctly: “The fight was worth it.”

Fighting for Reliable Evidence can be ordered directly from the Russell Sage Foundation.

