Low-Cost Randomized Controlled Trials
February 4, 2013 02:05 PM
One of the sessions at the 2012 Fall Research Conference was “Low-Cost Randomized Controlled Trials.” The session focused on lessons learned from low-cost evaluations, as well as the costs and benefits of investing in evaluation research when resources are tight. The roundtable included four panelists: Jon Baron, Paul Decker, Roland Fryer, and Ricky Takai.
Jon Baron is the president of the Coalition for Evidence-Based Policy, which he founded in 2001 as a nonprofit and nonpartisan organization. Paul Decker is the President and Chief Executive Officer of Mathematica Policy Research and has expertise in the evaluation of education and workforce development programs. Roland Fryer is the Robert M. Beren Professor of Economics at Harvard University and is a research associate with the National Bureau of Economic Research. Ricky Takai is with Abt Associates and has experience in designing and managing evaluation studies in education.
During the session, Decker and Takai highlighted early decisions in the evaluation process that can reduce costs down the road. At the beginning of an evaluation, you can control the costs of the study by clearly identifying what will be measured. Costs depend heavily on the hundreds of choices made during the evaluation’s initial design phase. Given this, evaluators should seek balance by communicating with key stakeholders early on.
What are some of the issues that can contribute to higher costs in an evaluation? A couple of the presenters highlighted the difficulty of gaining access to data and knowledge that might be essential in the execution of a low-cost evaluation.
Some of the key players in a program being studied may not have ready access to the information the evaluator needs to conduct the assessment. Other times, those in charge of the data may find little incentive to make the information easily accessible to those who are outside of the program’s administration.
When an evaluation covers a program with multiple sites, the biggest obstacle may not be access to information. Rather, it may be that the collected information is not uniform, which can cause major headaches. For example, each site may follow different protocols for gathering information, and the data may be categorized differently from site to site. These challenges should be noted and addressed at the beginning of the design phase.
Takai, speaking from the funder’s point of view, stressed the importance of being able to explain key findings. When evaluators find that a funded program had no effect on participants or did not work as originally intended, being able to explain why is essential.
Regarding the knowledge collected, such information is usually limited to internal stakeholders rather than shared with outsiders who could benefit from it. Several of the speakers agreed that much of the knowledge developed with public funds should be made available, though others may disagree that it should be shared as a public good. In these cases, it is important to understand the role of incentives and how they can be put to use effectively.
In conclusion, wise decision-making at the beginning of an evaluation’s design phase can reduce the study’s overall costs.
Article provided by Sophia Guevara