Session Recap: Improving the Link Between Analytical Methods and Policy
November 13, 2014 09:00 AM
By Chiho Song, University of Washington
Truck Drivers and Traffic Fatalities: Estimating the Value of a Statistical Life Using Panel Data, by Benjamin T. Galick, University of Chicago
Comparing Risk Models for Policy: Divergent and Shared Elements, by Scott Farrow, University of Maryland, Baltimore County
The Role of Policy Analysis in Modern Drunk Driving Legislation, by Darren Grant, Sam Houston State University
Does the Content of the Analysis Matter? Assessing the Impact of Benefit Cost Analysis on Decision Making Processes, by Ryan Scott, University of Washington
Discussants: Janie Chermak, University of New Mexico, and Michael Livermore, University of Virginia
Drawing on four studies, the panel discussed methods for continually improving the connection between analysis and policy decision-making.
Galick’s presentation estimated the value of a statistical life (VSL) for truck drivers in a way that overcomes shortcomings in the previously published literature. To date, most publications in this research area share an analytic limitation: estimates of the wage-risk trade-off are biased by unobserved time-invariant factors or by non-classical measurement error. To address this problem, the study uses a corrected hedonic wage equation. In Galick’s model, the wage outcome is drawn from the Current Population Survey-Outgoing Rotation Groups (CPS-ORG), and the risk predictor is constructed from the National Highway Traffic Safety Administration’s Fatality Analysis Reporting System (NHTSA-FARS), a census of all traffic fatalities on public U.S. roadways.
In addition, two methodological strategies, difference-in-differences (DID) and instrumental variables (IV), are used to construct lower and upper bounds for the VSL. To control for occupation-specific productivity shocks and workers’ risk tolerance, wages within a competing occupation are also examined. The results show that, compared to the pooled cross-section VSL estimate (7.5-11 million USD), the VSL estimates from the DID and IV approaches range from 0.4 to 4.2 million USD. This indicates that earlier VSL estimates may be overstated by several million USD because of time-invariant factors, and it implies that future research should attempt to reduce sample measurement error while controlling for time-invariant factors with panel data. One of the interesting findings is a visualization of longitudinal truck-driving fatality risk from 1980 to 2005. The discussants raised several points for refining the research, including: 1) controlling for random variation between the 1980s (more variance in the distribution of fatality risk) and the 1990s (less variance), 2) testing for heterogeneity, 3) using a synthetic occupation, and 4) accounting for truck drivers’ working hours (since more hours worked imply a larger risk premium).
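The bias logic above can be illustrated with a toy two-period panel: in a hedonic wage equation, an unobserved time-invariant trait (say, risk tolerance) that raises both a worker's risk exposure and wage inflates the pooled cross-section estimate, while first-differencing (the panel analogue of the DID strategy) removes it. All data, coefficients, and the assumed mean wage below are illustrative, not Galick's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative toy panel, not Galick's data. An unobserved, time-invariant
# trait raises both fatality-risk exposure and wages, so pooled OLS
# overstates the wage-risk trade-off.
alpha = rng.normal(0.0, 0.2, n)                       # unobserved trait
risk1 = 5e-4 + 1e-4 * alpha + rng.normal(0, 1e-4, n)  # annual fatality risk, t=1
risk2 = risk1 + rng.normal(0, 1e-4, n)                # risk shifts over time

beta = 200.0  # true log-wage premium per unit of annual fatality risk
lnw1 = 10.0 + beta * risk1 + alpha + rng.normal(0, 0.05, n)
lnw2 = 10.0 + beta * risk2 + alpha + rng.normal(0, 0.05, n)

def ols_slope(x, y):
    """Bivariate OLS slope of y on x."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

b_pooled = ols_slope(np.r_[risk1, risk2], np.r_[lnw1, lnw2])  # biased upward
b_fd = ols_slope(risk2 - risk1, lnw2 - lnw1)  # trait differenced out

# VSL = d(wage)/d(risk) = beta * wage; evaluated at an assumed mean wage.
wage = 40_000.0
print(f"pooled VSL:           ${b_pooled * wage / 1e6:.1f} million")
print(f"first-difference VSL: ${b_fd * wage / 1e6:.1f} million")
```

As in the paper's comparison, the pooled estimate lands well above the differenced one, even though both use the same underlying wage and risk data.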
The purpose of Farrow’s study was to give policymakers a comprehensive view of the methodological pros and cons of a broad range of risk-based analysis methods by comparing and contrasting them. To this end, Farrow drew a conceptual distinction between risk assessment and risk management: the former is closer to a quantitative estimate of risk (the likelihood that harm will result), while the latter is closer to decision-making, taking action on information from the former in conjunction with various social and political values and resources.
Based on an examination of risk management contexts at the University System of Georgia (USG) and the Department of Homeland Security (DHS), the author critically examines the underlying assumptions of the most widely used models (e.g., Impact Analysis (NEPA), Standardized Decision Analysis, Advanced Decision Analysis, Standardized Benefit-Cost Analysis) and summarizes them in a table. The results show that risk management models differ on three points: 1) whose values are being analyzed (the decision-maker's or the public's), 2) the risk metric (natural units, expected utility, expected surplus, or income), and 3) whether aggregation into a social welfare function (SWF) is explicit or implicit. The implication is that such a comparison can make analytic choices clearer for decision-makers.
Grant’s presentation examined how government agencies use research evidence on the relationship between the legal drinking age and traffic fatality rates when forming policy. Grant argues that, across changes in the drinking age, the results and basic findings on its average effect on fatalities are inconsistent. Based on a thorough examination of both the estimated effects of the drinking age on fatalities across different research designs and the frequency with which six government and quasi-government agencies cited that evidence across four hearings, the paper attempts to explain why those agencies use the evidence more optimistically than it can support.
To explain this overstated use of the evidence, the author points to three complementary factors: 1) an adversarial political process, 2) intellectual segmentation, and 3) NHTSA’s research model. Grant argues that the root cause of the overstatement is an intellectual bifurcation between a more policy-focused camp (concerned mainly with the law’s short-term effects, with an eye to passing it) and a more academically focused camp (concerned mainly with its long-term effects after wide adoption). The study helps policymakers make more effective decisions by clarifying the nature of the bias coming from each camp.
Scott’s study investigated how the use of benefit-cost analysis (BCA) changes rulemaking outcomes. To this end, Scott used automated content analysis to identify which elements of a BCA matter most for decision-making, and presented a multi-state Markov event history model of how the content of a BCA is associated with changes in the proposed rule. The main findings are that 1) better BCA is correlated with more time needed for analysis, and 2) better BCA is correlated with “contentious” rules making it through the entire rulemaking process. The discussants noted that these results show that quantifying the impact of BCA on rulemaking can be crucial for more efficient and effective decisions.
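The multi-state event history idea can be sketched as a rule moving through stages of the rulemaking process, with per-period transition probabilities that depend on a covariate. The state names, probabilities, and the direction of the "BCA quality" effect below are illustrative assumptions chosen to mirror the two reported correlations, not Scott's estimated model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multi-state sketch: a rule moves proposed -> comment -> final,
# or is withdrawn. All states and probabilities are illustrative.
def step_probs(state, q):
    """Per-period transition probabilities from `state`, given a BCA
    quality score q in [0, 1]."""
    if state == "proposed":
        advance = 0.40 - 0.30 * q  # assumption: better BCA takes longer
        return {"comment": advance, "withdrawn": 0.05,
                "proposed": 0.95 - advance}
    # state == "comment"
    finalize = 0.10 + 0.25 * q     # assumption: better BCA survives contention
    withdraw = 0.10 - 0.08 * q
    return {"final": finalize, "withdrawn": withdraw,
            "comment": 1.0 - finalize - withdraw}

def simulate(q, max_t=100):
    """Simulate one rule's history; return (terminal state, duration)."""
    state, t = "proposed", 0
    while state not in ("final", "withdrawn") and t < max_t:
        probs = step_probs(state, q)
        state = rng.choice(list(probs), p=list(probs.values()))
        t += 1
    return state, t

results = {}
for q in (0.2, 0.8):  # low- vs high-quality BCA
    sims = [simulate(q) for _ in range(2000)]
    durations = [t for s, t in sims if s == "final"]
    results[q] = (len(durations) / len(sims), float(np.mean(durations)))
    print(f"BCA quality {q}: {results[q][0]:.0%} finalized, "
          f"mean time to final {results[q][1]:.1f} periods")
```

Under these assumed transition probabilities, higher BCA quality produces both a larger share of rules reaching the final stage and a longer average path to get there, the qualitative pattern the two findings describe.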