by Megan Kang
On Wednesday, the day before APPAM’s 2019 Fall Research Conference, students, professors, and practitioners from across the country gathered to swap insights on the development and deployment of machine learning tools in public policy. Over the course of the six-hour workshop, we heard presentations by scholars from a range of fields, including economics, statistics, theoretical computer science, sociology, political science, information science, and engineering. Though they came from different disciplinary backgrounds, the presenters were united in their conviction that the frontier of public policy lies at the intersection of machine learning and social science research.
Attendees learned about the intuition behind ML tools, new data sources that ML can help decipher, the types of policy questions that make for good ML applications, new challenges that arise when applying ML tools to the public sector, and how to think about tradeoffs among desired social outcomes. A few key takeaways reappeared throughout the day’s discussions.
The introduction of ML tools into social science research opens enormous potential for asking new types of questions, involving not just causal inference but also prediction. As workshop organizer Sendhil Mullainathan stated, “We are remarkably biased against seeing things that our old tools were not good at solving. Our tendency when we encounter a new tool is to think, how does this help solve old problems? Truly explosive stuff in the next 10 years will arise from new problems that will come from questions that people couldn’t have posed 10 years ago. It will take a new muscle to think about these new problems.”
“Human decision-makers have different objective functions than algorithms, and therefore meaningful progress requires a deep understanding of the machine learning tool as well as the social science tools,” said Prof. Jens Ludwig, one of the workshop organizers. Applying machine learning to public policy will call for people who are bilingual in AI and public policy analysis, so that they can thoughtfully navigate the challenges that arise when applying ML in high-stakes decision-making processes.
We need theory and imagination to make sense of the data. According to research from James Evans’ Knowledge Lab, “the dimension that explains the most variation in the world is steroidal vs. non-steroidal.” (He admits that this makes no sense.) Therefore, “those who have the domain knowledge to know the meaning and non-meaning behind the data are critical to avoiding blind spots in the ML community,” said workshop organizer Alex Chouldechova.
“The concreteness that algorithms require compared to the opacity of human decision making will lead to new conversations that we can no longer avoid and may actually be helpful in improving how we think about important social problems,” said Jens Ludwig. In order to embed social values into algorithms, we will need to be precise about what we mean when we talk about privacy, fairness, interpretability, and accountability; we can no longer avoid quantitative analysis when thinking about tradeoffs among these social values that we hold dear. “The need to be mathematically constrained about what we mean when referring to desired outcomes like fairness and interpretability will ultimately make important decision-making pipelines more efficient and better,” said Aaron Roth.
As Roth concluded in the workshop’s final session, “these problems are easy to get wrong, but possible to get right.” Getting them right will require that we expand our imagination about the types of questions to ask, and clarify the priorities we as a society should strive for. The first step, as today’s discussion demonstrated, is bringing a diverse array of perspectives to the round table. #2019APPAM’s first workshop made a strong case for why it’s an exciting moment to be working in this field.