Better Evidence and Better Research Yield Better Policy

Evidence is the basis of knowledge. It answers the question: how do we know what we know? But that’s not enough. Too often, we don’t know what to do with research findings, especially when it comes to informing public policy.

Earlier this year, in an effort to increase access to government databases for research, Congress established the federal Commission on Evidence-Based Policymaking. The 15-member bipartisan Commission of leading researchers and social scientists issued a request for comments on how to develop stronger evidence-building strategies to inform and continuously improve public programs and policies. This week is the deadline for those comments. The Commission will spend the next year reviewing and synthesizing the responses, and then present its recommendations to Congress and the President.

Policy analyst Paul Lingenfelter submitted comments urging the Commission to carefully consider the strengths and limitations of policy, and the importance of using improvement science to find the best evidence. Lingenfelter has spent decades working in education and public policy and is author of the new book, “Proof,” Policy, & Practice: Understanding the Role of Evidence in Improving Education.

He will also discuss his work later this week at the Education Policy Networking Series organized by the University of Colorado Denver.

The Carnegie Foundation spoke with Lingenfelter about his recommendations to the Commission. What follows is a summary of those comments.


CF: What prompted your comments to the Commission?

PL: Blurry thinking about policy and evidence is increasingly common. A 2013 article in The Atlantic by John Bridgeland and Peter Orszag, senior executives in the Bush and Obama administrations respectively, correctly observed that few federal programs have strong evidence that they work. But they suggested an impractical, potentially harmful solution. Using the movie Moneyball as an example, they argued that the government should collect more information on what works and withhold funding from anything not supported by such evidence.

Their argument is similar to a previously popular idea that successful public programs can be replicated in other places. Both ideas are flawed because of the complexity and variability of people, situations, and program delivery. The problems public policy seeks to address, such as improving health care or helping disadvantaged youth (examples cited by Bridgeland and Orszag), are too complex to be fixed with standardized programs and interventions. When situations are complicated, what works varies.

CF: You surely don’t mean government should support things that don’t work. Can you say more about why such thinking is a problem?

PL: If the role of policy is to design and implement complicated programs that work, policy has an impossible job. It can never be successful. Policy has powerful tools, but they are blunt instruments: money, law, and regulation. Policy can allocate money for different purposes, such as building roads, parks, and schools; supporting research; fighting wars; and providing incentives and support for important needs. But it has limited power over how the work is done. When policy attempts to control how work is done, the work immediately becomes more expensive and, likely, less effective.

CF: How, then, do you see the role of evidence in informing and guiding public policy?

PL: First of all, evidence should guide what policy does with its blunt but powerful instruments. If our infrastructure is crumbling, policy can invest in rebuilding. If unemployment is high, policy can provide incentives for the private sector to create jobs or create public jobs. Policy can support education, public safety, and research to discover new knowledge based on an analysis of what the public needs. But the government should not attempt to create and implement complex program interventions to solve problems as a matter of policy. Complex problems require sophisticated, adaptive practice, not inflexible policy solutions.

Government does have a role in supporting the tools for effective practice, but it is counterproductive when government gets too close to the work of practitioners. A good example — and a never-ending debate — is the extent to which law and regulation shape teacher preparation and education in public schools.

CF: Given the different roles for policy and practice, how is the role of evidence in practice different from its role in policy?

PL: The first difference is that practice can and must be flexible and more quickly adaptive than policy. So it needs more evidence, and more fine-grained evidence. Practitioners (e.g., teachers, health care professionals, counselors) must deal with complexity and variation among individuals, among the situations in which individuals live, and even in the conditions under which the practitioners themselves do their work.

The only way practitioners can improve the outcomes of their work is to use evidence to learn from experience. Improvement science is a systematic way to do this. It was first developed in manufacturing and is now spreading to other fields, such as health care and education, with significant leadership from Carnegie. Improvement science works to get better results within the complexity of the practice situation.

Improvement science begins by defining and measuring important goals. Then, based on the experience of practitioners, feedback from clients or users (such as students), and research in the field, it creates theories about the factors that either obstruct or advance goal attainment, and it designs strategies to intervene for improvement. Evidence is used to design and test improvement strategies and to adapt them to get better results.

In improvement science, you really have to know what outcome you want. You have to measure that outcome, and you have to identify and measure the other factors you think are important by talking to people, looking at the literature and the research, and asking what might help you get better.

CF: What obstacles do you see to using evidence to improve practice?

PL: One is the need to improve measurement and its use. Practitioners need to have clear goals, they need to focus on a limited number of important goals and factors related to those goals, and they need to use measurement to test interventions designed to improve results.

Some measures can be constructed fairly easily, such as how many students graduate or progress to the next level of education. Others are more challenging, such as defining our learning objectives and measuring how well students attain them.

A second obstacle is the tendency to use measurement primarily for accountability rather than improvement. The presumption that practitioners will improve only when forced by external threats has been disproven over and over again.

CF: The U.S. Department of Education has promoted randomized controlled trials (RCTs), which are considered the gold standard of research, for its What Works Clearinghouse. Why isn’t the Commission focusing on that research methodology?

PL: Randomized controlled trials are very effective for getting insight into what Donald Berwick, founder of the Institute for Healthcare Improvement, calls “conceptually neat problems.” These are problems with a fairly simple cause that can be fixed with a fairly simple, easily replicated intervention. A good example is the complicated, intimidating Free Application for Federal Student Aid (FAFSA), which deters many low-income families from applying for help. A smart researcher used an RCT to demonstrate that when H&R Block tax preparers helped low-income families complete the form while doing their taxes, significantly more students applied for aid and enrolled in college.

But RCTs are not so effective in finding what works when the problems and interventions are complex. In my recent book, “Proof,” Policy, & Practice (Stylus Publishing, 2016), I examined the analyses published by the What Works Clearinghouse and observed that very few of the thousands of studies met its research standards, and fewer still found evidence of effectiveness. Alan Ginsburg and Marshall S. Smith recently did a close analysis of 27 studies of mathematics curricula that did meet the clearinghouse’s research standards and found numerous issues that undercut the claims of valid findings.

In practice, many things work, but they don’t work for everybody all the time. Practitioners want to know what works, for which people, and under which circumstances. Improvement science is better able to guide practice because it employs rigorous measurement and engages the intelligence and insights of practitioners while being situated in their real world.

CF: How would your recommendations better serve policymakers, the public, and practitioners?

PL: Policy makers and practitioners share responsibility for solving problems and creating better lives. But neither policy nor practice is self-sufficient. After many years of failure, the policy strategy of discovering what works and cloning it has been discredited. Rather than attempting to replicate good practice, policy makers should make sure that the instruments they legitimately control — money, law, and regulation — are supporting good practice.

Practitioners need to become more disciplined about improving their work by using the evidence and strategic analysis of improvement science or similar approaches. It is far too easy for policy makers to point fingers of blame at practitioners and for practitioners to blame policy makers for failing to solve problems. When practitioners demonstrate their willingness and ability to use evidence for improvement, it will be easier for policy makers to focus on what they can properly do to make things better. Policy and practice need to be in partnership, observing proper division of labor and collaborating when they are interdependent.


Paul Lingenfelter has four decades of experience in public policy and education, including positions with the Illinois Board of Higher Education and the John D. and Catherine T. MacArthur Foundation, and service as president and CEO of State Higher Education Executive Officers. His previous blog for Carnegie urged policy makers and practitioners to change how they work together.