Why are we at Carnegie interested in improvement research? What does the work of the Institute for Healthcare Improvement (IHI) have to do with education?
The answers to these questions are related. In both sectors, there is a gap between what is known and what happens daily in practice. Both sectors are made up of a dedicated workforce whose best efforts do not consistently add up to improvement. And both healthcare and education face the challenge of effectively and efficiently effecting improvement at scale. Improvement research holds promise for addressing these challenges, and IHI has decades of experience using these methodologies to foster change. We knew we could learn from them.
Improvement research is based on simple but powerful questions, coined as the Model for Improvement by Associates in Process Improvement (API):
(1) What are we trying to accomplish?
(2) How will we know that a change is an improvement?
(3) What changes can we make that will result in an improvement?
Together these questions structure an active and disciplined way of pursuing change.
As we begin to apply improvement research to education, we have found it useful to begin conversations around improvement using a fourth question:
(4) How do we understand the problems and the systems in which they are embedded?
We should start with a problem and take a look across the system to understand the causes that influence outcomes.
We have a tendency in education to jump to solutions without thinking deeply about the problems we are trying to solve. A more productive approach starts with a problem and takes a careful look across the system to better understand the causes that influence current outcomes.
It is these four improvement research questions that have structured the strand of our Statway and Quantway Community College Pathways work that we have come to call Productive Persistence. Since we took on the problem of the extraordinarily high failure rates of community college students in developmental math, we have known that we could not get movement on the outcomes we were looking for by changing the curriculum or course structure alone. There was a common notion that it was important to attend to what can be referred to as student success factors: student motivation and engagement, or non-cognitive factors. There was also a lot of activity in this area and many innovations to draw on. Lack of innovation was certainly not the problem.
Many financial and human resources are already dedicated to student success activity in community colleges. Community colleges offer students a variety of initiatives and services designed to help them succeed in college, some of them quite innovative. But walk from one institution to another, and you will find very little agreement about what makes a good student success program. And the evidence base suggesting that these efforts accumulate into real improvements in the college lives of students is weak. We also know that there are many exciting new research theories—particularly from social psychology—about specific practices that could be powerful levers of change. However, it is not clear how these theories would work in practice, applied specifically to developmental math and to community college students. There are a lot of exciting ideas, but the knowledge of how to make them work reliably in real contexts is not there.
As we tried to structure this strand of work into the Pathways, we experienced a time of flailing at Carnegie as well. We knew we needed to work on it, we had people assigned to the task, and everybody believed it was important, but from conversation to conversation no one could give you the same answer about what we were doing or what specifically we were trying to accomplish. To focus the work and halt the flailing, we launched a 90-Day Cycle in the fall of 2010. A 90-Day Cycle is an improvement research tool developed by IHI for deep-dive, quick-turnaround research.
We began this R&D process to build a theory of change and a measurement model to go with it. We were attempting to answer two of the improvement questions for this strand of work: What specifically are we trying to accomplish? And how will we know if a change is an improvement? We put together a team with the relevant expertise in social psychology and improvement research, and with on-the-ground experience supporting developmental math students. We scanned the field, talked to many people who understood the problem from different angles, and identified the five areas most important to focus on to reach the outcome we cared about. We “tested” these drivers with a diverse set of experts and built a measurement model that would enable us to refine this theory over time.
One of the unique things about improvement science that separates it from other education research approaches is that it is not about being comprehensive.
The goal is not to develop a conceptual framework that organizes every possible influence and includes everything we could work on. Instead, we asked: What are the big drivers for improvement? And what measurement will we need to learn from our efforts at change and to improve our theory over time? Since this initial 90-Day Cycle, the Productive Persistence team has refined our measurement model, making it more practical and embedding it in the daily lives of community college students with minimal interruption. They have collected these measures in our networks and convened additional experts, improving the theory over time. And they have started to develop and test changes, focusing on the critical first three weeks of the course.
Improvement research brings practice and research together in a collective process aimed at solving concrete problems of practice.
In the process, we have become increasingly convinced that improvement methodologies hold promise for productively integrating diverse kinds of expertise to solve important problems. We often talk about bridging research and practice. Normally we mean just that: building a thin span between two land masses that stay firmly planted. Research stays firmly on one side of a line, practice stays firmly on the other, and we have a tiny space in which they talk to each other. Improvement research instead brings these two sides together in a collective process aimed at solving concrete problems of practice. It pairs action with discipline, moving some people into action more quickly than they are comfortable with and requiring others to be a little more patient and disciplined. It also carries with it the excitement of bringing ideas into action, helping our best efforts lead to visible improvements in the lives of students.
This post was adapted from a presentation to the Executive Committee of the Carnegie Board of Trustees.