The Problem with Solutions

Today in education, we are aiming for increasingly ambitious goals. Simultaneously, key resources, such as time and money, seem to be constantly shrinking. Under this pressure of rising expectations and limited support, we often rush to find solutions. The silver bullet that will suddenly let us evaluate teachers seamlessly, or the plug-and-play curriculum that will ensure high levels of student engagement, is presented as the answer to a pressing problem.

In this rush to solve problems, we often fail to truly understand the problem itself. Too often the proposed solutions are neither responsive to the conditions that actually define the problem nor sensitive to the ways the system contributes to or enables it. The education field has a culture of solutionitis, constantly jumping to implement solutions before fully developing a clear sense of what is creating the problem.
Recently, Carnegie convened Networked Improvement Community (NIC) Design Learning Lab teams from across the country to address issues related to STEM education. These teams challenged themselves to develop a clear theory of what was causing the problem before designing potential solutions. This effort represented a change from the norm, but one to which these groups, eager to create lasting, system-level change, were committed. Aided by several improvement science tools, the teams were able to develop a coherent picture of why they were falling short of their aims. A deeper understanding of the problem and of the causal relationships behind it is a key component of initiating a Networked Improvement Community, the goal of the Learning Lab.

Embarking on this type of work, the groups each had their own work styles and team dynamics, but collectively they identified three key concepts that helped them gain clarity on their problems and make that knowledge explicit.

Value the voice of the user

Each group included at least one teacher who acted as a practitioner expert for the rest of the team. Not only is being user-centered a key improvement science principle, it is also essential to truly understanding the what, how, and why of a problem. These practitioners' unique insights into the lived experience of the problem help capture where the system is failing, and they serve as a reminder of the motivation for addressing it.

Highlight how processes interact

The problems that these groups seek to address are complex and touch many different facets of their local education systems. To understand the problem, groups took time to describe and map out how different processes interact with and affect one another. While groups may not have the resources or time to test new interventions in all of these areas, by making the effort to represent and acknowledge them, they can identify the high-leverage areas where it makes the most sense to begin.

Test anecdotes and intuition with data

Measurement and the resulting data are at the heart of improvement science work. You cannot understand what you cannot measure, and that begins at the problem level. In addition to users’ perspectives and experiences, groups outlined where they needed data to better understand the magnitude of the problem and to calibrate what their potential impact could be. Because data will be key throughout the improvement process, thinking early about what data they want, need, and can readily obtain helps not only in outlining the problem but also in moving the work forward.
Taken together, these three actions moved the groups to a very different starting point for their improvement efforts than if they had simply jumped into generating potential solutions (or, worse yet, begun the process wedded to one solution). For example, in one group, a user highlighted that although low student enrollment in advanced STEM courses shows up in high schools, the problem itself starts far earlier, when students struggle in early STEM courses. While the group was aware of this pipeline problem, it was not their focus and had fallen off their radar. This recognition led the group to account for the other processes that affect students on their way to advanced STEM courses in high school. With a map of the interdependent processes and data from the entire pipeline, the group got a clearer picture of how much progress could be made toward their goal by focusing on high school and of what other areas were stressing the system.

A second example comes from a group working to increase the number and quality of teacher education candidates in mathematics. While the group focused on the nature of the teacher preparation experience (and ultimately committed to working on its improvement), issues of recruitment and enrollment loomed large in its analysis of the problem. There is no doubt that comprehensively addressing the larger issue will require improvements aimed at these processes as well.

Understanding the problem helps improvement teams establish the knowledge base they need to test potential solutions that are high-leverage, have the support of users, and actually address the problem. In the examples above, the insights gained were almost certainly already known, but by making their understanding of the problem explicit, particularly through the lenses of user perspectives and related data, the teams are primed to test solutions that address the problem's full complexity and have a better chance of realizing those solutions' potential impact.