Fidelity of Implementation: Is It the Right Concept?

Heterogeneity of program effects has become a central concern in educational field trials. Increasingly, studies seek to measure “fidelity of implementation” as a key moderator of variation in effects across sites.

Often the programs under study are quite complex by design (e.g., involving multiple work roles, processes, and tools that interact with one another). They also confront a wide range of local contextual and organizational conditions. This has caused me to ponder, “Under what conditions is the idea of fidelity of implementation an appropriate conceptual organizer and when may it be less so?”

In our recent book, Learning to Improve, we introduced an alternative concept, adaptive integration, as a better conceptual guide for efforts seeking to implement complex programs across diverse settings.

It strikes me that we have two related but different factors to consider. The first factor has to do with the nature of the intervention itself. The second focuses on the implementation demands that the intervention places on local contexts and organizational structures. Both may be thought of as varying along a dimension from simple to complex. To illustrate, I describe three different cases.

A simple-simple case

A good example of an innovation in this category is the kind of online intervention now used to promote students’ growth mindsets (i.e., with effort and good strategies you can learn this material) rather than fixed mindsets (i.e., you are either smart or you are not). A strong social psychological research base undergirds this intervention, including multiple well-designed randomized field trials. Because the intervention is administered online, the treatment can be standardized, and useful data on implementation come as a by-product of students working their way through the exercise. I would characterize this intervention as simple. I use the word “simple” here to denote that the intervention is defined by a very explicit sequence of steps to be followed in every case. These steps are well detailed, and the online platform largely ensures that they are followed in a routine fashion.

The intervention is also simple in a second sense: it places few, if any, demands on educators to change their classroom practice. For example, online mindset interventions are now being used with rising college freshmen during the summer prior to college entry. The intervention takes up no classroom time, nor does it require faculty to change their instruction. So concerns about “adaptive integration” scarcely arise here.

In such situations, implementation with fidelity seems precisely the right concept to apply.

A simple-complex case

The development of the surgical checklist[1] to reduce harm to patients is another example of a simple intervention. It is simple in the sense that it too is a routine sequence of steps to follow. The intervention takes all of about 90 seconds to complete, and the full protocol can be captured on one piece of paper.

Nonetheless, the implementation of the surgical checklist in any particular healthcare institution can be quite complex. This complexity arises because its effective use requires changes in communication among participants and will often challenge long-standing cultural norms about deference to the surgeon in charge. Gawande and colleagues knew the checklist could work because they had substantial evidence that it worked in their home institution. Making it work reliably in other institutions, however, raised a whole host of new problems to solve.[2] Without significant changes in the culture of surgical units, the checklist can be nominally implemented without realizing similar benefits across contexts.[3] In short, effective use of the surgical checklist is not a simple implementation fidelity problem. Rather, successful implementation requires learning how to get this intervention to work reliably in the hands of many different professionals working in varied organizational contexts; it is a problem of local adaptive integration.

A complex-complex case

Most educational interventions are themselves quite complex. They often entail multiple new processes and tools, changes in individuals’ work roles, substantial knowledge and skill development, and normative shifts in how participants are supposed to think, act, and work together. It is for this reason that we describe such interventions as solution systems in Learning to Improve. They consist of a set of interrelated actions, each of which depends on the others. If any one element is materially weak, the intended effects may not accrue.

Likewise, introducing such solution systems into an extant school, district, or college will often make significant demands on the organization. Time, typically the scarcest educational resource, will often need to be reallocated. These solution systems also require participants to engage deeply with a set of design principles, and learning how to enact those principles well in a given context takes multiple cycles of trial and error. Effective take-up and use will also often require some modifications to the intervention itself. Now, adaptive integration is the governing concept. The central issue becomes, “What do we need to do to get this to work here?” Advancing this goal requires situated learning. We would want to ask, “Are the changes being introduced locally consistent with the original design principles that undergird the intervention?” And, second, we would need local evidence on how well the adapted intervention is actually working, for whom, and under what circumstances.

Research evidence from a field trial may tell us that an intervention can work: the average positive difference documented in the trial (i.e., the standardized effect size) means that the intervention likely worked somewhere, for someone. These results, however, tell local actors nothing about whether it will work reliably in their specific context or what it might take to actually accomplish this.
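To make the arithmetic behind that phrase concrete, here is a minimal sketch of how such an effect size is commonly computed, assuming the familiar standardized mean difference (Cohen’s d) convention; the symbols for the treatment mean, control mean, and pooled standard deviation are generic placeholders rather than notation from any particular study:

$$
d = \frac{\bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}}{SD_{\text{pooled}}}
$$

Because this quantity averages over all participants and sites in the trial, a positive d is entirely compatible with strong effects in some sites and weak or null effects in others, which is why an average effect alone cannot say what will happen in any particular setting.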

Finding Common Ground with Implementation Science

Interestingly, in the context of implementation science, Dean Fixsen and Karen Blase make a similar distinction between what they call atom-based and interaction-based innovations.[4] The quintessential atom-based intervention in healthcare is the new drug or pill. The education analogy might be a piece of stand-alone educational software. The idea is that the innovation is entirely built into the “pill.” Implementation fidelity consists simply of getting people to use it as designed. Fixsen and Blase contrast this with interaction-based interventions that are more complex. They note that achieving quality outcomes here is dependent on building knowledge and skill among practitioners. These interventions often entail changes in how individuals coordinate and communicate around their work and may require significant normative shifts in how participants understand and think about their work. Such innovations are “messy.” They entail complex changes in what are already complex human service environments. And along the way, the intervention itself will almost surely need to be adapted as well.

In these circumstances, there is no clear blueprint to follow of the sort the concept of implementation fidelity implies. It seems more sensible to speak of “implementation with integrity” instead. Are local actors engaging external research evidence in ways that might actually accomplish improvements? Are they remaining true to research-based design principles as they construct modifications to the initial intervention? Are they learning from their initial efforts how to get better? Are they engaging in their own local improvement research, and are they sharing data and learning with other sites engaged in this same improvement journey? If we want quality outcomes to occur more reliably at scale, these are the kinds of efforts we need to encourage. These are the capacities to inquire and learn that we must build all across our field.

  1. See Atul Gawande’s The Checklist Manifesto: How to Get Things Right.
  2. See chapter five in The Checklist Manifesto for how they systematically went about addressing these concerns in the context of the WHO-funded study.
  3. For evidence on this account see http://healthaffairs.org/blog/2015/04/22/health-care-reform-and-the-trap-of-the-iron-law/
  4. Remarks by Dean Fixsen at the “Bright Spots” convening at Stanford University, November 2015.