What We Need in Education Is More Integrity (and Less Fidelity) of Implementation

For many years, educational researchers have worked with program designers and implementers in pursuit of what has been called fidelity of implementation. Simply put, this has involved the application of numerous tools and procedures designed to ensure that implementers replicate programs exactly as they were designed and intended.

There is a simple and immutable logic to this urge to strictly control implementation. It is compelled by the methodologies (and their attendant mindset) that warrant programs as effective. The manner in which we validate “what works” in education involves research methodologies that privilege explanatory power as their primary purpose. To arrive at valid explanations, the methodologies necessarily abstract problems and programmatic solutions from their contexts.

The result is an empirical warrant, offered in the form of statistically significant effect size indices, that represents impacts never actually observed anywhere. What these indices capture is the average effect across the many individual implementations subsumed within the research. The analysis that informs our sense of “what works” thus measures nothing that actually happened in any one place, nor does it indicate the impact likely to be realized in any subsequent implementation.
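To see why, consider a deliberately simple illustration (the numbers here are hypothetical, chosen only to show the arithmetic, and not drawn from any study). Suppose a program is evaluated across four sites, with site-level effect sizes, in standard-deviation units, of

\[
d_1 = 0.60,\quad d_2 = 0.20,\quad d_3 = 0.00,\quad d_4 = -0.40,
\qquad
\bar{d} = \frac{1}{4}\sum_{j=1}^{4} d_j = 0.10.
\]

The reported effect of 0.10 was realized at none of the four sites: it averages together sites that benefited substantially and one that was harmed, and it says little about what a fifth site should expect.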

Typical R&D thinking, however, compels us to pursue fidelity of implementation. The traditional empirical warrant justifying a program as effective holds only insofar as the program is replicated exactly as tested. Exact reproduction is thus made necessary, despite the fact that it is unlikely to produce the same effects and may not even be possible to accomplish.

The issue is that problems do not exist, and programmatic solutions cannot be implemented, outside of their contexts. The real challenge of implementation, then, is to figure out how to thoughtfully accommodate local contexts while remaining true to the core ideas that carry the warrant of effectiveness, so as to ensure genuine improvements in practice.

What we need is less fidelity of implementation (do exactly what the designers say to do) and more integrity of implementation (do what matters most and works best while accommodating local needs and circumstances).

This idea of integrity in implementation allows for programmatic expression that remains true to essential, empirically warranted ideas while being responsive to varied conditions and contexts.

What does it take to achieve integrity in implementation? The answers permeate all of the Carnegie Foundation’s current work. They include a reconceptualization of the education research enterprise so that it better addresses real problems of practice and generates knowledge that genuinely improves practice (the application of improvement science and improvement research), as well as a redefinition of the human organization that pursues education R&D (the formation of networked improvement communities, or NICs, that continuously test their practice to ensure that proposed changes are, in fact, improvements).

Just how this is done is being explored and documented in the Foundation’s work. While much more needs to be learned, tested, and shared, some focal areas are clearly emerging as requiring serious and thoughtful attention if implementation with integrity is to be realized. The first is the nature of programmatic design (and even the design process itself). The second is the manner in which implementation is pursued.

In short, when we design for implementation with integrity, we design differently: both the design process and the characteristics of the resulting programs change. While much will be elaborated in subsequent postings here, some considerations include:

  • specify goals as measurable aims;
  • develop a comprehensive and public articulation of the problem and the system that produces it;
  • guide development with clearly articulated design principles, including essential characteristics that are definitional to the solution;
  • create generative structures that accommodate integrative adaptations while enforcing essential characteristics;
  • identify, encourage, and embrace variants, but test them;
  • enter into authentic partnerships (NICs) to promote integrity, grounded in:
    • common goals,
    • shared values,
    • shared power, and
    • real problems to solve; and
  • discipline the implementation effort with a commonly held measurement model that verifies accomplishment and with the rigor of improvement research that tests local adaptations for validation as improvements.

Each entry in the list above is a topic worthy of extensive elaboration, and in many cases there are already methodologies, tools, and processes that address it. The Foundation is currently learning how to use these tools effectively by working with practitioner scholars in networked improvement communities to address real and pressing problems of practice. The knowledge we acquire will be shared broadly, so that all who are interested can learn along with us how a science of improvement, supporting the work of these communities, can make integrity of implementation a reality.