Practical Measurement for Improvement
As improvers leverage practical measures for continuous improvement, key questions arise along the way. This resource is designed to provide insights into, and examples of, tackling the following questions.
- What type of data from the frontline do I collect to inform my practical measure(s)?
- How can I use practical measures to provide feedback for practice?
- How do I modify practical measures over time?
- What is the role of practical measures within the theory of improvement?
- How do multiple practical measures fit within a system of measures?
A Definition of Practical Measurement
What is Practical Measurement?
Practical measurement for improvement is the deliberate gathering, analysis, and interpretation of information that enhances the learning of system actors as they test changes and improve processes that are at the heart of their work. Practical measures are “practical” in that they can be collected, analyzed, and used within the daily work lives of practitioners. They are also “practical” in that they reflect practice. Practical measures are used to identify improvement goals and to learn continuously whether the changes that are introduced are, in fact, leading to improvement.
Characteristics of practical measures:
- Are easy to do
- Fit into the everyday work stream
- Provide timely information you can use
- Measure what you need them to, not just what's available
- Have signaling capacity: changes in the measure are meaningful
- Relate to other things you care about
- Lead to improvements in your practice
What makes a measure “practical”?
The key purpose of a practical measure is to send timely signals about how a system is operating in order to accelerate learning and progress toward an improvement aim. These measures are typically research-informed and process-oriented, tied to specific practices within a system that are the target of an improvement effort. To determine whether a measure is actually "practical," consider the following key questions:
- Is it closely tied to a theory of improvement? In an improvement effort, an individual measure is part of a system of measures that is used to interrogate a working theory of improvement.
- Does the measure provide actionable information to drive positive changes in practice? The data provided by the measure should point to actions that users can take in order to improve targeted practices or processes.
- How well does it capture variability in performance? To drive improvement, the measure and resulting data should indicate what is working for whom and under what conditions, disaggregating outcomes based on appropriate subgroups, contexts, or conditions.
- Does it demonstrate predictive validity? Practical measures are often connected to a larger theory of improvement, which represents a causal chain or hypothesis about how to reach the desired outcome. To serve a signaling purpose, an individual measure related to one aspect of that hypothesis should predict the next measure down the causal chain. For instance, a process measure should predict a driver measure, and a driver measure should predict leading indicators.
- Is it minimally burdensome to users? Since practical measures are intended to be embedded into users’ daily work, it’s important to minimize any additional effort or time that they might require.
- Does it report results in a timely manner? Target users should find the measure and resulting data valuable, which often means that data is reported quickly and is easy to understand.
- To what extent does it attend to social processes of use in order to support building an improvement culture? When developing and implementing a practical measure, it is important to attend to social processes including but not limited to establishing trust among users and routines for analyzing data.
Since improvement efforts take place in the real world and in a variety of contexts, it can be challenging to develop a measure that exhibits all of these characteristics. To meet this challenge, practical measures are often collaboratively designed, integrating diverse expertise: 1) research knowledge about what works and how to develop and iterate on a measure; 2) professional knowledge that takes into account the local organizational context, structure, and processes; and 3) improvement knowledge related to an understanding of the system and different variations within it.