Lessons from Paper Airplanes

Successfully and consistently landing paper airplanes in a target zone may seem an unlikely goal for educational practitioners, administrators, consultants, and researchers attending Carnegie’s first Practical Measurement Workshop. Yet, this was exactly what unfolded during one workshop session.

Participants were split into teams and assigned the task of reliably flying paper airplanes into a designated landing zone. In the first phase of the activity, all of the team members constructed and flew their airplanes before receiving data on the number of successful landings. In the second phase, a variety of data (for example, whether a plane landed to the right or left of the landing zone, whether it overshot or fell short, how far from the target it landed, how confident the group felt about achieving a successful landing, or at what angle and with how much “push” a plane was thrown) was collected and reported by a team member after each flight. From there, participants could decide whether to keep using, adapt, or abandon that airplane or approach to flight.

Although the session was designed to be entertaining and certainly filled the room with highly engaged participants, it was far more than fun and games. Participants walked away with lessons as diverse and colorful as the planes they built. Below are three of their most powerful takeaways about how to solve problems using improvement science.

Focus on the problem—not the solution

The failure of countless improvement efforts can be attributed to solutionitis: the tendency to seize on solutions without adequate study of the problem or of the context and system that produce it. Quick fixes are appealing, but they are rarely able to address complex problems in their entirety. Just as reducing class sizes alone cannot reliably increase student success rates, neither can simply increasing teacher accountability or any of a whole slew of other silver-bullet ideas. It is therefore critical for those involved in improvement work to avoid tunnel vision and to consider a problem from multiple angles. In other words, one must focus on solving the problem at hand rather than championing a singular solution.

One must focus on solving the problem at hand rather than championing a singular solution.

FOR EXAMPLE

During the paper airplane activity, one team decided that paper airplanes were not the most effective solution for their problem, which they defined as having an object touch down on their landing zone. They found that flying paper airplanes did not produce reliable outcomes because each plane’s flight was hypersensitive to small changes in its aerodynamics. Just one extra fold or the slightest asymmetry in the wings, for instance, could cause a plane to plummet prematurely or veer off course. The team discovered that throwing paper balls greatly reduced the variation in performance. Because the construction of the balls mattered much less than it did for the airplanes, team members could shift most of their attention to perfecting their aiming and throwing techniques. By better understanding the problem they were trying to solve, they were able to test a solution that hit the target more consistently than many other teams did.

Data should be timely, specific, and actionable

One of Carnegie’s core principles of improvement states, “we cannot improve at scale what we cannot measure.” It follows that measures developed for improvement purposes are meant to shed light on both a system’s long-term outcomes and its short-term processes. Most traditional measures developed for research or accountability purposes, however, report only the former. Unfortunately, these lagging outcome measures are rarely specific or timely enough to inform improvement efforts in an actionable way. Because they describe only the outcome of a system, rather than diagnosing which particular processes or structures are working and which are not, they largely hinder meaningful follow-up action. Standardized test scores reported to an entire school as aggregated means, for example, fail to illuminate which faculty members may need extra support or which classroom practices are most useful in helping students understand the concepts on which they are tested. Moreover, these results are generally reported months or even a full year later, rendering them too late to act upon. Measurement for improvement, by contrast, uses both outcome and process measures to examine the outcomes within a system, for whom and under what conditions those outcomes are realized, and the mechanisms that underlie them. By investigating essential parts of the system and reporting relevant data in real time, process measures support rapid, iterative testing, producing actionable results and helping us understand which changes really are improvements and which are not.

Measurement for improvement examines the outcomes within a system, for whom and under what conditions.

FOR EXAMPLE

In the first phase of the airplane activity, teams were simply told the number of planes that successfully flew into the landing zone. This limited data (which mirrors typical accountability data) did little to help them determine which planes needed to be redesigned or whose throwing techniques needed to be adjusted. And because the number was reported only after all of their planes had been flown, participants could do little to act on it. In the second phase, by contrast, teams received data on multiple process measures immediately after each flight, which allowed for more informed adjustments between flights. For instance, knowing that a specific plane veered to the right could spark an effort to straighten its course by bending the corner of its left wing. Likewise, knowing that a plane plummeted to the ground immediately after being thrown at a 90-degree angle could lead the team to try throwing it at a lower angle. Through this kind of rapid learning, teams could quickly confirm or refute their theories about which design principles and throwing techniques were needed to reach their aim, and tweak them based on their findings.
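For readers who want to picture what timely, specific, actionable data looks like in practice, here is a minimal sketch of a per-flight log written in Python. The measures, field names, and adjustment rules are illustrative assumptions, not the actual trackers used in the workshop.

```python
# A minimal sketch of a per-flight process-measure log, in the spirit of the
# workshop's second phase. The measures and adjustment rules below are
# illustrative assumptions, not the session's real trackers.
from dataclasses import dataclass


@dataclass
class FlightRecord:
    plane_id: str
    landed_in_zone: bool       # outcome measure
    horizontal_miss: str       # process measure: "left", "right", or "center"
    distance_error: str        # process measure: "short", "long", or "on target"
    launch_angle_deg: float    # process measure: angle at release


def suggest_adjustment(record: FlightRecord) -> str:
    """Turn one flight's process measures into a specific next step."""
    if record.landed_in_zone:
        return "Keep the current design and throwing technique."
    if record.horizontal_miss == "right":
        return "Bend the corner of the left wing slightly to straighten the course."
    if record.horizontal_miss == "left":
        return "Bend the corner of the right wing slightly to straighten the course."
    if record.launch_angle_deg >= 80:
        return "Try throwing at a lower angle."
    if record.distance_error == "short":
        return "Throw with a bit more push."
    return "Review the remaining measures before the next flight."


# Example: one flight's data, recorded immediately after the throw.
flight = FlightRecord("plane-3", False, "right", "on target", 45.0)
print(suggest_adjustment(flight))
```

Because each record is captured right after a flight, the suggestion can be acted on before the next throw rather than after the whole round is over.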

Data must be reported to users in a meaningful way

In improvement work, it is crucial not only to collect the right data on the right timeline, but also to report it in a way that users find relevant and easy to understand. If on-the-ground workers do not grasp how to leverage data for improvement, or why collecting a measure is central to their system, then having the data at all becomes moot. Data in improvement must be easily translated into actual practices. Accordingly, selecting an appropriate representation or visualization is vital to communicating data coherently and effectively.

Data in improvement must be easily translated into actual practices.

FOR EXAMPLE

During the second phase of the airplane exercise, teams were supplied with multiple pre-made data trackers corresponding to the different process measures they were encouraged to collect. While many teams continued to use these trackers and gleaned useful information from them, one team found the sheer number of trackers overwhelming. Consequently, they consolidated some of the information that had appeared on separate graphs into a single chart, on which they sketched a model of the landing zone. From there, they simply marked where each plane landed and drew a curved or straight line representing its flight path. This picture showed at a glance how close each airplane came to the landing zone and whether it veered left or right, which let the team spend its time discussing changes and improvements instead of deciphering the data.
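A chart like the one that team drew by hand can also be approximated in a few lines of plotting code. The sketch below uses Python with matplotlib; the landing zone dimensions, coordinates, and flight data are invented for illustration and do not come from the workshop.

```python
# A rough sketch of a consolidated chart: the landing zone, each plane's
# landing point, and a simple line standing in for its flight path, all on
# one picture. Every number here is made up for illustration.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Hypothetical flights: (plane id, landing x, landing y), thrower at (0, 0).
flights = [("A", -0.8, 4.5), ("B", 0.3, 5.1), ("C", 1.6, 3.9)]

fig, ax = plt.subplots()

# Landing zone: a 2 m x 1 m rectangle centered 5 m downrange (assumed size).
ax.add_patch(patches.Rectangle((-1.0, 4.5), 2.0, 1.0, fill=False, linewidth=2))

for plane_id, x, y in flights:
    ax.plot([0, x], [0, y], linestyle="--", linewidth=1)  # flight path stand-in
    ax.scatter([x], [y])                                  # landing point
    ax.annotate(plane_id, (x, y))

ax.set_xlabel("left / right of center (m)")
ax.set_ylabel("distance from thrower (m)")
ax.set_title("Where each plane landed, on one chart")
plt.show()
```

The point is not the tool but the consolidation: one picture that answers both "how close?" and "which way did it veer?" frees the team to talk about changes rather than decode separate trackers.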

Landing paper airplanes is obviously lower stakes than the work teachers do in classrooms, but these takeaways highlight how, by employing improvement science principles, educators can accelerate progress toward their goals. In short, they can learn fast to implement well. With these three ideas, improvement in classrooms could soar too.