Student Independent Writing Time as a Practical Measure
Characteristics of Practical Measures and How the Student Independent Writing Time Measure Demonstrates Them
Is closely tied to a theory of improvement
The Student Independent Writing Time measure, along with measures such as the Writing Conference Tracker, is part of a system of measures developed to test a theory of improvement whose aim is to increase student proficiency in writing and whose drivers focus on student learning opportunities and instructional practices. It is a driver measure aligned specifically with the "write independently for 20-30 minutes every day" driver. To address this driver, teachers have created different change ideas, including planning for no more than two "teachable moments" during the direct-instruction part of the lesson to preserve time for independent writing.
Provides actionable information to drive positive changes in practice
The measure helps teachers answer the question, "How much independent writing time is happening during writing lessons?" Without the measure, teachers might gauge the impact of their change ideas by intuition alone. With it, they can discern patterns in their instructional practice and make adaptations to ensure independent writing time happens consistently. In the Literacy Improvement Partnership, some teachers used the measure's results to fine-tune their classroom practice and observed increases in student independent writing.
Captures variability in performance
As teachers log students' independent writing time, the data they enter automatically populate run charts. Each run chart captures variation in independent writing time within a classroom over time. A benefit of displaying data on run charts is that practitioners can distinguish common cause variation (i.e., the variation expected from a stable process) from special cause variation (e.g., variation induced by change ideas). If a teacher can pinpoint on the run chart when a change was introduced, comparing the data patterns before and after the change provides an empirical check on whether the change is an improvement.
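The run-chart logic described above can be sketched in a few lines of Python. This is a minimal illustration using hypothetical minutes data, not the Partnership's actual spreadsheet tooling; the "long run of consecutive points on one side of the median" heuristic is a standard run-chart signal of special cause variation, assumed here rather than specified by the source.

```python
from statistics import median

# Hypothetical daily independent-writing minutes logged by one teacher.
# Suppose a change idea (capping "teachable moments") starts on day 11.
minutes = [12, 15, 10, 14, 11, 13, 12, 16, 11, 14,   # before the change
           22, 25, 21, 24, 26, 23, 25, 27]           # after the change

center = median(minutes)  # the center line of the run chart

def longest_run_one_side(data, center):
    """Length of the longest run of consecutive points all above (or all
    below) the center line; points exactly on the line are skipped.
    A long run (commonly 6+ points) suggests special cause variation."""
    longest = current = 0
    side = 0
    for x in data:
        s = (x > center) - (x < center)  # +1 above, -1 below, 0 on the line
        if s == 0:
            continue
        current = current + 1 if s == side else 1
        side = s
        longest = max(longest, current)
    return longest

print(longest_run_one_side(minutes, center))  # → 8 (the post-change points)
```

Here the eight consecutive post-change points above the median form a run well past the usual six-point threshold, so the teacher would have empirical grounds to treat the change as an improvement rather than ordinary day-to-day noise.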
Demonstrates predictive validity
Although the Literacy Improvement Partnership team did not conduct an analysis of the predictive validity of the measure, research from the field suggests that student writing time matters for students' improved writing.1
Is minimally burdensome to users
Although logging independent writing time is not time-consuming (the process takes only a few minutes), it is one more thing teachers have to do. Teachers in the Literacy Improvement Partnership found daily documentation burdensome when sustained for about two months; they preferred using the measure in learning sprints that last 4-5 weeks and are spaced apart.
Functions within social processes that support improvement culture
The teacher-reported data are not used in teacher evaluation. Yet because the data are not anonymous, the teachers doing the reporting may be concerned about who has access to them, which can affect their willingness to be transparent about their practice. Building a culture of social learning and a sense of psychological safety around innovation and experimentation should be prioritized to invigorate teacher reflection and collaborative inquiry. In the Literacy Improvement Partnership, the hub supports schools in establishing a learning huddle structure to do just that.
Is reported on in a timely manner
Teachers are expected to log independent writing time daily. Some teachers enter data into the designated Google Sheet after a writing lesson or at the end of the school day. Other teachers document daily using paper and pencil and enter the data online once at the end of each week. In either case, they can use the automatically populated run charts to reflect on their practice any time they deem it valuable.
Questions on Practical Measures Inspired by the Student Independent Writing Time Measure
What are the functions of practical measures in improvement beyond working as a tool to test a theory of improvement?
In the process of identifying a practical measure to test a change, practitioners discover along the way which elements of the change are "must-haves" and which are "nice-to-haves." Measurement in many ways shapes intervention, and in some cases measurement is itself a major part of the intervention. Sola Takahashi, a researcher who led the development of the Student Independent Writing Time measure, said in an interview, "The recording of the measure was a change idea in itself…Just by virtue of tracking this information, teachers were pressed to attend to an important aspect of their instruction that they may not have been thinking about."
What contributes to the effective use of practical measures?
Although practitioners are increasingly trained to use data to inform practice (as shown by the increased adoption of data-driven instruction), routine reflection and learning from practice-based data may be hampered by suboptimal school and district context (e.g., a top-down decision-making culture). To maximize what practitioners get out of using practical measures to improve the student learning experience, system leaders should not only focus on building practitioner analytical capacity, but also invest in building coherent structures, processes, and norms that constitute a scientific professional learning system. The learning huddle used in the Literacy Improvement Partnership is an example of such an investment.