American schools and school systems have settled into a fairly consistent pattern: data are used during defined events – what I call “event-based data use.” By that, I mean that education continues largely as it traditionally has, but educators sometimes stop to “use data,” then go back to what they were doing before. Examples include “data days,” early-release time, and periodic benchmark exams.
These structures aren’t necessarily a bad thing. In fact, I’ve argued in favor of structured, data-specific time. However, they shouldn’t be the only thing. If they are, data use becomes an isolated activity, something an educator has to choose to do at the expense of something else (“Do I ‘use data’? Or do I do my lesson plans?”). Event-based data use also doesn’t fit the way teachers actually use information, which is deeply embedded in their everyday practice. In short, depending solely on events makes it really hard to weave data use into the life of a school or district. To paraphrase something I heard Steven Katz say once, “If data use is an event, you don’t have a culture.”
I blame a focus on test scores. Look, I get it: if I’m a school leader and test scores are how my school will be judged, I’m going to pay attention to test scores. The problem is that “test scores” have become the de facto definition of “data.” Achievement tests are themselves events, so event-based data use follows fairly naturally.
So…what should we do? I think it starts with a change in how we conceive of data and data use. I like to define data as anything that helps educators know more about their students, and data use as what educators do as they draw on those data to inform practice. Conceived this way, data use suggests a number of practices that follow naturally – ones that can make it a more consistent, coherent, and supportive part of an educator’s craft. I’ll talk about those in my next blog post, so check back soon!