The Work We Can Trust
“Failing to give expert classroom craftspeople the opportunity to reflect, plan, and experiment together about how to help their students reach these ambitious goals is the equivalent of using prefab components and hoping they will look and function like custom-built.” (p. 15)
—Remodeling Literacy Learning Together: Paths to Standards Implementation (NCLE, 2014)
When I read this sentence from the just-released NCLE report, I couldn’t help making a connection to one of the key claims of another recent document, NCTE’s Formative Assessment That Truly Informs Instruction. This position statement offers a similar warning against the temptation of “prefab components,” and encourages teachers, administrators, and policymakers to be wary of tools, products, and tests being marketed as “formative assessments.”
More often than not, these assessments are little more than miniature versions of some aspect of a high-stakes assessment item. Students’ performance on one of these pre-made assessments may give you a preview of how they will do on “the real thing,” but it’s not terribly well suited to letting you know where students are in the actual processes of coming to understand what they need to learn. In other words, it’s not actually a formative assessment. It’s practice for a test.
This past week, I got to meet with a group of teachers to work through an area in which our English curriculum needed attention in order to meet one of the goals of Common Core—the informational text standard asking students to delineate and evaluate the arguments and claims of a text.
We began by reading the standard carefully, thinking about all of the sub-expectations embedded within it. Then we looked at some existing online resources designed for teachers to use with students to meet the standard. We found these materials useful as conversation starters, but lacking the clarity and context specificity we needed to think about how we would teach and assess this standard with our students, with our curriculum, with an actual text our kids would read.
So our next step was to engage in the only kind of work I trust in a case like this: We read a few short argumentative texts, annotating and puzzling over them on our own and with table partners. There were audible signs of relief when one of us shared that while it was easy to list all the different claims the author makes in the course of one of the pieces, it was not immediately evident which functioned as the primary claim of the argument.
Our confusion quickly gave way to the collaborative creation of a heuristic for reading, analyzing, and evaluating an argument. It looked nothing like the examples we had found online; though they were useful in fueling our process, they were not right for our work. We used the heuristic together on a second piece, refining some of its elements and finding confirmation of others.
Remodeling Literacy Learning Together offers a description of what was going on in our group, explaining that “good teaching requires deep understanding of the goals we are trying to help students reach, analysis of their current level of understanding, and careful design of learning experiences, all of which are tasks that require professional time outside of the classroom and are best accomplished with the support of colleagues” (p. 12). And while I completely agree with that observation, I’d like to develop this notion of assessment even further. The work we did together didn’t involve any analysis of student work, but it did make clear the moments in the process where we would need to engage in the close, careful observation work of formative assessment.
Do our students know, for example, how to identify a claim based on the rhetorical language of argument, of “claiming”—and in relation to the presentation of evidence and examples? (If not, we would need to demonstrate how we “automatically” do this through a think-aloud.) Do our students understand that arguments often use mini-claims and arguments in support of a larger argument? (If not, we can use the graphic organizer we designed and introduce the concept of warrant, as necessary.) Do they realize that when an argument appears in a publication such as a magazine or newspaper, its headline may be designed to attract readers—not necessarily to clarify the main point of the piece? (If not, we have at least one example that helps make it clear; we just struggled through it!)
These formative assessment points are related to the standard, but on their own they don’t necessarily look much like successful completion of the work of the standard itself. They’re too deeply embedded within the complex literacy task to be immediately apparent in the seemingly self-evident language of the standard. For the sake of understanding new standards, considering how to teach them, and developing a sense of where checks for understanding along the way should lie, teachers need time and support for collaboration that allows for this kind of inquiry.
I’m sure that somewhere this work “has already been done for teachers,” a phrase that may be music to some ears. But I prefer the ring of this recommendation from the NCLE report, built from the survey findings:
“Encourage and support educators to take initiative in designing and using innovative literacy teaching resources that are appropriate for their students, and not rely on prepackaged programs or solutions.”
I find that most teachers relish the opportunity to do this work—it helps them teach and assess their students more expertly. It’s time well spent because it actually increases our ability to teach responsively, using thoughtfully embedded formative assessment.
Don’t look for this kind of expertise to come from teaching out of a box—it can only come from the observation, inquiry, and collaboration of committed educators, given time to think about their own curriculum and their own students’ needs.