I made the mistake of tossing a comment into the middle of a twitter thread on Monday. Not a nice quiet subject like vaccinations or abortion or Trump's wall, but reading. As soon as it became apparent that thread would blow up and swallow my feed, I could have asked to be cut loose or just muted the participants, but I was curious. How much longer would this go on? The answer is that after five days, the argument is still flopping around like a beached herring.
The latest explosion in the ageless reading wars was sparked by Emily Hanford, who has been making the rounds with variations on an article asserting that science tells us exactly how people learn to read, and that teachers should be doing more of it.
Will Hanford's piece, or some blistering response to it, finally settle the reading wars once and for all?
Of course not.
Teach phonics. Don't teach phonics. Whole language! Decoding is everything. Knowledge base is everything. On and on and on we go. It will never end.
The reading war does not rage eternally because Some People are obdurate dopes. I mean, Some People are obdurate dopes, but that's not the heart of the problem.
The heart of the problem is that we don't know how to tell what works. And that's because we don't have a method to "scientifically" measure how well someone reads.
Yes, we have tests. But testing and pedagogy of reading are mostly locked in a tautological embrace. I think decoding is The Thing, so I create a test that focuses on decoding, then implement classroom practices to improve decoding skills and voila-- I scientifically prove that my decoding-based pedagogy works. Mostly what we're busy proving is that particular sorts of practices prepare students for particular sorts of tests. Big whoop.
We get stuck because we don't know what Being A Good Reader really means. Chris can read a book about dinosaurs and tell you every important fact, idea, and theme after just one reading, but after ten times through a book about sewing, Chris can't tell you the difference between a needle and a bobbin. Pat reads the sewing book and can't pass a test about it, but can operate a sewing machine far better than before reading the book. Sam can read short passages and answer comprehension questions, and so aces tests like the PARCC-- but Sam can't read an entire book and come away with anything except the broadest idea of what it included. Gnip and Gnop (I'm running out of gender neutral names) can both read the same article, but when they're done, Gnip understands in exact detail what the article says but doesn't realize it's bunk, while Gnop only about half gets what the author says but can explain why it's all baloney. Blorgenthal reads car magazines daily, voraciously, with great understanding, but can't get through a single paragraph of their history textbook. I know a woman who keeps devouring books about Jewish theology and building a deeper and deeper understanding, but who could not finish a work of fiction if you paid her. And lots of folks can't make any sense out of poetry (including the vast number of people who misread "The Road Not Taken").
Now go ahead and rank all these people according to how well they read.
As with writing, we can mostly identify those who are on the mountaintop and those who are in the pits below, but on the mountain side, it all gets kind of fuzzy.
In writing, at least, we talk about purpose and audience. Doesn't purpose make a difference in reading? Does it make a difference if the purpose is artificial, like, say, reading in order to take a test or to satisfy a teacher? (And no, Common Core's artificial division of fiction and information doesn't really address these questions.)
We know a bunch of different problems that struggling readers can have, and we know solutions to some of those problems (though many wash up on the shores of The Student Has To Care Enough To Want To Do The Hard Thing). We know that past a certain point, readers get better by doing more reading.
And every actual classroom teacher knows that some combination of a wide variety of tools is necessary-- and different-- for every student. There is, in fact, science to (sort of) back them up. So the war can be over, right? Everyone can go home? If only.
The most important lesson of the reading wars is that when any one side wins, students lose. In schools where all decoding was dropped and students were left to touch and feel their way through texts, the students suffered. And we are, hopefully, just emerging from a period when the mechanics were ascendant, with their insistence that reading was composed of free-floating "skills" that could be developed and applied completely separate from context and content knowledge. That has been bad for everyone.
People know what the answer is. A full tool kit, applied thoughtfully by a professional. When one side is winning, many kits are missing some of the tools. But to argue that the house must be built only with a hammer or only with nails is just foolish.
So why will the argument not die?
Well, partly because Some People are obdurate dopes. But also because we will always have a chorus of people saying, "Can that kid read? How well? Prove it." Reading, as much as anything in education, demands that we measure what cannot be measured. So we create ways to measure a text's "reading level," and it's mostly bunk. We crank out reading tests, and some are diagnostically useful, but as a means of precisely quantifying how well a student reads-- bunk. Reading assessment brings us up against the biggest challenge in education-- how to make visible a process that goes on entirely inside the student's head. And every attempt to measure the process/skill/knowledge requires test manufacturers to simplify it, to take something with twelve dimensions and squeeze it down to two.
Every attempt to measure means a truncated understanding of what's going on, which in turn leads to a distortion of the relationships between the many tools, which in turn leads to the false sense that one tool is The Only True Tool. And the war breaks out anew.
Accurately making the invisible visible requires a whole toolbox full of artificial activities to try to tease out what's going on in there, and those tools will always be imperfect. That's fine. I am not arguing that we just give up on the whole business and go home. Nor do I know how to design a test that would truly measure reading or literacy in a way that would let us slap a nice clear number on it. I am imploring teachers, reading experts, policy wonks, reformsters, bureaucrats and politicians to remember how we generate the "data" and to stop mistaking it for a Great Objective Truth handed down from God. Stop imagining that any single test tells you how well a student or many students read. Let the reading wars rage on, but most of all, never let there be a winner.
Exactly: what we choose to measure is both informed by and creates the world as we see it. And how we measure is just as important.
Great topic. Reminds me of a book I read recently - https://www.amazon.com/Science-Reading-Information-Modern-America/dp/022682148X/ - which traces how reading became a topic of interest to politicians as well as educators and parents. Reducing instruction to an approach that will work for the largest proportion of kids, whether it bores them or not, is far different from providing what is best for each child.