The literacy field has long been beleaguered by generic terms that no one seems to understand – or, more precisely, whose definitions nobody agrees on. Terms like whole language, balanced literacy, direct instruction, dyslexia, sight words, and guided reading are bandied about in journals, conference presentations, newspaper articles, and teachers’ lounges as if there were some shared dictionary out there that we were all accessing. Even terms that seem like they would be widely understood, like research or fluency, often turn out to be problematic.
This plague of vagueness is exasperating, and I think it prevents productive dialogue and any kind of substantive progress in the field.
Over the decades, reporters and policymakers have often asked me my opinion of [insert any of those undefined terms]. My usual response has been something along the lines of:
“Tell me what ________ is, and I’ll give you my opinion,” not-so-cleverly shifting the responsibility for definition to my questioner.
If they say, “Balanced literacy means providing explicit instruction in key reading skills while trying to provide a motivational and supportive classroom environment,” I say, “I’m all for it.” If they tell me, “It means teaching reading with a minimum of explicit instruction, particularly in foundational skills like spelling and decoding,” then I’m strongly opposed.
That approach keeps me out of the soup, but it really doesn’t solve any important problem. My clarity and consistency aside, teachers are still inundated with professional development programs, textbooks, and classroom instructional practices that are supposedly aligned with some unspecified definition of today’s hot jargon.
The biggest offender now – if my Twitter feed is representative – is the “science of reading.”
I can’t believe the number of webinars, blogs, textbooks, professional development opportunities, and the like that aim to provide the latest and greatest information from the science of reading (whatever that is).
My advice to everyone: Grab your wallets and run!
Okay, I admit that isn’t very helpful, but it should save you a lot of money and aggravation.
What would be more helpful?
Consumers of a science of reading should start out with a definition of what would fairly constitute such a science. That way they could always check to see if what was being promoted was what they were seeking.
Back in the late 1990s, federal education law – recognizing how misleadingly the term “research” was being used by textbook companies, consultants, and the like – provided a definition of “scientifically based reading research” (SBRR).
Unfortunately, in one fell swoop, the feds stopped promoting instructional approaches based on research and did away with the legal definition of scientific evidence – moves that coincided, I might point out, with the last round of gains in national reading scores.
I’d suggest that, though that definition no longer has legal standing, it is a good starting point for deciding what should be in your personal definition of “a science of reading.”
What would that look like?
First, the evidence must be derived from a scientific method that is appropriate to the claim being made. If you want to claim that a particular instructional method or approach improves reading achievement, you need to prove exactly that: that such instruction is more beneficial than other approaches.
That can only be accomplished through an educational experiment – one that provides a sound comparison between students who are receiving that instruction and those who aren’t.
Other scientific methods can provide valuable information, but they can’t answer a “what works” kind of question.
Descriptive and correlational research methods are appropriate for many other important questions (e.g., Are kids of different races or genders making equal gains? What kinds of library books are students most interested in? Have reading scores risen in the past three years?). Those other research methods, if implemented appropriately, can provide sound answers to such questions.
You might be surprised how many fine scientists are out there telling teachers how and what to teach – even though their research has never tested the effectiveness of what they are recommending.
Evidence from their studies can be usefully provocative – that is, it may suggest worthwhile questions. If, for example, you noticed greater student engagement when kids were allowed to choose what to read, you might wonder, “Would such choice lead to more learning?” Unfortunately, too often, people see (or think they see) that kind of pattern and jump right to a conclusion, “Student choice must lead to more learning,” without bothering to test that claim through a rigorous experiment. (Sometimes research supports such a claim and sometimes it doesn’t. But it certainly can’t be recommended as being based on science without such a test.)
Something we should remember: when science identifies a potentially valuable avenue to better learning, that doesn’t mean we know how best to exploit that knowledge.
Basically, all I’m saying is, if you want to claim that something works, you need to try it out and show that it can be beneficial.
Second, a science of reading would require studies that provide a rigorous analysis of the data derived from educational experiments. Such analysis must ensure that the results are due to the instruction and not just to normal variations in performance. It also must ensure that the comparisons being made are sound. Some studies try to compare groups that are so different at the outset that it would be impossible to attribute outcome differences to the instruction.
Third, the studies need to go through peer review or some other kind of independent scientific evaluation to protect against serious flaws in the reasoning or analysis.
Fourth, the studies need to be replicated or generalized. That’s why I depend so heavily on meta-analysis; it combines the results of multiple studies. It is not enough to know that the XYZ reading method had great results in one study if there are nine other investigations that showed it to be ineffective. That kind of pattern says to me that this technique can work, but it rarely does – not something I’d be likely to adopt or to recommend to schools.
Fifth, it helps if there are convergent findings – in other words, other evidence that appears to be consistent with these findings. Like the U.S. Department of Education of two decades ago, I would never place the imprimatur of science upon an instructional approach that had not actually been tried out in classrooms and shown to be effective. But once I have that evidence, I am heartened to know of other supporting information.
I don’t talk much about the brain research in reading. Not because I’m unaware of its potential importance, but because of its insufficiency. Any pattern revealed in neurological investigations that suggests an instructional possibility still must be evaluated in the classroom. Sometimes a basic idea is sound, but it is more challenging or complicated to implement than you realize.
In any event, descriptive and correlational studies, theories, neurological investigations, and studies of other kinds of learning may bolster your trust in the instructional studies that you have.
We have many studies showing the effectiveness of decoding instruction. Those are studies that have compared the results of a strong phonics emphasis versus a no-phonics or weak-phonics approach. My trust in those results goes up when I see the MRI studies showing how the brain connects the visual recognition of letters and words with the part of the brain that carries out phonological processing. That neurological evidence, on its own, wouldn’t be enough to scientifically endorse phonics as an effective instructional approach, but it sure provides convergent evidence that should strengthen my resolve to offer such instruction. (The same, in this case, could be said about digital simulation studies of reading as well.)
Where does this leave us?
If I were invited to a science of reading seminar and wondered whether it would be worthwhile, I’d ask the sponsors if the presenters will either:
- Limit their endorsement of instructional approaches to those that have been evaluated through rigorous and well-analyzed classroom experiments that have been published in peer-reviewed outlets, and replicated; or
- Distinguish which of their instructional recommendations have such evidence and which do not.
If I had no choice but to attend, those would be the kinds of questions I’d be asking the presenters if their presentations didn’t make the foundations of their claims clear.
If we are serious about improving reading achievement for all children, we are only likely to get there if we hold ourselves to the highest standards of professional practice. Having a sound definition for what constitutes a “science of reading” is more than a game of semantics. Employing instructional approaches that have repeatedly benefited learners in rigorously implemented and analyzed studies is likely to be the most productive way to progress.
These days I’m seeing schools mandating, in the name of the science of reading, instructional practices that have no direct research evidence. Those practices don’t become part of the science of reading just because someone wrote them down, because they were recommended by a researcher, or because they address a particular aspect of reading development.