Lexiles and other readability measures are criticized these days about as much as Congress. But unlike Congress, they don't deserve it.
Everyone knows The Grapes of Wrath is harder to read than its readability score predicts. But for every book with a hinky readability score, many others are placed just right.
These formulas certainly are not perfect, but they are easy to use and they make more accurate guesses than we can without them.
So what’s the problem?
Readability measures do a great job of predicting reading comprehension, but they provide lousy writing guidance.
Let’s say that you have a text that comes out harder than you’d hoped. You wanted it for fourth grade, but the Lexile score says it’s better for grade 5.
Easy to fix, right? Just divide a few sentences in two to reduce average sentence length, swap out a few of the harder words for easier synonyms, and voilà, the Lexile score will be just what you’d hoped for.
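The mechanics are easy to see with an open formula. Lexile’s equation is proprietary, so as a stand-in here is the Flesch-Kincaid grade-level formula, which likewise rewards shorter sentences and shorter words. This is a minimal sketch: the sample sentences and the crude syllable counter are my own illustrations, not part of any real Lexile calculation.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (good enough to
    # show the trend, not accurate for every word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# One long sentence vs. the same words mechanically chopped in three:
original = ("The migrants loaded their battered truck and drove west "
            "because the drought had destroyed everything they owned.")
split = ("The migrants loaded their battered truck. They drove west. "
         "The drought had destroyed everything they owned.")

print(fk_grade(original))  # higher grade level
print(fk_grade(split))     # several grades lower, same content
```

The chopped version scores several grade levels easier even though the ideas, the vocabulary, and the causal reasoning the reader must reconstruct are unchanged; in fact the splitting deletes the connective "because," which is exactly the kind of cue that helps young readers.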
But research shows this kind of mechanical “adjusting” doesn’t actually change the difficulty of the text (though it does mess up the accuracy of the readability rating). This kind of “fix” won’t make the text easier for your fourth-graders, but the grade that you put on the book will be just right. Would you rather feel good or look good?
With all of the new emphasis on readability levels in Common Core, I fear that test and textbook publishers are going to make sure that their measurements are terrific, even if their texts are not.
When a text turns out to be harder or easier than intended, the material should be assigned to another grade level, or it should genuinely be revised. Real revisions do more than make mechanical adjustments; such rewrites engage the author in trying to improve the text’s clarity.
Such fixes aren’t likely to happen much with reading textbooks, because they tend to be anthologies of texts already published elsewhere. E.B. White and Roald Dahl won’t be approving revisions of their stuff anytime soon, nor will many of the living and breathing authors whose books are anthologized.
But instructional materials and assessment passages that are written—not selected—specifically to teach or test literacy skills are another thing altogether. Don’t be surprised if many of those kinds of materials turn out to be harder or easier than you thought they’d be.
There is no sure way to protect against fitting texts to readability formulas. Sometimes mechanical revisions are pretty choppy, and you might catch that. But generally you can’t tell if a text has been manipulated to come out right. The publishers themselves may not know, since such texts are often written to spec by independent contractors.
Readability formulas are a valuable tool in text selection, but they only index text difficulty; they don’t actually measure it (that is, they do not reveal why a text may be hard to understand). Qualitative review of texts and continuous monitoring of how well students do with those texts in the classroom are important tools for keeping the publishing companies honest on this one. Buyer beware.