Friday, December 27, 2013

How Publishers Can Screw Up the Common Core

Lexiles and other readability measures are criticized these days about as much as Congress. But unlike Congress they don’t deserve it.

Everyone knows The Grapes of Wrath is harder to read than predicted. But for every book with a hinky readability score, many others are placed just right.

These formulas certainly are not perfect, but they are easy to use and they make more accurate guesses than we can without them.

So what’s the problem?

Readability measures do a great job of predicting reading comprehension, but they provide lousy writing guidance.

Let’s say that you have a text that comes out harder than you’d hoped. You wanted it for fourth grade, but the Lexiles say it’s better for grade 5.

Easy to fix, right? Just divide a few sentences in two to reduce average sentence length, and swap out a few of the harder words for easier synonyms, and voila, the Lexiles will be just what you’d hoped for.

But research shows this kind of mechanical “adjusting” doesn’t actually change the difficulty of the text (though it does mess up the accuracy of the readability rating). This kind of “fix” won’t make the text easier for your fourth-graders, but the grade that you put on the book will be just right. Would you rather feel good or look good?
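The arithmetic behind this gaming is easy to see with any formula that rewards short sentences and short words. Here is a rough sketch using the public Flesch-Kincaid grade-level formula (Lexile’s exact computation is proprietary, but it weighs similar features); the syllable counter is a crude vowel-group heuristic, and the two sample passages are invented for illustration:

```python
import re

def syllables(word):
    # Crude heuristic: count groups of consecutive vowels (not a real syllabifier).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllable_count = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllable_count / len(words)
            - 15.59)

original = ("The migrant families traveled westward because the drought "
            "had destroyed their farms and their livelihoods.")
# Mechanically split into two sentences and swap two harder words;
# the meaning, and the real difficulty, are essentially unchanged.
adjusted = ("The migrant families traveled westward. The dry spell "
            "had destroyed their farms and their jobs.")

print(round(fk_grade(original), 1))  # ~11.5
print(round(fk_grade(adjusted), 1))  # ~4.6
```

Splitting one sentence and swapping two words drops the computed grade by several levels, even though a young reader would find the two passages about equally hard -- exactly the mismatch between rating and real difficulty that the research describes.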

With all of the new emphasis on readability levels in Common Core, I fear that test and textbook publishers are going to make sure that their measurements are terrific, even if their texts are not.

When a text turns out to be harder or easier than intended, the material should either be assigned to another grade level or genuinely revised. Real revisions do more than make mechanical adjustments. Such rewrites engage the author in trying to improve the text’s clarity.

Such fixes aren’t likely to happen much with reading textbooks, because they tend to be anthologies of texts already published elsewhere. E.B. White and Roald Dahl won’t be approving revisions of their stuff anytime soon, nor will many of the living and breathing authors whose books are anthologized.

But instructional materials and assessment passages that are written—not selected—specifically to teach or test literacy skills are another thing altogether. Don’t be surprised if many of those kinds of materials turn out to be harder or easier than you thought they’d be.

There is no sure way to protect against fitting texts to readability formulas. Sometimes mechanical revisions are pretty choppy, and you might catch that. But generally you can’t tell if a text has been manipulated to come out right. The publishers themselves may not know, since such texts are often written to spec by independent contractors.


Readability formulas are a valuable tool in text selection, but they only index text difficulty; they don’t actually measure it (that is, they do not reveal why a text may be hard to understand). Qualitative review of texts and continuous monitoring of how well students do with texts in the classroom are important tools for keeping the publishing companies honest on this one. Buyer beware.

6 comments:

Mike Herkes said...

We can only hope a publisher would make actual revisions and make the text appropriate for the original grade level. The skeptic in me says they will go the way that saves them money.

Tim Shanahan said...

I fear you're right. Readability is a great tool, and yet it can be manipulated.

Amy Cox said...

Thanks for this thoughtful post, but I think a nuance is missing from the discussion. With informational text in particular, there is a tension between quantitative levels and curriculum requirements, particularly in the younger grades. Longer sentence length (little dialogue) and more complex, domain-specific vocabulary tend to drive levels higher, and artificial attempts to lower the level may result in unsatisfactory content. We all--educators and publishers--could do with a better understanding of how leveling systems interact differently with different types of text rather than fall into the trap of offering students a one-size-fits-all approach by restricting their reading based on narrow quantitative bands. I know that you aren't advocating such restrictions--I'm just taking the opportunity to jump on one of my own soap boxes!

Tim Shanahan said...

This is a good insight. In fact, one of the readability tools being used by CCSS is something called Source Rater. It actually has separate formulas for different genres of text and finds it makes more accurate predictions by doing so. The notion that publishers and freelance writers might do more tweaking to informational text is probably correct.

Michael Claiborne said...

I agree with the post and would add one more option for publishers: leave the text as-is and have a Lexile that is slightly off-target. Given the inherent limitations of quantitative measures, we should be comfortable with minor deviations from the guidelines where professional judgment says the text is grade-appropriate.

Tim Shanahan said...

Michael-

That is a good point, and I'd even go further than that. CCSS does not require any particular mix of texts in a classroom with regard to complexity, but we are required to teach students to handle texts at certain complexity levels. I believe to accomplish that successfully we will need to guide student reading of a wide range of easier and harder texts--including those outside the CCSS bands.