Friday, September 30, 2011

More on Text Complexity and Common Core

Here is an interview that I did recently that you might find interesting.


Please note that on the YouTube page that opens there are some other interviews about the Common Core with Jan Hasbrouck, Vicki Gibson, and Jana Echevarria. Good luck.


Angela M. said...

My exploration of text complexity and common core standards has now brought me to Lexile scores/ranges. One of the assessments our school uses provides Lexile ranges for each student. Are Lexiles a good thing? Should I be using them to help students find books at their level and to analyze the text passages I use in my reading instruction? Or, do you think this is heading in the wrong direction?

Tim Shanahan said...


Lexiles are terrific. They are a very useful tool, and I would pay a lot of attention to those ranges. Lexiles can explain about 80% of the variance in reading comprehension, which is much better than other readability formulas.

But as good as Lexiles are, all they provide is a good estimate of which kids could likely read a given text with comprehension. They don't tell you the best level at which to teach children, and they don't even tell you whether you should avoid a book or just beef up your scaffolding.

Also, Lexiles are not perfect. Yes, they usually give good, useful, sound information, but some percentage of the time they do not (every assessment has some error associated with it). Lexiles analyze sentence complexity and the rareness of the vocabulary words, so according to that analysis a given book may be appropriate for fifth graders. That scheme usually works, but let's say you have a text that presupposes particular background experiences or emotional awareness that is uncommon for fifth graders (the words are common and the sentences simple, but the ideas are abstract and beyond the students' maturity level). In such a case, even though the Lexile says fifth grade, I might choose not to use the book with these kids, or, seeing that my kids will find the "reading" easy and the interpretation hard, I might use it but scaffold it differently.

Hope that makes sense.

Angela M. said...

Thank you for the information on Lexiles. Would you share your opinion on another topic?

A colleague and I are reviewing our middle school remedial reading program to ensure all of the skills and strategies we use are evidence-based. Naturally, we have found an overwhelming amount of information from a variety of academic databases, reading journals, etc. about what constitutes best practice in the teaching of comprehension.
1) What are the top five or ten evidence-based skills/strategies that absolutely MUST be included in a middle school remedial reading program?
2) Would you recommend any websites or resources that compile information about which skills/strategies are most evidence-based? In particular, we are looking for a way to search for a skill or strategy by name (ex: summarizing) and find information on its evidence base.
3) Are the skills/strategies promoted in reading journals (such as The Reading Teacher) considered 'evidence-based', or are they merely skills/strategies that have been researched?

Thank you.

Tim Shanahan said...

The U.S. government commissioned a review of research about a decade ago (the National Reading Panel Report), and it provides some of the kind of information you ask for. You can find a link to that report on my site. Also, you would be wise to go to the What Works Clearinghouse, as they review research studies on intervention programs aimed at students in these grade levels. And my wife produced a guide on the selection of intervention programs that could help you.

Anonymous said...

Hello Dr. Shanahan,

I am finalizing my presentation for the National Title 1 Conference, entitled "Marrying Rigor and Passion in Reading Comprehension." I'm wondering if I have your permission to quote a summary of the Common Core standards that I read in a PowerPoint presentation of yours. In it, you succinctly described the key CCR standards by stating: the Key Ideas & Details standards primarily focus on "What did the author say?"; the Craft & Structure standards focus on "How did the author say it?"; and the Integration standards focus on "How do I evaluate what the author has told me?" I think this is such a memorable way to distill the standards that I'm hoping I can quote you on it. Would that be okay?

Many thanks,

Rebecca Beck

Tim Shanahan said...

Yes, please feel free to use this material. It is always polite to cite the source, but otherwise you and other readers can feel free to use my stuff.

Thanks, and good luck with your presentation.

Rebecca Beck said...

Thanks so much! I'll be sure to cite you as my source.

Terri Lieberman said...

Hi Dr. Shanahan,

I just finished reading an article that you wrote titled "Text Complexity." In this article you talk about comprehension variance. I am having a difficult time wrapping my head around this idea. Can you please simplify it for me? What does it actually mean when you say, "readability measures are now able to explain about 50 to 60 percent of the comprehension variance"? Thank you in advance, Terri Lieberman

Tim Shanahan said...

I'll try. This is a really basic concept in science, and yet it is a bit complicated. Science is always about variation. In reading, we look at things like variation in children's reading performance (i.e., some kids read better than others), and there are ways of summarizing the variation that exists in the values of a variable. Now let's say we want to predict the differences in children's reading scores at the end of first grade. We would take measures of their reading, of course, but what would we use as predictor variables? We might measure their ABC knowledge in kindergarten, their parents' SES, or their first-grade teacher's quality. We would then correlate one or more of those variables with the variability in reading scores. The more closely related a predictor variable is to the outcome, the larger the portion of the variation we can account for by knowing students' scores on that predictor. Thus, when I say readability measures allow us to predict or explain 70% of the variation in comprehension, I'm saying that knowing the Lexile levels of the texts alone would account for 70% of the differences in student reading performance. In other words, a big part of the variation that you see in reading performance is due to differences in the difficulty of the texts we ask students to read.
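For readers who like to see the arithmetic, here is a minimal sketch of what "explaining X% of the variance" means statistically: it is the squared correlation (r²) between a predictor and an outcome. The numbers below are made up purely for illustration; they are not from any Lexile study.

```python
# Illustrative only: hypothetical (text difficulty, comprehension) pairs,
# where harder texts tend to produce lower comprehension scores.
import statistics

lexile = [520, 640, 700, 810, 900, 980, 1050, 1120]
comprehension = [92, 88, 85, 80, 74, 70, 66, 61]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(lexile, comprehension)
print(f"r = {r:.3f}, variance explained = {r * r:.0%}")
```

In this toy data the correlation is strongly negative (harder text, lower score), so r² is close to 1: knowing a text's difficulty tells you most of what you need to predict the comprehension score. A real-world figure like 70% simply means r² = 0.70 over many texts and students, with the remaining 30% of the differences due to other factors.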

Terri Lieberman said...

Thank you for your reply. I do appreciate your time. It is very scientific and I think I will need to take more time to wrap my head around it.

Tim Shanahan said...

It's a tough one. Let it suffice that readability measures like Lexiles, ATOS, etc. are reasonably accurate predictors of how well students will comprehend a text. They likely do better than the typical teacher (or education professor) at identifying the level of a text, and now that they are computerized they are a fast way of making that kind of judgment. However, the 70% figure should not only convey a high degree of accuracy; it should also point out that the predictions are less than 100% accurate. Texts will sometimes be easier or harder than predicted (not very often, but often enough that you should be cautious). If a Lexile score says a text should be understood by third graders, but it turns out the text was hard for your third graders, the kids' performance is what it is and the readability prediction is incorrect.

Terri Lieberman said...

Thanks again for your quick response. I think I'm beginning to get it, at least enough to try to explain it to other teachers.