So why is the Common Core making such a big deal out of having kids read hard text?
One of the most persuasive pieces of evidence they considered was a report, “Reading: Between the Lines,” published by American College Testing (ACT; 2006). This report shows the primacy of text in reading and the value of having students spend time reading challenging text in the upper grades.
Virtually every reading comprehension test and instructional program makes a big deal out of the different kinds of questions that can be asked about text. You’d be hard pressed these days to find teachers or principals who don’t know that literal recall questions, which require a reader to find or remember what an author wrote, are supposed to be easier than inferential questions (the ones that require readers to make judgments and recognize the implications of what the author wrote).
Similarly, in our fervor to use data and to facilitate better test performance, it has become common practice to analyze student test performance by question type, and then to try to teach the specific skills required by those questions. There are even commercial programs that you can buy that emphasize practice with main ideas, drawing conclusions, specific details, and the like.
There is only one problem with these schemes, according to ACT: they don’t work. In Reading: Between the Lines, ACT demonstrates that student performance cannot be differentiated in any meaningful way by question type. Students do not perform differently if they are answering literal recall items or inferential items (or other question types like main idea or vocabulary, either). Test performance, according to ACT, is driven by text rather than questions. Thus, if students are asked to read a hard passage, they may only answer a few questions correctly, no matter what types of questions they may be. On the other hand, with an easy enough text, students may answer almost any questions right, again with no differences by question type.
Thus, the ACT report shows that different question types make no difference in performance outcomes, but that text difficulty matters quite a bit (and this conclusion is based on an analysis of data drawn from 563,000 students). One can ask any kind of question about any text — but the question type matters far less than the difficulty of the text itself.
What are reading comprehension standards? They tend to be numbered lists of cognitive processes or question types. Standards require students “to quote accurately from text,” to “determine two or more main ideas of a text,” or to “explain how main ideas are supported by key details,” and so on. But if question types (or standards) don’t distinguish reading performance and text difficulty does, then standards should make the ability to interpret hard texts a central requirement.
And this is exactly what the Common Core standards have done. They make text difficulty a central feature of the standards. In the reading comprehension standards at every grade level and for every type of comprehension (literary, informational, social studies/history, science/technology), there is a standard that says something along the lines of: by the end of the year, students will be able to independently read and comprehend texts written in a specified text complexity band.
The ACT report goes on to describe features that made some texts harder to understand, including the complexity of the relationships among characters and ideas, the amount and sophistication of the information detailed in the text, how the information is organized, the author’s style and tone, the vocabulary, and the author’s purpose. ACT concluded that, based on these data, “performance on complex texts is the clearest differentiator in reading between students who are likely to be ready for college and those who are not” (pp. 16-17).