Tuesday, July 5, 2011

Common Core Standards versus Guided Reading, Part II

So why is the common core making such a big deal out of having kids read hard text?

One of the most persuasive pieces of evidence they considered was a report, “Reading: Between the Lines,” published by American College Testing (ACT; 2006). This report shows the primacy of text in reading and the value of having students spend time reading challenging text in the upper grades.

https://www.act.org/research/policymakers/reports/reading.html

Virtually every reading comprehension test and instructional program makes a big deal out of the different kinds of questions that can be asked about text. You’d be hard pressed these days to find teachers or principals who don’t know that literal recall questions that require a reader to find or remember what an author wrote are supposed to be harder than inferential questions (the ones that require readers to make judgments and recognize the implications of what the author wrote).

Similarly, in our fervor to use data and to facilitate better test performance, it has become common practice to analyze student test performance by question type, and then to try to teach the specific skills required by those questions. There are even commercial programs that you can buy that emphasize practice with main ideas, drawing conclusions, specific details, and the like.

There is only one problem with these schemes, according to ACT: they don’t work. In Reading: Between the Lines, ACT demonstrates that student performance cannot be differentiated in any meaningful way by question type. Students do not perform differently if they are answering literal recall items or inferential items (or other question types like main idea or vocabulary, either). Test performance, according to ACT, is driven by text rather than questions. Thus, if students are asked to read a hard passage, they may only answer a few questions correctly, no matter what types of questions they may be. On the other hand, with an easy enough text, students may answer almost any questions right, again with no differences by question type.

Thus, the ACT report shows that different question types make no difference in performance outcomes, but that text difficulty matters quite a bit (a conclusion based on an analysis of data drawn from 563,000 students). One can ask any kind of question about any text, without regard to text difficulty.

What are reading comprehension standards? They tend to be numbered lists of cognitive processes or question types. Standards require students to "quote accurately from text," to "determine two or more main ideas of a text," or to "explain how main ideas are supported by key details," and so on. But if question types (or standards) don't distinguish reading performance and text difficulty does, then standards should make the ability to interpret hard texts a central requirement.

And, this is exactly what the common core standards have done. They make text difficulty a central feature of the standards. In the reading comprehension standards at every grade level and for every type of comprehension (literary, informational, social studies/history, science/technology), there is a standard that says something along the lines of, by the end of the year, students will be able to independently read and comprehend texts written in a specified text complexity band.

The ACT report goes on to describe features that made some texts harder to understand, including the complexity of the relationships among characters and ideas, the amount and sophistication of the information detailed in the text, how the information is organized, the author's style and tone, the vocabulary, and the author's purpose. ACT concluded that based on these data, "performance on complex texts is the clearest differentiator in reading between students who are likely to be ready for college and those who are not" (pp. 16-17).

16 comments:

Julie Niles Petersen said...

This statement surprises me greatly, "You’d be hard pressed these days to find teachers or principals who don’t know that literal recall questions that require a reader to find or remember what an author wrote are supposed to be harder than inferential questions (the ones that require readers to make judgments and recognize the implications of what the author wrote)."

I recently listened to a great podcast from Daniel T. Willingham about the importance of background knowledge (http://download.publicradio.org/podcast/americanradioworks/podcast/arw_4_30_reading.mp3?_kip_ipx=1017638692-1297712350). In that podcast Willingham shares this example:

"I just got a puppy and my landlord is not too happy."

If what you say is true, then readers should be able to more easily infer why the landlord is not happy than they would be able to answer this question, "What did I just get?" Hmmmm...... I am really hoping this was a typo and not that I am missing out on a lot of reading research. If it was not a typo, could you please let me know what research supports your statement?

Tim Shanahan said...

Julie-

No, it is not a typo. It is the text that matters, not the question type. You are certainly correct that you can construct experimental texts for which it may be relatively easier or harder to answer one or another question type. In your example, the inferential question was harder because most readers would get the explicitly stated idea, but they might differ in their background knowledge about landlords and puppies.

However, look at this text:
John put on his sunglasses as he explored the ambit of the range.

Which question is easier, one that asks why John put on his sunglasses (an easy inference) or one that asks what he did (it is explicitly stated, but since most people don't know the meaning of ambit, it is the harder question in this case)?

The best evidence that we have of this isn't these kinds of tortured comparisons with artificial texts, but how hundreds of thousands of readers perform across dozens of naturally occurring texts with hundreds of real questions. That research is cited and linked in the blog that you commented on. Pay special attention to the charts showing the differences between literal and inferential performance at lots of different comprehension levels.

thanks.

tim

Julie Niles Petersen said...

Thank you so much for the response, Dr. Shanahan. I look forward to reading the article you shared.

In the meantime, your example has me confused. I would say that the literal question is easier to answer. I could easily answer, "What did John do?" with, "John put on his sunglasses as he explored the ambit of the range." It does not necessarily mean I understand the sentence, but I can answer it correctly.

On the other hand, without some background knowledge of why people wear sunglasses and an understanding of "ambit" and "range," I could not answer your inferential question, "Why did John put on his sunglasses?" Hmmm...

Finally, I would say that the background knowledge of the reader is most important--without it, appropriate inferences cannot be made.

Again, thank you for your time.

Tim Shanahan said...

But see, Julie, that's where we get into trouble... when we start to say, "I could answer the comprehension question, I just couldn't understand what it meant." That means that the question required dumb pattern matching, without any real understanding of the author's message. That can't be acceptable.

Background knowledge certainly matters, but not just for inferences. How did you know what a puppy was or what it means to get one, or a landlord, or happiness? Interpreting those words requires prior knowledge, if only of the words themselves. That reading comprehension requires the combination of the new (what the author has told you) and the known (what you bring to the text) doesn't make one kind of question generally easier or harder than another.

tim

Julie Niles Petersen said...

Perhaps it is differing definitions of literal comprehension or recall questions that is the problem here??? I certainly do not believe being able to correctly answer questions without understanding is a good thing. However, I do think literal questions can be correctly answered that way. Inferential questions, on the other hand, cannot. Answering them correctly requires an appropriate connection to be made between the text and a reader's prior knowledge--including vocabulary knowledge. This is why I believe answering inferential questions is harder than answering literal questions.

In my opinion, answering a literal question with anything more than what has been explicitly stated suggests that more than literal comprehension was used to answer the question.

P.S. I have been reading the article you shared. I just took a break to see if you had responded and you had. Thank you once again. Your time is much appreciated, Dr. Shanahan!

Tim Shanahan said...

But it isn't the questions that are making comprehension hard or easy here, it is the difficulty of the text.

If an author used rare words, presupposed extensive knowledge and experience, used devices like irony or sarcasm, or organized the information in a complicated way, then you will have difficulty answering the questions about the text, and it doesn't really matter if the questions tapped explicit or implied information.

(Identifying what the Prince did in a chapter of a Russian novel is difficult, but not because the author doesn't explicitly tell what he did. It is difficult because his name could be Prince Muishkin, Prince M., Lef Nicoleivich, my dear friend, the Idiot, and sometimes only he or him, which is complicated when there are many hes and hims to choose from. Drawing conclusions about the Prince's actions is not appreciably harder than identifying the Prince's actions.)

Julie Niles Petersen said...

I read the article and skimmed through the appendix. Here are my thoughts:

1. I reread what you wrote that originally surprised me. This was my original interpretation of your words, "answering literal recall questions is harder than answering inferential questions (and almost everyone knows it)." After rereading your exact words, the words "supposed to be harder" are starting to throw me off more.

2. After reading the article, I could not find anything specific that supports my understanding of what you wrote. What I took away from the article in regard to literal and inferential comprehension was that those who struggle with literal comprehension also struggle with inferential comprehension and that those who are proficient in one are also proficient in the other. This makes sense to me because I have met many struggling readers who struggle to answer both types of questions correctly, but I would love to read more. Do you know any other studies that replicate these findings?

3. I wholeheartedly agree that proficiency in understanding complex text is important.

4. This also makes sense: the "degree of text complexity differentiates student performance better than either the comprehension level [literal or inferential] or the kind of textual element [e.g. determining main idea] tested" (p. 16). This statement supports the importance of students having wide knowledge of the world and a large vocabulary, as well as the other items mentioned in the Lyons quote on p. 8.

5. I thank you SO much for your time and for pushing my thinking.

Karen Carroll said...

During guided reading, teachers will be scaffolding more because of the complexity of the text. My question is: how much time will teachers spend in a guided reading group for students at the second grade level and beyond?

Tim Shanahan said...

Karen--

That is a good question, and one that no one has a real answer to (in terms of research). One idea that seems reasonable is to vary the lengths of the texts (shorter hard texts, longer easier or more moderate texts). Short reads will allow you to stay within current amounts of time (20-60 minutes), but still to go deep.

EdEd said...

Hi Dr. Shanahan,

I realize I'm late to this conversation by over a year! However, I recently stumbled upon your blog and this post, and wanted to respond. Specifically, I have 2 main concerns with using the ACT report as evidence that we should be teaching children texts of greater complexity:

1) While there is no difference between question types (inferential vs. literal), there is a very clear relationship between performance on comprehension questions overall and performance on the ACT. There is also a high correlation between performance on easier passages and overall performance. In other words, while the ACT report does provide evidence that students seem to do equally well on inferential vs. literal comprehension questions, there is no evidence that text complexity is any more of a predictor of ACT performance than any of the other variables mentioned. The ability to answer inferential comprehension questions is still as important as the ability to answer any question from a complex passage. Neither provides more predictive power. In addition, successfully answering questions from hard passages is no less of a predictor of success than successfully answering questions from easier passages, according to the report.

All of this suggests that ability to answer questions from complex reading passages is no more of a predictor of ACT performance than ability to answer questions from easier passages. As such, there is no evidence that text complexity is any more of an accurate differentiator than any other variable.

2) Even if it were an accurate differentiator, this report provides no evidence related to instructional technique or instructional goal-setting. This report does not provide any evidence to conclude "teaching reading with more complex text is more effective," because there were no experimental or even correlational studies examining how students were taught - simply how they differentially answered questions on an outcome measure (ACT).

Overall, I've found your comments quite interesting regarding the lack of evidence supporting strict teaching at instructional levels. However, I do not find the ACT report to indicate either 1) that complex text is a meaningful differentiator of ACT performance over any other variable measured, or 2) that complex passages should be used rather than passages on a child's instructional level.

Tim Shanahan said...

EdEd--
Never too late to the party... You are incorrect about this claim. Question types were not predictive of reading comprehension, but passage difficulties were. ACT used 6-7 variables to determine three levels of text complexity and these did separate out comprehension performance. They even included a graphic showing how substantial these differences were.

There are some experimental studies showing either that text difficulty alone makes no difference in student learning (O'Connor, Swanson, & Geraghty, 2010) or that students who are placed in more challenging text--more challenging than the "instructional level"--do better in terms of learning (Morgan, Wilcox, & Eldredge, 2000), as well as the correlational work of studies by William Powell. (Of course, we also have case studies showing the possibilities of successfully teaching struggling students with challenging text, such as those reported by Grace Fernald in the 1940s.) In any event, placing kids at the instructional level clearly isn't as helpful as has been claimed. Thanks.

EdEd said...

Thanks for your reply Dr. Shanahan - glad to hear the party is still going :). My post ended up being longer than the limit, so I've separated it into 2 posts.

In response to the discussion of question types being predictive of reading comprehension, I'd clarify that question types were not differentially predictive, but all were still predictive. In other words, the more questions (of any kind) a student answered correctly, the higher the ACT score (see the 1st graph on page 5 of the report). The implication is that proficiency with answering all forms of comprehension questions is important to performance on assessments such as the ACT. In other words, teaching explicit strategies related to comprehension is an important element of instruction.

Likewise, referencing the 1st graph on page 6, both less and more complex texts were equally predictive of ACT performance. In other words, there is a predictable relationship between overall ACT reading performance and both complex and uncomplicated passages. Given a certain score (x) on a less complex passage, you'd be able to predict overall ACT reading performance. On the other hand, the graph indicates that there is less predictive power at lower levels of performance (given x score below the ACT reading benchmark, you would be unable to predict the overall ACT reading score).

These data suggest that all reading skills measured across domains - type of question, complexity of passage - were important in performance on the ACT reading composite. The better children seem to do in any given skill area, the better the ACT score overall. The exception, as noted before, is that performance with complex texts does not seem to differentiate performance below the ACT reading benchmark cut score, most likely due to a basal effect (a certain level of competence needs to be present before a child starts scoring more highly on complex passages, and that level is not present with children scoring on the lower end of the benchmark).

The implication for instruction is that no one particular skill set seems to be favored more highly given the ACT report data. The data indicate that if you are deficient in skills related to answering inferential reasoning questions, for example, you would likely score lower on the ACT. The same would hold true with all skill sets measured.

EdEd said...

Part II

In terms of more general research supporting the use of more complex text, it should be intuitive given learning research generally that a child should always be given the most challenging material possible that is still within the child's instructional level/zone of proximal development (ZPD). This seems to be the fundamental assertion with complex text - if it's possible for a child to engage text (with assistance) that's more complex, it's better to do so, because mastery of more difficult and complex material will result in higher levels of learning. There seem to be two issues confusing the conversation/practice, though: 1) problems with accurately identifying instructional ranges, and 2) using instructional level with oral reading fluency to select text for comprehension-based instruction.

In terms of the first, if we revisit the definition of instructional range, if a child can successfully complete a task (e.g., achieve deep comprehension with a complex text) with appropriate assistance, the task is within the child's instructional range. As such, it isn't correct to say that a child was given a text 2 or 4 grade levels above the child's instructional level (as occurred in the Morgan et al. study, for example) and successfully completed the task. If the task was successfully completed, the task was within the child's instructional level. The problem, then, is not that text given "on grade level" was too easy, but that the instructional level was incorrectly assessed. The true instructional level was, in fact, 2 or 4 grades above (based on the highest level of performance).

In reality, I believe the mistake was confusing ORF instructional level with comprehension instructional level, which brings us to my second point above. I believe that what folks are saying when they advocate complex text is, "Do not select text for comprehension instruction based on a child's instructional level with ORF." The reason is that some children may be able to comprehend text several levels above their ability to fluently read connected text. As such, from my perspective, the correct advice would not be, "select text that is above a child's instructional level," but "select text at the upper end of a child's instructional level in comprehension, not fluency, and make sure you are accurately defining and assessing 'instructional range' to the best of your ability."

Tim Shanahan said...

EdEd--
I see your point. Indeed, the questions (as opposed to the question types) are predictive. It wouldn't matter if they only asked high level inferential questions or literal questions, etc., they would still be able to predict performance. That is correct.

Let me take the point a step further: it suggests that questions or tasks are important or necessary in assessment, and I think that is also true with regard to teaching. Not just having kids read, but having them use the information (to answer questions, discuss, write, report, etc.) is important in developing students' ability to understand what they read. Not a new point (I think Thorndike made it in 1917), but it is important.

Finally, one more step: although different types of questions do not access different or separable skills, that doesn't mean that it isn't a good idea for test makers and teachers to ask a variety of question types (not so much for the purpose of asking questions that exercise different aspects of the reading brain, but more to ensure that you have plumbed the depths of a particular text).

Thus, it is very reasonable to ask a wide range of questions about what students are reading, but it is not sensible to look for patterns of performance in how they answer or fail to answer those questions (beyond the general and obvious: if the student can't answer the questions he or she failed to understand this text).

Tim Shanahan said...

The research is not showing that the concept of ZPD is wrong, but it is--as you point out--showing that the ways that reading experts have measured this concept have been off base. Teaching students with more challenging text will require different and greater amounts of teacher support, guidance, scaffolding, explanation, and student rereading, but with such instructional support, there is no reason that students will not learn.

I guess it just shows that if you take a deep, complex, and subtle construct and then make up a measure for it that is mechanistic and non-empirical (they could have found out very early that it wasn't working), you are going to make some pretty big mistakes. Unfortunately, for many educators the measures ultimately replaced the construct, so instead of seeing an instructional level as a span of levels requiring a variety of teaching choices, they see the instructional level as a very real and specific thing (and to them the common core is a very scary and wrongheaded proposition).

EdEd said...

First, thanks so much for being willing to take the time to discuss. I believe it shows your commitment to research-to-practice and helping facilitate understanding of what can be some difficult material.

In response to both sets of your comments, I think we're on the same page. In particular, I very much agree with your comments about the measure replacing the construct, which is a very relevant comment even beyond this discussion. Regardless of the discipline, it seems that folks often make that mistake, from IQ tests to state end-of-year tests. I'm not making any comments about the reliability or validity of those measures specifically, just that folks often forget that the concept is not always completely encapsulated by the measure.

Not sure if School Psychology Review is on your radar, but you may find the most recent volume of interest as there is substantial discussion of the very concept of validity - construct validity in particular - and the connection between theory and assessment. It will be interesting to see if that disconnect we sometimes see between construct and assessment could be at least partially mended with new ways of considering validity.