Common Core Standards versus Guided Reading, Part I

  • Common Core State Standards
  • 29 June, 2011
  • 25 Comments

The new common core standards are challenging widely accepted instructional practices. Probably no ox has been more impressively gored by the new standards than the widely held claim that texts of a particular difficulty level have to be used for teaching if learning is going to happen.

Reading educators going back to the 1930s, including me, have championed the idea of an instructional level. That basically means that students make the greatest learning gains when they are taught from books at their “instructional” level: texts that are neither so hard that students can’t make sense of them nor so easy that there is nothing left to learn.
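To make that concrete: the instructional level has traditionally been operationalized with word-accuracy and comprehension cutoffs on a first, unpracticed read. The Python sketch below assumes the commonly cited Betts-style cutoffs; the exact percentages vary from author to author, and the function itself is illustrative only, not any particular scheme's placement rule:

    def placement_level(word_accuracy, comprehension):
        # Betts-style criteria (commonly cited cutoffs; authors differ):
        #   independent:   >= 99% word accuracy and >= 90% comprehension
        #   instructional: >= 95% word accuracy and >= 75% comprehension
        #   frustration:   anything below (gray zones between bands are
        #   glossed over in this sketch)
        if word_accuracy >= 0.99 and comprehension >= 0.90:
            return "independent"
        if word_accuracy >= 0.95 and comprehension >= 0.75:
            return "instructional"
        return "frustration"

    # A first read at 96% accuracy and 80% comprehension would place the
    # book at this student's "instructional" level:
    print(placement_level(0.96, 0.80))  # -> instructional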

These days the biggest proponents of that idea have been Irene Fountas and Gay Su Pinnell, at Ohio State. Their “guided reading” notion has been widely adopted by teachers across the country. The basic premises of guided reading include the idea that children learn to read by reading, that they benefit from some guidance and support from a teacher during this reading, and, most fundamentally, that this reading has to take place in texts that are “just right” in difficulty level. A major concern of the guided-readingistas has been the fear that “children are reading texts that are too difficult for them.”

That’s the basic idea, and the different experts have proposed a plethora of methods for determining student reading levels and text difficulty levels, for matching kids to books, and for guiding or scaffolding student learning. Schemes like Accelerated Reader, Read 180, informal reading inventories, leveled books, high-readability textbooks, and most core or basal reading programs all adhere to these basic ideas, even though they differ in how they go about it.
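To give a concrete sense of what “text difficulty level” means in such schemes: most leveling systems rest, at bottom, on readability arithmetic combining surface features like sentence length and word length. The Python sketch below computes the public-domain Flesch-Kincaid grade level; it is an illustration of the kind of calculation involved, not the actual measure behind Accelerated Reader, Read 180, or the leveled books named above, and its syllable counter is deliberately crude:

    import re

    def count_syllables(word):
        # Crude estimate: each run of consecutive vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        # Flesch-Kincaid grade level:
        # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / sentences)
                + 11.8 * (syllables / len(words)) - 15.59)

    # Example usage:
    print(round(flesch_kincaid_grade("I just got a puppy and my landlord is not too happy."), 1))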

The common core is based upon a somewhat different set of premises. Its framers don’t buy that there is an optimum student-text match that facilitates learning. Nor are they as hopeful that students will learn to read just from reading (with only the slightest assist from a guide); they believe that real learning comes from engagement with very challenging text and a lot of scaffolding. The common core discourages much of today’s out-of-level teaching and the use of particularly high-readability texts. In other words, it champions approaches to teaching that run counter to current practice.

  How could the common core put forth such a radical plan that contradicts so much current practice?

  The next few entries in this blog will consider why common core is taking this provocative approach and why that might be a very good thing for children’s learning.

  Stay tuned.

Comments


Julie Niles Peterson (7/5/2011)

This statement surprises me greatly: "You’d be hard pressed these days to find teachers or principals who don’t know that literal recall questions that require a reader to find or remember what an author wrote are supposed to be harder than inferential questions (the ones that require readers to make judgments and recognize the implications of what the author wrote)."

I recently listened to a great podcast from Daniel T. Willingham about the importance of background knowledge (http://download.publicradio.org/podcast/americanradioworks/podcast/arw_4_30_reading.mp3?_kip_ipx=1017638692-1297712350). In that podcast Willingham shares this example:

"I just got a puppy and my landlord is not too happy."

If what you say is true, then readers should be able to more easily infer why the landlord is not happy than they would be able to answer this question, "What did I just get?" Hmmmm...... I am really hoping this was a typo and not that I am missing out on a lot of reading research. If it was not a typo, could you please let me know what research supports your statement?

Timothy Shanahan (7/5/2011)

Julie-

No, it is not a typo. It is the text that matters, not the question type. You are certainly correct that you can construct experimental texts for which it may be relatively easier or harder to answer one or another question type. In your example, the inferential question was harder because most readers would get the explicitly stated idea, but they might differ in their background knowledge about landlords and puppies.

However, look at this text:
John put on his sunglasses as he explored the ambit of the range.

Which question is easier, one that asks why John put on his sunglasses (an easy inference) or one that asks what he did (it is explicitly stated, but since most people don't know the meaning of ambit, it is the harder question in this case)?

The best evidence that we have of this isn't these kinds of tortured comparisons with artificial texts, but how hundreds of thousands of readers perform across dozens of naturally occurring texts with hundreds of real questions. That research is cited and linked in the blog that you commented on. Pay special attention to the charts showing the differences between literal and inferential performance at lots of different comprehension levels.

thanks.

tim

Julie Niles Peterson (7/5/2011)

Thank you so much for the response, Dr. Shanahan. I look forward to reading the article you shared.

In the meantime, your example has me confused. I would say that the literal question is easier to answer. I could easily answer, "What did John do?" with, "John put on his sunglasses as he explored the ambit of the range." It does not necessarily mean I understand the sentence, but I can answer it correctly.

On the other hand, without some background knowledge of why people wear sunglasses and an understanding of "ambit" and "range," I could not answer your inferential question, "Why did John put on his sunglasses?" Hmmm...

Finally, I would say that the background knowledge of the reader is most important--without it, appropriate inferences cannot be made.

Again, thank you for your time.

Timothy Shanahan (7/5/2011)

But see, Julie, that's where we get into trouble... when we start to say, "I could answer the comprehension question, I just couldn't understand what it meant." That means that the question required dumb pattern matching, without any real understanding of the author's message. That can't be acceptable.

Background knowledge certainly matters, but not just for inferences. How did you know what a puppy was, or what it means to get one, or a landlord, or happiness? Interpreting those words requires prior knowledge, if only of the words themselves. The fact that reading comprehension requires combining the new (what the author has told you) with the known (what you bring to the text) doesn't make one kind of question generally easier or harder than another.

tim

Julie Niles Peterson (7/5/2011)

Perhaps it is differing definitions of literal comprehension or recall questions that are the problem here? I certainly do not believe being able to correctly answer questions without understanding is a good thing. However, I do think literal questions can be correctly answered that way; inferential questions, on the other hand, cannot. Answering them correctly requires an appropriate connection to be made between the text and a reader's prior knowledge, including vocabulary knowledge. This is why I believe answering inferential questions is harder than answering literal questions.

In my opinion, answering a literal question with anything more than what has been explicitly stated suggests that more than literal comprehension was used to answer the question.

P.S. I have been reading the article you shared. I just took a break to see if you had responded and you had. Thank you once again. Your time is much appreciated, Dr. Shanahan!

Timothy Shanahan (7/5/2011)

But it isn't the questions that are making comprehension hard or easy here, it is the difficulty of the text.

If an author uses rare words, presupposes extensive knowledge and experience, employs devices like irony or sarcasm, or organizes the information in a complicated way, then you will have difficulty answering questions about the text, and it doesn't really matter whether the questions tap explicit or implied information.

(Identifying what the Prince did in a chapter of a Russian novel is difficult, but not because the author doesn't explicitly tell what he did. It is difficult because his name could be Prince Muishkin, Prince M., Lef Nicoleivich, my dear friend, the Idiot, and sometimes only he or him, which is complicated when there are many hes and hims to choose from. Drawing conclusions about the Prince's actions is not appreciably harder than identifying the Prince's actions.)

Julie Niles Peterson (7/5/2011)

I read the article and skimmed through the appendix. Here are my thoughts:

1. I reread what you wrote that originally surprised me. This was my original interpretation of your words: "answering literal recall questions is harder than answering inferential questions (and almost everyone knows it)." After rereading your exact words, the words "supposed to be harder" are starting to throw me off more.

2. After reading the article, I could not find anything specific that supports my understanding of what you wrote. What I took away from the article in regard to literal and inferential comprehension was that those who struggle with literal comprehension also struggle with inferential comprehension and that those who are proficient in one are also proficient in the other. This makes sense to me because I have met many struggling readers who struggle to answer both types of questions correctly, but I would love to read more. Do you know any other studies that replicate these findings?

3. I wholeheartedly agree that proficiency in understanding complex text is important.

4. This also makes sense: the "degree of text complexity differentiates student performance better than either the comprehension level [literal or inferential] or the kind of textual element [e.g. determining main idea] tested" (p. 16). This statement supports the importance of students having wide knowledge of the world and a large vocabulary, as well as the other items mentioned in the Lyons quote on p. 8.

5. I thank you SO much for your time and for pushing my thinking.

beckyg (9/21/2012)

I'm afraid you are misrepresenting guided reading.

Your post:

The basic premises of guided reading include the idea that children learn to read by reading, that they benefit from some guidance and support from a teacher during this reading, and, most fundamentally, that this reading has to take place in texts that are “just right” in difficulty level.

My response:

Children DO learn to read by reading. However, during a guided reading group a teacher is TEACHING FOR reading behaviors, not just giving "some" guidance and support. A teacher must select an INSTRUCTIONAL level text. "Just right" describes a book a child can read independently. An instructional level text is one where the student has something to learn about reading. The teacher knows there is work in the text for the child and teaches and prompts for it when necessary. The goal is to teach the student to take on reading behaviors that will allow him to process and comprehend increasingly complex texts independently. Good instruction in reading strategies will allow students to take on those hard texts, which, by the way, are not ignored in guided reading. Teachers who know that their students will be expected to navigate hard texts will teach them how to go about reading and understanding texts that are difficult.


Your post:

A major concern of the guided-readingistas has been the fear that “children are reading texts that are too difficult for them.”

My response:

No, the fear is that children are given texts that are too difficult for them and expected to read and understand them.

Your posts are dangerous and could be construed as throwing out the whole idea of meeting children where they are at in order to move them forward. In addition, you've painted the teaching that occurs during a guided reading lesson as fluff and not the rigorous instruction that it is. No child will understand hard text without instruction and initial support - i.e. guided reading groups.

Also, I think separating guided reading from the rest of the balanced literacy framework is unfair. The framework components work together, and are ideally suited for teaching with the depth and rigor required by the new common core.

Timothy Shanahan (2/21/2012)

But Becky, your claims for guided reading don't actually match up well with the Pinnell and Fountas book on the topic. They are, for instance, very specific about how to get kids into their "instructional level", but there is no information about how or when to move them to more challenging text.

Similarly, it is possible that children are learning routines that are being explicitly taught by the teacher (usually through mini-lessons), but Pinnell and Fountas are quite specific about the need to minimize this by placing students appropriately in text and by thorough preparation for reading (through picture walks and the like).

Finally, the idea of matching students to texts in the way that this approach recommends does not match well with the research. It is not that students do not learn in guided reading (they certainly do), but placing students in relatively more challenging materials and providing greater guidance (not minimizing it, as they recommend) leads to greater amounts of learning.

Timothy Shanahan (2/21/2012)

Anonymous--
You can redefine "instructional level" here and use it differently than it is used in the field of reading (in which it is operationalized in terms of the degree of oral reading fluency and reading comprehension that students can accomplish on a first read), but then we are talking about something very different. Pinnell and Fountas are specific about how well students need to be able to read books for them to be considered at the students' instructional levels. The problem is that those matches were made up; they don't have empirical support.

When you say that it is dangerous to encourage teachers to teach with harder books than in the past, you are ignoring the fact that we have been making books easier and easier for students in school and it has had the opposite effect. So if you want kids to read well, perhaps the dangerous thing is to keep insisting that they be placed in materials that don't give them much opportunity to learn. Our ability to assess students and texts is far from perfect, and when you try to make the very fine distinctions needed to get students into just the right book, you are likely (a good share of the time) to place students in text where there is nothing to learn about reading. If you place them in more challenging materials, you will definitely have to teach more explicitly and the students will have to grapple with texts more than they do now, but the payoff will be more learning.

Karen Carroll (9/29/2012)

During guided reading, teachers will be scaffolding more because of the complexity of the text. My question is: how much time will teachers spend in a guided reading group with students at the second-grade level and beyond?

Timothy Shanahan (9/29/2012)

Karen--

That is a good question, and one that no one has a real answer to (in terms of research). One idea that seems reasonable is to vary the lengths of the texts (shorter hard texts; easier or more moderate longer texts). Short reads will allow you to stay within current amounts of time (20-60 minutes) but still go deep.

The Terminatrix (10/23/2012)

Wouldn't it make more sense to teach children explicitly how to interact with text as dictated by the CCSS using text they can actually read, and then gradually introduce more challenging text to which they can apply these newly learned skills?

Timothy Shanahan (10/23/2012)

That is perfectly reasonable (to introduce an approach to reading with an easier text and then to take on more challenging texts)... as long as you get to the more challenging texts. What we have seen in schools (and in the advice given to teachers) is a heavy emphasis on placing kids in relatively easy materials, but with little or no attention to moving kids up. Common core sets text levels that students have to reach--but it does not indicate what level texts students need to work with (I would recommend a mix of levels, not only in the common core ranges, but below and above them, too).

EdEd (11/27/2012)

Hi Dr. Shanahan,

I realize I'm late to this conversation by over a year! However, I recently stumbled upon your blog and this post, and wanted to respond. Specifically, I have two main concerns with using the ACT report as evidence that we should be teaching children with texts of greater complexity:

1) While there is no difference between question types (inferential vs. literal), there is a very clear relationship between performance on comprehension questions overall and performance on the ACT. There is also a high correlation between performance on easier passages and overall performance. In other words, while the ACT report does provide evidence that students seem to do equally well on inferential vs. literal comprehension questions, there is no evidence that text complexity is any more of a predictor of ACT performance than any of the other variables mentioned. The ability to answer inferential comprehension questions is still as important as the ability to answer any question from a complex passage; neither provides more predictive power. In addition, successfully answering questions from hard passages is no less of a predictor of success than successfully answering questions from easier passages, according to the report.

All of this suggests that ability to answer questions from complex reading passages is no more of a predictor of ACT performance than ability to answer questions from easier passages. As such, there is no evidence that text complexity is any more of an accurate differentiator than any other variable.

2) Even if it were an accurate differentiator, this report provides no evidence related to instructional technique or instructional goal-setting. This report does not provide any evidence to conclude "teaching reading with more complex text is more effective," because there were no experimental or even correlational studies examining how students were taught - simply how they differentially answered questions on an outcome measure (ACT).

Overall, I've found your comments quite interesting regarding the lack of evidence supporting strict teaching at instructional levels. However, I do not find the ACT report to indicate either 1) that complex text is a meaningful differentiator of ACT performance over any other variable measured, or 2) that complex passages should be used rather than passages at a child's instructional level.

Timothy Shanahan (11/27/2012)

EdEd--
Never too late to the party... You are incorrect about this claim. Question types were not predictive of reading comprehension, but passage difficulties were. ACT used 6-7 variables to determine three levels of text complexity and these did separate out comprehension performance. They even included a graphic showing how substantial these differences were.

There are some experimental studies showing either that text difficulty alone makes no difference in student learning (O'Connor, Swanson, & Geraghty, 2010) or that students who are placed in more challenging text--more challenging than the "instructional level"--do better in terms of learning (Morgan, Wilcox & Eldredge, 2000), as well as the correlational work of William Powell. (Of course, we also have case studies showing the possibilities of successfully teaching struggling students with challenging text, such as those reported by Grace Fernald in the 1940s.) In any event, placing kids at the instructional level clearly isn't as helpful as has been claimed. Thanks.

EdEd (11/28/2012)

Thanks for your reply Dr. Shanahan - glad to hear the party is still going :). My post ended up being longer than the limit, so I've separated it into 2 posts.

In response to the discussion of question types being predictive of reading comprehension, I'd clarify that question types were not differentially predictive, but all were still predictive. In other words, the more questions (of any kind) a student answered correctly, the higher the ACT score (see the 1st graph on page 5 of the report). The implication is that proficiency with answering all forms of comprehension questions is important to performance on assessments such as the ACT. In other words, teaching explicit strategies related to comprehension is an important element of instruction.

Likewise, referencing the 1st graph on page 6, both less and more complex texts were equally predictive of ACT performance. In other words, there is a predictable relationship between overall ACT reading performance and both complex and uncomplicated passages. Given a certain score (x) on a less complex passage, you'd be able to predict overall ACT reading performance. On the other hand, the graph indicates that there is less predictive power at lower levels of performance (given an x score below the ACT reading benchmark, you would be unable to predict the overall ACT reading score).

These data suggest that all reading skills measured across domains - type of question, complexity of passage - were important in performance on the ACT reading composite. The better children seem to do in any given skill area, the better the ACT score overall. The exception, as noted before, is that performance with complex texts does not seem to differentiate performance below the ACT reading benchmark cut score, most likely due to a basal effect (a certain level of competence needs to be present before a child starts scoring more highly on complex passages, and it is not present in children scoring at the lower end of the benchmark).

The implication for instruction is that no one particular skill set seems to be favored more highly given the ACT report data. The data indicate that if you are deficient in skills related to answering inferential reasoning questions, for example, you would likely score lower on the ACT. The same would hold true with all skill sets measured.

EdEd (11/28/2012)

Part II

In terms of more general research supporting the use of more complex text, it should be intuitive given learning research generally that a child should always be given the most challenging material possible that is still within the child's instructional level/zone of proximal development (ZPD). This seems to be the fundamental assertion with complex text - if it's possible for a child to engage text (with assistance) that's more complex, it's better to do so, because mastery of more difficult and complex material will result in higher levels of learning. There seem to be two issues confusing the conversation/practice, though: 1) problems with accurately identifying instructional ranges, and 2) using instructional level with oral reading fluency to select text for comprehension-based instruction.

In terms of the first, if we revisit the definition of instructional range, if a child can successfully complete a task (e.g., achieve deep comprehension with a complex text) with appropriate assistance, the task is within the child's instructional range. As such, it isn't correct to say that a child was given a text 2 or 4 grade levels above the child's instructional level (as occurred in the Morgan et al. study, for example) and successfully completed the task. If the task was successfully completed, the task was within the child's instructional level. The problem, then, is not that text given "on grade level" was too easy, but that the instructional level was incorrectly assessed. The true instructional level was, in fact, 2 or 4 grades above (based on the highest level of performance).

In reality, I believe the mistake was confusing ORF instructional level with comprehension instructional level, which brings us to my second point above. I believe what folks are saying when they advocate complex text is, "Do not select text for comprehension instruction based on a child's instructional level with ORF." The reason is that some children may be able to comprehend text several levels above their ability to fluently read connected text. As such, from my perspective, the correct advice would not be, "select text that is above a child's instructional level," but "select text at the upper end of a child's instructional level in comprehension, not fluency, and make sure you are accurately defining and assessing 'instructional range' to the best of your ability."

Timothy Shanahan (11/28/2012)

EdEd--
I see your point. Indeed, the questions (as opposed to the question types) are predictive. It wouldn't matter if they only asked high level inferential questions or literal questions, etc., they would still be able to predict performance. That is correct.

Let me take the point a step further: it suggests that questions or tasks are important or necessary in assessment, and I think that is also true with regard to teaching. Not just having kids read, but having them use the information (to answer questions, discuss, write, report, etc.) is important in developing students' ability to understand what they read. Not a new point; I think Thorndike made it in 1917, but it is important.

Finally, one more step: although different types of questions do not access different or separable skills, that doesn't mean it isn't a good idea for test makers and teachers to ask a variety of question types (not so much for the purpose of asking questions that exercise different aspects of the reading brain, but more to ensure that you have plumbed the depths of a particular text).

Thus, it is very reasonable to ask a wide range of questions about what students are reading, but it is not sensible to look for patterns of performance in how they answer or fail to answer those questions (beyond the general and obvious: if the student can't answer the questions he or she failed to understand this text).

Timothy Shanahan (11/28/2012)

The research is not showing that the concept of ZPD is wrong, but it is--as you point out--showing that the ways that reading experts have measured this concept have been off base. Teaching students with more challenging text will require different and greater amounts of teacher support, guidance, scaffolding, explanation, and student rereading, but with such instructional support, there is no reason that students will not learn.

I guess it just shows that if you take a deep, complex, and subtle construct and then make up a measure for it that is mechanistic and non-empirical (they could have found out very early that it wasn't working), you are going to make some pretty big mistakes. Unfortunately, for many educators the measures ultimately replaced the construct, so instead of seeing an instructional level as a span of levels requiring a variety of teaching choices, they see instructional level as a very real and specific thing (and to them common core is a very scary and wrongheaded proposition).

EdEd (11/28/2012)

First, thanks so much for being willing to take the time to discuss. I believe it shows your commitment to research-to-practice and helping facilitate understanding of what can be some difficult material.

In response to both sets of your comments, I think we're on the same page. In particular, I very much agree with your comments about the measure replacing the construct, which is a very relevant comment even beyond this discussion. Regardless of the discipline, it seems that folks often make that mistake, from IQ tests to state end-of-year tests. I'm not making any comments about the reliability or validity of those measures specifically, just that folks often forget that the concept is not always completely encapsulated by the measure.

Not sure if School Psychology Review is on your radar, but you may find the most recent volume of interest as there is substantial discussion of the very concept of validity - construct validity in particular - and the connection between theory and assessment. It will be interesting to see if that disconnect we sometimes see between construct and assessment could be at least partially mended with new ways of considering validity.

Anonymous (2/14/2015)

What are your feelings about the Accelerated Reading Program? There seems to be some controversy on whether this program supports the Common Core.

Timothy Shanahan (2/14/2015)

Personally, I don't see AR as a reading program, per se; more of an extracurricular activity. It definitely isn't aimed at Common Core: the way it places students in text won't prepare them to read texts as challenging as required; they don't teach students how to write about text effectively; they lack adequate foundational skills coverage; their questioning routines are not consistent with the specifics of the standards or the fundamental ideas of close reading.

C Hancock (5/28/2015)

What do you suggest a teacher do when the principal puts a major emphasis on the Accelerated Reading Program, Guided Reading in groups (levels set by the STAR Reading Test), and requires staff to use leveled books from a very outdated book room?
Crissy Hancock

Timothy Shanahan (5/28/2015)

Crissy--

I'd start a conversation. If I were in a Common Core state or Indiana, my question for the principal would be, "How am I supposed to get kids to the required levels if all my teaching is with easier books than that?"
