Showing posts with label challenging text.

Monday, May 18, 2015

An Argument About Matching Texts to Students

A reader wrote:
My main response is toward your general notion of the research surrounding teaching kids "at their level."

First, I think the way you're describing instructional/skill levels obfuscates the issue a bit. Instructional level, by definition, means the level at which a child can benefit from instruction, including with scaffolding. Frustrational, by definition, means the instruction won't work. Those levels, like the terms "reinforcement & punishment" for example, are defined by their outcomes, not intentions. If a child learned from the instruction, the instruction was on the child's "instructional" level.

Where we may be getting confused is that I think you actually are referring to teaching reading comprehension using material that is in a child's instructional level with comprehension, but on a child's frustrational level with reading fluency. This is a much different statement than what I think most teachers are getting from your messages about text complexity, to the point that I think they're making mistakes in terms of text selection.

More generally, I'd argue that there is copious research supporting using "instructional material" to teach various reading skills. Take, for example, all of the research supporting repeated readings. That intervention, by definition, uses material that is on a child's "instructional" level with reading fluency, and there is great support that it works. So, the idea that somehow "teaching a child using material on his/her instructional level is not research supported" just doesn't make sense to me.

In terms of this specific post about how much one can scaffold, I think it largely depends on the child and specific content, as Lexiles and reading levels don't fully define a material's "instructional level" when it comes to comprehension. I know many 3rd graders, for example, that could be scaffolded with material written on an 8th grade level, but the content isn't very complex, so scaffolding is much easier.

The broad point here, Dr. Shanahan, is that we're over-simplifying, therefore confusing, the issue by trying to argue that kids should be taught with reading material on their frustrational level, or on grade level despite actual skill level. People are actually hearing you say that we should NOT attempt to match a child with a text - that skill level or lexile is completely irrelevant - when I believe you know you're saying that "instructional level" is just a bit more nuanced than providing all elements of reading instruction only on a child's oral reading fluency instructional range.

My reply:

First, you are using the terms “instructional level” and “frustration level” in idiosyncratic ways. These terms are not used in the field of reading education as you claim, nor have they ever been. These levels are used as predictions, not as post-instruction evaluations. If they were used in the manner you suggest, then there would be little or no reason for informal reading inventories and running records. One would simply start teaching everyone with grade-level materials and, if a student made no progress, lower the text difficulty over time.

Of course, that is not what is done at all. Students are tested, instructional levels are determined, instructional groups are formed, and books are assigned based on this information.

The claim has been that if you match students to text appropriately (the instructional level) that you will maximize the amount of student learning. This definition of instructional level does allow for scaffolding—in fact, that’s why students are discouraged from trying to read instructional level materials on their own, since there would be no scaffold available.

Fountas and Pinnell, for example, are quite explicit that even with sound book matching it is going to be important to preteach vocabulary, discuss prior knowledge, and engage children in picture walks so that they will be able to read the texts with little difficulty. And programs like Accelerated Reader limit what books students are allowed to read.

You are also claiming that students have different instructional levels for fluency and comprehension. Informal reading inventories and running records measure both fluency AND reading comprehension, and they measure them separately. But no textbook or commercial IRI suggests that teachers should use different levels of text to teach these different skills or contents. How accurately students read the words and how well they answer questions are combined to make a single instructional text placement, not multiple text placements.

If we accept your claim that any text that leads to learning is at the “instructional level,” then pretty much any match will do. Students, no matter how they are taught, tend to make some learning gains in reading as annual Title I evaluations have shown again and again. These kids might have only gained .8 years in reading this year (the average is 1.0), but they were learning and by your lights that means we must have placed them appropriately.

Repeated reading has been found to raise reading achievement, as measured by standardized reading comprehension tests, but as Steve Stahl and Melanie Kuhn have shown, such fluency instruction works best—that is, leads to greater learning gains—when students work with books identified as being at their frustration levels rather than at their so-called instructional levels. That’s why in their large-scale interventions they teach students with grade level texts rather than trying to match students to texts based on an invalid construct (the instructional level).

You write: “People are actually hearing you say that we should NOT attempt to match a child with a text -- that skill level or Lexile is completely irrelevant - when I believe you know you're saying that "instructional level" is just a bit more nuanced than providing all elements of reading instruction only on a child's oral reading fluency instructional range.”

In fact, I am saying that beyond beginning reading, teachers should NOT attempt to match students with text. I am also saying that students should be reading multiple texts and that these should range from easy (for the child) to quite difficult. I am saying that the more difficult a text is, the more scaffolding and support the teacher needs to provide—and that such scaffolding should not include reading the text to the student or telling the student what the text says.


I am NOT saying that skill level or Lexile are irrelevant, or that “instructional level” is simply a bit more nuanced than people think. It is useful to test students and to know how hard the texts are for a given student; that will allow you to be ready to provide sufficient amounts of scaffolding (and to know when you can demand greater effort and when more effort simply will not pay off).

Thursday, December 11, 2014

Second Language Powerpoints

Today I had a marvelous time presenting to Arizona teachers at the OELAS conference. I made a presentation on scaffolding complex texts for English language learners and one on teaching close reading with informational text. I think I have posted the latter before, but since I always change these a bit, here is the most recent version. The text complexity presentation overlaps with past presentations on teaching with challenging text, but this version includes lots of examples of scaffolding for Spanish-speaking students. Hope these are useful to you: Powerpoints

Sunday, October 5, 2014

Final Notes on Washington Post Article on Complex Text Requirements

Last week I replied to some of the remarks about text complexity that were made in Valerie Strauss’s Washington Post column. Here are a couple more.


Fountas and Pinnell are stating their take on what the Common Core standards say. What the standards say and what their supporters are advocating are not necessarily the same thing. I think this statement is fully in agreement with what I have said above.
 
"But standards do not usually prescribe that students must spend all their time reading texts that are extremely hard for them, with no access to books that will help them learn." 
 
As I said in the article, it was Core supporters Petrilli and Shanahan who have made the argument for frustration level text, not necessarily the Common Core standards. The way the standards are being implemented, and the fact that they do ramp up text level expectations with no research to back up that requirement, is problematic. 

This writer makes claims that simply are not true.

He/she claims that Mike Petrilli and I have promoted something that is not in the Common Core. That is not the case. Let me explain where the idea that students will need to be taught with more challenging text comes from. First, CCSS, unlike the standards they replaced, specify the levels of text that children need to be able to read to meet the standards. In the past, standards emphasized reading skills but neglected the complexity of the language that students needed to negotiate. Teachers could teach the grade-level skills, but place kids in out-of-grade-level texts without any concern.

Additionally, CCSS has set the levels for each grade in a way that ensures that the average child will NOT be able to read the texts with 95% accuracy and 75% comprehension. The writer is correct that the standards don’t explicitly say that, but it is easy to check. For example, MetaMetrics has long set Lexile levels for the grade levels in a way aimed at identifying the texts that students could read with 75-90% comprehension. CCSS raises the Lexile levels for each grade, which means that the average student would not be able to read the texts with that level of comprehension, because the books would be relatively harder.

The other big error in this letter is the claim that there is “no research” supporting the ramping up of text level expectations. Actually, that is not the case. There is a growing body of research showing that our students are not graduating from high school able to read at the levels that college and career demand, and that students can be taught effectively with more challenging text. In fact, in some of the studies, working in harder texts has led to markedly higher achievement.



Russ Walsh calls for teachers to "balance our instruction between independent level, on-level, and frustration level texts." That is, reading experts are (and always have been) recommending that students encounter "frustration level" texts, whether one approves or disapproves of Common Core.

I think Shanahan is incorrectly characterizing guided reading instruction in the piece you cited above.
 
Fair point. I think he sets up a straw man (either students read easier texts without instruction or more difficult texts with instruction) and proceeds to knock it down, so I would have to agree with your criticism.


These 3 sets of comments are incorrect as well. I would suggest that these writers go and read Fountas and Pinnell or Allington or Johns or any number of reading experts who have written about instructional-level teaching and guided reading. None of these sources recommend teaching students with both instructional and frustration level materials.

I have repeatedly suggested over the past few years that more reading strength would be developed by having students read texts at multiple levels, and I have even designed instructional programs that do this. That approach comes from my analysis of the research on this issue, not from past practices recommended by Russ Walsh or any of these other authorities. (In fact, another respondent quoted Fountas and Pinnell rejecting the idea of teaching kids with grade-level materials, despite the research studies showing students making bigger gains doing that instead of guided reading.)

________________________________________________________________________

The original posting and the responses revealed some unfortunate confusion over a couple of terms of reading jargon: balanced literacy and guided reading. Lots of the exchanges looked like folks talking past each other because they didn't know what these terms referred to. Carol Burris seemed to think that "balanced literacy" refers to balancing frustration and instructional level text (it doesn't), and it is important to recognize that there are at least two definitions of "guided reading." When I (and others) refer to "guided reading" colloquially, we confuse teachers as to what problem Common Core is addressing. In an upcoming posting (or two), I will define these terms and try to explain their significance, since this kind of confusion can only undermine efforts to better meet kids' educational needs.

Finally, National Public Radio will soon address the complex text issue. Here's hoping that they sow less confusion and misinformation than the Washington Post article.


Saturday, May 3, 2014

Some Updates

This has been a busy time. But here are some links, suggestions, and updates:

Pat Wingert has an article on Common Core in The Atlantic this month that I figure in:
Atlantic Magazine: When English Proficiency Isn't Enough

Here are my recent powerpoints as promised: Recent Powerpoints


Monday, November 4, 2013

Who's Right on Text Complexity?

It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits of teaching children using text where their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet, your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective for student learning.
  
What a great question. In my blog post, I cited particular studies and Dick Allington’s focused on a completely different set of studies. This is what teachers find so confusing. 

The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.

Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.

The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning. 

The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).

I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning). Dick, by contrast, looked at studies that revealed a relationship between these variables, omitting any mention of these contradictory direct tests or of the correlational evidence that didn’t support his claims.

There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.

One possibility is that there were other differences that weren’t measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn’t the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separate from other variables, such as the experimental studies that I cited.

A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. The kids who made the biggest gains were in even easier materials than that, materials that should have afforded little opportunity to learn (which makes my point: there is no magic level at which kids have to be placed in text to allow them to learn).

Another important point to remember: Allington’s article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point too, suggesting that grouping students for instruction in this way damages children more than it helps them).

Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.



Thursday, August 8, 2013

Powerpoints from Summer Speeches on CCSS

This has been a very busy summer with lots of projects, research analysis, article writing, and, of course, many presentations around the country. These talks have focused on the shifts or changes required by Common Core, the foundational skills preserved by CCSS in the primary grades, disciplinary literacy, challenging text, and close reading. The Powerpoints from those presentations are all now available at this site.

https://sites.google.com/site/summer2013ccss/home/summer-2013-presentations

Tuesday, February 5, 2013

A Question on Text Complexity


I am looking for some clarification on the guided reading discussion. It would seem to many that you are saying that students do not need to work at their “instructional” level while learning reading skills and strategies. What I think you are saying is that once they are beyond decoding text up to a second grade reading level it is no longer necessary to do this. Sounds like once they get here they are reading and they can then move on to being taught comprehension using more complex text with teacher guidance. Is this what you mean? Or are you in fact saying that leveled guided reading of all sorts is not effective? Should students in K and 1 not worry about comprehension at their instructional level and just work with guidance and support from a teacher through higher, more complex text? If so, at what point do we begin to understand what they can actually do on their own?

 Good questions. I hope I can clarify. Let’s try this:

Text difficulty matters a lot with beginning readers. They have to figure out the decoding system and much of this knowledge comes from abstracting patterns from the words they read. By keeping text relatively easy early on, we make it easier for them to figure out the code. Thus, guided reading and other schemes for nurturing beginning readers and bringing them along step-by-step through increasingly difficult text levels make a lot of sense. Definitely use them with beginners.

Once kids reach about a beginning second-grade decoding level, we don’t need to be as scrupulous about text difficulty. Students can learn from a pretty wide range of difficulty levels, and text difficulty is not a reliable predictor of student learning.

If the issue is teaching reading, then matching text complexity with student reading levels is NOT the issue. That’s where guided reading and similar schemes go wrong.

Placing students in more challenging books is a good idea because it increases opportunity to learn (there is more to figure out in challenging texts). This is important since our kids do not read effectively at high enough levels.  

But just placing students in more challenging text makes the same error that guided reading did; it just replaces an over-reliance on one kind of text-student match with another. Increases in text difficulty levels need to be coordinated with increases in the amounts and quality of scaffolding, support, encouragement, and explanation provided by the teacher. If a text is relatively easy for students, as with a traditional guided reading match, then they won’t require much instructional support with that text (though there won’t be much to learn from such texts either). But if the text is relatively difficult for students, teachers will need to be a lot more energetic in their teaching responses.

There is more to be learned from challenging texts, but this means that there needs to be a lot more teaching with such texts. Instead of asking what book level to teach someone at, teachers should ask, “If I place a student in a book this challenging, how much support will I need to provide to enable him/her to learn from this text?"  

Friday, February 1, 2013

How Can I Teach with Books that are Two Years above Student Reading Levels?


I teach 4th grade general education. I have read several of your articles the last few days because I have a growing frustration regarding guided reading. I believe a lot of your ideas about what does not work are correct, but I don't understand what you believe we SHOULD be doing. I am confused about how to give students difficult text books to read without reading it to them. I thought I was doing what I was supposed to be doing. I do not know how to scaffold science or social studies text for students that are 2 years behind without reading it to them. I also feel pressure in these subjects to read it to them because I thought it was more important for them to understand the information thoroughly by reading the text aloud, having thoughtful discussions, and follow up activities. Every time I think I know what I should be doing, I read another article and realize that I am doing that wrong too. So, please give me guidance on how to best teach nonfiction and fiction text to my class whole group. What strategies and types of activities are the best?

I feel your pain. What would it look like to scaffold a fourth-grade lesson from a social studies book with children reading at what formerly we would have referred to as a second-grade level? I think there are a number of possibilities.

First, I would “level” (pun intended) with the kids. That is, I would not try to hide from them that I was going to ask them to read a book that we would in the past have said was too hard for them. The point here is motivation. People like a challenge and kids are people. When you ask them to take on something really hard, let them in on the secret so they know to be proud of themselves when they meet the challenge.

Second, I would have to be a bit experimental here, trying some choices that might turn out not to work, or, more likely, that turn out to be less efficient than other choices. My first attempt would be to read the chapter we were going to work with, trying to identify anything that might trip the kids up: specific ideas that I thought were especially complicated or subtle or abstract, key background information that they might not know yet, essential vocabulary, sentences that might confuse, cohesive links among the words that could be hard to track, organizational structure that might require highlighting, and so on. Basically, what makes this text hard to comprehend? With that information, I would then make a decision: is the difficulty something to be prevented or monitored?

Sometimes, I will think that a problem is so big that I must get out in front of it. If there is something that you are certain the kids can’t figure out, something that might discourage them or that wouldn’t be worth the time, then by all means intervene early. If I think the key to understanding this page is a particular vocabulary word, I very well might explain that word before having the kids attempt the page. But often, I would rather have the students give it a try; there is nothing wrong with trying something and failing the first time. I can monitor their success with questions aimed at revealing whether they got that point or not, and I can follow up with assistance. So, if the students don't connect a particular concept and process appropriately because of a confusing cohesive link (like not recognizing that “it” referred to the planetary ring and not gravity), I will get the kids involved in trying to connect the various references throughout the text.

Third, the scaffolding described above will likely require some rereading, either of the whole chapter or of the passages that caused trouble (fourth-grade science and social studies chapters are surprisingly short, so rereading the entire chapter is usually not that big a deal). Thus, they try to read it; I question them and help them work through the problems; and then they reread it (perhaps more than once) to see if they can figure it out the second or third time.

Fourth, let’s say I have tried that and the process has been really slow and labored or the kids are being tripped up, not by the ideas, but by their struggle to recognize and read the words. If this is the case, before I even get to the reading and scaffolding and rereading described above, I would have the students do fluency work. For example, I would have the students mumble read the text (or a part of the text) at their desks. Or, I would partner them up and have them engage in paired reading, taking turns reading one page aloud to their partner, and then listening and helping as the partner tries the next page. That kind of oral reading practice with repetition can be a big help in raising the students’ ability to work with that text. Once they have read it like that once or twice, you’d be surprised at how much better they can read it for comprehension. Thus, they would then be ready for step two above. As I said, you have to be experimental—trying out different combinations and orders of fluency work, reading, scaffolding, and rereading.

This can be painstaking. But, in the end, the students will have read material that formerly you would have protected them from. They will have the science or social studies knowledge, and it will have come about because of their own interactions with the text, rather than because you read it to them or told them what it said. By engaging in such efforts (and this is a real effort—it involves a lot of teacher planning, modeling, explaining, etc.), the students become better able to handle harder text than they could at the start. Over time they build the strength to handle more challenging language with less teacher guidance.