
Sunday, August 30, 2015

More on the Instructional Level and Challenging Text

Teacher question:
I’ve read your posts on the instructional level and complex texts and I don’t think you understand guided reading. The point of guided reading placements is to teach students with challenging text. That’s why it is so important to avoid texts that students can read at their independent level; to make sure they are challenged. The Common Core requires teaching students with challenging texts—not frustration level texts.

Shanahan response: 
I’m having déjà vu all over again. I feel like I’ve covered this ground before, but perhaps not quite in the way that this question poses the issue.

Yes, indeed, the idea behind teaching students at their instructional level is that some texts are too easy, and others too hard, to facilitate learning. The belief has been that placing students between these extremes produces the most learning. In texts that students find easy (your independent level), there would be little to learn, since they could likely recognize all or most of the words and understand the text fully without any teacher help. Similarly, texts that pose too much challenge might overwhelm or frustrate students so that they could not learn. Thus, instructional level materials would be challenging (there would be something to learn), but not so challenging as to be discouraging.

Or, at least that’s the theory.

So, I do get that the way you seem to be placing kids in books is meant to be challenging. But please don’t confuse this level of challenge with what your state standards are requiring. Those standards are asking that you teach students to read texts of specified levels of difficulty—levels of difficulty that for most kids will exceed what you think of as challenging.

This means that everyone wants kids to be challenged. The argument is about how much challenge. You may think that a student will do best if the text used for teaching is only so challenging that he or she would make no more than 5 errors per 100 words of reading, while your state may think the appropriate challenge level is grade-level texts that represent a progression allowing students to graduate from high school with a particular level of achievement. That means in many circumstances the state would say kids need to read book X, and you'd say, "no way, my kids make too many errors with book X for me to teach it successfully."
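
To see the arithmetic of the dispute in one place, here is a minimal sketch (in Python) of the traditional accuracy-based placement logic. The cutoffs are illustrative only; published schemes vary, and yours may differ:

```python
def reading_level(errors: int, words_read: int) -> str:
    """Classify a passage by oral-reading accuracy.
    Illustrative cutoffs: >= 98% independent, 95-97% instructional,
    below 95% frustration. Published schemes vary."""
    accuracy = 1 - errors / words_read
    if accuracy >= 0.98:
        return "independent"
    if accuracy >= 0.95:  # "no more than 5 errors per 100 words"
        return "instructional"
    return "frustration"

print(reading_level(errors=5, words_read=100))   # instructional (95% accuracy)
print(reading_level(errors=20, words_read=100))  # frustration (80% accuracy)
```

The disagreement is not over this arithmetic but over the cutoffs themselves: the standards, in effect, demand that much teaching happen in the bottom branch.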

The Lexile levels usually associated with particular grade levels are not the ones that the standards have assigned to the grades. The Lexile grade-designations of the past were an estimate of the level of text that the average student could read with 75-89% comprehension. Those levels weren't claiming that all kids in a particular grade could read such texts successfully, only that the average ones could. Thus, you'd test individual kids and place them in books with higher or lower Lexiles to try to get them to that magical instructional level.

The new standards, however, have assigned higher Lexile bands to each grade level. That means that even average kids will not be able to read those texts at an instructional level; some kids might, but not the majority. Teachers would therefore need to teach students to read books more challenging than the ones that have typically been at their instructional levels. In other words, plenty of kids will need to be taught at their frustration level to meet the standards.

I do get the idea that the instructional level is meant to be challenging. But for the majority of kids, teaching at their instructional level will not meet the standards. That degree of challenge undershoots the level of challenge established by your state (and the level at which your students will be tested). Perhaps you can take solace in the fact that research has not been able to validate the idea that there is an instructional level; that is, kids can be taught to read successfully with texts more challenging than the ones you've apparently used in the past.

Wednesday, May 13, 2015

How Much Text Complexity Can Teachers Scaffold?

How much of a "gap" can be compensated for through differentiation? If my readers are at a 400 Lexile level, is there an effective way to use an 820-level chapter book?

            This is a great question. (Have you ever noticed that usually means the responder thinks he has an answer?)

            For years, teachers were told that students had to be taught with books that matched their ability, or learning would be reduced. As a teacher I bought into those notions. I tested every one of my students with informal reading inventories, one-on-one, and then tried to orchestrate multiple groups with multiple book levels. This was prior to the availability of lots of short paperback books that had been computer scored for F & P levels or Lexiles, so I worked with various basal readers to make this work.

            However, a careful look at the research shows me that almost no studies have found any benefits from such matching. In fact, if one sets aside those studies that focused on children who were reading no higher than a Grade 1 level, then the only results supporting specific student-text matches are those arguing for placing students at what we would have traditionally called their frustration level.

            Given this research, and that so many state standards now require teachers to enable students to read more challenging texts in grades 2-12, teachers are going to need to learn to guide student reading with higher-level texts than in the past.

            Theoretically, there is no limit to how much of a gap can be scaffolded. Many studies have shown that teachers can facilitate student success with texts that students can read with only 80% accuracy and 50% comprehension, and I have no doubt that, with even more scaffolding, students could bridge even bigger gaps.

            I vividly remember reading a case study by Grace Fernald when I was in graduate school. She wrote about teaching a 13-year-old, a total non-reader, to read with an encyclopedia volume. That sounds crazy, but with a motivated student, a highly skilled teacher, and a lot of one-on-one instructional time without too many interruptions… it can work.

            But what is theoretically sound or possible under particularly supportive circumstances does not necessarily work in most classrooms.

            I have no doubt teachers can scaffold a couple of grade levels without too much difficulty. That is, the fifth-grade teacher working with a fifth-grade book can successfully bring along a student who reads at a third-grade level in most classroom situations. But as the distance between student and book grows beyond that, I have to know a lot more about the teacher's ability and resources to estimate whether it will work this time.

           Nevertheless, by preteaching vocabulary, providing fluency practice, offering guidance in making sense of sentences and cohesion, requiring rereading, and so on, I have no doubt that teachers can successfully scaffold a student across a 300-400 Lexile gap, with solid learning.

            But specifically, you ask about scaffolding a 400-Lexile reader up to an 820-Lexile text. If you had asked about 500 to 920, I wouldn't hesitate: yes, a teacher could successfully scaffold that gap. I'm more hesitant with 400 as the starting point, because 400 is a first-grade reading level. This would be a student who is still mastering basic decoding skills.

            I do not believe that shifting to more challenging text under those circumstances is such a good idea.

            To address this student’s needs, I would ramp up my phonics instruction, including dictation (I want my students to encode the alphabetic system as well as decode it). I might increase the amount of reading he or she is expected to do with texts that highlight rather than obscure how the spelling system works (e.g., decodable text, linguistic text). I would increase work on high frequency words, and I would increase the amount of oral reading fluency work, too. I’d do all of these things.

            But I would not shift him or her to a harder book, because of what needs to be mastered at beginning reading levels. We'll eventually need to do that, but not until the foundations of decoding are more firmly in place.

           An important thing to remember: no state's standards raise the text demands for students in kindergarten or grade 1. They hold off so that students have the opportunity to firmly master their basic decoding skills. It isn't the distance between 400 and 820 that concerns me; that kind of distance can be bridged. But a 400 Lexile represents a limited degree of decoding proficiency, and I wouldn't want to shift attention away from achieving proficiency with those basic words.
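
            If it helps to see that reasoning condensed, here is a hypothetical sketch; the function name and both thresholds are my illustrative inventions, not research-derived values:

```python
def scaffolding_advisable(student_lexile: int, text_lexile: int) -> bool:
    """Condenses the rule of thumb above: a gap of roughly 300-400L
    is bridgeable with heavy scaffolding (capped loosely at 500L here),
    but not while the reader is still below a beginning-decoding
    threshold (around the first-grade/400-500L range)."""
    MAX_GAP = 500         # illustrative ceiling on a bridgeable gap
    DECODING_FLOOR = 500  # illustrative "decoding firmly in place" level
    gap = text_lexile - student_lexile
    return gap <= MAX_GAP and student_lexile >= DECODING_FLOOR

print(scaffolding_advisable(500, 920))  # True: big gap, but decoding in place
print(scaffolding_advisable(400, 820))  # False: same gap, decoding not yet mastered
```

            The asymmetry between those two calls is the whole point: the gap is identical, but the starting point is not.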

Thursday, December 11, 2014

Second Language Powerpoints

Today I had a marvelous time presenting to Arizona teachers at the OELAS conference. I made one presentation on scaffolding complex texts for English language learners and one on teaching close reading with informational text. I think I have posted the latter before, but since I always change these a bit, here is the most recent version. The text complexity presentation overlaps with past presentations on teaching with challenging text, but this version includes lots of examples of scaffolding for Spanish-speaking students. Hope these are useful to you: Powerpoints

Sunday, May 18, 2014

IRA 2014 Presentations

I made four presentations at the meetings of the International Reading Association in New Orleans this year. One of these was the annual research review address, in which I explained the serious problems inherent in the "instructional level" in reading and in associated approaches like "guided reading," which have certainly outlived their usefulness.

IRA Talks 2014

Tuesday, April 29, 2014

Re-thinking Reading Interventions

Ever wonder why we teach kids with a one-size-fits-all anthology in the regular classroom, but are so careful to teach them at their “reading levels” when they are in a pull-out intervention program?

Me too.

In reading, students need the greatest amount of scaffolding and support when they are reading hard texts, and they need less support when reading easy materials.

But we do the opposite. We have kids reading the hardest materials when there is the least support available. And then, when we go to the expense of providing lots of support, we simultaneously place the kids in easier texts.

I’ve written before that research has not been supportive of the idea that we need to teach students at their “reading levels” (except for beginning readers). And there are studies that show students can learn from harder texts, at least when they receive adequate instructional support.

What if we turned the world on its head? What if we worked with harder texts when students were in small heterogeneous groups with a special teacher, and eased off on the text demands in whole-class situations? What if struggling students got more opportunities to read and reread grade-level materials, such as taking on such texts in the interventions and then reading them again in the classroom? I suspect kids would make more growth, and would be more motivated to make it, than with the upside-down approach we are now using.

Friday, December 27, 2013

How Publishers Can Screw Up the Common Core

Lexiles and other readability measures are criticized these days about as much as Congress. But unlike Congress, they don't deserve it.

Everyone knows The Grapes of Wrath is harder to read than its readability score predicts. But for every book with a hinky readability score, many others are placed just right.

These formulas certainly are not perfect, but they are easy to use and they make more accurate guesses than we can without them.

So what’s the problem?

Readability measures do a great job of predicting reading comprehension, but they provide lousy writing guidance.

Let's say that you have a text that comes out harder than you'd hoped. You wanted it for fourth grade, but the Lexiles say it's better for grade 5.

Easy to fix, right? Just divide a few sentences in two to reduce average sentence length, swap out a few of the harder words for easier synonyms, and voila, the Lexiles will be just what you'd hoped for.

But research shows this kind of mechanical “adjusting” doesn’t actually change the difficulty of the text (though it does mess up the accuracy of the readability rating). This kind of “fix” won’t make the text easier for your fourth-graders, but the grade that you put on the book will be just right. Would you rather feel good or look good?
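
You can watch this gaming happen with any open formula. Lexile's model is proprietary, so as a stand-in here is the Flesch-Kincaid grade level (0.39 * words-per-sentence + 11.8 * syllables-per-word - 15.59), applied to a sentence before and after a mechanical chop. The passage and the crude syllable counter are mine, for illustration only:

```python
import re

def syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level of a text."""
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * len(words) / len(sents) + 11.8 * syls / len(words) - 15.59

original = ("The committee postponed its decision because the evidence, "
            "which arrived late, contradicted the earlier report.")
# Same content, mechanically chopped into short sentences.
adjusted = ("The committee postponed its decision. The evidence arrived late. "
            "It contradicted the earlier report.")

print(f"original: grade {fk_grade(original):.1f}")  # noticeably higher
print(f"adjusted: grade {fk_grade(adjusted):.1f}")  # drops several grades
```

The score plummets, but notice what the chopping did: the reader must now infer the "because" and "which" relationships that the original stated outright. The text got easier to score and harder to integrate.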

With all of the new emphasis on readability levels in Common Core, I fear that test and textbook publishers are going to make sure that their measurements are terrific, even if their texts are not.

When a text turns out to be harder or easier than intended, the material should be assigned to another grade level or it should really be revised. Real revisions do more than make mechanical adjustments. Such rewrites engage the author in trying to improve the text's clarity.

Such fixes aren’t likely to happen much with reading textbooks, because they tend to be anthologies of texts already published elsewhere. E.B. White and Roald Dahl won’t be approving revisions of their stuff anytime soon, nor will many of the living and breathing authors whose books are anthologized.

But instructional materials and assessment passages that are written—not selected—specifically to teach or test literacy skills are another thing altogether. Don’t be surprised if many of those kinds of materials turn out to be harder or easier than you thought they’d be.

There is no sure way to protect against fitting texts to readability formulas. Sometimes mechanical revisions are pretty choppy, and you might catch that. But generally you can’t tell if a text has been manipulated to come out right. The publishers themselves may not know, since such texts are often written to spec by independent contractors.

Readability formulas are a valuable tool in text selection, but they only index text difficulty, they don't actually measure it (that is, they do not reveal why a text may be hard to understand). Qualitative review of texts and continuous monitoring of how well students do with texts in the classroom are important tools for keeping the publishing companies honest on this one. Buyer beware.

Monday, November 4, 2013

Who's Right on Text Complexity?

It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits of teaching children using texts where their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective for student learning.

What a great question. In my blog post, I cited particular studies, and Dick Allington's article focused on a completely different set of studies. This is what teachers find so confusing.

The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.

Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.

The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning. 

The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).
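
A ten-line simulation makes the danger concrete. Suppose, hypothetically, that some unmeasured trait drives both book placement and gains, while placement itself causes nothing. The observational correlation looks damning anyway, and randomizing placement makes it vanish:

```python
import random
random.seed(1)
N = 10_000

ability = [random.gauss(0, 1) for _ in range(N)]
# Weaker readers land in relatively harder books; placement causes NOTHING here.
mismatch = [-a + random.gauss(0, 1) for a in ability]
gains = [a + random.gauss(0, 1) for a in ability]  # gains depend only on ability

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5

print(round(corr(mismatch, gains), 2))    # about -0.5: "harder placements hurt!"
randomized = [random.gauss(0, 1) for _ in range(N)]
print(round(corr(randomized, gains), 2))  # about 0.0 under random assignment
```

A correlational finding is consistent with causation, but it is equally consistent with a story like this one. Only random assignment separates the two.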

I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning). Dick, meanwhile, looked at studies that revealed a relationship between these variables, omitting all mention of these contradictory direct tests and of any correlational evidence that didn't support his claims.

There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.

One possibility is that there were other differences that weren't measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn't the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separate from other variables, such as the experimental studies that I cited.

A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts the way you say your school is now doing it. The kids who made the biggest gains were in even easier materials than that, materials that should have afforded little opportunity to learn (which makes my point: there is no magic level at which kids have to be placed in text to allow them to learn).

Another important point to remember: Allington's article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point, too, suggesting that grouping students for instruction in this way damages children more than it helps them).

Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.