
Wednesday, May 13, 2015

How Much Text Complexity Can Teachers Scaffold?

How much of a "gap" can be compensated through differentiation? If my readers are at a 400 Lexile level, is there an effective way to use a 820 level chapter book? 

            This is a great question. (Have you ever noticed that this usually means the responder thinks he has an answer?)

            For years, teachers were told that students had to be taught with books that matched their ability, or learning would be reduced. As a teacher I bought into those notions. I tested every one of my students with informal reading inventories, one-on-one, and then tried to orchestrate multiple groups with multiple book levels. This was prior to the availability of lots of short paperback books that had been computer scored for F & P levels or Lexiles, so I worked with various basal readers to make this work.

            However, a careful look at the research shows me that almost no studies have found any benefits from such matching. In fact, if one sets aside those studies that focused on children who were reading no higher than a Grade 1 level, then the only results supporting specific student-text matches are those arguing for placing students at what we would have traditionally called their frustration level.

            Given this research, and given that so many state standards now require teachers to enable students to read more challenging texts in grades 2-12, teachers are going to need to learn to guide student reading with higher-level texts than in the past.

            Theoretically, there is no limit to how much of a gap can be scaffolded. Many studies have shown that teachers can facilitate student success with texts that students can read with only 80% accuracy and 50% comprehension, and I have no doubt that, with even more scaffolding, students could bridge even bigger gaps.

            I vividly remember reading a case study of Grace Fernald when I was in graduate school. She wrote about teaching a 13-year-old, a total non-reader, to read with an encyclopedia volume. That sounds crazy, but with a motivated student, and a highly skilled teacher, and a lot of one-on-one instructional time, without too many interruptions… it can work.

            But what is theoretically sound or possible under particularly supportive circumstances does not necessarily work in most classrooms.

            I have no doubt teachers can scaffold a couple of grade levels without too much difficulty. That is, the fifth-grade teacher working with a fifth-grade book can successfully bring along a student who reads at a third-grade level in most classroom situations. But as you make the distance between student and book bigger than that, then I have to know a lot more about the teacher’s ability and resources to estimate whether it will work this time.

           Nevertheless, by preteaching vocabulary, providing fluency practice, offering guidance in making sense of sentences and cohesion, requiring rereading, and so on, I have no doubt that teachers can successfully scaffold a student across a 300-400 Lexile gap--with solid learning.

            But specifically, you ask about scaffolding a 400-Lexile reader to an 820-Lexile text. If you had asked about 500 to 920, I wouldn't hesitate: Yes, a teacher could successfully scaffold that gap. I’m more hesitant with the 400 level as the starting point. My reason for this is because 400 is a first-grade reading level. This would be a student who is still mastering basic decoding skills.

            I do not believe that shifting to more challenging text under those circumstances is such a good idea.

            To address this student’s needs, I would ramp up my phonics instruction, including dictation (I want my students to encode the alphabetic system as well as decode it). I might increase the amount of reading he or she is expected to do with texts that highlight rather than obscure how the spelling system works (e.g., decodable text, linguistic text). I would increase work on high frequency words, and I would increase the amount of oral reading fluency work, too. I’d do all of these things.

             But I would not shift him/her to a harder book because of what needs to be mastered at beginning reading levels. We’ll eventually need to do that, but not until the foundations of decoding are more firmly in place.

           An important thing to remember: no state standards raise the text demands for students in Kindergarten or Grade 1. They hold off because they are giving students the opportunity to firmly master their basic decoding skills. It isn't the distance between 400 and 820 that concerns me--that kind of distance can be bridged; but a 400 Lexile represents a limited degree of decoding proficiency, and so I wouldn't want to shift attention from achieving proficiency in reading those basic words.




Thursday, December 11, 2014

Second Language Powerpoints

Today I had a marvelous time presenting to Arizona teachers at the OELAS conference. I made a presentation on scaffolding complex texts for English language learners and one on teaching close reading with informational text. I think I have posted the latter before, but since I always change these a bit, here is the most recent version. The text complexity presentation overlaps with past presentations on teaching with challenging text, but this version includes lots of examples of scaffolding for Spanish-speaking students. Hope these are useful to you: Powerpoints

Sunday, May 18, 2014

IRA 2014 Presentations

I made four presentations at the meetings of the International Reading Association in New Orleans this year. One of these was the annual research review address, in which I explained the serious problems inherent in the "instructional level" in reading and in associated approaches like "guided reading," which have certainly outlived their usefulness.

IRA Talks 2014



Tuesday, April 29, 2014

Re-thinking Reading Interventions

Ever wonder why we teach kids with a one-size-fits-all anthology in the regular classroom, but are so careful to teach them at their “reading levels” when they are in a pull-out intervention program?

Me too.

In reading, students need the greatest amount of scaffolding and support when they are reading hard texts, and they need less support when reading easy materials.

But we do the opposite. We have kids reading the hardest materials when there is less support available. And, then when we go to the expense of providing lots of support, we simultaneously place the kids in easier texts.

I’ve written before that research has not been supportive of the idea that we need to teach students at their “reading levels” (except for beginning readers). And there are studies that show students can learn from harder texts, at least when they receive adequate instructional support.


What if we turned the world on its head? What if we worked with harder texts when students were working in small heterogeneous groups with a special teacher, and eased off on the text demands in whole class situations? What if struggling students got more opportunities to read and reread grade-level materials—such as taking on such texts in the interventions and then reading them again in the classroom? I suspect kids would make more growth, and would be more motivated to make growth than in the upside-down approaches that we are now using. 

Friday, December 27, 2013

How Publishers Can Screw Up the Common Core

Lexiles and other readability measures are criticized these days about as much as Congress. But unlike Congress they don’t deserve it.

Everyone knows Grapes of Wrath is harder to read than predicted. But for every book with a hinky readability score, many others are placed just right.

These formulas certainly are not perfect, but they are easy to use and they make more accurate guesses than we can without them.

So what’s the problem?

Readability measures do a great job of predicting reading comprehension, but they provide lousy writing guidance.

Let’s say that you have a text that comes out harder than you’d hoped. You wanted it for fourth grade, but the Lexiles say it’s better for grade 5.

Easy to fix, right? Just divide a few sentences in two to reduce average sentence length, and swap out a few of the harder words for easier synonyms, and voila, the Lexiles will be just what you’d hoped for.
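To make the temptation concrete, here is a minimal sketch of how that kind of adjustment moves a score. Lexile's actual formula is proprietary, so the public Flesch-Kincaid grade-level formula stands in for it here, and the sample sentences are invented; the only point is that chopping one long sentence in two lowers the number even though nothing about the ideas gets any simpler.

```python
import re

# Rough illustration only: Flesch-Kincaid grade level as a stand-in for a
# proprietary measure like Lexile; the passages are invented examples.

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

original = ("The committee postponed its decision because the evidence "
            "presented at the hearing was incomplete and contradictory.")
# The same ideas, with the one long sentence split in two:
adjusted = ("The committee postponed its decision. The evidence presented "
            "at the hearing was incomplete and contradictory.")

print(round(fk_grade(original), 1))  # higher "grade level"
print(round(fk_grade(adjusted), 1))  # lower score, but no easier to understand
```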

But research shows this kind of mechanical “adjusting” doesn’t actually change the difficulty of the text (though it does mess up the accuracy of the readability rating). This kind of “fix” won’t make the text easier for your fourth-graders, but the grade that you put on the book will be just right. Would you rather feel good or look good?

With all of the new emphasis on readability levels in Common Core, I fear that test and textbook publishers are going to make sure that their measurements are terrific, even if their texts are not.

What should happen when a text turns out to be harder or easier than intended is that the material should be assigned to another grade level or it should really be revised. Real revisions do more than make mechanical adjustments. Such rewrites engage the author in trying to improve the text’s clarity.

Such fixes aren’t likely to happen much with reading textbooks, because they tend to be anthologies of texts already published elsewhere. E.B. White and Roald Dahl won’t be approving revisions of their stuff anytime soon, nor will many of the living and breathing authors whose books are anthologized.

But instructional materials and assessment passages that are written—not selected—specifically to teach or test literacy skills are another thing altogether. Don’t be surprised if many of those kinds of materials turn out to be harder or easier than you thought they’d be.

There is no sure way to protect against fitting texts to readability formulas. Sometimes mechanical revisions are pretty choppy, and you might catch that. But generally you can’t tell if a text has been manipulated to come out right. The publishers themselves may not know, since such texts are often written to spec by independent contractors.


Readability formulas are a valuable tool in text selection, but they only index text difficulty; they don’t actually measure it (that is, they do not reveal why a text may be hard to understand). Qualitative review of texts and continuous monitoring of how well students do with texts in the classroom are important tools for keeping the publishing companies honest on this one. Buyer beware.

Monday, November 4, 2013

Who's Right on Text Complexity?

It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits to teaching children using text where their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet, your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective towards student learning.
  
What a great question. In my blog post, I cited particular studies, and Dick Allington’s article focused on a completely different set of studies. This is what teachers find so confusing.

The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.

Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.

The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning. 

The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).

I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning). Dick, meanwhile, looked at studies that revealed a relationship between these variables, omitting all mention of these contradictory direct tests or of any correlational evidence that didn’t support his claims.

There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.

One possibility is that there were other differences that weren’t measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn’t the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separate from other variables, such as the experimental studies that I cited.

            A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. In the Ehri study, the kids who made the biggest gains were in even easier materials than that--materials that should have afforded little opportunity to learn (which makes my point--there is no magic level that kids have to be placed in text to allow them to learn).

            Another important point to remember: Allington’s article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point too--data suggesting that grouping students for instruction in this way damages children more than it helps them).

Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.



Sunday, August 21, 2011

Rejecting Instructional Level Theory

A third bit of evidence in the complex text issue has to do with the strength of evidence on the other side of the ledger. In my two previous posts, I have indicated why the common core is embracing the idea of teaching reading with much more complex texts. But what about the evidence that counters this approach?

Many years ago, when I was a primary grade teacher, I was struggling to teach reading. I knew I was supposed to have groups for different levels of kids, but in those days information about how to make those grouping decisions was not imparted to mere undergraduates. I knew I was supposed to figure out which books would provide the optimal learning experience, but I had no technology to do this.

So, I enrolled in a master’s degree program and started studying to be a reading specialist. During that training I learned how to administer informal reading inventories (IRI) and cloze tests and what the criteria were for independent, instructional, and frustration levels. Consequently, I tested all my students, and matched books to IRI levels using the publisher’s readability levels. I had no doubt that it improved my teaching and students’ learning.

I maintained my interest in this issue when I went off for my doctorate. I worked with Jack Pikulski. Jack had written about informal reading inventories (he’d studied with Johnson and Kress), and as a clinical psychologist he was interested in the validity of these measures. He even sent a bunch of grad students to an elementary school to test a bunch of kids, but nothing ever came of that study. Nevertheless, I learned a lot from Jack about that issue.

He had (has) a great clinical sense and he was skeptical of my faith in the value of those instructional level results. He recognized that informal reading inventories were far from perfect instruments and that at best they had general accuracy. They might be able to specify a wide range of materials for a student (say from grade 2 to 4), but they couldn’t do better than that. (Further complicating things were the readability estimates. These had about the same level of accuracy.)

For Jack, the combination of two such rough guestimates was very iffy stuff. I liked the certainty of it though and clung to that for a while (until my own clinical sense grew more sophisticated).

Early in my scholarly career, I tracked down the source of the idea of independent, instructional, and frustration levels. It came from Emmett Betts’ textbook. He attributed the scheme to a study conducted by one of his doctoral students. I tracked down that dissertation and, to my dismay, it was evident that they had just made up those designations without any empirical evidence--something I wrote about 30 years ago!

Since then, readability measures have improved quite a bit, but our technologies for setting reading levels have not. Studies by William Powell in the 1960s, 70s, and 80s showed that the criteria we were using did not identify the optimum levels for student learning. He suggested more liberal placement criteria, particularly for younger students. More liberal criteria would mean that instead of accepting 95% word reading accuracy as Betts had suggested, Powell identified 85% as the better predictor of learning--which would mean putting kids in relatively more difficult books.
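To see what those competing cutoffs mean in practice, here is a minimal sketch built on an invented running record; the passage length and error count are made up, and the 95% and 85% figures are simply the Betts and Powell criteria discussed above.

```python
# Invented example: how the Betts (95%) and Powell (85%) word-accuracy
# cutoffs classify the very same oral reading performance.

def word_accuracy(words_read, errors):
    return 100 * (words_read - errors) / words_read

accuracy = word_accuracy(words_read=200, errors=18)  # 91% accuracy

print(f"{accuracy:.0f}% word accuracy")
print("Betts (95% cutoff):", "instructional" if accuracy >= 95 else "too hard")
print("Powell (85% cutoff):", "instructional" if accuracy >= 85 else "too hard")
# Under Betts, this child would be moved to an easier book; under Powell's
# more liberal criterion, the placement is acceptable.
```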

Consequently, I have sought studies that would support the original contention that we could facilitate student learning by placing kids in the right levels of text. Of course, guided reading and leveled books are so widely used it would make sense that there would be lots of evidence as to their efficacy.

Except that there is not. I keep looking and I keep finding studies that suggest that kids can learn from text written at very different levels (like the studies cited below by Morgan and O’Connor).

How can that be? Well, basically we have put way too much confidence in an unproven theory. The model of learning underlying that theory is too simplistic. Learning to read is an interaction between a learner, a text, and a teacher. Instructional level theory posits that the text difficulty level relative to the student reading level is the important factor in learning. But that ignores the guidance, support, and scaffolding provided by the teacher.

If the teacher is doing little to support the students’ transactions with text then I suspect more learning will accrue with somewhat easier texts. However, if reasonable levels of instructional support are available then students are likely to thrive when working with harder texts.

The problem with guided reading and similar schemes is that they are focused on helping kids to learn with minimal amounts of teaching (something Pinnell and Fountas have stated explicitly in at least some editions of their textbooks). But that switches the criterion. Instead of trying to get kids to optimum levels, that is, the levels that would allow them to learn the most, they have striven to get kids to levels where they will likely learn best with minimal teacher support.

The common core standards push back against the notion that students learn best when they receive the least teaching. The standards people want to know what it takes for kids to learn most, even if the teacher has to be deeply involved. For them, challenging text is the right ground to maximize learning… but the only way that will work is if kids are getting substantial teaching support in the context of that hard text.

P.S. Although Lexiles have greatly improved readability assessment (shrinking standard errors of measurement and improving the amount of comprehension variance that can be explained by text difficulty), we are in no better shape than before, since there are no studies indicating that teaching students at particular Lexile levels leads to more learning. (I suspect that if future studies go down this road, they will still find that the answer to that issue is variable; it will depend on the amount and quality of instructional support.)

Betts, E. A. (1946). Foundations of reading instruction. New York: American Book Company.

Morgan, A., Wilcox, B. R., & Eldredge, J. L. (2000). Effect of difficulty levels on second-grade delayed readers using dyad reading. Journal of Educational Research, 94, 113–119.

O’Connor, R. E., Swanson, H. L., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102, 1–19.

Pinnell, G. S., & Fountas, I. C. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.

Powell, W. R. (1968). Reappraising the criteria for interpreting informal inventories. Washington, DC: ERIC 5194164.

Shanahan, T. (1983). The informal reading inventory and the instructional level: The study that never took place. In L. Gentile, M. L. Kamil, & J. Blanchard (Eds.), Reading research revisited, (pp. 577–580). Columbus, OH: Merrill.