
Friday, December 27, 2013

How Publishers Can Screw Up the Common Core

Lexiles and other readability measures are criticized these days about as much as Congress. But unlike Congress they don’t deserve it.

Everyone knows The Grapes of Wrath is harder to read than predicted. But for every book with a hinky readability score, many others are placed just right.

These formulas certainly are not perfect, but they are easy to use and they make more accurate guesses than we can without them.

So what’s the problem?

Readability measures do a great job of predicting reading comprehension, but they provide lousy writing guidance.

Let’s say that you have a text that comes out harder than you’d hoped. You wanted it for fourth grade, but the Lexiles say it’s better for grade 5.

Easy to fix, right? Just divide a few sentences in two to reduce average sentence length, and swap out a few of the harder words for easier synonyms, and voila, the Lexiles will be just what you’d hoped for.

But research shows this kind of mechanical “adjusting” doesn’t actually change the difficulty of the text (though it does mess up the accuracy of the readability rating). This kind of “fix” won’t make the text easier for your fourth-graders, but the grade that you put on the book will be just right. Would you rather feel good or look good?
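To see why the mechanical fix fools the formula, here is a minimal sketch using the Flesch-Kincaid grade-level equation as a stand-in (Lexile's actual formula is proprietary; the syllable counter is deliberately crude, and the sample sentences are invented for illustration). Splitting one sentence into three drops the computed grade level sharply even though the wording, and so the real difficulty, barely changes.

```python
import re

def count_syllables(word):
    # Crude estimate: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

original = ("The migrant families drifted west because the drought had ruined "
            "their farms and the banks had taken back the land they worked.")
# Same ideas, mechanically chopped into shorter sentences.
adjusted = ("The migrant families drifted west. The drought had ruined their farms. "
            "The banks had taken back the land they worked.")

print(round(fk_grade(original), 1))  # roughly a middle-school score
print(round(fk_grade(adjusted), 1))  # roughly a primary-grade score, same content
```

The formula only sees average sentence length and word difficulty, so anything that games those two inputs games the score without touching what makes the text hard.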

With all of the new emphasis on readability levels in Common Core, I fear that test and textbook publishers are going to make sure that their measurements are terrific, even if their texts are not.

What should happen when a text turns out to be harder or easier than intended is that the material should be assigned to another grade level or it should really be revised. Real revisions do more than make mechanical adjustments. Such rewrites engage the author in trying to improve the text’s clarity.

Such fixes aren’t likely to happen much with reading textbooks, because they tend to be anthologies of texts already published elsewhere. E.B. White and Roald Dahl won’t be approving revisions of their stuff anytime soon, nor will many of the living and breathing authors whose books are anthologized.

But instructional materials and assessment passages that are written—not selected—specifically to teach or test literacy skills are another thing altogether. Don’t be surprised if many of those kinds of materials turn out to be harder or easier than you thought they’d be.

There is no sure way to protect against fitting texts to readability formulas. Sometimes mechanical revisions are pretty choppy, and you might catch that. But generally you can’t tell if a text has been manipulated to come out right. The publishers themselves may not know, since such texts are often written to spec by independent contractors.


Readability formulas are a valuable tool in text selection, but they only index text difficulty; they don’t actually measure it (that is, they do not reveal why a text may be hard to understand). Qualitative review of texts and continuous monitoring of how well students do with those texts in the classroom are important tools for keeping the publishing companies honest on this one. Buyer beware.

Monday, November 4, 2013

Who's Right on Text Complexity?

It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits of teaching children using text where their accuracy is high. Our district just raised the running record accuracy expectation to 95-98% based on the current research. Yet, your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective for student learning.
  
What a great question. In my blog post, I cited particular studies, and Dick Allington’s article focused on a completely different set of studies. This is what teachers find so confusing.

The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.

Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.

The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning. 

The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).

I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning). Dick, by contrast, looked at studies that revealed a relationship between these variables, omitting any mention of those contradictory direct tests or of the correlational evidence that didn’t support his claims.

There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.

One possibility is that there were other differences that weren’t measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn’t the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separate from other variables, such as the experimental studies that I cited.
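A toy simulation makes that pitfall concrete. In the sketch below (purely illustrative, with synthetic data and made-up effect sizes, not anything drawn from the Ehri study), book placement has no causal effect on learning at all; teachers simply tend to move faster-gaining students into easier books. A correlational analysis would still show easier placements associated with bigger gains, which is exactly the pattern an experiment is needed to untangle.

```python
import random

random.seed(1)

students = []
for _ in range(1000):
    aptitude = random.gauss(0, 1)           # unmeasured student differences
    gain = aptitude + random.gauss(0, 1)    # gains driven by aptitude, NOT by book placement
    # Teachers notice fast gainers and tend to place them in easier ("better matched") books.
    easier_books = (gain + random.gauss(0, 1)) > 0
    students.append((easier_books, gain))

def mean_gain(placed_easier):
    gains = [g for placed, g in students if placed == placed_easier]
    return sum(gains) / len(gains)

print("mean gain, easier placement:", round(mean_gain(True), 2))   # comes out higher
print("mean gain, harder placement:", round(mean_gain(False), 2))  # comes out lower
# Placement caused none of the difference; the correlation reflects teachers responding to learning.
```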

A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. In the Ehri study, the kids who made the biggest gains were in even easier materials than that; materials that should have afforded little opportunity to learn (which makes my point—there is no magic level that kids have to be placed in text to allow them to learn).

Another important point to remember: Allington’s article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point, too, suggesting that grouping students for instruction in this way damages children more than it helps them).

Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.



Tuesday, October 16, 2012

New Presentation on Scaffolding Challenging Text

Here is the link to my presentation on scaffolding challenging text. Hope it is useful.

https://sites.google.com/site/tscommoncore/text-complexity

Sunday, August 21, 2011

Rejecting Instructional Level Theory

A third bit of evidence in the complex text issue has to do with the strength of evidence on the other side of the ledger. In my two previous posts, I have indicated why the common core is embracing the idea of teaching reading with much more complex texts. But what about the evidence that counters this approach?

Many years ago, when I was a primary grade teacher, I was struggling to teach reading. I knew I was supposed to have groups for different levels of kids, but in those days information about how to make those grouping decisions was not imparted to mere undergraduates. I knew I was supposed to figure out which books would provide the optimal learning experience, but I had no technology to do this.

So, I enrolled in a master’s degree program and started studying to be a reading specialist. During that training I learned how to administer informal reading inventories (IRIs) and cloze tests and what the criteria were for independent, instructional, and frustration levels. Consequently, I tested all my students and matched books to IRI levels using the publisher’s readability levels. I had no doubt that it improved my teaching and students’ learning.

I maintained my interest in this issue when I went off for my doctorate. I worked with Jack Pikulski. Jack had written about informal reading inventories (he’d studied with Johnson and Kress), and as a clinical psychologist he was interested in the validity of these measures. He even sent a bunch of grad students to an elementary school to test a bunch of kids, but nothing ever came of that study. Nevertheless, I learned a lot from Jack about that issue.

He had (has) a great clinical sense and he was skeptical of my faith in the value of those instructional level results. He recognized that informal reading inventories were far from perfect instruments and that at best they had general accuracy. They might be able to specify a wide range of materials for a student (say from grade 2 to 4), but they couldn’t do better than that. (Further complicating things were the readability estimates, which had about the same level of accuracy.)

For Jack, the combination of two such rough guestimates was very iffy stuff. I liked the certainty of it though and clung to that for a while (until my own clinical sense grew more sophisticated).

Early in my scholarly career, I tracked down the source of the idea of independent, instructional, and frustration levels. It came from Emmett Betts’ textbook. He attributed the scheme to a study conducted by one of his doctoral students. I tracked down that dissertation and to my dismay it was evident that they had just made up those designations without any empirical evidence, something I wrote about 30 years ago!

Since then, readability measures have improved quite a bit, but our technologies for setting reading levels have not. Studies by William Powell in the 1960s, 70s, and 80s showed that the data that we were using did not result in an identification of optimum levels of student learning. He suggested more liberal placement criteria, particularly for younger students. More liberal criteria would mean that instead of accepting 95% word reading accuracy as Betts had suggested, Powell identified 85% as the better predictor of learning—which would mean putting kids in relatively more difficult books.
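To make the arithmetic behind these criteria concrete, here is a minimal sketch (the 95% and 85% cutoffs are the ones discussed above; actual IRI criteria also weigh comprehension, and the labels here are simplified for illustration).

```python
def word_accuracy(words_read, errors):
    """Word-reading accuracy from a running record or IRI passage."""
    return (words_read - errors) / words_read

def meets_criterion(accuracy, criterion):
    # criterion=0.95 reflects the Betts-style instructional-level cutoff;
    # Powell argued for a more liberal cutoff (around 0.85), especially for younger students.
    return accuracy >= criterion

acc = word_accuracy(words_read=200, errors=14)              # 0.93
print(round(acc, 2), meets_criterion(acc, criterion=0.95))  # False: too hard by the Betts-style standard
print(round(acc, 2), meets_criterion(acc, criterion=0.85))  # True: acceptable under Powell's more liberal cutoff
```

The same oral reading performance gets a child placed in noticeably harder books under Powell's criterion than under Betts', which is the practical difference at stake.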

Consequently, I have sought studies that would support the original contention that we could facilitate student learning by placing kids in the right levels of text. Of course, guided reading and leveled books are so widely used it would make sense that there would be lots of evidence as to their efficacy.

Except that there is not. I keep looking and I keep finding studies that suggest that kids can learn from text written at very different levels (like the studies cited below by Morgan and O’Connor).

How can that be? Well, basically we have put way too much confidence in an unproven theory. The model of learning underlying that theory is too simplistic. Learning to read is an interaction between a learner, a text, and a teacher. Instructional level theory posits that the text difficulty level relative to the student reading level is the important factor in learning. But that ignores the guidance, support, and scaffolding provided by the teacher.

If the teacher is doing little to support the students’ transactions with text then I suspect more learning will accrue with somewhat easier texts. However, if reasonable levels of instructional support are available then students are likely to thrive when working with harder texts.

The problem with guided reading and similar schemes is that they are focused on helping kids to learn with minimal amounts of teaching (something Pinnell and Fountas have stated explicitly in at least some editions of their textbooks). But that switches the criterion. Instead of trying to get kids to optimum levels, that is the levels that would allow them to learn most, they have striven to get kids to levels where they will likely learn best with minimal teacher support.

The common core standards push back against the notion that students learn best when they receive the least teaching. The standards people want to know what it takes for kids to learn most, even if the teacher has to be deeply involved. For them, challenging text is the right ground to maximize learning… but the only way that will work is if kids are getting substantial teaching support in the context of that hard text.

P.S. Although Lexiles have greatly improved readability assessment (shrinking standard errors of measurement and increasing the amount of comprehension variance that can be explained by text difficulty), we are in no better shape than before, since there are no studies indicating that teaching students at particular Lexile levels leads to more learning. (I suspect that if future studies go down this road, they will still find that the answer to that issue is variable; it will depend on the amount and quality of instructional support.)

Betts, E. A. (1946). Foundations of reading instruction. New York: American Book Company.

Morgan, A., Wilcox, B. R., & Eldredge, J. L. (2000). Effect of difficulty levels on second-grade delayed readers using dyad reading. Journal of Educational Research, 94, 113–119.

O’Connor, R. E., Swanson, H. L., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102, 1–19.

Pinnell, G. S., & Fountas, I. C. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.

Powell, W. R. (1968). Reappraising the criteria for interpreting informal inventories. Washington, DC: ERIC 5194164.

Shanahan, T. (1983). The informal reading inventory and the instructional level: The study that never took place. In L. Gentile, M. L. Kamil, & J. Blanchard (Eds.), Reading research revisited, (pp. 577–580). Columbus, OH: Merrill.

Monday, July 11, 2011

More Evidence Supporting Hard Text

The past couple of blogs have dealt with the challenging text demands required by the new common core standards. Teachers who have been used to moving students to easier texts are in for a rude awakening since the new standards push to have students taught at particular Lexile levels that match grade levels rather than "reading levels."

Last week, I explained the evidence about the importance of text difficulty that was provided by the ACT. This week, I want to expand upon that explanation to show some of the other evidence that the authors of the common core depended upon, evidence that has been persuasively described and summarized by Marilyn Jager Adams in an article published in the American Educator (2010-2011).

Adams synthesized information from various studies of textbook difficulty and learning to demonstrate that textbook readabilities for Grades 4–12 have grown significantly and steadily easier since 1919; that the difficulty of what adults are expected to read increased during that same time; and that there is a relationship between the easing of text difficulty and students’ lower performance on the SAT. Obviously, if these things are true, one would want to ratchet the difficulty of textbooks back up (as the common core does) so that students would be better prepared for the actual reading demands beyond school.

Chall and her colleagues (Chall, Conard, & Harris, 1991) found that even though SAT passages had been getting easier, scores were declining anyway. They also found that textbooks were getting easier even faster than the SAT, and that reading these easier books appeared to provide poor preparation for dealing with the SAT. Even more convincing was a much larger study (Hayes, Wolfer, & Wolfe, 1996) that examined the readabilities of 800 elementary, middle school, and high school textbooks published between 1919 and 1991. Hayes and his team correlated the trends in text simplification with student performance on the SAT and found a good fit, concluding that “Long-term exposure to simpler texts may induce a cumulating deficit in the breadth and depth of domain-specific knowledge, lowering reading comprehension and verbal achievement.” Also, the texts used in high school have been found to be significantly easier than the texts students confront after they leave high school; in fact, young people make bigger reading gains during the years following high school than during high school itself (Kirsch & Jungeblut, 1991).

Thus, these correlational data suggest that students will learn more from working with challenging texts than from the so-called “low readability, high interest” books that have become an educational staple. This approach is similar to that taken by athletes: To get stronger, you need to use more physical resistance than your muscles are used to; the more you do, the more you will be capable of doing, so it is essential to increase the workload.

The counter-argument to this heavier-books approach is the widespread belief that there is an optimum difficulty level for texts used to teach students to read. According to instructional level theory, if a text is written at a level that is too difficult for students, then they will become frustrated and discouraged and will not learn. Instructional level theory not only doesn't agree with the idea that learning comes from working with hard books, but claims that little or no learning would accrue if the books are too hard relative to student performance levels.

The challenging-text approach obviously has some research support, but the evidence is correlational in nature. Students seem to do better when they get a steady diet of more challenging text, but I would feel much better about this evidence if it were experimental and if there weren’t such a long-cherished counterargument. Given that, the next installment will weigh the evidence that supports the idea of there being an optimum level of text difficulty that fosters learning.

Tuesday, July 5, 2011

Common Core Standards versus Guided Reading, Part II

So why is the common core making such a big deal out of having kids read hard text?

One of the most persuasive pieces of evidence they considered was a report, “Reading: Between the Lines,” published by American College Testing (ACT; 2006). This report shows the primacy of text in reading and the value of having students spend time reading challenging text in the upper grades.

https://www.act.org/research/policymakers/reports/reading.html

Virtually every reading comprehension test and instructional program makes a big deal out of the different kinds of questions that can be asked about text. You’d be hard-pressed these days to find teachers or principals who don’t know that literal recall questions (the ones that require a reader to find or remember what an author wrote) are supposed to be easier than inferential questions (the ones that require readers to make judgments and recognize the implications of what the author wrote).

Similarly, in our fervor to use data and to facilitate better test performance, it has become common practice to analyze student test performance by question type, and then to try to teach the specific skills required by those questions. There are even commercial programs that you can buy that emphasize practice with main ideas, drawing conclusions, specific details, and the like.

There is only one problem with these schemes, according to ACT: they don’t work. In Reading: Between the Lines, ACT demonstrates that student performance cannot be differentiated in any meaningful way by question type. Students do not perform differently if they are answering literal recall items or inferential items (or other question types like main idea or vocabulary, either). Test performance, according to ACT, is driven by text rather than questions. Thus, if students are asked to read a hard passage, they may only answer a few questions correctly, no matter what types of questions they may be. On the other hand, with an easy enough text, students may answer almost any questions right, again with no differences by question type.

Thus, the ACT report shows that different question types make no difference in performance outcomes, but that text difficulty matters quite a bit (a conclusion based on an analysis of data drawn from 563,000 students). One can ask any kind of question about any text, regardless of its difficulty.
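The analytic move is easy to picture: tabulate item-level results two ways, once by question type and once by the difficulty of the passage each item belongs to, and see which grouping actually separates performance. The sketch below is only a shape-of-the-analysis illustration with invented numbers; it is not drawn from the ACT data.

```python
from collections import defaultdict

# Hypothetical item-level records: (question_type, passage_difficulty, proportion_correct)
items = [
    ("literal",     "easy", 0.81), ("inferential", "easy", 0.79), ("main idea", "easy", 0.80),
    ("literal",     "hard", 0.52), ("inferential", "hard", 0.54), ("main idea", "hard", 0.51),
]

def mean_by(key_index):
    groups = defaultdict(list)
    for record in items:
        groups[record[key_index]].append(record[2])
    return {k: round(sum(v) / len(v), 2) for k, v in groups.items()}

print("by question type:  ", mean_by(0))  # roughly flat across question types
print("by text difficulty:", mean_by(1))  # large gap between easy and hard passages
```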

What are reading comprehension standards? They tend to be numbered lists of cognitive processes or question types. Standards require students “to quote accurately from text,” to “determine two or more main ideas of a text,” or to “explain how main ideas are supported by key details,” and so on. But if question types (or standards) don’t distinguish reading performance and text difficulty does, then standards should make the ability to interpret hard texts a central requirement.

And, this is exactly what the common core standards have done. They make text difficulty a central feature of the standards. In the reading comprehension standards at every grade level and for every type of comprehension (literary, informational, social studies/history, science/technology), there is a standard that says something along the lines of, by the end of the year, students will be able to independently read and comprehend texts written in a specified text complexity band.

The ACT report goes on to describe features that made some texts harder to understand, including the complexity of the relationships among characters and ideas, the amount and sophistication of the information detailed in the text, how the information is organized, the author’s style and tone, the vocabulary, and the author’s purpose. ACT concluded that, based on these data, “performance on complex texts is the clearest differentiator in reading between students who are likely to be ready for college and those who are not” (pp. 16-17).

Wednesday, June 29, 2011

Common Core Standards versus Guided Reading, Part I

The new common core standards are challenging widely accepted instructional practices. Probably no ox has been more impressively gored by the new standards than the widely-held claim that texts of a particular difficulty level have to be used for teaching if learning is going to happen.

Reading educators going back to the 1930s, including me, have championed the idea of there being an instructional level. That basically means that students would make the greatest learning gains if they are taught out of books that are at their “instructional” level, meaning that the text is neither so hard that students can’t make sense of it nor so easy that there is nothing left in it to learn.

These days the biggest proponents of that idea have been Irene Fountas and Gay Su Pinnell, at Ohio State. Their “guided reading” notion has been widely adopted by teachers across the country. The basic premises of guided reading include the idea that children learn to read by reading, that they benefit from some guidance and support from a teacher during this reading, and, most fundamentally, that this reading has to take place in texts that are “just right” in difficulty level. A major concern of the guided-readingistas has been the fear that “children are reading texts that are too difficult for them.”

That’s the basic idea, and the different experts have proposed a plethora of methods for determining student reading levels and text difficulty levels, for matching kids to books, and for guiding or scaffolding student learning. Schemes like Accelerated Reader, Read 180, informal reading inventories, leveled books, high-readability textbooks, and most core or basal reading programs all adhere to these basic ideas, even though there are differences in how they go about it.

The common core is based upon a somewhat different set of premises. They don’t buy that there is an optimum student-text match that facilitates learning. Nor are they as hopeful that students will learn to read from reading (with the slightest assists from a guide), but believe that real learning comes from engagement with very challenging text and a lot of scaffolding. The common core discourages lots of out-of-level teaching, and the use of particularly high readability texts. In other words, it champions approaches to teaching that run counter to current practice.

How could the common core put forth such a radical plan that contradicts so much current practice?

The next few entries in this blog will consider why common core is taking this provocative approach and why that might be a very good thing for children’s learning.

Stay tuned.