
Monday, November 4, 2013

Who's Right on Text Complexity?

It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits of teaching children using text where their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet, your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective for student learning.
  
What a great question. In my blog post, I cited particular studies, and Dick Allington’s article focused on a completely different set of studies. This is what teachers find so confusing.

The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.

Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.

The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning. 

The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).

I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning), while Dick looked at studies that revealed a relationship between these variables, omitting any mention of these contradictory direct tests or of the correlational evidence that didn’t support his claims.

There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.

One possibility is that there were other differences that weren’t measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn’t the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separate from other variables, such as the experimental studies that I cited.

A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. In the Ehri study, the kids who made the biggest gains were in even easier materials than that, materials that should have afforded little opportunity to learn (which makes my point—there is no magic level that kids have to be placed in text to allow them to learn).

Another important point to remember: Allington’s article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point, too, suggesting that grouping students for instruction in this way damages children more than it helps them).

Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.



Monday, September 10, 2012

CCSS Allows More than Lexiles


When I was working on my doctorate, I had to conduct a historical study for one of my classes. I went to the Library of Congress and calculated readabilities for books that had been used to teach reading in the U.S. (or in the colonies that became the U.S.). I started with the Protestant Tutor and the New England Primer, the first books used for reading instruction here. From there I examined Webster’s Blue-Backed Speller and its related volumes and the early editions of McGuffey’s Readers.

Though the authors left no record of how those books were created, it is evident that they had sound intuitions about what makes text challenging. Even in the relatively brief single-volume Tutor and Primer, the materials got progressively more difficult from beginning to end. These earliest books ramped up in difficulty very quickly (you read the alphabet on one page, simple syllables on the next, which was followed by a relatively easy read, but then challenge levels would jump markedly).

By the time we get to the three-volume Webster, the readability levels adjust more slowly from book to book with the speller (the first volume) being by far the easiest, and the final book (packed with political speeches and the like) being all but unreadable (kind of like political speeches today).

By the 1920s, psychologists began searching for measurement tools that would allow them to describe the readability or comprehensibility of texts. In other words, they wanted to turn these intelligent intuitions about text difficulty into tools that anyone could use. That work has proceeded by fits and starts over the past century, and has resulted in the development of a plethora of readability measurements.

Readability research has usually focused on the reading comprehension outcome. Researchers have readers do something with a set of texts (e.g., answer questions, complete maze/cloze tasks) and then try to predict those performance levels by counting easy-to-measure characteristics of the texts (words and sentences). The idea is to use easily counted text features to place the texts on a scale from easy to hard that agrees with how readers actually did with those texts.
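
To make this concrete, a formula of this kind is just a little arithmetic over counted features. Here is a minimal sketch, in Python, of the classic Flesch-Kincaid grade-level computation; the coefficients are the published ones, but the syllable counter is a crude heuristic of my own, not the routine any actual readability tool uses.

    import re

    def count_syllables(word):
        # Crude heuristic: count groups of vowels, dropping a silent final "e".
        # Real readability tools rely on dictionaries or better phonetic rules.
        groups = re.findall(r"[aeiouy]+", word.lower())
        count = len(groups)
        if word.lower().endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def flesch_kincaid_grade(text):
        # Published Flesch-Kincaid grade-level formula:
        # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / len(sentences))
                + 11.8 * (syllables / len(words))
                - 15.59)

    sample = ("The cat sat on the mat. It looked at the dog. "
              "Then it ran away to find something better to do.")
    print(round(flesch_kincaid_grade(sample), 2))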

Educators stretched this idea of readability into one of learnability. Instead of trying to predict how well readers would understand a text, educators wanted to use readability to predict how well students would learn from such texts. Thus the idea of the “instructional level”: if you taught students with books that appropriately matched their reading levels, they would learn more; if you placed them in materials that were relatively easier or harder, there would be less learning. This theory has not held up very well when empirically tested. Students seem to be able to learn from a pretty wide range of text difficulties, depending on the amount of teacher support.
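
For readers unfamiliar with how an “instructional level” match is operationalized in practice, here is a minimal sketch of the traditional placement logic, using the 95-98% running-record accuracy band mentioned in the post above; the exact cut points vary across published schemes, and, as noted here, the theory behind them has not held up well when tested.

    def running_record_accuracy(words_read, errors):
        # Accuracy rate from a running record: proportion of words read correctly.
        return (words_read - errors) / words_read

    def traditional_placement(accuracy, independent_cutoff=0.98, instructional_cutoff=0.95):
        # Classic three-level placement; cut points are illustrative only.
        if accuracy >= independent_cutoff:
            return "independent"
        if accuracy >= instructional_cutoff:
            return "instructional"
        return "frustration"

    accuracy = running_record_accuracy(words_read=200, errors=8)  # 0.96
    print(accuracy, traditional_placement(accuracy))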

The Common Core State Standards (CCSS) did not buy into the instructional level idea. Instead of accepting the claim that students need to be taught at “their levels,” the CCSS recognizes that students will never reach the needed levels by the end of high school unless harder texts are used for teaching: harder not only in terms of students’ instructional levels, but also in terms of which texts are assigned to which grade levels. Thus, for grades 2-12, the CCSS assigned higher Lexile levels to each grade than in the past (the so-called stretch bands).

The Lexile Framework is a relatively recent scheme for measuring readability. Initially, it was the only readability measure accepted by the Common Core. That is no longer the case. CCSS now provides guidance for matching books to grade levels using several formulas. This change does not take us back to using easier texts for each grade level. Nor does it back down from encouraging teachers to work with students at levels higher than their so-called instructional levels. It does mean that it will be easier for schools to identify appropriate texts using any of six different approaches—many of which are already widely used by schools.

Of course, there are many other schemes that could have been included by CCSS (there are at least a couple of hundred readability formulas). Why aren’t they included? Will they be going forward?

From looking at what was included, it appears to me that CCSS omitted two kinds of measures. First, they omitted schemes that have not been used often (few publishers still use Dale-Chall or the Fry Graph to specify text difficulties, so there would be little benefit in connecting them to the CCSS plan). Second, they omitted widely used measures that were not derived from empirical study (Reading Recovery levels, Fountas & Pinnell levels, etc.). Such levels are not necessarily wrong—remember, educators have intuitively identified text challenge levels for hundreds of years.

These schemes are especially interesting for the earliest reading levels (CCSS provides no guidance for K and 1). For the time being, it makes sense to continue to use such approaches for sorting out the difficulty of beginning reading texts, but then to switch to approaches that have been tested empirically in grades 2 through 12. [There is very interesting research underway on beginning reading texts involving Freddie Hiebert and the Lexile people. Perhaps in the not-too-distant future we will have stronger sources of information on beginning texts].    

Here is the new chart for identifying text difficulties for different grade levels:

Common Core Band    ATOS           Degrees of Reading Power®
2nd-3rd             2.75-5.14      42-54
4th-5th             4.97-7.03      52-60
6th-8th             7.00-9.98      57-67
9th-10th            9.67-12.01     62-72
11th-CCR            11.20-14.10    67-74

Common Core Band    Flesch-Kincaid   The Lexile Framework®
2nd-3rd             1.98-5.34        420-820
4th-5th             4.51-7.73        740-1010
6th-8th             6.51-10.34       925-1185
9th-10th            8.32-12.12       1050-1335
11th-CCR            10.34-14.2       1185-1385

Common Core Band    Reading Maturity   SourceRater
2nd-3rd             3.53-6.13          0.05-2.48
4th-5th             5.42-7.92          0.84-5.75
6th-8th             7.04-9.57          4.11-10.66
9th-10th            8.41-10.81         9.02-13.93
11th-CCR            9.57-12.00         12.30-14.50



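To make the chart concrete, here is a small sketch of how a school might check which band a text falls into using the Lexile column above; the same lookup works with any of the other measures, and because the stretch bands overlap, a single score can qualify for more than one band.

    # Lexile stretch bands from the chart above (grades 2 through CCR).
    LEXILE_BANDS = [
        ("2nd-3rd", 420, 820),
        ("4th-5th", 740, 1010),
        ("6th-8th", 925, 1185),
        ("9th-10th", 1050, 1335),
        ("11th-CCR", 1185, 1385),
    ]

    def bands_for_lexile(measure):
        # Return every band whose range contains the given Lexile measure;
        # the bands overlap, so a text can land in more than one.
        return [name for name, low, high in LEXILE_BANDS if low <= measure <= high]

    print(bands_for_lexile(980))   # ['4th-5th', '6th-8th']
    print(bands_for_lexile(1200))  # ['9th-10th', '11th-CCR']
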
For more information: