Who's Right on Text Complexity?

  • balanced literacy
  • Common Core State Standards
  • 04 November, 2013
  • 14 Comments

Teacher question:

  It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits of teaching children with texts in which their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective for student learning.

Shanahan response:  

What a great question. In my blog post, I cited particular studies, and Dick Allington's article focused on a completely different set of studies. This is what teachers find so confusing.

  The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.

  Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.

  The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning. 

  The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).

I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning), while Dick looked at studies that revealed a relationship between these variables, omitting any mention of these contradictory direct tests or of the correlational evidence that didn't support his claims.

  There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.

  One possibility is that there were other differences that weren't measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn't the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separate from other variables, such as the experimental studies that I cited.

  A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. In the Ehri study, the kids who made the biggest gains were in even easier materials than that; materials that should have afforded little opportunity to learn (which makes my point—there is no magic level that kids have to be placed in text to allow them to learn).

  Another important point to remember: Allington's article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point, too, suggesting that grouping students for instruction in this way harms children more than it helps them).

  Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.

Comments


edededucation Jun 19, 2017 10:12 PM

11/5/2013

Thanks for providing further commentary on the research related to the idea of text complexity, Dr. Shanahan. If you have it easily available, do you have a quick link to the articles you cite and a discussion of that research? Last time I looked at cited articles related to this discussion there wasn't a strong research base for unilaterally giving children reading material "above their instructional level."

Related to that comment, of critical importance I believe is being clear about what we're referring to with "instructional level." I appreciate that you defined the practice of 95% accuracy and higher as being potentially less effective, which is the range I would call "mastery" level. Indeed, I doubt there would be much support for only expecting kids to tackle mastery level material.

Most folks, though, consider "instructional level" to be lower - generally 90-95% accuracy, with further definitions involving rate as well. However, even beyond this particular definition, it's important to consider that "instructional level" means the level at which children can successfully perform, yet below their mastery level, with or without assistance. By definition, if a child can accomplish a task, the task is within either the child's instructional or mastery level.

As such, the essence of your argument is "push children to tackle the most difficult material possible within the child's instructional level." Not only do I think this is supported by research, but it's common sense.

The massive problem, from my experience, is attempting to describe "text complexity" as "giving children reading material more difficult than their instructional level." This is misleading and false. While it's true that some children may be able to tackle text above their decoding instructional level (but within their comprehension instructional level), we're still engaged in the basic task of expecting kids to work with material at the most difficult end of their instructional range.

If we phrased the text complexity discussion as such, I think we'd have a lot more people understanding what was meant.

One final piece of commentary - my experience has been that it would be profoundly inappropriate and unethical to assign children material based solely on their age or grade level with no consideration of available assessment data. While we DO want to challenge children to work at the upper limit of their instructional level (most difficult material still within instructional level), that "upper limit" will fluctuate based on the child. There is no support for the idea that 2 children in 3rd grade - one reading at the 1st grade level and one at the 4th grade level (in all reading areas) - would benefit the same from the same text. Many folks are under the impression that CCSS calls for teachers to ignore individual assessment data and assign reading based solely on grade level. In my opinion, this is unethical and unprofessional, and a huge step backward for the professional community.

Russ Walsh Jun 19, 2017 10:13 PM

11/6/2013
Tim,

As I am sure you are aware, there has been considerable criticism of the CCSS recommended text levels for grades 2 and 3. Indeed, there is good evidence that the pre-CCSS levels may be more appropriate for these young readers than those that have been adjusted "stair-step" style by the CCSS authors and MetaMetrics. There is no evidence that the adjusted levels are appropriate for grades 2 and 3. Hiebert and Van Sluys address the issue in their article "Three Assumptions about Text Complexity" in Goodman, Calfee, and Goodman's new book, Whose Knowledge Counts in Government Literacy Policies?

I address the issue here: http://russonreading.blogspot.com/2013/10/could-common-core-widen-achievement-gap.html

Timothy Shanahan Jun 19, 2017 10:13 PM

11/6/2013

Russ,
It's good to have opinions on this, but the only data that represent a direct test (an experimental study) of harder levels come from Morgan et al., and they don't support those opinions. Of course, there are correlational studies as well, at least some of which (Powell) are supportive. There is simply not good evidence showing that particular student-book matches facilitate learning. You need data on learning for that.

tim

Russ Walsh Jun 19, 2017 10:13 PM

11/8/2013

But Tim, haven't we been down this road before? The National Reading Panel took a very narrow view of what counted as research. They got changes in how teachers taught using "scientifically based research," and increases in time spent on the five areas they identified, but for all that effort, the Abt report found no improvement in student reading comprehension scores. Shouldn't we broaden our concept of what is useful research to include good old-fashioned "kid watching"?

Timothy Shanahan Jun 19, 2017 10:14 PM

11/8/2013

We can go that way, Russ, to a world in which all data are equal--but that is not how the scientific community approaches data. In that universe, correlations signal causation, logical analysis trumps data, and pretty much anything goes. That means the way we are doing things now is best and that we'll never be able to get most kids to college and career readiness by the time they leave high school. Personally, I'd place data above opinion--and since well-done research studies show that kids can learn from harder texts in grades 2 and up, I'd not be militating against those findings but trying to understand how they made it work.

Russ Walsh Jun 19, 2017 10:14 PM

11/8/2013

Fair enough. I see your point, but I would at least think that any experimental findings would need to be validated in real reading situations (through qualitative and case study research). Rather than a free-for-all, I would see it as continually seeking to move what we know forward. I just think the reading process is too complex to be fully understood through quantitative research, and I think that leads to reductionist policies, as it did with the NRP Report.

Timothy Shanahan Jun 19, 2017 10:14 PM

11/8/2013

The problem with an issue like this is that one group (I'm simplifying) is saying that you have to match kids to texts in a particular way... and another group says, "No, there is no magic student-text match. Kids can learn from a broad range of texts, even from texts the first group rejects as being too hard."

Your approach says that if kids manage to learn to read when they are placed as the first group says they should, that is evidence that the first group is right and that we shouldn't be so willing to change from that routine. But, of course, such evidence also supports the second group (since any student-text match works, including the one touted by the first group).

The issue isn't whether kids can learn from an instructional-level match (I promise you kids in Fountas & Pinnell classes, leveled-books classes, AR classes, my primary grade classes, etc., can and did learn to read).

The issue is whether we can get kids to even higher levels of performance if we place them in somewhat harder texts... And experimental and correlational research says that, indeed, under some circumstances placing students in harder text can have that result. When you respond to such data by changing standard practice, it makes great sense to watch those changes carefully to make sure they are working, but that isn't the same as watching current practice to see if it makes sense to change it.

Russ Walsh Jun 19, 2017 10:15 PM

11/8/2013

That makes sense, but what data were used to determine the new Lexile stair steps? Wouldn't an upward-swinging curve design, one that establishes strong and fluent early literacy and then moves to more difficult text, be a reasonable approach?

Timothy Shanahan Jun 19, 2017 10:15 PM

11/8/2013

I have to admit I don't understand how they analyzed data to make those determinations. However, I don't think the point is that these are the best levels to teach kids at (in terms of how much they will learn); rather, these are more aspirational levels that describe how well kids would have to read to get to the levels needed by the end of high school. (The Lexile levels that MetaMetrics has put forth over the years indicate the Lexiles at which the average student can read with 75-90% comprehension--without teacher assistance. These new levels don't provide such a metric, and they make the Lexile people very nervous.)

In fact, CCSS does NOT raise any text levels until Grade 2, and at Grade 2 the increases are very modest. So teachers have 2 or almost 3 years to do just what you say--get kids reasonably fluent before you start ramping up the text levels. That's built into the system already.

Russ Walsh Jun 19, 2017 10:16 PM

11/9/2013

Tim,

Thank you for engaging in the dialogue. I always learn from your carefully considered insights.

Timothy Shanahan Jun 19, 2017 10:17 PM

11/9/2013

Russ--

Thanks. You raise good questions (as usual).

Timothy Shanahan Jun 19, 2017 10:17 PM

11/9/2013
Edededucation--

I hesitated on whether to post edededucation's comment because it is so filled with misinformation that I didn't want to do a disservice to readers.

The writer is correct that there is not a strong research base for unilaterally giving children reading materials above their instructional level. But he/she cannot really claim to be following the research, because it is also correct that there is not a strong research base for unilaterally giving children reading materials at their instructional level either.

Then eded resorts to an old debating trick that confuses rather than convinces. He/she claims that I'm somehow wrong about what constitutes the instructional level (and indicates that the actual instructional level used by "most folks" places students in more complex materials). There are definitely variations in the instructional-level criteria (usually no more than 1 or 2 points' difference), but the instructional level that I was writing about, that Dick Allington was writing about, and that is used by all of the major core reading programs and remedial reading programs, the IRIs, the modern readability measures, and all the highly cited experts (including Fountas & Pinnell) is 95-98% accuracy and 75-89% comprehension.
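Since so much of this thread turns on exactly where the accuracy and comprehension cutoffs fall, here is a minimal sketch (in Python, purely illustrative; the function names and the way the two criteria are combined are my own assumptions, not anything drawn from the studies discussed) of how a running-record result would map onto frustration, instructional, and independent placements under the 95-98% accuracy and 75-89% comprehension bands described above. The cutoffs are parameters, so the 90-95% convention mentioned earlier in the comments can be swapped in.

```python
def band(value, frustration_below, independent_at):
    """Map a percentage onto 0 = frustration, 1 = instructional, 2 = independent."""
    if value < frustration_below:
        return 0
    if value < independent_at:
        return 1
    return 2


def classify_placement(words_read, errors, comprehension_pct,
                       acc_cutoffs=(95, 99), comp_cutoffs=(75, 90)):
    """Classify a student-text match from a running record.

    Default cutoffs follow the traditional criteria cited above
    (95-98% accuracy and 75-89% comprehension = instructional level);
    pass acc_cutoffs=(90, 96), for example, to approximate the 90-95%
    accuracy convention some commenters describe.
    """
    accuracy = 100.0 * (words_read - errors) / words_read
    acc_band = band(accuracy, *acc_cutoffs)
    comp_band = band(comprehension_pct, *comp_cutoffs)
    labels = ["frustration", "instructional", "independent"]
    # The harder (lower) of the two bands governs the placement.
    return labels[min(acc_band, comp_band)], round(accuracy, 1)


# Example: a 100-word passage read with 4 errors (96% accuracy) and 80% comprehension
print(classify_placement(words_read=100, errors=4, comprehension_pct=80))
# -> ('instructional', 96.0)
```

None of this is a placement recommendation; it only makes explicit the arithmetic the competing criteria imply.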

If you use the made-up criteria that you indicate are common (interesting that there were neither citations nor research showing the source of this fantasy), then you will definitely be placing students in texts that are harder than Allington was claiming is necessary for kids' learning. That puts you closer to the CCSS requirements, but I wonder if it puts you close enough to college and career readiness levels.

You indicate that there is no support for placing students in materials as difficult as CCSS calls for. That is actually untrue, and I would encourage you to read past blog entries on this site about text complexity (you'll find the references there as well).

Good luck.

Laura Jun 19, 2017 10:17 PM

11/9/2014

Thank you for this discussion. Unfortunately, the district heads of many schools are not as thoughtful when interpreting data and listen to various consultants instead of even having a rational discussion. This is what is happening in schools that use computer programs to assess Lexile levels for children K-6: the kids are put in a computer room, with or without quick, automated tech skills using the mouse, etc., and after a 30-minute assessment the program spits out a Lexile level/grade equivalent that is actually much, much lower than what the child is reading on their own or in programs such as EngageNY. These Lexile levels with grade equivalents appear on the "new and improved" report cards, and teachers are told to write in the scores. There is no discussion with parents, teachers, or kids. The unintended consequence: either reading is discouraged or an already unhealthy public-education-bashing environment keeps bubbling. The question is, how do computer programs assess accuracy? Is the 75% frustration level derived from comprehension questions? Or from accurate words read? That is not clear; no one here knows, yet the fact that districts are putting Lexile scores on report cards will trump a teacher's skilled judgment, which is based on reality. Can you please address this? Thank you, Laura

Timothy Shanahan Jun 19, 2017 10:18 PM

11/19/2014

Laura--

Those kinds of programs are highly accurate. The 75% they are going for is reading comprehension. Essentially, what they are saying is that texts with those characteristics (in terms of vocabulary and sentence complexity) are usually understood by kids at that age/grade with 75-89% comprehension. You are correct that this ignores fluency rates (though those comprehension levels are usually associated with oral reading accuracies in the mid-90s). Those levels are a good starting point, but teachers need to adjust up or down as needed to place kids at traditional instructional levels. Or, if you buy what I have been saying, then this program is probably placing kids at too low a level and you will want to move them to something more challenging. In grades 2 and up, I'd consider that.

