Blast from the Past: This blog first appeared as a series of articles (June 29–August 21, 2011), and this updated version was issued August 23, 2025. The original blogs were among the first to promote the idea of teaching reading with challenging text rather than “instructional level” text. At the time, this was new territory for me. As a teacher I taught with instructional level texts, and as a professor I prepared teachers to do the same. In 2011, there was a paucity of research on the issue, but that is no longer the case. On September 12, my new book, Leveled Reading, Leveled Lives (Harvard Education Press, 2025), will officially be published. It provides a comprehensive treatment of this issue, showing in detail that the instructional level doesn’t work as claimed and explaining why it couldn’t possibly work. It offers substantial guidance on how teachers may successfully teach reading with grade level text (as well as how reading can be guided successfully with such texts in content classes).
One last thing: this updated entry sources the idea of teaching with grade level texts to the Common Core standards. Over the past 15 years, many states have replaced those standards. Nevertheless, for the most part, they retained the Common Core’s text level requirements, which means this entry is as relevant today as it was then.
In 2010, the Common Core State Standards were issued (National Governors Association & Council of Chief State School Officers, 2010). These educational goals, which were soon adopted by most states, differed from previous standards in several important ways. Probably no ox was gored more impressively by these standards than the widely held claim that texts of a particular level of difficulty had to be used if learning was to be accomplished.
Since the 1930s, reading educators (including me) have championed the idea that there is an “instructional level.” Basically, the claim has been that students make the greatest learning gains when taught with books matched to their learning needs in terms of the level of difficulty they present. Teachers were to teach from texts neither too hard (incomprehensible) nor too easy (with nothing left to learn).
These days the biggest instructional level proponents have been Irene Fountas (Lesley University) and Gay Su Pinnell (The Ohio State University). Their “guided reading” approach has been widely adopted. The basic premises of guided reading include the notion that children learn to read by reading, that they benefit from a small amount of teacher guidance and support during this reading, and – most fundamentally – that this reading should be done with texts at just the right difficulty level. A major concern of these guided-readingistas has been the fear that “children are reading texts that are too difficult for them.”
Over the decades experts proposed a plethora of methods for determining students’ reading levels and text difficulty levels, along with schemes for matching books and kids. Instructional programs as varied as basal readers, units of study, technology-based instruction, and guided reading have all depended on such approaches.
Common Core Standards are based on a different premise. They reject the idea of an optimum student-text match that facilitates learning. Nor are they as smitten with the idea that students learn to read mainly from reading with minimal teacher support. They expect students to take on challenging texts with whatever amount of scaffolding may be needed to accomplish learning. By design, Common Core discourages much out-of-grade-level teaching or the use of high readability texts. It champions teaching methods that run counter to current practice.
Why make such a big deal out of grade-level text?
One persuasive piece of evidence was a report, “Reading Between the Lines,” published by American College Testing (ACT, 2006). It showed the primacy of text in reading comprehension and the educational value of having students reading challenging text in the upper grades.
Virtually every reading comprehension test and instructional program makes a big deal out of the types of questions asked about text. In our zeal to improve test performance, it is common practice to analyze test performance by question types and then to give students lots of practice with the types of questions they erred on. There are even commercial programs that offer practice with specific question types.
That ACT report reveals a problem with those schemes: they don’t work. They can’t work. Students’ reading performance can’t be differentiated in any meaningful way by type of question. Students perform no differently with literal recall items than with inferential ones (nor with other question types, such as main idea). If students read a hard passage, they answer fewer questions correctly, no matter the types of questions. They do better with easier texts, of course, but that improvement is not accompanied by gains with any particular kind of question.
ACT concluded that, based on data drawn from 563,000 students, “performance on complex texts is the clearest differentiator in reading between students who are likely to be ready for college and those who are not” (pp. 16–17).
Reading comprehension standards tend to be presented as numbered lists of cognitive processes or question types. Standards require students “to quote accurately from text,” to “determine two or more main ideas of a text,” or to “explain how main ideas are supported by key details,” and so on. But if question types (or standards) don’t distinguish reading performance while text difficulty does, then standards should make the ability to interpret challenging texts a central requirement.
That is exactly what Common Core did. They made the ability to comprehend texts of specified levels of difficulty a central requirement of what students must accomplish.
The ACT report describes text features that contribute to challenge, including the complexity of the relationships among characters and ideas, amount and sophistication of the information detailed in the text, how the information is organized, the author’s style and tone, the vocabulary, and author’s purpose. Obviously, if we want higher reading achievement we should teach students how to deal with these kinds of text features during reading, rather than practicing with different question types.
These data suggest that students are likely to learn more from working with challenging texts than from the “low readability, high interest” books that have become an education staple. This is an approach more akin to that taken by athletes: To get stronger, you must experience more physical resistance than your muscles are accustomed to.
The counterargument to this is the widespread belief that there is an optimum difficulty level for texts used to teach reading. According to instructional level theory, when texts are too difficult, students become frustrated and learn little. Accordingly, challenging text is to be avoided.
Evidence supporting this “easy book” idea is anemic; the best of it is merely correlational. Such a dearth of empirical support is surprising given the wide acceptance of this theory in practice.
I must admit that as a teacher I thought the approach was commonsensical and bought into it big time, testing each of my students with informal reading inventories and juggling multiple groups of kids who were reading different grade level texts.
When I worked on my PhD, I studied with the late Jack Pikulski. Jack had a great clinical sense, and he was skeptical of my faith in the instructional level. He recognized the limitations of those tests, and he was equally chary about the readability estimates. For Jack, the combination of two such rough guesstimates was rather iffy stuff. I preferred the seeming certainty of the approach and clung to it until my own clinical sense grew more sophisticated.
Early in my scholarly career, I read the source of this independent/instructional/frustration level system, the textbook, Foundations of Reading Instruction. In it, Emmett Betts (1946) attributed how he identified these levels to a doctoral study conducted by P. A. Kilgallon, one of his students.
I managed to get a copy of that study – it had never been published – and to my dismay it included no evidence showing that teaching at an instructional level gave students any learning advantage. Essentially, the instructional level was just made up, something I wrote about at the time (Shanahan, 1983).
A later set of studies aimed at validating this idea (e.g., Powell, 1968) concluded that the instructional level was placing students in books too easy to promote optimum learning. Unfortunately, these studies suffered from the same problems as the original Kilgallon investigation.
It took more than 50 years after the appearance of Betts’ book for someone to study the problem experimentally – that is, trying it out to see if it worked (Morgan, Wilcox, & Eldredge, 2000). That study – and others that followed (e.g., Brown, 2018; O’Connor, Swanson, & Geraghty, 2010) – concluded that kids made greater progress when taught reading with more challenging books.
We have placed far too much confidence in what was an untested theory, and which is now a failed one. The model of learning underlying this plan is simplistic, and implementing it in a way that maximizes learning gains is impossible.
Learning depends not only on the learner’s interactions with text, but on the teacher’s input to those interactions. Instructional level theory limits the role of the teacher by focusing on texts selected purposely to require little such support, and it ignores that such text placements necessarily impose an upper bound – a low one – on what students can learn from them.
Instead of maximizing student learning, it minimizes teaching. That is probably why recent research has found that the schools with the greatest achievement gains teach with grade level texts, rather than continually dropping back to the kids’ purported levels (TNTP, 2024).
References
ACT. (2006). Reading between the lines. Iowa City, IA: American College Testing.
Betts, E. A. (1946). Foundations of reading instruction. New York: American Book Company.
Morgan, A., Wilcox, B. R., & Eldredge, J. L. (2000). Effect of difficulty levels on second-grade delayed readers using dyad reading. Journal of Educational Research, 94, 113–119.
National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards for English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: Authors.
O’Connor, R. E., Swanson, H. L., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102, 1–19.
Pinnell, G. S., & Fountas, I. C. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.
Powell, W. R. (1968). Reappraising the criteria for interpreting informal inventories. Washington, DC: ERIC 5194164.
Shanahan, T. (1983). The informal reading inventory and the instructional level: The study that never took place. In L. Gentile, M. L. Kamil, & J. Blanchard (Eds.), Reading research revisited, (pp. 577–580). Columbus, OH: Merrill.
Shanahan, T. (2025). Leveled reading, leveled lives. Cambridge, MA: Harvard Education Press.
TNTP. (2024). Paths of opportunity: What it will take for all young people to thrive. The New Teacher Project. https://tntp.org/publication/paths-of-opportunity/