What Are the Best Fluency Learning Targets? I Think My School is Overdoing It

31 January, 2026


Teacher question:

I am a literacy interventionist at an elementary school, and we use DIBELS for our progress monitoring. While I recognize the value of DIBELS as a screening tool, I have concerns about the appropriateness of the fluency benchmarks my school has adopted. I have found some research that identifies fluency goals calibrated to reading comprehension. Studies by O'Connor (2017) and Cogo-Moreira et al. (2023) identify specific words-per-minute cut-offs for reading speed and accuracy – the minimum values needed to comprehend texts. These wpm goals are much lower than our fluency goals. If the ultimate goal of reading is understanding the text, I wonder whether these research-based targets would be more appropriate for many of our students.

Shanahan responds:
Several years ago, a friend of mine was developing a remedial reading program. He wanted to set fluency benchmarks.

I hadn’t thought much about that problem. I had chaired the National Reading Panel subcommittee on fluency instruction. I knew that fluency teaching improved fluency and, in most studies, reading comprehension along with it.

But how fluent did kids have to be?

There were no studies that had addressed the problem in quite that way (I thought), but there were some fluency norms that could provide a clue.

My first thought was that they should aim for the 50th percentile. For example, the norms indicated that the average second grader ends the year able to read about 100 words correct per minute (wcpm) (Hasbrouck & Tindal, 2017). For me, that would have been the second-grade target.

My reasoning was straightforward: kids who reached the 50th percentile would not have a fluency problem. If they were struggling to make sense of a text, it wouldn’t be because the words were slowing them down.

My friend was not satisfied. He wondered, “Why wouldn’t the 40th or 45th percentile be adequate?”

Many years ago, Keith Stanovich (1984) described reading as an “interactive-compensatory” process. What he was getting at was that reading involves a constellation of varied skills and abilities.

Think, for instance, of the “simple view of reading.” That model describes reading comprehension as the product of two sets of abilities: decoding and language comprehension. To read you must translate print to language and then you must do what you do to understand language. Decoding and language abilities must interact.
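
In formula form – this is the usual rendering of Gough and Tunmer’s model, not something unique to this post – with each component expressed as a proportion from 0 to 1:

```latex
% Simple view of reading: comprehension is a product, not a sum.
% RC = reading comprehension, D = decoding, LC = language comprehension.
RC = D \times LC
% Worked example: D = 0.5, LC = 1.0  =>  RC = 0.5
% A weakness in either factor caps comprehension; a zero in either zeroes it.
```

The multiplication matters: strong language ability cannot fully rescue weak decoding, which is where the compensation described next runs into its limits.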

What happens if readers struggle with one of these skills? According to Stanovich and scads of research, readers try to rely on their relative strength to compensate for the limitation. When readers struggle to decode, they don’t quit in a snit; they make like AI, trying to guess the word, using what they know about the semantics and syntax of the language to compensate for their decoding limitations.

Fluency – like reading in general – is a bit of a mash-up. It relies on both decoding skills and language knowledge (along with executive functioning, reasoning ability, and knowledge). Kids who reach the 50th percentile in fluency are not necessarily all the same. Some may rely more on decoding, while others compensate with their other abilities. Achieving average on a fluency test won’t guarantee average decoding skills, but it seems very unlikely that such students would be particularly low in decoding. You can only compensate so much.

The research that you noted is interesting. The researchers were doing exactly what you said: trying to identify the degree of fluency necessary to enable adequate comprehension.

There is a long line of research on this topic – something those modern researchers seem to be unaware of. Back in the 1940s, Emmett Betts (1946) wasn’t trying to establish a fluency learning objective. No, his purpose was to determine an appropriate level of text to use for instruction. He theorized, without evidence, that kids could only improve their reading when working with texts they comprehended, and he decided, again without any evidence, that an adequate degree of comprehension meant kids could answer 75-89% of the questions about a text.

He concluded, based on the kinds of studies researchers are doing now, that kids were best taught with texts they could read with 95% accuracy. When someone refers to “reading levels” that’s what they mean (or some variation on those criteria).

Later studies (Dunkeld, 1970; Powell, 1968) accepted Betts’s theory but challenged his criteria. They reported a lot of variation across the grades. In other words, different degrees of accuracy were needed to reach the target comprehension levels, depending on the grade.

The more recent studies don’t just consider accuracy – that is, the percentage of words read correctly. They look at a combination of speed and accuracy: the number of words students can read correctly per minute (wcpm). This approach provides a more reliable estimate of fluency, especially if it is based on multiple texts or longer reading durations.
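
For readers who like to see the arithmetic, here is a minimal sketch of that calculation; the passage numbers are hypothetical, and in practice examiners often take the median across several passages to steady the estimate:

```python
def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Words correct per minute: correctly read words scaled to one minute."""
    return (words_attempted - errors) / (seconds / 60)

# Hypothetical reading: 110 words attempted with 6 errors in 75 seconds.
print(round(wcpm(110, 6, 75)))  # -> 83
```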

These newer studies, like those from the 1960s and 1970s, are reporting that kids don’t need to be especially fluent to comprehend. For instance, the fluency norms report that at the end of grade 2, the average student can read about 100 wcpm. Various studies say that 43 (Alves et al., 2021), 47 (Cogo-Moreira et al., 2023), or 78 wcpm (O’Connor, 2017) is all that is needed to allow successful comprehension.

Or for grade 4: the norms say 133 wcpm, while the amount of fluency needed to ensure comprehension in these studies is 71 (Alves et al., 2021), 79 (Cogo-Moreira et al., 2023), or 70 wcpm (O’Connor, 2017).

The researchers who have published these findings are appropriately cautious; they recognize that with different sets of texts or larger and more diverse samples of kids, the results are likely to vary quite a bit. This is because the standard deviations are large for this ability both in their studies and in the norms.

Those DIBELS targets are not just a seat-of-the-pants estimate like my notion of aiming for the 50th percentile. Nor are they an attempt to predict reading comprehension. Their targets are based on the connection between their oral reading fluency scores and performance on state tests – a more distant and generalized measure of reading ability than the ones used in these studies. Basically, their benchmarks are linked more to learning progress than to comprehension (University of Oregon, 2020). Accordingly, their targets are much closer to the averages I had recommended. For grade 2, the norms say 100 wcpm, and DIBELS aims for 94; for grade 3, it’s 112 and 114; for grade 4, 133 and 125; and so on.
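
To make that comparison concrete, here is a toy sketch using only the figures quoted above; the function name and the interpretive wording are mine, not anything DIBELS publishes:

```python
# Figures quoted above: Hasbrouck & Tindal (2017) spring 50th-percentile norms
# and DIBELS end-of-year benchmarks, both in wcpm.
NORM_50TH = {2: 100, 3: 112, 4: 133}
DIBELS_GOAL = {2: 94, 3: 114, 4: 125}

def fluency_ruled_out(grade: int, score: float) -> bool:
    """Rough screen: at or above the benchmark, fluency is an unlikely
    explanation for weak comprehension; below it, fluency stays suspect."""
    return score >= DIBELS_GOAL[grade]

print(fluency_ruled_out(4, 118))  # False - 118 wcpm is below the 125 goal
```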

What is it that DIBELS (and other test makers) are claiming with their target criteria? They are not claiming that the accomplishment of those levels of fluency will guarantee high performance on your state tests. Reading is too complex for that.

No, they are saying that if your kids are that fluent, you can rule fluency out as a reason for low reading comprehension performance. Another way of saying this: given those levels of fluency, if your kids are also adequate in all their other abilities (vocabulary, for instance), then they should have good enough reading comprehension.

Personally, if my school were using DIBELS or one of these other testing regimes, I would use their targets. Without those kinds of tools, aiming for the 50th percentile may be a bit high, but only a bit. It is a reasonable target. In any event, since there is more than one way to comprehend a text, preparing kids to be only fluent enough to comprehend a given text is too low a standard if we want our kids to be able to read a wide range of texts well.

References

Alves, L. M., Santos, L. F. D., Miranda, I. C. C., Carvalho, I. M., Ribeiro, G. L., Freire, L. S. C., Martins-Reis, V. O., & Celeste, L. C. (2021). Reading speed in elementary school and junior high [Evolução da velocidade de leitura no Ensino Fundamental I e II]. CoDAS, 33(5), e20200168. https://doi.org/10.1590/2317-1782/20202020168

Betts, E. (1946). Foundations of reading instruction. New York: American Book Co.

Cogo-Moreira, H., Molinari, G. L., Carvalho, C. A. F., Kida, A. S. B., Lúcio, P. S., & Avila, C. R. B. (2023). Cut-off point, sensitivity and specificity for screening the reading fluency in children [Pontos de corte, sensibilidade e especificidade para rastreamento da fluência leitora em crianças]. CoDAS, 35(3), e20210263. https://doi.org/10.1590/2317-1782/20232021263pt

Dunkeld, C. G. (1970). The validity of the informal reading inventory for the designation of instructional reading levels: A study of the relationships between children’s gains in reading achievement and the difficulty of instructional materials. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.

Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.

O’Connor, R. E. (2017). Reading fluency and students with reading disabilities: How fast is fast enough to promote reading comprehension? Journal of Learning Disabilities, 51(2), 124-136. https://doi.org/10.1177/0022219417691835

Powell, W. R. (1968). Reappraising the criteria for interpreting informal inventories. In D. L. DeBoer (Ed.), Reading diagnosis and evaluation (pp. 100-109). Newark, DE: International Reading Association.

Stanovich, K. E. (1984). The interactive-compensatory model of reading: A confluence of developmental, experimental, and educational psychology. RASE: Remedial & Special Education, 5(3), 11-19. https://doi.org/10.1177/074193258400500306

University of Oregon. (2020). Dynamic Indicators of Basic Early Literacy Skills (DIBELS, 8th ed.). Eugene, OR: University of Oregon. https://dibels.uoregon.edu


Comments

Julie Brown Jan 31, 2026 03:26 PM

I am the literacy specialist at my school and we use ALO to progress monitor students each week in WPM and accuracy. Since the end-of-year benchmarks are designed to measure when students are out of risk for fluency failure (around the 40th percentile), I implemented a goal for students to reach the 60th-65th percentile (with 90% accuracy) before they are exited from weekly progress monitoring. My thinking is to ensure students are stably fluent with grade-level text, well beyond the ALO goals of the 40th percentile. This will also, hopefully, account for summer reading slide.

I also use this approach for ALO's other measures (FSF, PSF, NWF, and MAZE).

Thoughts on this approach?

Dr. Bill Conrad Jan 31, 2026 03:26 PM

Well done article, Tim.

I would argue that many teachers and administrators fail to understand the reason why fluency contributes to better reading comprehension.

In order to comprehend text well, students need to be able to read with automaticity. If students struggle to decode words, most of their mental energy will be devoted to this task. If students can decode words with automaticity, more mental energy can be devoted to comprehension.

It is an important indicator, as you suggest. However, it is not the whole story.

The general lack of professionalism in teaching will lead most teachers to mechanically administer the fluency tests without a deeper understanding of how fluency contributes to comprehension.

Working backwards from the fluency levels of 3rd grade students who read at grade level seems to me to be the best approach to assigning benchmark reading scores in earlier grades. Developers of tests like DIBELS take on this crucial psychometric work. It is well beyond the capacity of local school districts to engage in this statistical work.

No?

Jo Anne Grosd Jan 31, 2026 03:36 PM

I like this!
I believe orthographic mapping improves Fluency.
Julie
Your comment about weekly monitoring re Fluency, that’s IMO not realistic.
Monthly is more like it.

Timothy Shanahan Jan 31, 2026 03:38 PM

Jo Anne--
Not even monthly. Given the standard error of those tests and the rate of growth of students, three times a year should be sufficient -- at the beginning of each semester and end of the year.

tim

Timothy Shanahan Jan 31, 2026 03:42 PM

Julie--
Weekly fluency monitoring is unreasonable and wasteful. Use that time to teach fluency rather than to test it. No reader makes meaningful, measurable growth in fluency in a week. Ask your test provider for the standard error of measurement of their test and take a look at the growth rates in the fluency norms. Growth has to exceed that standard error before you can measure it. Don't waste kids' learning time.
tim
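
To see why even monthly checks strain against measurement error, here is a back-of-the-envelope sketch of the standard reliable-change arithmetic; the SEM and growth figures are assumptions for illustration, not values from any particular test manual:

```python
import math

sem = 8.0            # assumed standard error of measurement, in wcpm
weekly_growth = 1.0  # assumed typical fluency growth, in wcpm per week

# The standard error of a difference between two scores is sqrt(2) * SEM;
# a gain must exceed about 1.96 times that to stand out from testing noise.
reliable_change = 1.96 * math.sqrt(2) * sem   # about 22 wcpm
weeks_needed = reliable_change / weekly_growth
print(round(weeks_needed))  # about 22 weeks between trustworthy comparisons
```

On those assumptions, a detectable change takes roughly a semester to accumulate, which is consistent with testing about three times a year.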

Meredith Jan 31, 2026 04:21 PM

Hi!! I have been trying to communicate to my first grade teachers the importance of having their students read DIBELS passages for exposure. They have been assessing them using decodable passages from a curriculum. Decodable passages should be used for a time. Could you give me some talking points to help them understand? Thank you!

Dr. Bill Conrad Jan 31, 2026 04:34 PM

Meredith:
DIBELS is a valid and reliable monitoring assessment tool.

It is not intended to be used instructionally as that would destroy its assessment validity.

Your assessment illiteracy is showing. Time to take a refresher course on assessment! No?
