What Are the Best Fluency Learning Targets? I Think My School is Overdoing It

31 January, 2026

Teacher question:

I am a literacy interventionist at an elementary school, and we use DIBELS for our progress monitoring. While I recognize the value of DIBELS as a screening tool, I have concerns about the appropriateness of the current fluency benchmarks my school has adopted. I have found some research that identifies fluency goals calibrated to reading comprehension. Studies by O'Connor (2017) and Cogo-Moreira et al. (2023) identify specific words-per-minute benchmarks to establish a cut-off point for reading speed and accuracy to obtain minimum values for comprehending texts. These wpm goals are much lower than our fluency goals. If the ultimate goal of reading is understanding the text, I wonder if these research-based targets would be more appropriate goals for many of our students.

Shanahan responds:
Several years ago, a friend of mine was developing a remedial reading program. He wanted to set fluency benchmarks.

I hadn’t thought much about that problem. I had chaired the National Reading Panel subcommittee on fluency instruction. I knew that fluency teaching improved fluency and, consequently, in most studies, reading comprehension.

But how fluent did kids have to be?

There were no studies that had addressed the problem in quite that way (I thought), but there were some fluency norms that could provide a clue.

My first thought was that they should aim for the 50th percentile. For example, the norms indicated that the average second grader ends the year able to read about 96 words correct per minute (wcpm) (Hasbrouck & Tindal, 2017). For me, that would have been the second-grade target.

My reasoning was straightforward: kids who reached the 50th percentile would not have a fluency problem. If they were struggling to make sense of text, it wouldn’t be because they were struggling with the words.

My friend was not satisfied. He wondered, “Why wouldn’t the 40th or 45th percentile be adequate?”

Many years ago, Keith Stanovich (1984) described reading as an “interactive-compensatory” process. What he was getting at was that reading involves a constellation of varied skills and abilities.

Think, for instance, of the “simple view of reading.” That model describes reading comprehension as the product of two sets of abilities: decoding and language comprehension. To read you must translate print to language and then you must do what you do to understand language. Decoding and language abilities must interact.

What happens if readers struggle with one of these skills? According to Stanovich and scads of research, readers try to rely on their relative strength to compensate for the limitation. When readers struggle to decode, they don’t quit in a snit; they make like AI, trying to guess the word, using what they know about the semantics and syntax of language to compensate for decoding limitations.

Fluency – like reading in general – is a bit of a mash-up. It relies on both decoding skills and language knowledge (along with executive functioning, reasoning ability, and knowledge). Kids who reach the 50th percentile in fluency are not necessarily all the same. Some kids may rely more on decoding, while others may compensate with some of their other abilities. Achieving average on a fluency test won’t guarantee average decoding skills, but it seems very unlikely that such students would be particularly low in decoding. You can only compensate so much.

The research that you noted is interesting. The researchers were doing exactly what you said: trying to identify the degree of fluency necessary to enable adequate comprehension.

There is a long line of research on this topic – something those modern researchers seem to be unaware of. Back in the 1940s, Emmett Betts (1946) wasn’t trying to establish a fluency learning objective. No, his purpose was to determine an appropriate level of text to use for instruction. He theorized, without evidence, that kids could only improve their reading when working with texts they comprehended, and he decided, again without any evidence, that an adequate degree of comprehension meant kids could answer 75-89% of the questions about a text.

He concluded, based on the kinds of studies researchers are doing now, that kids were best taught with texts they could read with 95% accuracy. When someone refers to “reading levels,” that’s what they mean (or some variation on those criteria).

Later studies (Dunkeld, 1970; Powell, 1968) accepted Betts’s theory but challenged his criteria. They reported a lot of variation across the grades. In other words, different degrees of fluency were needed to ensure the target comprehension levels depending on grade level.

The more recent studies don’t just consider accuracy – that is, the percentage of words read correctly. They look at a combination of speed and accuracy: the number of words students can read correctly per minute (wcpm). This approach provides a more reliable estimate of fluency, especially if conducted with multiple texts or for longer reading durations.
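As a quick illustration (my own sketch, not any test publisher’s protocol), the wcpm figure these studies rely on can be computed from a timed oral reading like this:

```python
def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Words correct per minute from a timed oral reading.

    words_attempted: total words the student read in the timed sample
    errors: words misread, substituted, or skipped
    seconds: length of the timed reading
    """
    if seconds <= 0:
        raise ValueError("seconds must be positive")
    return (words_attempted - errors) * 60.0 / seconds

# A student who attempts 105 words with 5 errors in one minute
# scores 100 wcpm.
print(wcpm(105, 5, 60.0))  # 100.0
```

Averaging several such probes over different passages, as noted above, damps the passage-to-passage variation.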

These newer studies, like those from the 1960s and 1970s, are reporting that kids don’t need to be especially fluent to comprehend. For instance, the fluency norms report that at the end of Grade 2, the average student can read about 100 wcpm. Various studies say that 43 (Alves et al., 2021), 47 (Cogo-Moreira et al., 2023), or 78 wcpm (O’Connor, 2017) is all that is needed to allow successful comprehension.

Or for grade 4: the norms say 133 wcpm, while the amounts of fluency needed to ensure comprehension in these studies are 71 (Alves et al., 2021), 79 (Cogo-Moreira et al., 2023), and 70 wcpm (O’Connor, 2017).

The researchers who have published these findings are appropriately cautious; they recognize that with different sets of texts or larger and more diverse samples of kids, the results are likely to vary quite a bit. This is because the standard deviations are large for this ability both in their studies and in the norms.

Those DIBELS targets are not just a seat-of-the-pants estimate like my notion of aiming for the 50th percentile. Nor are they an attempt to predict reading comprehension. Their targets are based on the connection between their oral reading fluency scores and performance on state tests – a more distant and generalized measure of reading ability than the ones used in these studies. Basically, their benchmarks are linked more to learning progress than to comprehension (University of Oregon, 2020). Accordingly, their targets are much closer to those averages that I had recommended. For grade 2, the norms say 100 wcpm and DIBELS aims for 94; for grade 3, it’s 112 and 114; for grade 4, 133 and 125; and so on.

What is it that DIBELS (and other test makers) are claiming with their target criteria? They are not claiming that the accomplishment of those levels of fluency will guarantee high performance on your state tests. Reading is too complex for that.

No, they are saying that if your kids are that fluent, you can cross fluency off the list of possible reasons for low reading comprehension performance. Another way of saying this is that, given those levels of fluency, if your kids are also adequate in all their other abilities (like vocabulary, for instance), then they should have good enough reading comprehension.

Personally, if my school were using DIBELS or one of these other testing regimes, I would use their targets. Without those kinds of tools, aiming for the 50th percentile may be a bit high, but only a bit. It is a reasonable target. In any event, since there is more than one way to comprehend a text, preparing kids to be only fluent enough to comprehend a given text is just too low a standard if we want our kids to be able to read a wide range of texts well.

References

Alves, L. M., Santos, L. F. D., Miranda, I. C. C., Carvalho, I. M., Ribeiro, G. L., Freire, L. S. C., Martins-Reis, V. O., & Celeste, L. C. (2021). Reading speed in elementary school and junior high [Evolução da velocidade de leitura no Ensino Fundamental I e II]. CoDAS, 33(5), e20200168. https://doi.org/10.1590/2317-1782/20202020168

Betts, E. (1946). Foundations of reading instruction. New York: American Book Co.

Cogo-Moreira, H., Molinari, G. L., Carvalho, C. A. F., Kida, A. S. B., Lúcio, P. S., & Avila, C. R. B. (2023). Cut-off point, sensitivity and specificity for screening the reading fluency in children [Pontos de corte, sensibilidade e especificidade para rastreamento da fluência leitora em crianças]. CoDAS, 35(3), e20210263. https://doi.org/10.1590/2317-1782/20232021263pt

Dunkeld, C. G. (1970). The validity of the informal reading inventory for the designation of instructional reading levels: A study of the relationships between children’s gains in reading achievement and the difficulty of instructional materials. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.

Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.

O’Connor, R. E. (2017). Reading fluency and students with reading disabilities: How fast is fast enough to promote reading comprehension? Journal of Learning Disabilities, 51(2), 124-136. https://doi.org/10.1177/0022219417691835

Powell, W. R. (1968). Reappraising the criteria for interpreting informal inventories. In D. L. DeBoer (Ed.), Reading diagnosis and evaluation (pp. 100-109). Newark, DE: International Reading Association.

Stanovich, K. E. (1984). The interactive-compensatory model of reading: A confluence of developmental, experimental, and educational psychology. RASE: Remedial & Special Education, 5(3), 11–19.  https://doi.org/10.1177/074193258400500306

University of Oregon. (2020). Dynamic Indicators of Basic Early Literacy Skills (DIBELS, 8th ed.). Eugene, OR: University of Oregon. https://dibels.uoregon.edu


Comments

Julie Brown Jan 31, 2026 03:26 PM

I am the literacy specialist at my school and we use ALO to progress monitor students each week in WPM and accuracy. Since the end-of-year benchmarks are designed to measure when students are out of risk for fluency failure, around the 40th percentile, I implemented a goal for students to reach 60-65% targets (with 90% accuracy) before they are exited from weekly progress monitoring. My thought is to ensure students are stably fluent with grade-level text well beyond the ALO goals of 40%. This will also, hopefully, account for summer reading slide.

I also use this for ALO's other measures (FSF, PSF, NWF, and MAZE).

Thoughts on this approach?

Dr. Bill Conrad Jan 31, 2026 03:26 PM

Well done article, Tim.

I would argue that many teachers and administrators fail to understand the reason why fluency contributes to better reading comprehension.

In order to comprehend text well, students need to be able to read with automaticity. If students struggle to decode words, most of their mental energy will be devoted to this task. If students can decode words with automaticity, more mental energy can be devoted to comprehension.

It is an important indicator as you suggest. However, it is not the whole story.

The general lack of professionalism in teaching will lead most teachers to mechanically administer the fluency tests without a deeper understanding of how fluency contributes to comprehension.

Working backwards from the fluency levels of 3rd grade students who read at grade level seems to me to be the best approach to assigning benchmark reading scores in earlier grades. Developers of tests like DIBELS take on this crucial psychometric work. It is well beyond the capacity of local school districts to engage in this statistical work.

No?

Jo Anne Grosd Jan 31, 2026 03:36 PM

I like this!
I believe orthographic mapping improves Fluency.
Julie
Your comment about weekly monitoring re Fluency, that’s IMO not realistic.
Monthly is more like it.

Timothy Shanahan Jan 31, 2026 03:38 PM

Jo Anne--
Not even monthly. Given the standard error of those tests and the rate of growth of students, three times a year should be sufficient -- at the beginning of each semester and end of the year.

tim

Timothy Shanahan Jan 31, 2026 03:42 PM

Julie--
Weekly fluency monitoring is unreasonable and wasteful. Use that time to teach fluency rather than to test it. No reader makes meaningful measurable growth in fluency in a week. Ask your test provider what the standard error of measurement is of their test and take a look at the growth rates in fluency in the fluency norms. You need to have growth greater than that standard error before you'll be able to get measurable growth. Don't waste kids' learning time.
tim

Meredith Jan 31, 2026 04:21 PM

Hi!! I have been trying to communicate to my first grade teachers the importance of having their students read DIBELS passages for exposure. They have been assessing them using decodable passages from a curriculum. Decodable passages should be used for a time. Could you give me some talking points to help them understand? Thank you!

Dr. Bill Conrad Jan 31, 2026 04:34 PM

Meredith:
DIBELS is a valid and reliable monitoring assessment tool.

It is not intended to be used instructionally as that would destroy its assessment validity.

Your assessment illiteracy is showing. Time to take a refresher course on assessment! No?

Christine Anselmi Jan 31, 2026 05:36 PM

There are, of course, situations where students are on target with fluency but reading comprehension is low. Tim, you mentioned that "to read you must translate print to language and then you must do what you do to understand language." But did you mean to say "to comprehend you must....."? To read you really only have to do one thing...decode. To comprehend, you must decode, understand language, interpret, access prior knowledge, visualize etc. If a student is fluent but not comprehending adequately it can be a number of issues: lack of vocabulary understanding, passive reading (not monitoring while reading), lack of background knowledge, not visualizing what he/she is reading, or other factors.
I would love more of your insights into the factors that handicap comprehension when fluency is adequate, especially around visualization. I learned (a little too late) from my now 23-year-old son (who says he hates to read) that he does not picture anything in his mind when he reads. It never occurred to me that something like that was "a thing" with which some students struggle. I realize I am not necessarily commenting on this week's post, but asking about information on a completely different topic - how to assess and then support students who have reading visualization problems. Visualization seems like a critical part of how someone would access meaning from text.
Thanks! It's always a pleasure to read your posts!

Sophie Turner Jan 31, 2026 06:46 PM

In regard to the fluency monitoring that Julie and Jo Anne were discussing, what should be measured weekly, if not fluency, to determine the learner’s response to intervention and whether changes to the intervention are needed?

Timothy Shanahan Jan 31, 2026 10:26 PM

Sophie-
There is no test that can reliably show growth weekly. None. Using fluency as the example (but we could focus on decoding or sight vocabulary or reading comprehension, etc.), let's say you test a youngster on Friday and he reads 50 wcpm. Then next Friday, you test him again, and now he reads 52 wcpm (or 48 wcpm). Does that 2-word improvement (or decline) indicate that your intervention is working (or not)? Those tests have standard errors of measurement of 10 words or more. That means that if you give that test again and again to that student without any learning or forgetting taking place, 67% of the test scores will fall between 40 and 60. A 48, a 50, and a 52 are, as far as we can tell, the same score -- there is no meaningful difference.
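This point about measurement error can be simulated. Here is a hypothetical sketch, assuming normally distributed measurement error and the 10-wcpm standard error figure mentioned above (not numbers from any particular test manual):

```python
import random

random.seed(42)  # reproducible illustration

TRUE_SCORE = 50   # the student's "real" fluency, in wcpm
SEM = 10          # assumed standard error of measurement

# Simulate retesting the same student 10,000 times with no learning
# or forgetting -- every difference between scores is pure noise.
observed = [random.gauss(TRUE_SCORE, SEM) for _ in range(10_000)]

share_in_band = sum(40 <= s <= 60 for s in observed) / len(observed)
print(f"Scores within one SEM of the true score: {share_in_band:.0%}")
# Roughly two-thirds of the retest scores land between 40 and 60, so a
# week-to-week move from 50 to 52 (or 48) cannot be distinguished
# from measurement noise.
```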

If you want a weekly monitoring of the quality of your intervention, look to see if the kids have learned what you taught this week. If there were sight words you taught, how many or what percentage of those do the kids still know? If you taught certain spelling patterns, can the kids decode or spell those words or those kinds of words? If you had the students practicing fluency, can they still read those passages fluently (and better than they did when you started)? Those checks won't allow me to predict how well kids will do on their benchmark tests at the end of the semester or how well they are likely to do on their state tests, but they will reveal whether the things that you are teaching are actually getting learned. If you are teaching the right stuff, it will work out fine.

tim

Sebastian Wren Jan 31, 2026 07:59 PM

In our intervention program in Texas (Literacy First), we set our "exit criteria" a little lower than typical 50th percentile norms -- if 2nd grade students are reading around 70 to 80 words per minute with unfamiliar text, we see that as a student who is NOT on grade level, but who also does NOT need an intervention outside the class. Our goal is to support the students' decoding and fluency skills to a point where they can access and succeed with high-quality classroom instruction. We have had to set our own norms and criteria in Spanish, but they are not terribly different from the norms we use in English. In a perfect world, we would provide targeted and individualized interventions to students who are significantly below grade level norms, and we would continue that support until they reach, approximately, the 40th percentile in standardized measures of reading skill. From there, the classroom teacher can work with the student to help them fully close the gap with grade-level expectations.

Victoria Haliburton Jan 31, 2026 08:24 PM

HUGE problem: the Lake Wobegon Effect. Garrison Keillor describes his mythical community of Lake Wobegon "where all of the kids are above average". If you think about that for a minute you will realize that is impossible. If you set your benchmark at the fiftieth percentile, you *guarantee* that half of all students won't meet that benchmark. Think that one through again. That's exactly what fiftieth percentile means. Nope.

Timothy Shanahan Jan 31, 2026 10:13 PM

Victoria--
What you describe is true -- but only if certain assumptions are in place. One assumption is that currently everyone is doing their best to accomplish sufficient fluency. That certainly is not true when it comes to oral reading fluency. Large numbers of kids get little support in becoming fluent. A second assumption that you are making is that we are renorming fluency constantly -- which would mean that anyone who manages to increase their words correct per minute may stay in the same place (or may even slip lower) with regard to percentiles. That, too, is not the case here. Our norms are almost 10 years old (and when we compare them with earlier norms, there is not much of a change). It is possible to get many more kids -- much larger percentages of kids -- over those 2017 averages. Let's say we got all second graders to exceed the current 50th percentile (about 100 wcpm). New norms might indicate low/no increases for them, but they would still be reading fluently enough to ensure reasonably high comprehension.

tim

Timothy Shanahan Jan 31, 2026 11:19 PM

Christine-
I see where you are coming from, so in a way I do mean for reading comprehension. However, I meant it the way I wrote it because I don't accept reading a text without comprehension to be reading. Reading is a complete act of using print to figure out an author's meaning. The list of items in your note is a good one -- there are several things that can facilitate or undermine reading comprehension, including: decoding, text reading fluency, vocabulary, morphology, syntax, cohesion, discourse structure, prior knowledge, inattention, graphics, literary devices, etc. But, basically, the simple view is correct (all those items fit into decoding or language comprehension). Making sure that students are proficient with fluency cannot guarantee comprehension, but without it, comprehension will be very unlikely. It is an enabler -- a necessary but insufficient condition).
tim

Timothy Shanahan Jan 31, 2026 11:26 PM

Meredith--

Absolutely not. Your students should NOT be practicing DIBELS passages (or any other test passages). That may make some students appear to be reading better than they are -- which means that we will be able to hide from the students and parents that the student needs help (definitely not a good thing to do for the student). Fluency practice should take place with texts that will NOT be used for assessment purposes.

tim

Lauren Jan 31, 2026 11:55 PM

A little off topic, but is there a significant difference in fluency with students' ability to read out loud vs. reading to themselves silently? I would think that they would read more with better comprehension. Reading out loud can come with challenges in the affective domain: Embarrassment, anxiety, social pressure etc... Obviously it would be difficult to assess, but I was just curious if anyone has looked into this? Do I have students who do poorly on oral reading fluency tests, but read quite a bit better silently to themselves, or is it a pretty close match?

Ami LaRoche Feb 01, 2026 12:12 AM

Hi Tim, long time lurker, first time poster. I was wondering about your response to Sophie. I agree 100% that week to week scores are not good indicators of improvement, but can’t we see a lack of progress faster with weekly check-ins than with biweekly or monthly? I’m talking solely about students in Tier 3 intensive interventions, not the general classroom.

If it takes (as I’ve been told again and again) 5 to 7 data points to see a reliable trend, then why wait for 10 weeks or 10 months to see something might not be working? If I have 5 to 7 data points over 5 to 7 weeks, I can see if that slope is flat, negative, or trending up but still well below the aimline. Then I know I need to reach out to an instructor and ask them what they are seeing during their sessions with that student.

That was an exaggeration, of course, most teachers will see that a student is struggling long before those ten weeks are up. But not all.

I worry that if we relied more on mastery of weekly content for RTI progress monitoring, then everyone could be in for a big, unpleasant surprise when that middle or end of year benchmark comes up and the student who was scoring 100% on the weekly mastery quiz of the three irregularly spelled high-frequency words they learned hasn’t had that translate into grade-level gains.

Why not build those informal formative assessments for understanding and mastery into the lessons, but still grab a one-minute probe to see how that instruction is also impacting a child’s ability to catch up to their peers?

If the ultimate goal is to get students reading at a rate of fluency with accuracy that allows them to comprehend grade-level text at least as competently as their 40th - 60th %ile peers, then wouldn’t a probe that measures and predicts (with enough data points) the likelihood of that happening on a test later on be useful? Could there be a risk of students becoming proficient at applying the skills they learned that week to a very similar-looking quiz, but then when given material that’s unfamiliar to them, they struggle to apply these skills?

I realize this isn’t a risk if one is teaching the ‘right stuff’, but what if you aren’t?

How do you know before it’s too late, especially if you are a newer, less experienced teacher?

I’ve definitely seen situations where instructional success was insulated. Students could read highly controlled, decodable text but fell apart with authentic text. I’ve also seen situations where a student has suddenly taken off with a skill, and we knew after a month that they were much stronger than they had maybe looked on a beginning of year assessment (we work in a high-needs district, sometimes a kid is hungry, tired, or stressed that day and they don’t do as well as they are capable of on a test).

I think formative assessment tells us if instruction is being absorbed, but progress monitoring does a better job of telling us if those skills are being generalized.

Ok, that's absurdly long, now I’m ready for you to pick this apart. Thank you for providing this information and a platform for all of us ‘unprofessional’ teachers to connect with someone who can translate the research.

Kirsten Feb 01, 2026 01:06 AM

Christine ,
Regarding visualizing during reading, get a copy of Nanci Bell's book Visualizing and Verbalizing for Language Comprehension and Thinking.

Sophie Turner Feb 01, 2026 02:09 AM

How do you recommend crafting IEP goals for students with specific learning disabilities in the area of reading, given the need and time demands of assessing these goals weekly, if not more frequently?

This would, of course, vary greatly by age, stage, and current strengths and weaknesses, but any advice would be much appreciated.

Here are some of the goals I have serviced and the challenges I have experienced.

2nd Grade Student, low average orthographic fluency and choice on WIAT, all else average or above average: "By the end of the IEP year, given controlled texts or word lists which contain single or multi-syllabic words with previously taught phonetic patterns (r-control, silent e patterns, common vowel teams, suffix -ed), STUDENT will decode the words with at least 80% accuracy on 4 of 5 trials, as measured by performance assessments."

Challenges
- I think the concepts listed (r-control, silent-e patterns, common vowel teams, and suffix -ed) are too few to cover in one year of instruction for most 2nd graders, and certainly this student.
- As it is written, a trial could be reading a word list of: car, bike, feet, and hunted OR turbo, flagpoles, coached; using controlled texts makes it even easier. It's so broad that you can't conclude anything by looking at the accuracy from one trial to another.

7th Grade Student, reads grade level texts with consistently 95% or more accuracy and at about 100 CWPM: "STUDENT will encode and decode words (single and/or multisyllabic) containing various phoneme/grapheme correspondences and morphology (prefixes, suffixes, bases, roots) within an anticipated scope and sequence, using strategies that support his multisensory learning style (word decoding with written mark-ups) with at least 90% accuracy in 3 out of 4 trials by the end of this IEP as measured by performance measurements."
- The criteria are so broad that the student could demonstrate mastery on 4 trials before ever receiving intervention.
- How do you say if a student is or is not using a strategy to read the word accurately, much less a strategy that supports "his multisensory learning style"?

4th Grade Student, reads grade level texts with consistently 95% or more accuracy and at about 80 CWPM: "Given a grade level text, STUDENT will read 101 correct words per minute with at least 97% accuracy in 2 out of 3 trials by the end of the IEP year as measured by running records."
- Requires 3 minutes out of instructional time for assessment. I have been assessing weekly, but based on the number of trials, I could walk that down some.

Thanks,
Sophie

Mat Brigham Feb 01, 2026 06:02 AM

A great blog post thank you Tim!

What happens if we have ELL learners in our classes? Should they be expected to meet the same WCPM even if it's evident from their spoken English that their regular speaking is at a slower pace than native speakers'? Surely that will mean that their fluency will also be slower. Is it fair to say they are dysfluent if the cause is their relatively slow speaking rate?

For such ELL students should we accept a lower WCPM rate?

Many thanks
Mat

Timothy Shanahan Feb 01, 2026 05:48 PM

Matt-
What we should expect and how we should deal with this are two very different things. I expect that boys will do less well than girls, kids who come to English as a second language will do less well than those who have been speaking English their whole lives, black kids will do less well than white kids, kids whose parents have limited education and limited funds will do less well than those who have more advantaged backgrounds, kids who have been retained in the past will do less well than those who succeed every year, kids who are being medicated will do less well than those with no need of pharmaceutical controls, and so on. I expect those outcomes because we have a ton of evidence that says those patterns are often true, and yet what is generally the case is not always the case, so I need to be observant and steadfast in my efforts to disrupt those unfortunate patterns (celebrating when the patterns fail to hold and working hard to give those kids every opportunity of breaking those patterns themselves). So, I expect that the ELL kids may not accomplish the same levels of fluency that native English speakers do (this discrepancy is even more likely if the test includes prosody).

But just because, on the basis of demographics, I may expect a lower normative pattern for kids from some groups does not mean that I should alter the goals that I have for them. If I had data showing that one group read well enough if they reached 50 wcpm but the other group needed to reach 75 wcpm, then I would have different goals. But those various groups above tend to have lower patterns of performance not just on decoding and fluency tests but on things like reading comprehension performance on state tests, performance on college entry exams, and so on. Trying to build fluency (or vocabulary, domain knowledge, decoding, etc.) only to the levels that would allow these individuals to meet their current group norms means that we are ensuring long-term difficulties for them.

Recognize that some kids are less advantaged than others. That they don't usually reach the same percentiles that more advantaged kids do doesn't mean they can't and doesn't mean that we shouldn't try to get them to the norms that seem to predict the outcomes that we want for everyone.

tim

Bill Keeney Feb 01, 2026 03:40 PM

Using state tests -- which are usually CBM and measure all kinds of marginally relevant skills (identify the simile, guess this word in context) -- as opposed to nationally normed reading tests is using a suspect measure of comprehension. In fact, any comprehension test that relies on multiple-choice items and then assigns an arbitrary percentage "right" (70%-89%? -- based on what?) as a benchmark of comprehension is just wrong on the face of it. Research shows they vary widely, often by as much as 50%, in their evaluation of comprehension, so validating them against other suspect tests doesn't help. Research has also shown that text recall and retelling are better measures, but no one uses them because they are hard to administer. Instead, the tail wags the dog: the test tells us whether the reader is comprehending rather than the readers informing us whether the test is valid.

PS, Hasbrouck says that 40-60% on her scale is the "norm" we can shoot for. What I would say is that after that, there needs to be lots of guided AND independent reading to hone and develop the attendant comprehension skills: vocabulary and background knowledge. Finally, after 5th grade the wcpm measure hits an asymptote (50th percentile = 140, 136, 140 in 6th, 7th, and 8th), so beyond that wcpm doesn't mean much. NAEP uses prosody as a second measure -- even more telling in those grades and up. Again, hard to administer, so usually skipped. The problem is that we rely on inaccurate or misleading tests of comprehension, and we don't have a real good view of what comprehension "is," how it develops beyond wcpm, or how to measure when it does.

Monica C. Feb 01, 2026 04:40 PM

Hi Tim,
Thank you for this post as it's extremely relevant to what my intervention team and I have been trying to navigate this year. We use Aimsweb for benchmarking and progress monitoring of ORF. At the start of this year, Aimsweb renormed their fluency measures, showing a marked drop in comparison to the previous ORF norms. Have you seen/heard about this? For example, under the previous norms (prior to SY 25/26), a second grade student in the Fall was expected to read ~45 wcpm at the 25th %tile. With the new norms, the 25th %tile in the Fall for 2nd grade is now ~23 wcpm (that is a 22-word difference!). The drop is just as drastic across all grade levels. My team and I have felt like this is an extreme overcorrection (in the wrong direction). We usually serve students in T2 interventions starting at the 25th %tile and below. With these new norms in place, though, our data indicates a very small population of students who would qualify for T2 intervention (and we're a Title 1 school!). What do you recommend in this situation? Would you stand by these new norms? Should we continue to reference the old norms and/or Hasbrouck-Tindal (2017)? Do you know if any other publishers/assessment companies are also going to renorm? Ultimately, we fear that these new norms are hiding students who could truly benefit from a robust T2 intervention. I am grateful for your time and appreciate any thoughts you may have. Thank you!

Timothy Shanahan Feb 01, 2026 05:30 PM

Monica--
I did not know about that but looked into it to respond to you. The test maker in this case did the right thing -- it renorms its test on a regular basis to ensure that it is providing current normative data. It found, as you reported, that students have not been doing as well in reading as in the recent past. These results are consistent with what we are seeing in national test scores, so it makes sense that the norms are lower.

However, does it make sense to change educational goals on this basis? I doubt it. It is possible to link benchmarks to desired outcomes. For example, a study might show that there is a 95% chance that kids who score at a certain level on one test will succeed on another. (Something like: kids who read 95 wcpm by spring of 3rd grade are likely to meet standards on a state test.) If that kind of information changes a lot -- now kids who get 75 wcpm are still reaching the goal -- it would make sense to alter the benchmark. But that isn't what is going on here.

Above I described how I was guessing at a reasonable benchmark... just picking a percentile norm in the hope that it would be sufficient for fluency ability to contribute enough to overall achievement (or, negatively, to be able to say: since these kids are average in fluency, that is UNlikely to be the problem overall). That sounds like what the Aimsweb people are doing too. Which would be fine if we were happy with how the average readers were doing. Right now, we are not happy. Scores have dropped in the recent past... dropping the fluency benchmarks is a surrender -- we accept that our kids don't read well enough, and we're going to set fluency targets just high enough to ensure that they continue to read this poorly.

If I were you, I wouldn't change my literacy benchmarks just because of those unfortunate normative changes (if we were happy with overall achievement, I would go along with the new norms, but that isn't the case). Aim for the 40th or 45th percentile on the Hasbrouck & Tindal norms and I think you'll be happy. I suspect that if you go with their much lower norms you will get a nasty surprise down the road on your state tests.

tim

Timothy Shanahan Feb 01, 2026 05:56 PM

Sophie-
What you are asking for can't be accomplished in any meaningful way. It is impossible to evaluate what you want to evaluate -- there are no measures of such fine precision that would allow such slight improvements to be monitored meaningfully. With any reading test that we have, you will find that testing and retesting with no teaching in between will result in changed scores -- test a child at 11:30 and again at 11:40 with no teaching in between, and you'll see different scores. Does that mean that learning was happening in that time, or that forgetting was? That's the reason why you give the test 3 times -- to get a slightly more reliable score, which is good, but doesn't solve the problem.

It sounds like you are over-testing. If you are making big changes in your teaching based on that information, there is a good chance that you are undermining your efforts.
Tim

Timothy Shanahan Feb 01, 2026 06:19 PM

Ami-
What you are trying to do is noble; it just isn't possible with the measures that we have. Study after study has reported serious problems with the kind of slope analysis that you are depending on (Briggs, 2011; Christ, 2006; Francis, 2008; Cummings et al., 2011; Poncy, 2005; Christ, 2013; Ardoin & Christ, 2009; Compton et al., 2008; Cho et al., 2018; etc.). A big part of the problem is our inability to make multiple texts exactly equal (I know these test makers work hard at that, but analyses of their tests show those reliability problems). Some studies have reported success with slope analysis (Christ, 2003), but that isn't what you may be imagining (those slopes aren't measured weekly but every 10 weeks).
By all means, observe what your students are doing, whether they seem to be getting what you are teaching, and whether their abilities seem to be headed in the right direction (in other words, be a teacher), but those tests aren't giving you any new information when used that often.
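To see why noisy measures make short-window slopes untrustworthy, here is a minimal simulation. The growth rate (1 wcpm per week) and measurement error (9 wcpm) are illustrative assumptions, not values taken from the studies cited above:

```python
import random
import statistics

random.seed(1)

TRUE_GROWTH = 1.0   # assumed true gain: 1 wcpm per week
ERROR_SD = 9.0      # assumed measurement error (SD in wcpm) per probe

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def simulate_slopes(weeks, trials=5000):
    """Estimated growth slopes when each probe carries random error."""
    slopes = []
    for _ in range(trials):
        scores = [50 + TRUE_GROWTH * w + random.gauss(0, ERROR_SD)
                  for w in weeks]
        slopes.append(ols_slope(weeks, scores))
    return slopes

weekly = simulate_slopes(list(range(8)))       # 8 weekly probes
spaced = simulate_slopes([0, 10, 20, 30])      # probes every 10 weeks

print("SD of weekly-probe slopes: ", round(statistics.stdev(weekly), 2))
print("SD of 10-week-probe slopes:", round(statistics.stdev(spaced), 2))
print("Share of weekly slopes that look negative:",
      round(sum(s < 0 for s in weekly) / len(weekly), 3))
```

Under these assumptions, the weekly-probe slopes are wildly variable -- a nontrivial share come out negative even though every simulated student is genuinely improving -- while spreading probes across ten-week intervals stabilizes the estimate considerably, which is consistent with the point about measurement spacing above.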

tim

Timothy Shanahan Feb 01, 2026 06:28 PM

Lauren-
There is a bit of evidence on that suggesting that it is a pretty good match. The correlations between oral reading fluency scores and silent reading speed (the measure of silent fluency) are quite high, usually in the .80s and .90s. One thing that can undermine that correlation a bit is the readers who pretend to read during silent reading (one study found that to be 16% of students), meaning that you can be more certain with oral reading fluency tests. Of course, as students advance in reading, say by 8th grade, many kids reach the ceiling in wcpm, and the correlation diminishes for that reason.

Basically, if kids are reading, silent reading tests can tell you as much as the oral tests. If they are pretending to read, then not so much.

tim

Maralyn B. Feb 01, 2026 06:40 PM

This was an excellent read, as many of my middle school students struggle with fluency, decoding, and comprehension! I am surprised that no one mentions the visualizing component. After working with the Lindamood-Bell program (Visualizing and Verbalizing), I see how most of my struggling readers do not make visual representations of the text. A part of this skill is embedded in the Wilson Reading Program, and I see how multiple reads are needed for fluency, visualizing, and ultimately comprehension. What is your opinion of this skill?

Timothy Shanahan Feb 01, 2026 06:43 PM

Maralyn-
There is some evidence that teaching kids to use visualization as a strategy can have a positive impact on reading comprehension. It is not one of the larger impacts for comprehension strategies. I'm not against some time devoted to it, but wouldn't necessarily make a big deal out of it. Not a lot of evidence of its value.

tim

Jennifer Jazyk Feb 02, 2026 02:45 PM

Bravo on this article and on stressing the importance of fluency to independent reading comprehension. With that said, in my many years of working as a literacy consultant in struggling and exemplary schools, the trap I see is that teachers of primary-age students fall into the phonics and decoding black hole. They forget to teach comprehension. This is the greatest disservice to our young learners. We should always remember that at these young ages listening comprehension is far greater (3-5 levels greater) than reading comprehension. We should be exposing these young kids to great literature and informational texts and, in tandem with all the decoding and phonics, be explicitly teaching reading comprehension strategies. If we don't build schema and vocabulary alongside fluency rates, we set kids up to be far less successful. The mystery of why one school fails and another excels when given the same exact resources is usually never hard for me to solve when helping a struggling school. Fluency/decoding is a relatively quick and easy remediation task (5-7 well-structured and focused minutes a day can make a HUGE impact). Comprehension and vocabulary growth, however, take more time and effort to close achievement gaps, and we can't wait until a child is fluent to focus on them. We have to maximize their listening potential with rich books they love and get excited about. Hearing great text read fluently is motivating too. Rant over.

Daniel Ervin Feb 03, 2026 05:01 PM

I heard Jan Hasbrouck share a thought on this that I'd be very interested to get your take on. This is from episode 153 of the "Melissa and Lori Love Literacy" podcast, where Hasbrouck discusses an analysis of a 2018 IES-funded study of oral reading fluency, I think from NAEP. In the conversation, she says that she'd previously suggested that the 50th percentile is a good fluency benchmark to shoot for, but that this study caused her to rethink that: advanced readers tend to be in the 75th percentile and up. If I understood correctly, she was describing a correlational relationship that's suggestive, but not necessarily causal. But she thought the evidence was strong enough to suggest that placing some attention on reading fluency even above the 50th percentile is helpful for many students (though not beyond 150 wcpm with high-school-level text). Thanks!!

Timothy Shanahan Feb 03, 2026 05:49 PM

Daniel--
Jan's reasoning is similar to mine but leads to a different conclusion. Mine is closer to the results of the studies that have linked fluency to performance on other tests (like state reading tests). She could be right, I could be right -- it would take direct study to make that determination.

tim

Gaynor Chaoman Feb 05, 2026 03:31 AM

I recently heard an English language academic comment on an observation from England in 1940 of lower socio-economic 12-15 year olds reading Dickens, Stevenson, and Chesterton -- authors whom university undergraduates now apparently fail to read or comprehend.
Maybe this isn't completely on topic, but recently I have heard teachers complain that students do little outside-class reading anymore. In the past, when I was younger, children certainly did considerably more reading for enjoyment. You just can't cover in class the reading mileage which I believe is a big factor in acquiring fluency.
How do you overcome this obstacle? Rewards for reading a certain number of chapter books? Stern notes sent home to parents to restrict devices, with warnings of the disastrous effects devices have on a child's concentration and deep thinking -- both issues for fluency?
Katharine Birbalsingh, a traditionalist with a low-income, immigrant roll at her London school, comes down hard on devices in class and in homes. Her students achieve astoundingly high academic standards.
Couldn't all these advances in structured literacy and more effective teaching very well be cancelled out if we don't monitor this issue of devices?

Timothy Shanahan Feb 05, 2026 02:00 PM

Gaynor-
Some inner city teachers have been successful in talking to their students about this problem, helping students to obtain books of interest, and providing them with opportunities to use/share in class what they gain from their outside reading. The idea is to make reading part of these students' lives.

tim

Lisa Feb 08, 2026 02:58 PM

Hi Tim,

Thank you for always providing an informative and balanced perspective. I am wondering if you have seen any of the studies around students with identified reading disabilities and their reading speed. Is there research that suggests that for some students, comprehension can actually be negatively impacted by trying to meet grade-level wcpm norms? I have seen growth over time with many of my students when looking at Read Naturally data (cold reads), but these are based on passages considered one to two years below grade level. They have shown progress via my own progress monitoring as well (QPS, PA assessments, decoding/encoding check-ins, etc.). They also tend to show more growth on DIBELS progress monitoring with me than when they are benchmarked with another adult. We are required to progress monitor our students "in the red" every two weeks and students "in the yellow" every four weeks. I worry that DIBELS is given too much weight when it comes to acknowledging the personal growth my students make, and I also worry that focusing too heavily on reading speed may work against our ultimate goal, which is to have my students grow in confidence and ultimately comprehend what they read.

Thank you,
Lisa

Timothy Shanahan Feb 08, 2026 03:25 PM

Lisa--

No, I have not seen such research. However, I have no doubt that if students are learning to read fast, rather than to read fluently, their comprehension is likely to suffer. There are likely two major reasons for that misfocus. First is how I wrote the fluency report for the National Reading Panel -- referring to accuracy, speed, and prosody. It should have been accuracy, automaticity, and prosody. Automaticity improves speed, but speed can also be accomplished by hurrying and ignoring meaning. The idea of automaticity is that students get so good at recognizing/decoding the words that their speed improves. Second is the way we tend to measure fluency -- words correct per minute (wcpm). WCPM can be a useful proxy for those first two components of fluency, but it is just that, a proxy (not a direct measure). Teachers do a lot of silly things trying to improve speed -- the only ones that make sense are improving decoding, increasing sight vocabulary, and some kind of guided fluency work. If you see a student who is improving in WCPM but whose reading doesn't sound like language, then you are not really improving fluency in a way that will allow it to contribute to comprehension.

tim

Timothy Shanahan is one of the world's premier literacy educators. He studies the teaching of reading and writing across all ages and abilities. He was inducted to the Reading Hall of Fame in 2007, and is a former first-grade teacher.