How Much Reading Gain Should be Expected from Reading Interventions?

  • early interventions, remedial reading
  • 12 February, 2017

This week’s challenging question:

I had a question from some schools people that I’m not sure how to answer. I wonder if anyone has data on what progress can be expected of students in the primary grades getting extra help in reading.

Let’s assume that the students are getting good/appropriate instruction, and the data were showing that 44% of students (originally assessed as “far below”) across grades 1-3 were on pace to be on grade level after 2 years of this extra help.

Is this expected progress for such students or less than what has been shown for effective early reading interventions?

Shanahan’s answer:

           This is a very complicated question. No wonder the field has largely ducked it. Research is very clear that the amount of instruction matters for achievement (e.g., Sonnenschein, Stapleton, & Benson, 2010), and there are scads of studies showing that various ways of increasing the amount of teaching can have a positive impact on learning (e.g., preschool, full-day kindergarten, afterschool programs, summer school programs).

           Although many think that within-the-school-day interventions are effective because the intervention teachers are better or the methodology is different, there is good reason to think that the effects are mediated by the amount of additional teaching that the interventions represent. (Title I programs have been effective when delivered after school and during the summer, but much less so when delivered within the school day (Weiss, Little, Bouffard, Deschenes, & Malone, 2009); there are also concerns about RtI programs providing interventions during reading instruction instead of in addition to it (Balu, Zhu, Doolittle, Schiller, Jenkins, & Gersten, 2015)).

           Research overwhelmingly has found that a wide range of reading interventions work—that is, the kids taught by them outperform similar control-group kids on some measure or other—but such research has been silent about the size of the gains that teachers can expect from them (e.g., Johnston & Allington, 1991). There are many reasons for such neglect:

(1)  Even though various interventions “work,” there is a great deal of variation in effectiveness from study to study.

(2)  There is a great deal of variation within studies too—just because an intervention works overall doesn’t mean it works with everybody who gets it, just that it did better on average.

(3)  There is a great deal of variation in the measures used to evaluate learning in these studies—for example, if an early intervention does a good job of improving decoding ability or fluency, should that be given as much credibility as a study that evaluated success with a full-scale standardized test that included comprehension, like the accountability tests schools are judged by?

(4)  Studies have been very careful to document learning by some measure or other, but they have not been quite as rigorous when it comes to estimating the dosages provided. In my own syntheses of research, I have often had to provide rough guesstimates as to the amounts of extra teaching that were actually provided to students (that is, how much intervention was delivered).

(5)  Even when researchers have done a good job of documenting the numbers and lengths of lessons delivered, it has been the rare intervention that was evaluated across an entire school year—and I can’t think, offhand, of any such studies longer than that. That matters because it raises the possibility of diminishing returns. What I mean is that a program with a particular average effect size over a 3-month period may have a smaller effect size when carried out for 6 or 12 months. (Such a program may continue to increase the learning advantage over those longer periods, but the average size of the advantage might be smaller.)

           Put simply? This is a hell of a thing to try to estimate—as useful as it would be for schools.

           One interesting approach to this problem is the one put forth by Fielding, Kerr, and Rosier (2007). They estimated that the primary grade students in their schools were making an average gain of one year for 60-80 minutes per day of reading instruction. Given this, they figured that students who were behind and were given additional reading instruction through pullout interventions, etc., would require about that many extra minutes of teaching to catch up. So, they monitored kids’ learning and provided interventions, and over a couple of years of that effort managed to pull their schools up from about 70% of third graders meeting or exceeding standards to about 95%—and then they maintained that for several years.

           Fielding and company’s general claim is that the effects of an intervention should be in proportion to the effects of regular teaching… thus, if most kids get 90 minutes per day of teaching and, on average, they gain a year’s worth on a standardized measure, then giving some of the kids an extra 30 minutes of teaching per day should move those kids an additional 3-4 months. That would mean that they would pick up an extra grade level for every 2-3 years of intervention. I’m skeptical about the accuracy of that, but it is an interesting theory.
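Fielding and company's proportionality claim can be sketched in a few lines of code. This is a back-of-the-envelope model, not their actual calculation; the function name and the specific numbers are mine, drawn from the figures above.

```python
# A minimal sketch of the proportionality assumption attributed to
# Fielding, Kerr, & Rosier: extra gain is proportional to extra minutes
# of instruction. All numbers are illustrative.

def expected_extra_gain(base_minutes: float, base_gain_years: float,
                        extra_minutes: float) -> float:
    """Extra annual gain (in grade-equivalent years) implied by
    assuming gain scales linearly with instructional minutes."""
    return base_gain_years * (extra_minutes / base_minutes)

# 90 minutes/day of core instruction yields 1.0 year of growth;
# an extra 30 minutes/day should then yield about a third of a year.
gain = expected_extra_gain(base_minutes=90, base_gain_years=1.0,
                           extra_minutes=30)
print(round(gain, 2))  # 0.33 of a year, i.e., roughly 3-4 months
```

Note that the linearity is exactly the assumption being questioned in the paragraph above: diminishing returns would make the true curve flatter than this straight line.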

           Meta-analyses have usually reported the average effect sizes for various reading interventions to be about .40 (e.g., Hattie, 2009). For example, one-to-one tutoring has a .41 effect (Elbaum, Vaughn, Tejero Hughes, & Watson Moody, 2000).

           However, those effect estimates can vary a great deal, depending on when the studies were done (older studies tend to have less rigorous procedures and higher effects, etc.), on the kinds of measures used (comprehension outcomes tend to be lower than those obtained for foundational skills, and standardized tests tend to result in lower effects than experimenter-made ones), and so on.

           For example, in a review of such studies with students in grades 4-12, the average effect size with standardized tests was only .21 (Scammacca, Roberts, Vaughn, & Stuebing, 2015); and in another sample of studies, the impact on standardized comprehension tests was .36 (Wanzek, Vaughn, Scammacca, Gatlin, Walker, & Capin, 2016).

           You can see how rough these estimates are, but let’s just shoot in the middle someplace… .25-.30 (a statistic I obviously just made up, but you can see the basis on which I made it up—relying most heavily on the best studies, the best and most appropriate measures).

           What does that mean? As long as we are talking about primary grade kids and typical standardized reading tests, the usual size of a standard deviation is about 1 year. In other words, if you took a 3rd grade Gates-MacGinitie and tested an average group of second and third graders with it, you’d find about 1 standard deviation difference in scores between the grade level groups. (Those connections between amount of time and standard deviation change as you move up the grades, so you can’t easily generalize up the grades what I am claiming here).

           Thus, if you have a second-grader who is one full year behind at the beginning of the year (that is, the class gets a 2.0 grade-equivalent score in reading, but this child gets a 1.0), and the student is in a good classroom program and an effective intervention, we should see the class accomplishing a 3.0 (that would be a year’s gain for a year’s instruction), and the laggard student should score at a 2.25-2.30.

           All things being equal, if we kept up this routine for 3-4 years, this child would be expected to close the gap. That sounds great, but think of all the assumptions behind it: (1) the student will make the same gain from classroom teaching that everyone else does; (2) the intervention will be effective; (3) the intervention will be equally effective each year—no one will back off on their diligence just because the gap is being closed, and what was helpful to a second-grader will be equally helpful to a third-grader; (4) the intervention will continue to be offered year to year; and (5) the tests will be equally representative of the learning elicited each year.
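The gap-closing arithmetic above can be sketched the same way. This is a rough model under the stated assumptions only (1 standard deviation ≈ 1 grade equivalent in the primary grades, and an intervention adding .25-.30 SD of gain per year on top of a normal year's classroom growth); the function is mine, for illustration.

```python
# A sketch of the catch-up arithmetic: a student who makes a normal
# year's growth plus the intervention's extra gain closes the gap at
# the rate of that extra gain per year.

def years_to_close(gap_years: float, extra_gain_per_year: float) -> float:
    """Years of intervention needed to close a gap of gap_years,
    assuming the student otherwise keeps pace with the class."""
    return gap_years / extra_gain_per_year

# Second-grader one full year behind (1.0 vs. the class's 2.0):
for extra in (0.25, 0.30):
    print(round(years_to_close(1.0, extra), 1))
# prints 4.0 then 3.3 -- the 3-4 year range discussed above
```

The listed assumptions are exactly what this little model takes for granted: a constant extra gain every year, with no fade and no diminishing returns.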

           That tells you how much gain the group should make. Your question doesn’t say how far behind the kids were when they started, nor does it say how much gain was made by the 56% who didn’t reach grade level… so moving 44% of them to grade level in 2 years may or may not be very good. I could set up the problem—plugging in some made-up numbers that would make the above estimates come out perfectly, which would suggest that their intervention is having average effectiveness… or I could plug in numbers that might lead you to think that this isn’t an especially effective intervention.

           I have to admit, from all of this, I don’t know whether their intervention is a good one or not. However, this exercise suggests to me that I’d be seeking an intervention that provides at least, on average, a quarter to a third of a standard deviation in extra annual gain for students. And, that has some value.


Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC: U.S. Department of Education.

Elbaum, B., Vaughn, S., Tejero Hughes, M., & Watson Moody, S. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.

Fielding, L., Kerr, N., & Rosier, P. (2007). Annual growth for all students… Catch up growth for those who are behind. Kennewick, WA: New Foundation Press.

Hattie, J. (2009). Visible learning. New York: Routledge.

Johnston, P., & Allington, R. (1991). Remediation. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P. D. Pearson (Eds.), Handbook of reading research (Vol. 2, pp. 1013-1046). New York: Longman.

Scammacca, N. K., Roberts, G., Vaughn, S., & Stuebing, K. K. (2015). A meta-analysis of interventions for struggling readers in grades 4-12: 1980-2011. Journal of Learning Disabilities, 48, 369-390.

Sonnenschein, S., Stapleton, L. M., & Benson, A. (2010). The relation between the type and amount of instruction and growth in children’s reading competencies. American Educational Research Journal, 47, 358-389.

Weiss, H.B., Little, P.M.D., Bouffard, S.M., Deschenes, S.N., & Malone, H.J. (2009). The federal role in out-of-school learning: After-school, summer school learning, and family instruction as critical learning supports. Washington, DC: Center on Education Policy.

Wanzek, J., Vaughn, S., Scammacca, N., Gatlin, B., Walker, M.A., & Capin, P. (2016). Meta-analyses of the effects of tier 2 type reading interventions in grades K-3. Educational Psychology Review, 28, 551-5



See what others have to say about this topic.

Sonja Apr 05, 2017 05:55 PM

Thanks for this post. I am trying to see how much of a gain my son who's in 7th grade can get from after school tutorials, but we can only work about 2 hours a week. This would suggest that the benefit wouldn't be that much. However, I wonder if the type of intervention isn't going to make a difference? If the instruction is closing a gap, then there might be greater than a 1 to 1 gain. 2/12/17

Dennis Ashendorf Apr 05, 2017 05:56 PM

Fade over time is another issue. E.D. Hirsch covers this extensively in his work. 2/12/17

Timothy Shanahan Apr 05, 2017 05:57 PM


First, from your query, I can't tell how much relative impact this intervention can have--but the focus shouldn't be on the relative growth (compared to what can come from a classroom reading program), but on the absolute growth that can be accomplished. Two hours per week is considerable and I would strongly encourage you to keep it up. As I pointed out in the entry, interventions may have more or less effect on an individual. Effects on individual students are not predictable from group success rates. Tutoring has been found to have an average effect across a lot of studies, but any individual in any of those studies could make remarkable gains (or none at all). Keep your son motivated--make sure he can see the learning (and keep his tutor motivated too--if he or she can see your son's success, it will be easier to keep the lessons vigorous).

Good luck to him and to you.

tim 2/12/17

EdEd Apr 05, 2017 05:58 PM

This is a great post Dr. Shanahan, and one that I think will find increasing importance in the years to come as accountability continues to be at the forefront of education. It's been interesting, for example, how we've largely underestimated the divisor when calculating rate of improvement, or the "dosage" as you mentioned. We often report effect sizes and WCM results, but we don't often mention how many minutes, units of instruction, etc. it took for us to get there. We talk about which intervention or program is more effective, but what if the more effective one took twice as long? What would happen if we doubled the dosage of the less effective one?

As you mentioned, I think we shouldn't limit ourselves to a false rule that all rates of learning will be the same - that a minute spent in any instructional environment will be the same as a minute spent in any other. We know that some intervention is not only more effective, but more efficient. Although the brute power of "more time" as an intervention can be powerful, as we all know we don't always have that luxury when kids are years behind their peers. Continuing to search for interventions that are lean and efficient is a worthwhile quest. The good news is that we're actually, maybe, starting to pay more attention to this all-too-important variable. 2/13/17

Ann Apr 05, 2017 05:59 PM

Thank you for your excellent article detailing intervention info. With a $100,000 grant from the MN legislature, the nonprofit Rock 'n' Read Project is currently running a state pilot reading intervention in four schools that is showing dramatic results with a software program called TUNEin to READING that uses singing to boost reading. Well-conducted research studies (U of South Florida--Dr. Susan Homan and Dr. Marie Biggs) have already found that students gain 1 year (avg) in 13.5 hours of singing/reading, and students who sang/read with the software made greater year-to-year gains over five consecutive years on the FL state reading assessments. The Rock 'n' Read Project interim report to the MN Dept of Ed and MN Legislature concludes that in just 7.5-24 hours of program usage, the pilot students are making (avg) nearly .5 year reading gain--nearly twice as much gain as struggling readers in those schools who are not using the software. See 2/14/17

SpellReadinMaryland Apr 05, 2017 06:01 PM

Thanks for the post, Dr. Shanahan. I have seen interventions both in private settings and school environments. I would say that so long as the intervention does not supersede instructional time from the core instruction the intervention can yield great benefits to students. The problem for schools though is where to find the time during the busy school day. On the private side, I have found that twice a week for 60 minutes is helpful for elementary students. At the secondary level, 90 minutes has worked out well if the student has substantial gaps in all areas of reading. 2/15/17

Timothy Shanahan Apr 05, 2017 06:05 PM

Ann, I wonder if this was delivered the same way that it would have been--in terms of the amount of training and researcher support--as it would if my school district bought it? 2/12/17

Ann Apr 05, 2017 06:06 PM

Tim, you asked, "I wonder if this was delivered the same way that it would have been--in terms of the amount of training and researcher support--as it would if my school district bought it?"

The Rock 'n' Read Project provides the same training as the company does, with this additional support:

1) Typing student names into the software program

2) Providing an on-site person the first few days the students use the program

3) Administering a pre-, mid-, and post-reading assessment to all 2nd-5th grade students in the school using the FastBridge online aReading assessment developed by Theodore Christ at U of MN (if the school doesn't already use it).

4) Providing an analysis of reading gains on the FastBridge assessment (with comparisons between those using the program and those who are not).

What are your thoughts?
