
Sunday, February 12, 2017

How Much Reading Gain Should be Expected from Reading Interventions?

This week’s challenging question:
I had a question from some schools people that I’m not sure how to answer. I wonder if anyone has data on what progress can be expected of students in the primary grades getting extra help in reading. 

Let’s assume that the students are getting good/appropriate instruction, and the data were showing that 44% of students (originally assessed as “far below”) across grades 1-3 were on pace to be on grade level after 2 years of this extra help.

Is this expected progress for such students or less than what has been shown for effective early reading interventions?

Shanahan’s answer:
            This is a very complicated question. No wonder the field has largely ducked it. Research is very clear that amount of instruction matters in achievement (e.g., Sonnenschein, Stapleton, & Benson, 2010), and there are scads of studies showing that various ways of increasing the amount of teaching can have a positive impact on learning (e.g., preschool, full-day kindergarten, afterschool programs, summer school programs).

            Although many think that within-the-school-day interventions are effective because the intervention teachers are better or the methodology is different, there is good reason to think that the effects are mediated by the amount of additional teaching the interventions represent. (Title I programs have been effective when delivered after school and in summer, but not so much when delivered during the school day (Weiss, Little, Bouffard, Deschenes, & Malone, 2009); and there are concerns about RtI programs providing interventions during reading instruction instead of in addition to it (Balu, Zhu, Doolittle, Schiller, Jenkins, & Gersten, 2015).)

            Research overwhelmingly has found that a wide range of reading interventions work—that is, the kids taught by them outperform similar control-group kids on some measure or other—but such research has been silent about the size of the gains teachers can expect (e.g., Johnson & Allington, 1991). There are many reasons for such neglect:

(1)  Even though various interventions “work,” there is a great deal of variation in effectiveness from study to study.

(2)  There is a great deal of variation within studies, too—just because an intervention works overall doesn’t mean it works for everybody who gets it, only that it did better on average.

(3)  There is a great deal of variation in the measures used to evaluate learning in these studies—for example, if an early intervention does a good job improving decoding ability or fluency, should that be given as much credibility as one that evaluated success with a full-scale standardized test that included comprehension, like the accountability tests schools are evaluated on?

(4)  Studies have been very careful to document learning by some measure or other, but they have not been as rigorous when it comes to estimating the dosages provided. In my own syntheses of research, I have often had to make rough guesstimates of the amounts of extra teaching actually provided to students (that is, how much intervention was delivered).

(5)  Even when researchers have done a good job of documenting the numbers and lengths of lessons delivered, it has been the rare intervention that was evaluated across an entire school year—and I can’t think of any examples, offhand, of such studies running longer than that. That matters because it raises the possibility of diminishing returns. What I mean is that a program with a particular average effect size over a 3-month period may have a lower effect size when carried out for six or 12 months. (Such a program may continue to increase the learning advantage over those longer periods, but the average size of the advantage might be smaller.)

            Put simply? This is a hell of a thing to try to estimate—as useful as it would be for schools. 

            One interesting approach to this problem is the one put forth by Fielding, Kerr, and Rosier (2007). They estimated that the primary-grade students in their schools were making an average gain of one year for 60-80 minutes per day of reading instruction. Given this, they figured that students who were behind and were given additional reading instruction through pullout interventions, etc., would require about that many extra minutes of teaching to catch up. So they monitored kids’ learning and provided interventions, and over a couple of years of that effort managed to pull their schools up from about 70% of third graders meeting or exceeding standards to about 95%—and then they maintained that for several years.

            Fielding and company’s general claim is that the effects of an intervention should be in proportion to the effects of regular teaching. Thus, if most kids get 90 minutes of teaching per day and, on average, gain a year’s worth on a standardized measure, then giving some kids an extra 30 minutes of teaching per day should move those kids an additional 3-4 months. That would mean they would pick up an extra grade level for every three years or so of intervention. I’m skeptical about the accuracy of that, but it is an interesting theory.
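Fielding and company’s proportionality claim is easy to lay out as back-of-the-envelope arithmetic. The numbers below come from the paragraph above; the little function itself is just my illustration of the theory, not anything from their book:

```python
# A sketch of Fielding et al.'s proportionality theory: if 90 minutes/day
# of reading instruction produces a year of growth, then extra minutes
# should produce proportional extra growth.

def extra_annual_gain(base_minutes, extra_minutes, base_gain_years=1.0):
    """Extra gain per year (in grade-level years) implied by extra daily
    minutes, assuming gain is proportional to instructional time."""
    return base_gain_years * (extra_minutes / base_minutes)

gain = extra_annual_gain(base_minutes=90, extra_minutes=30)
print(f"Extra gain per year: {gain:.2f} years")        # 0.33 years (~4 months)
print(f"Years to close a 1-year gap: {1 / gain:.0f}")  # 3
```

On this logic, an extra 30 minutes against a 90-minute base is a third of a year’s extra growth annually, which is what makes a one-year gap a roughly three-year project.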

            Meta-analyses have usually reported the average effect sizes for various reading interventions to be about .40 (e.g., Hattie, 2009). For example, one-to-one tutoring has a .41 effect (Elbaum, Vaughn, Tejero Hughes, & Watson Moody, 2000).

            However, those effect estimates can vary a great deal, depending on when the studies were done (older studies tend to have less rigorous procedures and higher effects, etc.), on the kind of measures used (comprehension outcomes tend to be lower than those obtained for foundational skills, and standardized tests tend to result in lower effects than experimenter-made ones), etc.

            For example, in a review of such studies with students in grades 4-12, the average effect size with standardized tests was only .21 (Scammacca, Roberts, Vaughn, & Stuebing, 2015); and in another sample of studies, the impact on standardized comprehension tests was .36 (Wanzek, Vaughn, Scammacca, Gatlin, Walker, & Capin, 2016).

            You can see how rough these estimates are, but let’s just shoot in the middle someplace… .25-.30 (a statistic I obviously just made up, but you can see the basis on which I made it up—relying most heavily on the best studies and the most appropriate measures).

            What does that mean? As long as we are talking about primary grade kids and typical standardized reading tests, the usual size of a standard deviation is about 1 year. In other words, if you took a 3rd grade Gates-MacGinitie and tested an average group of second and third graders with it, you’d find about 1 standard deviation difference in scores between the grade level groups. (Those connections between amount of time and standard deviation change as you move up the grades, so you can’t easily generalize up the grades what I am claiming here).

            Thus, if you have a second-grader who is one full year behind at the beginning of the year (that is, the class gets a 2.0 grade-equivalent score in reading, but this child gets a 1.0), and the student is in a good classroom program and an effective intervention, we should see the class reaching a 3.0 (a year’s gain for a year’s instruction), and the lagging student scoring at about 2.25-2.30.

            All things equal, if we kept up this routine for 3-4 years, this child would be expected to close the gap. That sounds great, but think of all the assumptions behind it: (1) the student will make the same gain from classroom teaching that everyone else does; (2) the intervention will be effective; (3) the intervention will be equally effective each year—no one will back off on their diligence just because the gap is being closed, and what was helpful to a second-grader will be equally helpful to a third-grader; (4) the intervention will continue to be offered year to year; and (5) the tests will be equally representative of the learning elicited each year.
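The arithmetic in the last two paragraphs can be laid out as a small year-by-year projection. This is only a sketch of the reasoning above, under its assumptions: a year’s classroom gain for everyone, a steady intervention effect of .25 SD per year (the low end of my made-up .25-.30 figure), and the primary-grade equivalence of 1 SD to roughly 1 grade-level year:

```python
# Projects class vs. student grade-equivalent (GE) scores year by year,
# under the assumptions spelled out above: the class gains 1.0 GE per year;
# the intervention student gains that plus the intervention effect
# (0.25 SD, treated here as 0.25 GE in the primary grades).

def project_scores(class_ge, student_ge, years, intervention_effect=0.25):
    """Return (class GE, student GE, remaining gap) for each year."""
    rows = []
    for _ in range(years):
        class_ge += 1.0                          # a year's gain for the class
        student_ge += 1.0 + intervention_effect  # classroom gain + intervention
        rows.append((class_ge, student_ge, class_ge - student_ge))
    return rows

# A second-grader starting a full year behind (class at 2.0, student at 1.0):
for year, (c, s, gap) in enumerate(project_scores(2.0, 1.0, 4), start=1):
    print(f"Year {year}: class {c:.2f}, student {s:.2f}, gap {gap:.2f}")
# The gap shrinks by 0.25 each year and closes after year 4 -- consistent
# with the 3-4 year estimate (the .30 end of the range closes it faster).
```

Every one of the five assumptions listed above is baked into that loop; relax any of them and the projection stretches out.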

            That tells you how much gain the group should make. Your question doesn’t tell how far behind the kids were when they started, nor does it tell how much gain was made by the 56% who didn’t reach grade level… so moving 44% of them to grade level in 2 years may or may not be very good. I could set up the problem by plugging in some made-up numbers that would make the above estimates come out perfectly, which would suggest that their intervention is of average effectiveness… or I could plug in numbers that might lead you to think this isn’t an especially effective intervention.

            I have to admit, from all of this, I don’t know whether their intervention is a good one or not. However, this exercise suggests to me that I’d be seeking an intervention that provides at least, on average, a quarter to a third of a standard deviation in extra annual gain for students. And, that has some value.

References
Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC: U.S. Department of Education.
Elbaum, B., Vaughn, S., Tejero Hughes, M., & Watson Moody, S. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.
Fielding, L., Kerr, N., & Rosier, P. (2007). Annual growth for all students… Catch up growth for those who are behind. Kennewick, WA: New Foundation Press.
Hattie, J. (2009). Visible learning. New York: Routledge.
Johnson, P., & Allington, R. (1991). Remediation. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P.D. Pearson (Eds.), Handbook of reading research (vol. 3, pp. 1013-1046). New York: Longman.
Scammacca, N.K., Roberts, G., Vaughn, S., & Stuebing, K.K. (2015). A meta-analysis of interventions for struggling readers in grades 4-12: 1980-2011. Journal of Learning Disabilities, 48, 369-390.
Sonnenschein, S., Stapleton, L. M., & Benson, A. (2010). The relation between the type and amount of instruction and growth in children’s reading competencies. American Educational Research Journal, 47, 358-389.
Wanzek, J., Vaughn, S., Scammacca, N., Gatlin, B., Walker, M.A., & Capin, P. (2016). Meta-analyses of the effects of tier 2 type reading interventions in grades K-3. Educational Psychology Review, 28, 551-5.
Weiss, H.B., Little, P.M.D., Bouffard, S.M., Deschenes, S.N., & Malone, H.J. (2009). The federal role in out-of-school learning: After-school, summer school learning, and family instruction as critical learning supports. Washington, DC: Center on Education Policy.

Thursday, July 28, 2016

How to Screw Up Student Learning Under RtI

I am a classroom teacher (grade 3) and a follower of your blog.  I also have an M.A. in Reading. Last year our new principal told us that our RtI students do not need to be in the classroom during grade level instruction. I strongly disagree. I think that these students benefit from scaffolded grade level instruction and benefit from the kind of thinking and reading the class is being asked to do during this time.  

Am I wrong to insist my students be in the room during regular reading instruction? If so, please set me straight.  


Dear Perplexed:

The point of RtI is not to REPLACE classroom reading instruction, but to supplement it.

RtI is used to help determine if a student might be suffering from reading/learning disabilities. The reason that student would be referred for intervention support would be because of some concern about the student’s daily progress.

Consequently, we ADD a targeted intervention to the teaching the student is receiving in order to determine whether it promotes greater progress.

If you use the intervention to replace regular instruction, then that student does not receive a more intensive and extensive learning experience than what was already provided. All you would be doing is trading one treatment for another. That’s not the idea of RtI, and not an approach that has been successful in raising reading achievement.

Using the brief intervention to interrupt or replace the longer classroom instruction means that you won’t find out whether the student would respond to the extra teaching, because no extra teaching is offered.

It is a big mistake to pull kids out of their classroom instruction for an intervention unless it has already been determined that the child requires a special education placement (in other words, that the student hasn’t responded to the regular teaching plus the intervention). However, even special education programs—depending on how serious the learning problem is—may be used as additional teaching rather than replacement teaching.

I definitely side with you in this. I think your principal is making a big mistake—both undermining kids’ learning progress and making it impossible to determine whether the student has a learning problem.


Sunday, March 29, 2015

Middle School Interventions

We are a K-12 district and are revamping our grade 6 through grade 8 instructional supports, which include an additional 40-minute session of reading and/or math instruction anywhere from 3 to 5 days a week. This extra instruction is provided to any student below the 50th percentile on the MAP assessments—roughly 2/3 of our student population in our 5 middle schools.

Where we are struggling is in determining whether this additional instructional time (taught during later periods in the day, by different teachers than the core instruction) should be used to address gaps in foundational skills or to support the grade-level curriculum.

In the 4 years we have been using this system of support, we have changed our position from filling in holes to supporting core instruction, and our results have been inconclusive as to which method leads to the greatest growth. We are torn between raising the rigor of instruction to offer students more “time” grappling with the harder material and using a Leveled Literacy program that has delivered good results for us in the primary grades. Help.


What you are trying to do is terrific for the kids. You see some students who aren’t keeping up and you want to beef up the amount of reading support that they get. That makes great sense to me and seems to be very much in line with the research. Additional teaching is a great idea.

However, the 1st-49th %ile span is simply too broad and too varied a swath of kids to take a single approach with. If I were calling the shots, I’d treat those below the 30th or 35th %ile differently than those who are only a little bit behind.

I suspect that as you move down the continuum of kids you’ll start to find those with substantial gaps in their foundational skills (basically decoding and fluency). That is much less likely to be true for those who are almost at the 50th %ile. In discussions of learning disability, various experts (e.g., Joe Torgesen, Jack Fletcher, Reid Lyon) treat the 35th %ile as a dividing point between kids who are garden-variety stragglers and those who might have a real learning disability. This will likely vary a bit by grade level and test, so rather than giving you a hard-and-fast rule, I’m suggesting that the cut-point be somewhere around the 30th-35th %ile.

Above that cutoff, I would definitely just give these kids extra time with the demanding grade-level materials. Below that line, I would want to provide at least some explicit instruction in foundational skills. (I don’t know what assessment information you have on these kids, but if those data reveal particular foundational gaps for students reading below the 35th %ile, I’d be even more certain that offering such teaching is a good idea.)

What should the instruction look like for these groups?

For those who are in that 35th-49th %ile span—that is, kids ranging from grade level to about 2-3 grade levels below it—I would have them doing more work with the grade-level texts they are reading in class. This work should give kids opportunities to read the material again, but with greater or different scaffolding and support. Students might read this material before it is read in class (to give them a boost) or after, to ensure that they make as much progress with it as possible. I would consider activities like repeated reading (that is, oral fluency practice with repetition), rereading and writing about the ideas in the texts, and going through the texts more thoroughly to interpret the most complex sentences or to follow the cohesive links among the ideas.

For the students below the 30th-35th %ile—most of whom will probably be low in decoding—I’d provide a systematic program of instruction that offers at least some explicit phonics instruction. I very much like the idea of using a program that has been found to be effective by the What Works Clearinghouse (that won’t guarantee it will work for you, but that it has worked elsewhere tells you it is possible to make it work effectively).

As important as phonics instruction can be to someone who lacks basic decoding skills, I’d recommend against overdoing it. The National Reading Panel found that phonics instruction for poor readers beyond grade 2 tended to improve their decoding skills (which is good), but without commensurate impacts on spelling and reading comprehension (which is not so good). I think it is important to make such decoding instruction part of a larger effort that addresses reading comprehension, vocabulary, writing, and oral reading fluency.

How best to balance this effort will depend a lot on what else the kids are getting. For example, if the really low decoders are already being instructed in these skills in special education, then I wouldn’t double up here. That would just free up time for other kinds of reading help.

Another possibility may be to offer these students some of the same grade-level instruction noted above, but in smaller groups to enable the teachers to offer greater support to these kids who are further behind. Beyond beginning reading levels, there is no evidence that students need to work with low-level texts—at least when there is sufficient scaffolding to guide them through such reading. Perhaps these students could work on decoding and fluency using a set program part of the time, and work with regular classroom materials with greater amounts of scaffolding than would be available to the other, better-performing students.

(One last thought. It is terrific that the intervention program you have identified is working well with your primary kids. That’s great, but it does not mean that I would necessarily adopt it for use in my middle school. I’d go with a program aimed specifically at these older students, or I’d try out the materials with them to see their reaction. Often, terrific decoding programs are too babyish to gain much buy-in from the older kids. It would be even better if the WWC indicated that the program had worked effectively with middle-schoolers.)