
Sunday, February 12, 2017

How Much Reading Gain Should be Expected from Reading Interventions?

This week’s challenging question:
I had a question from some school people that I’m not sure how to answer. I wonder if anyone has data on what progress can be expected of students in the primary grades getting extra help in reading.

Let’s assume that the students are getting good/appropriate instruction, and the data were showing that 44% of students (originally assessed as “far below”) across grades 1-3 were on pace to be on grade level after 2 years of this extra help.

Is this expected progress for such students or less than what has been shown for effective early reading interventions?

Shanahan’s answer:
            This is a very complicated question. No wonder the field has largely ducked it. Research is very clear that amount of instruction matters in achievement (e.g., Sonnenschein, Stapleton, & Benson, 2010), and there are scads of studies showing that various ways of increasing the amount of teaching can have a positive impact on learning (e.g., preschool, full-day kindergarten, afterschool programs, summer school programs).

            Although many think that within-the-school-day interventions are effective because the intervention teachers are better or the methodology is different, there is good reason to think that the effects are mediated by the amount of additional teaching that the interventions represent. (Title I programs have been effective when delivered after school and in the summer, but not so much when delivered during the regular school day (Weiss, Little, Bouffard, Deschenes, & Malone, 2009); there are concerns about RtI programs providing interventions during reading instruction instead of in addition to it (Balu, Zhu, Doolittle, Schiller, Jenkins, & Gersten, 2015).)

            Research overwhelmingly has found that a wide range of reading interventions work—that is, the kids taught by them outperform similar control-group kids on some measure or other—but such research has been silent about the size of the gains that teachers can expect from them (e.g., Johnson & Allington, 1991). There are many reasons for such neglect:

(1)  Even though various interventions “work,” there is a great deal of variation in effectiveness from study to study.

(2)  There is a great deal of variation within studies too—just because an intervention works overall doesn’t mean it works with everybody who gets it, just that it did better on average.

(3)  There is a great deal of variation in the measures used to evaluate learning in these studies—for example, if an early intervention does a good job improving decoding ability or fluency, should that be given as much credibility as one that evaluated success with a full-scale standardized test that included comprehension, like the accountability tests schools are evaluated on?

(4)  Studies have been very careful to document learning by some measure or other, but they have not been quite as rigorous when it comes to estimating the dosages provided. In my own syntheses of research, I have often had to provide rough guesstimates of the amounts of extra teaching that were actually provided to students (that is, how much intervention was delivered).

(5)  Even when researchers have done a good job of documenting the numbers and lengths of lessons delivered, it has been the rare intervention that was evaluated across an entire school year—and I can’t think of any such studies, offhand, that ran longer than that. That matters because it raises the possibility of diminishing returns. What I mean is that a program with a particular average effect size over a 3-month period may have a lower effect size when carried out for six or 12 months. (Such a program may continue to increase the learning advantage over those longer periods, but the average size of the advantage might be smaller.)

            Put simply? This is a hell of a thing to try to estimate—as useful as it would be for schools. 

            One interesting approach to this problem is the one put forth by Fielding, Kerr, and Rosier (2007). They estimated that the primary grade students in their schools were making a year’s gain, on average, from 60-80 minutes per day of reading instruction. Given this, they figured that students who were behind and were given additional reading instruction through pullout interventions, etc., would require about that many extra minutes of teaching to catch up. So, they monitored kids’ learning and provided interventions, and over a couple of years of that effort, managed to pull their schools up from about 70% of third graders meeting or exceeding standards to about 95%—and then they maintained that level for several years.

            Fielding and company’s general claim is that the effects of an intervention should be in proportion to the effects of regular teaching. Thus, if most kids get 90 minutes of teaching per day and, on average, they gain a year’s worth on a standardized measure, then giving some of the kids an extra 30 minutes of teaching per day should move those kids an additional 3-4 months. That would mean that they would pick up an extra grade level for every 2-3 years of intervention. I’m skeptical about the accuracy of that, but it is an interesting theory.
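            In case it helps to see the arithmetic behind that proportionality claim, here is a minimal sketch in Python. The function names and default numbers are mine, for illustration only—they are not taken from Fielding and company’s book.

```python
# A rough sketch of the proportionality idea described above: if 90 minutes a day
# of core reading instruction yields about a year of growth, then extra minutes
# should yield extra growth in roughly the same proportion. Illustrative only.

def extra_gain_per_year(core_minutes=90, extra_minutes=30, core_gain_years=1.0):
    """Estimated additional growth, in grade-equivalent years, per school year."""
    return core_gain_years * (extra_minutes / core_minutes)

def years_to_gain_one_level(extra_gain=None):
    """Roughly how many years of intervention to pick up one extra grade level."""
    gain = extra_gain if extra_gain is not None else extra_gain_per_year()
    return 1.0 / gain

print(f"Extra gain per year: {extra_gain_per_year():.2f} GE years")            # ~0.33, i.e., 3-4 months
print(f"Years to gain an extra grade level: {years_to_gain_one_level():.1f}")  # ~3 years
```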

            Meta-analyses have usually reported the average effect sizes for various reading interventions to be about .40 (e.g., Hattie, 2009). For example, one-to-one tutoring has a .41 effect (Elbaum, Vaughn, Tejero Hughes, & Watson Moody, 2000).

            However, those effect estimates can vary a great deal, depending on when the studies were done (older studies tend to have less rigorous procedures and higher effects), on the kinds of measures used (comprehension outcomes tend to be lower than those obtained for foundational skills, and standardized tests tend to result in lower effects than experimenter-made ones), and so on.

            For example, in a review of such studies with students in grades 4-12, the average effect size with standardized tests was only .21 (Scammacca, Roberts, Vaughn, & Stuebing, 2015); and in another sample of studies, the impact on standardized comprehension tests was .36 (Wanzek, Vaughn, Scammacca, Gatlin, Walker, & Capin, 2016).

            You can see how rough these estimates are, but let’s just shoot in the middle someplace… .25-.30 (a statistic I obviously just made up, but you can see the basis on which I made it up—relying most heavily on the best studies, the best and most appropriate measures).

            What does that mean? As long as we are talking about primary grade kids and typical standardized reading tests, the usual size of a standard deviation is about 1 year. In other words, if you took a 3rd grade Gates-MacGinitie and tested an average group of second and third graders with it, you’d find about 1 standard deviation difference in scores between the grade-level groups. (Those connections between amount of instructional time and standard deviation change as you move up the grades, so you can’t easily generalize what I am claiming here to the upper grades.)

            Thus, if you have a second-grader who is one full year behind at the beginning of the year (that is, the class gets a 2.0 grade-equivalent score in reading, but this child gets a 1.0), and the student is in a good classroom program and an effective intervention, we should see the class accomplishing a 3.0 (that would be a year’s gain for a year’s instruction), and the laggard student should score at about a 2.25-2.30.

            All things equal, if we kept up this routine for 3-4 years, this child would be expected to close the gap. That sounds great, but think of all the assumptions behind it: (1) the student will make the same gain from classroom teaching that everyone else does; (2) the intervention will be effective; (3) the intervention will be equally effective each year—no one will back off on their diligence just because the gap is being closed, and what was helpful to a second-grader will be equally helpful to a third-grader; (4) the intervention will continue to be offered year-to-year; and (5) the tests will be equally representative of the learning elicited each year.
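            Here is a similar back-of-the-envelope sketch of that effect-size arithmetic (again, mine alone, with the assumptions stated in the comments): it converts an assumed intervention effect of .25-.30 into grade-equivalent growth, using the rough primary-grade rule that one standard deviation equals about one year.

```python
# Back-of-the-envelope projection only. Assumptions (from the discussion above):
# in the primary grades, 1 standard deviation ~= 1 grade-equivalent (GE) year,
# and a year of good classroom teaching produces about 1.0 GE year of growth.

SD_IN_GE_YEARS = 1.0
CLASSROOM_GAIN_PER_YEAR = 1.0

def project_scores(start_ge, effect_size, years):
    """Project a student's GE score year by year: classroom gain plus intervention effect."""
    scores = [start_ge]
    for _ in range(years):
        scores.append(round(scores[-1] + CLASSROOM_GAIN_PER_YEAR
                            + effect_size * SD_IN_GE_YEARS, 2))
    return scores

# The second-grader from the example: starts at GE 1.0 while the class starts at 2.0.
print(project_scores(start_ge=1.0, effect_size=0.28, years=4))
# [1.0, 2.28, 3.56, 4.84, 6.12] -- the class moves 2.0, 3.0, 4.0, 5.0, 6.0,
# so an effect of about .25-.30 closes the one-year gap in roughly 3-4 years.
```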

            That arithmetic tells you how much gain such students should make. Your question doesn’t tell how far behind the kids were when they started, nor does it tell how much gain was made by the 56% who didn’t reach grade level… so moving 44% of them to grade level in 2 years may or may not be very good. I could set up the problem—plugging in some made-up numbers that would make the above estimates come out perfectly, which would suggest that their intervention is having average effectiveness… or I could plug in numbers that might lead you to think that this isn’t an especially effective intervention.

            I have to admit, from all of this, I don’t know whether their intervention is a good one or not. However, this exercise suggests to me that I’d be seeking an intervention that provides at least, on average, a quarter to a third of a standard deviation in extra annual gain for students. And, that has some value.

References
Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC: U.S. Department of Education.
Elbaum, B., Vaughn, S., Tejero Hughes, M., & Watson Moody, S. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.
Fielding, L., Kerr, N., & Rosier, P. (2007). Annual growth for all students… Catch up growth for those who are behind. Kennewick, WA: New Foundation Press.
Hattie, J. (2009). Visible learning. New York: Routledge.
Johnson, P., & Allington, R. (1991). Remediation. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P.D. Pearson (Eds.), Handbook of reading research (vol. 3, pp. 1013-1046). New York: Longman.
Scammacca, N. K., Roberts, G., Vaughn, S., & Stuebing, K. K. (2015). A meta-analysis of interventions for struggling readers in grades 4-12: 1980-2011. Journal of Learning Disabilities, 48, 369-390.
Sonnenschein, S., Stapleton, L. M., & Benson, A. (2010). The relation between the type and amount of instruction and growth in children’s reading competencies. American Educational Research Journal, 47, 358-389.
Wanzek, J., Vaughn, S., Scammacca, N., Gatlin, B., Walker, M. A., & Capin, P. (2016). Meta-analyses of the effects of tier 2 type reading interventions in grades K-3. Educational Psychology Review, 28, 551-5
Weiss, H. B., Little, P. M. D., Bouffard, S. M., Deschenes, S. N., & Malone, H. J. (2009). The federal role in out-of-school learning: After-school, summer school learning, and family instruction as critical learning supports. Washington, DC: Center on Education Policy.

Monday, January 19, 2015

Why Does He Want to Hurt Kindergartners?

Two groups that are strong advocates in early childhood education (Defending the Early Years and the Alliance for Childhood), released a report called Reading Instruction in Kindergarten: Little to Gain and Much to Lose (see http://deyproject.org/2015/01/13/our-new-report-reading-instruction-in-kindergarten-little-to-gain-and-much-to-lose/). They claim there is no research base for the importance of learning to read in kindergarten (so that its inclusion in the Common Core as a goal for K is potentially harmful).   

I think they are wrong about the research here, but wanted to seek out your reaction. Does research suggest that learning to read, especially as indicated in the Common Core, is associated with long-term positive or negative effects? 

Great question. This is one that I’ve been thinking about since I was 5 years old (no, really). My mom asked my kindergarten teacher if she should do anything with me to help, and the teacher discouraged any efforts in that regard. At the time, the “experts” believed that any early academic learning was damaging to children—to their academic futures and to their psyches.

When I became a first-grade teacher, we were still holding back on such teaching, at least during the first semester of grade one. We didn't want to cause the mental disabilities, academic failure, and vision problems predicted by the anti-teaching types.

These days we are doing a great job of protecting children in poverty and minority children from this kind of damage. Of course, many of us middle-class white parents are risking our own kids. It is not uncommon these days for suburban kids to enter first grade, and even kindergarten, knowing how to read. As I’ve written before, I taught both of my kids to read before they entered school.

There are not now, and there never have been, data showing any damage to kids from early language or literacy learning—despite the overheated claims of the G. Stanley Halls, Arnold Gesells, Hans Furths, and David Elkinds (and many others).

Let me first admit that if you seek studies that randomly assign kids either to kindergarten literacy instruction or to no kindergarten literacy instruction and then follow those kids through high school or something… there are no such studies, and I very much doubt that there will be. Given how strong the evidence is on the immediate benefits of early literacy instruction, I don’t think a scholar could get ethics board approval to conduct such a study.

That it wouldn’t be ethical to withhold such teaching for research purposes should give us pause. If it isn’t ethical to do it for research, should it be ethical to do so for philosophical reasons? Yikes.

What we do have is a lot of data showing that literacy instruction improves the literacy skills of the kids who receive that instruction in preschool and kindergarten, and another body of research showing that early literacy skills predict later reading and academic achievement (and, of course, there is another literature showing the connections between academic success and later economic success). There are studies showing that the most literate kids are the ones who are emotionally strongest, and there is even research on Head Start programs showing that as we have improved the early literacy skills in those programs, emotional abilities have improved as well.

And, as for the claim that early teaching makes no difference, I wonder why our fourth-graders are performing at the highest levels ever according to NAEP.

The studies showing the immediate benefits to literacy and language functioning from kindergarten instruction are summarized in the National Early Literacy Panel report, which is available online.

And here are some of the studies showing the long-term benefits of early literacy achievement:

Early reading performance is predictive of later school success (Cunningham & Stanovich, 1997; Duncan, Dowsett, Claessens, Magnuson, et al., 2007; Juel, 1988; Snow, Tabors, & Dickinson, 2001; Smart, Prior, Sanson, & Oberklaid, 2005). This means that young children’s reading performances tend to be pretty stable: kindergarten literacy development is predictive of 1st-grade performance; 1st grade predicts achievement in various upper grades; and the performance at each of these levels is predictive of later levels.

If a youngster is behind in reading in grade 3, then he/she would likely still be behind in high school, which can have a serious and deleterious impact on content learning (science, history, literature, math), high school graduation rates, and economic viability (the students’ college and career readiness). 

The research seems clear to me: teach kids reading early and then build on those early reading skills as they progress through school. Don’t expect early skills alone to transfer to later, higher-level skills; you have to teach students more literacy as they move up the grades (something that has not always happened).

Cunningham, A. E., & Stanovich, K. E. (1997). Early reading acquisition and its relation to reading experience and ability 10 years later. Developmental Psychology, 33, 934-945.
Duncan, G. J., Dowsett, C. J., Claessens, A., Magnuson, K., Huston, A. C., et al. (2007). School readiness and later achievement. Developmental Psychology, 43, 1428-1446.
Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 81, 437-447.
Smart, D., Prior, M., Sanson, A., & Oberklaid, F. (2005). Children with reading difficulties: A six-year follow-up from early primary to secondary school. Australian Journal of Learning Difficulties, 10, 63-75.

Snow, C. E., Tabors, P. O., & Dickinson, D. K. (2001). Language development in the preschool years. In D. K. Dickinson & P. O. Tabors (Eds.), Beginning literacy with language: Young children learning at home and school (pp. 1–26). Baltimore: Paul H. Brookes. (preschool literacy and language predicts 7th grade performance)

Sunday, October 19, 2014

Would you rather have $50,000 or $25,000? Explaining the impact of full-day kindergarten

             Lots of interest, all of a sudden, in full-day kindergarten… I’ve had several questions about that scheme during the past few days. I’m not sure why, but it is well worth discussing yet again.

            What I’ve been asked has varied, but it always seems to come back to, “Is full-day kindergarten better than half-day kindergarten?” I get why that is being asked, and I’m too polite to sneer openly, but what a silly question.

            Should we set your salary at $50,000 or $25,000? Could I pour you a half-glass of wine (or, if the waiter were optimistic, a half-full glass)? Would you prefer to win the first half of the game or the whole game? 

            There have been two sizeable meta-analyses of the full-day/half-day kindergarten controversy—one with an educational thrust and the other from the health care side of the house. Both have reached the same conclusions: Full-day kindergarten provides students with stronger academic preparation in reading, language, and mathematics. Full-day kindergarten provides students with stronger social-emotional support (yes, the full-dayers develop greater self-confidence).

            But both research reviews also conclude that these pluses usually fade by age 8. Providing 5-year-olds with more teaching early on is advantageous in producing good first-graders, but it is unlikely to improve high school graduation rates. At least the way we do it now.

            How can I be so blithe in my allegiance to such a short-term positive?

            Frankly, I think we expect too much of early interventions. It shows a real misunderstanding of the power and value of teaching.

            Many years ago, I used a metaphor comparing teaching to insulin therapy and vaccines. We usually argue the merits of early interventions as being the latter. We tell policymakers that if they invest more in the early years, there won’t be educational or social needs later.

            But education is not a vaccine. If we teach something and it provides an advantage, that advantage will go away if we then spend the following years teaching that same something to everyone else while the early learners mark time.

           Back in the 1970s, Dolores Durkin taught preschoolers to read. She then tracked their progress. When these early readers entered kindergarten, they spent the year working on letter names—content they already knew. Not surprisingly, by the end of the year, their classmates, who had spent the year studying the same material, had partially caught up. A couple more years of that and the benefits of the early learning had dissipated.

            I started by asking whether you would rather have $25,000 or $50,000. That’s silly, too, but imagine if my answer were: take the $25,000, because in 3 or 4 years the advantage would be gone anyway. You would have spent all that money and there’d likely be no material difference between the groups.

            Full-day kindergarten can be a good investment, but only if we save and invest the benefits to be derived from it. (Imagine if you had invested some of that extra $25,000; then there would clearly be an ongoing benefit of the extra dough.)

            In education that would mean continuing to build on those early gains. Full-day kindergartners need first-grade curricula and instruction aimed at taking them from where they are (as a result of the full-day teaching) and then accelerating these children forward again.  

            What do we do instead as a result of early interventions (full-day kindergarten, parent programs, Reading Recovery, etc.)? Typically, we throw these children back into the mix, providing them the same instruction they would have received had there been no intervention. And we invest in various programs aimed at trying to “catch up” the children who did not receive that early intervention (which is why programs like Head Start can appear to be ineffective).


            Build quality on quality, use instruction to accelerate children forward continually, and you will see the long-term benefits of full-day kindergarten and other effective early interventions.
Cooper, H., Allen, A. B., Patall, E. A., & Dent, A. L. (2010). Effects of full-day kindergarten on academic achievement and social development. Review of Educational Research, 80(1), 34-70.
Durkin, D. (1974-1975). A six year study of children who learned to read in school at the age of four. Reading Research Quarterly, 10(1), 9-61.
Hahn, R. A., Rammohan, V., Truman, B. I., Milstein, B., Johnson, R. L., et al. (2014). Effects of full-day kindergarten on the long-term health prospects of children in low-income and racial/ethnic-minority populations: A community guide systematic review. American Journal of Preventive Medicine, 46(3), 312-323.