Thursday, February 23, 2017

How Should We Combine Reading and Writing?

Teacher question:
          So today I was conducting a workshop. I was told the teachers wanted information about reading/writing connections. Easy, right? Then I was told that they departmentalize K-6! At every grade they have a reading teacher and a different writing teacher. Any thoughts, comments, best practices, or research that would go against or support this practice? I know what I believe to be correct, but would love to have your opinions in this conversation. 

Shanahan response: 
            Wowee! For the past several years I’ve been complaining about how schools are organizing themselves with regard to reading and writing. These days, the most common elementary school organization seems to be the 90-minute reading block, with writing taught at some other time of the day (if at all). And many middle and high schools have readers’ and writers’ workshops—managed by different teachers.

            I think both of those schemes are dopey and counterproductive.

            But you’ve found a structure that is even worse!

           These folks sound like the type of people who would separate Romeo and Juliet... Yin and Yang... Lennon and McCartney... love and marriage... Bert and Ernie... spaghetti and meatballs... You get the idea.

            Reading and writing are related in many ways. And though teachers can take advantage of those relationships in ways that improve achievement, doing so becomes very difficult and inefficient when reading and writing are taught separately, as in your example.

            The combination of reading and writing doesn’t just change instruction—it can affect the curriculum itself. For instance, the Common Core State Standards require teachers to teach kids how to combine reading and writing for various purposes.

            I wondered whether this is a CCSS state (your letter didn’t specify). If so, that would be one of my big questions—how are they teaching kids to write about reading? Perhaps those goals can be accomplished within this odd organizational plan, but that would require a great deal of cross-classroom planning (the kind of planning that tends to impinge on teachers’ personal time—and that rarely happens, no matter what the theory).

            Admittedly, I’m aware of no studies that directly measure the impact of such an organization, and the organizational studies that do exist suggest that organizational plans usually don’t matter much in terms of learning. I guess I could praise this district at least for teaching writing—there are still too many places that haven’t figured out the need for that yet.

            However, a major purpose for teaching writing is its strong impact on reading achievement. Recently, some administrators who had been discouraging writing in their districts contacted me. Their concern was that writing took up a lot of time and their state was heavily stressing reading achievement. Time devoted to writing would “obviously” interfere with reaching their reading goals. They wanted to know why I was telling their teachers that writing was a must.

            I explained to them that there were several reasons behind my urgings.

            First, research shows that reading and writing are closely aligned. That is, reading and writing depend upon many of the same skills, strategies, and knowledge—though those are deployed in different ways in reading and writing. In fact, about 70% of the variation in reading and writing abilities is shared.

            For example, to read one has to decode words. That means being able to look at a word, recognize its elements (letters and letter combinations), retrieve the pronunciations associated with those letters, and blend those into a word pronunciation. For that to work, of course, it all has to happen very quickly—and eventually with little conscious attention.

            In contrast, to write one has to spell words. That means being able to listen to the pronunciation of a word, to recognize its elements (phonemes—that is language sounds), to retrieve letters that match those sounds, and to recognize whether they are combining properly to make a well-formed word. And, again, fluency is essential.

            Decoding is arguably easier than spelling, but learning to both pronounce and spell words simultaneously helps to increase decoding fluency. It provides a kind of overlearning that enhances one’s facility with both. The same argument can be made concerning phonological awareness and the use of vocabulary, grammar, text structure, tone, and other text elements—and the same kinds of connections exist between the routines one uses to pull up background knowledge, to set purposes, to predict, and so on.

            Given the extensive overlaps, it should be evident that combined instruction would be a lot more efficient. When a school is trying to accomplish higher achievement, that kind of efficiency and teaching power is indispensable.

            Second, reading and writing are communicative processes, and there are cross-modal benefits to be derived from having students engage in each. Readers who are writers can end up with insights about what authors are up to and how they exert their effects—something of great value in text interpretation. Likewise, writers who are readers can gain insights into the needs of other readers. Imagine how that can help one to write better.

            This kind of insight sharing is unlikely without some teacher guidance—and making those kinds of connections across reading and writing experiences depends on sharing those experiences with the students. It would be hard for a teacher to know what came up in the various shared reading experiences that took place in the other class.

            Third, reading and writing can be used in combination to accomplish particular goals. The Common Core emphasizes two particular goals for such combining: using writing to improve learning from text, and using the reading of multiple texts to improve the writing of syntheses or reports. The first of these is the most pertinent to your question.

            Steve Graham and Michael Hebert (2010) carried out a meta-analysis of more than 100 studies in which students wrote about text. They found that writing in various ways about what one had read improved comprehension and learning, and it did so better than reading alone, reading and rereading, or reading and discussing.

            Students should not just be writing about text; they should be learning how to write about text effectively: how to write to text models, how to write summaries, how to write extended critiques and analyses, and how to write syntheses.

            So, my reading of the research says: Teach kids to write and use this instruction to improve reading achievement. Do it separately and you are leaving achievement points on the table. No question this could be accomplished by two different teachers, but what a complicated mess that makes of it. Simplify. 

          (Pass the popcorn and butter, I'm going to watch some Laurel and Hardy films. Some things just go together).

References           
Graham, S., & Hebert, M. (2010). Writing to read: Evidence of how writing can improve reading. Washington, DC: Alliance for Excellent Education.

Shanahan, T. (2004). Overcoming the dominance of communication: Writing to think and learn. In T. L. Jetton & J. A. Dole (Eds.), Adolescent literacy research and practice. New York: Guilford Press.

Shanahan, T. (2008). Relations among oral language, reading, and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 171-186). New York: Guilford Press.

Shanahan, T. (2015). Relationships between reading and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 194-210). New York: Guilford Press.

Tierney, R. J., & Shanahan, T. (1991). Research on the reading-writing relationship: Interactions, transactions, and outcomes. In R. Barr, M. L. Kamil, P. Mosenthal, & P. D. Pearson (Eds.), Handbook of reading research (pp. 246-280). New York: Longman.

Sunday, February 12, 2017

How Much Reading Gain Should Be Expected from Reading Interventions?

This week’s challenging question:
I had a question from some schools people that I’m not sure how to answer. I wonder if anyone has data on what progress can be expected of students in the primary grades getting extra help in reading. 

Let’s assume that the students are getting good/appropriate instruction, and the data were showing that 44% of students (originally assessed as “far below”) across grades 1-3 were on pace to be on grade level after 2 years of this extra help.

Is this expected progress for such students or less than what has been shown for effective early reading interventions?

Shanahan’s answer:
            This is a very complicated question. No wonder the field has largely ducked it. Research is very clear that the amount of instruction matters for achievement (e.g., Sonnenschein, Stapleton, & Benson, 2010), and there are scads of studies showing that various ways of increasing the amount of teaching can have a positive impact on learning (e.g., preschool, full-day kindergarten, afterschool programs, summer school programs).

            Although many think that within-the-school-day interventions are effective because the intervention teachers are better or the methodology is different, there is good reason to think that the effects are mediated by the amount of additional teaching that the interventions represent. (Title I programs have been effective when delivered after school and in summer, but not so much when delivered during the school day (Weiss, Little, Bouffard, Deschenes, & Malone, 2009); there are concerns about RtI programs providing interventions during reading instruction instead of in addition to it (Balu, Zhu, Doolittle, Schiller, Jenkins, & Gersten, 2015)).

            Research overwhelmingly has found that a wide range of reading interventions work—that is, the kids taught by them outperform similar control-group kids on some measure or other—but such research has been silent about the size of gains that teachers can expect from them (e.g., Johnson & Allington, 1991). There are many reasons for such neglect:

(1)  Even though various interventions “work,” there is a great deal of variation in effectiveness from study to study.

(2)  There is a great deal of variation within studies too—just because an intervention works overall doesn’t mean it works with everybody who gets it, just that it did better on average.

(3)  There is a great deal of variation in the measures used to evaluate learning in these studies—for example, if an early intervention does a good job improving decoding ability or fluency, should that be given as much credibility as one that evaluated success with a full-scale standardized test that included comprehension, like the accountability tests schools are evaluated on?

(4)  Studies have been very careful to document learning by some measure or other, but they have not been quite as rigorous when it comes to estimating the dosages provided. In my own syntheses of research, I have often had to make rough guesstimates as to the amounts of extra teaching that were actually provided to students (that is, how much intervention was delivered).

(5)  Even when researchers have done a good job of documenting the numbers and lengths of lessons delivered, it has been the rare intervention that was evaluated across an entire school year—and I can’t think of any examples, offhand, of such studies lasting longer than that. That matters because it raises the possibility of diminishing returns. What I mean is that a program with a particular average effect size over a 3-month period may show a smaller effect when carried out for six or twelve months. (Such a program may continue to increase the learning advantage over those longer periods, but the average size of the advantage might be smaller.)

            Put simply? This is a hell of a thing to try to estimate—as useful as it would be for schools. 

            One interesting approach to this problem is the one put forth by Fielding, Kerr, and Rosier (2007). They estimated that the primary-grade students in their schools were making a year’s gain, on average, from 60-80 minutes per day of reading instruction. Given this, they figured that students who were behind and were given additional reading instruction through pullout interventions, etc., would require about that many extra minutes of teaching to catch up. So they monitored kids’ learning and provided interventions, and over a couple of years of that effort managed to pull their schools up from about 70% of third graders meeting or exceeding standards to about 95%—and then they maintained that for several years.

            Fielding and company’s general claim is that the effects of an intervention should be in proportion to the effects of regular teaching… thus, if most kids get 90 minutes per day of teaching and, on average, they gain a year’s worth on a standardized measure, then giving some of the kids an extra 30 minutes of teaching per day should move those kids an additional 3-4 months. That would mean that they would pick up an extra grade level for every 2-3 years of intervention. I’m skeptical about the accuracy of that, but it is an interesting theory.
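
            If it helps to see the arithmetic, here is that proportional-gain logic as a few lines of Python. This is a minimal sketch of the idea as I have summarized it; the function name and figures are illustrative, not anything taken from Fielding, Kerr, and Rosier’s book.

```python
# A minimal sketch of the proportional-gain idea described above.
# Assumption: reading growth scales with minutes of daily instruction.

def expected_annual_gain(core_minutes, extra_minutes, core_gain_years=1.0):
    """If core_minutes of daily instruction yields core_gain_years of
    growth per year, assume extra minutes add growth proportionally."""
    return core_gain_years * (core_minutes + extra_minutes) / core_minutes

gain = expected_annual_gain(core_minutes=90, extra_minutes=30)
print(round(gain, 2))                 # 1.33 years of gain per year
print(round((gain - 1.0) * 12))       # ~4 extra months per year
print(round(1.0 / (gain - 1.0), 1))   # ~3 years to close a 1-year gap
```

            Notice that the model is doing nothing but multiplying; whether learning really scales with minutes like this is exactly the part I am skeptical about.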

            Meta-analyses have usually reported the average effect sizes for various reading interventions to be about .40 (e.g., Hattie, 2009). For example, one-to-one tutoring has a .41 effect (Elbaum, Vaughn, Tejero Hughes, & Watson Moody, 2000).

            However, those effect estimates can vary a great deal depending on when the studies were done (older studies tend to have less rigorous procedures and higher effects, etc.), on the kinds of measures used (comprehension outcomes tend to be lower than those obtained for foundational skills, and standardized tests tend to result in lower effects than experimenter-made ones), and so on.

            For example, in a review of such studies with students in grades 4-12, the average effect size with standardized tests was only .21 (Scammacca, Roberts, Vaughn, & Stuebing, 2015); and in another sample of studies, the impact on standardized comprehension tests was .36 (Wanzek, Vaughn, Scammacca, Gatlin, Walker, & Capin, 2016).

            You can see how rough these estimates are, but let’s just shoot in the middle someplace… .25-.30 (a statistic I obviously just made up, but you can see the basis on which I made it up—relying most heavily on the best studies, the best and most appropriate measures).

            What does that mean? As long as we are talking about primary-grade kids and typical standardized reading tests, the usual size of a standard deviation is about 1 year. In other words, if you took a 3rd-grade Gates-MacGinitie and tested an average group of second and third graders with it, you’d find about 1 standard deviation difference in scores between the grade-level groups. (Those connections between amount of time and standard deviation change as you move up the grades, so you can’t easily generalize what I’m claiming here to the upper grades.)

            Thus, if you have a second-grader who is one full year behind at the beginning of the year (that is, the class gets a 2.0 grade-equivalent score in reading, but this child gets a 1.0), and the student is in a good classroom program and an effective intervention, we should see the class accomplishing a 3.0 (that would be a year’s gain for a year’s instruction), and the laggard student should score at a 2.25-2.30.
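
            To make that arithmetic explicit, here it is as a back-of-the-envelope calculation. The only inputs are the assumptions already stated above: one standard deviation equals roughly one grade-equivalent year in the primary grades, and the intervention adds an effect of .25-.30 (I use the midpoint).

```python
# The one-year projection described above, spelled out. The .275 is the
# midpoint of the made-up .25-.30 effect-size range; 1 SD ~= 1 year.

start_score = 1.0       # student's grade equivalent entering 2nd grade
classroom_gain = 1.0    # a year's growth from regular classroom teaching
effect_size = 0.275     # midpoint of the .25-.30 estimate
sd_in_years = 1.0       # size of 1 standard deviation, in years

end_score = start_score + classroom_gain + effect_size * sd_in_years
print(end_score)        # 2.275, i.e., the 2.25-2.30 mentioned above
```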

            All things equal, if we kept up this routine for 3-4 years, this child would be expected to close the gap. That sounds great, but think of all the assumptions behind it: (1) the student will make the same gain from classroom teaching that everyone else does; (2) the intervention will be effective; (3) the intervention will be equally effective each year—no one will back off on their diligence just because the gap is being closed, and what was helpful to a second-grader will be equally helpful with a third-grader; (4) the intervention will continue to be offered year-to-year; and (5) the tests will be equally representative of the learning elicited each year.
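
            Carried forward mechanically, those same assumptions produce the 3-4 year catch-up estimate; the sketch below is only that mechanical extension, not a prediction about any real program.

```python
# Year-by-year gap closure under the same assumptions: the student
# out-gains the class by ~0.275 grade equivalents annually, so a
# 1.0-year gap closes in 1.0 / 0.275, or roughly 3.6 years.

gap = 1.0                    # starting deficit, in grade-equivalent years
extra_gain_per_year = 0.275  # intervention's annual advantage over peers
years = 0
while gap > 0:
    gap -= extra_gain_per_year
    years += 1
print(years)                 # 4 whole school years (3.6 rounded up)
```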

            That tells you how much gain the group should make. Your question doesn’t tell how far behind the kids were when they started, nor does it tell how much gain was made by the 56% who didn’t reach grade level… so moving 44% of them to grade level in 2 years may or may not be very good. I could set up the problem—plugging in some made-up numbers that would make the above estimates come out perfectly, which would suggest that their intervention is having average effectiveness… or I could plug in numbers that might lead you to think that this isn’t an especially effective intervention.

            I have to admit, from all of this, I don’t know whether their intervention is a good one or not. However, this exercise suggests to me that I’d be seeking an intervention that provides at least, on average, a quarter to a third of a standard deviation in extra annual gain for students. And, that has some value.

References
Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC: U.S. Department of Education.
Elbaum, B., Vaughn, S., Tejero Hughes, M., & Watson Moody, S. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.
Fielding, L., Kerr, N., & Rosier, P. (2007). Annual growth for all students… Catch up growth for those who are behind. Kennewick, WA: New Foundation Press.
Hattie, J. (2009). Visible learning. New York: Routledge.
Johnson, P., & Allington, R. (1991). Remediation. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P.D. Pearson (Eds.), Handbook of reading research (vol. 3, pp. 1013-1046). New York: Longman.
Scammacca, N. K., Roberts, G., Vaughn, S., & Stuebing, K. K. (2015). A meta-analysis of interventions for struggling readers in grades 4-12: 1980-2011. Journal of Learning Disabilities, 48, 369-390.
Sonnenschein, S., Stapleton, L. M., & Benson, A. (2010). The relation between the type and amount of instruction and growth in children’s reading competencies. American Educational Research Journal, 47, 358-389.
Weiss, H.B., Little, P.M.D., Bouffard, S.M., Deschenes, S.N., & Malone, H.J. (2009). The federal role in out-of-school learning: After-school, summer school learning, and family instruction as critical learning supports. Washington, DC: Center on Education Policy.
Wanzek, J., Vaughn, S., Scammacca, N., Gatlin, B., Walker, M. A., & Capin, P. (2016). Meta-analyses of the effects of tier 2 type reading interventions in grades K-3. Educational Psychology Review, 28, 551-576.

Tuesday, February 7, 2017

The Instructional Level Concept Revisited: Teaching with Complex Text

            Boy, oh, boy! The past couple of weeks have brought unseasonably warm temperatures to the Midwest, and unusual flurries of questions concerning teaching children at their so-called “instructional levels.” Must be salesman season, or something.

            One of the questions asked specifically about my colleague Dick Allington, since he has published articles and chapters saying that teaching kids with challenging text is a dumb idea. And a couple of other queries referred to the advertising copy from Teachers College Press (TCP) about their programs. Both Dick and TCP threw the R-word (research) around quite a bit, but neither actually managed to marshal research support for their claims, which means that the instructional level, after 71 years, still remains unsubstantiated.

            What I’m referring to is the long-held belief that kids learn more when they are matched to texts in particular ways. Texts can be neither too hard nor too easy, or learning is kaput. At least that has been the claim. It sounded good to me as a teacher, and I spent a lot of time testing kids to find out which books they could learn from, and trying to prevent their contact with others.

            According to proponents of the instructional level, if a text is too easy, there will be nothing to learn. Let’s face it: if a reader already knows all the words in a text and can already answer all of the questions with no teacher support, it wouldn’t seem to provide much learning opportunity. Surprisingly, however, early investigations found just the opposite—the less there was to learn from a book, the greater the progress the students seemed to make. This was so obviously wrong that the researchers just made up the criteria separating the independent and instructional levels.

            Likewise, the theory holds out the possibility that some texts can be too hard. In other words, the more there would be to learn in a text, the less the students would be able to learn from it.

            But what is too easy and what is too hard?

            Back in the 1940s, Emmett Betts, reading authority extraordinaire, reported on a research study completed by one of his students. He claimed that the study showed that if you matched kids to text using the criteria he proposed (95-98% word reading accuracy and 75-89% reading comprehension), kids learned more.

            Unfortunately, no such study was done. Betts sort of just made up the numbers, and teachers and professors have rapturously clung to them ever since. Generation after generation of teachers has been told that teaching kids at these levels improves learning. (Though, due to Common Core, at least some programs have been advancing—arbitrarily—new criteria, perhaps in hopes of matching more students to books at the required levels.)
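
            To see how mechanically such placements get made, here is a small Python sketch of the sorting the criteria imply. The instructional band (95-98% word accuracy, 75-89% comprehension) comes from the paragraph above; the independent and frustration cutoffs are the commonly repeated extensions of Betts’s scheme, so treat those as assumptions rather than established numbers.

```python
# Text placement per the (invented) Betts-style criteria. Only the
# instructional band comes from the post; the other cutoffs are the
# commonly cited extensions and are assumptions here.

def betts_level(word_accuracy, comprehension):
    """Both arguments are percentages (0-100)."""
    if word_accuracy >= 99 and comprehension >= 90:
        return "independent"
    if word_accuracy >= 95 and comprehension >= 75:
        return "instructional"
    return "frustration"

print(betts_level(word_accuracy=96, comprehension=80))  # instructional
print(betts_level(word_accuracy=90, comprehension=60))  # frustration
```

            The point is not that those thresholds are right (remember, they were essentially invented); it is that two percentages end up deciding which books a child is allowed to read.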

            Over the past decade or so, several researchers have realized that this widely recommended practice is the educational equivalent of fake news, and they have started reporting studies on its effectiveness. And the instructional level has not done well; it either has made no difference—that is, the kids taught from grade-level materials do as well as those taught at an instructional level—or the instructional-level placements have led to less learning. Instructional-level placements tend to limit kids’ exposure to the linguistic and textual features that they don’t yet know how to negotiate; the practice reduces their opportunity to learn. The kids not so protected often do better.

            It still makes sense to start kids out with relatively easy texts when they are in K-1, since they have to learn to decode. Beginning reading texts should have enough repetition and should provide kids lots of exposure to the most frequent and straightforward spelling patterns in our language. But once that hurdle is overcome, it makes no sense to teach everybody as if they were 5 years old. The studies are pretty clear that from a second-grade reading level on, kids can learn plenty when taught with more challenging texts.

            Here are some related questions that have been asked of me over the past 2-3 weeks:

But my kids are learning to read and they have for years. Why change now?
            Because of the opportunity cost; your students could do even better. Students often tell me that they hate reading specifically because they always get placed in what they call the “stupid kid books.” If kids can learn as much or more from the grade level texts—and they can—we should be giving them opportunities to read the texts that are more at their intellectual levels and that match their age-level interests.

Isn’t it true that the studies in which the kids did better varied not just the book levels, but how the students were taught?
            Yes, that is true, and instructional-level proponents have raised that as a complaint about these studies. However, no one is claiming that students will just learn more from harder books. As students confront greater amounts of challenge, the teaching demands go up. One suspects that part of the popularity of the instructional-level idea is that the teacher doesn't have to do as much (since the kids start out knowing almost all the words and can read the texts with high comprehension with no teacher support).

What about older kids who are still “beginning readers”?
            Anyone—at whatever age level—who is just starting to learn to read, is still going to need to master decoding. Teaching such older students with more demanding texts will just make it harder to master the relations between spelling and pronunciation. Definitely stay with relatively easy books with older readers who are reading at a kindergarten or first-grade level.

Are you saying no more small group teaching?
            No, small group teaching is fine, unless the purpose of that grouping is to teach students with different levels of books. In fact, I think providing small group teaching to students when they are in the harder materials makes greater sense than how we tend to do it now (which is to put kids in easier materials when they work closely with the teacher—I’d do the opposite).

So you don’t believe in differentiation?
            I believe in differentiation, but I don’t believe that means placing kids in different levels of books. There is a large and growing body of research that suggests that we could more profitably vary the amount and type of scaffolding for the needs of different students.  

Dick Allington has admitted that some studies do show that kids can learn more from more challenging texts, but that the scaffolding in these studies is simply too demanding for the average teacher. What do you think?
            Dick was referring to studies done by Alyssa Morgan and Melanie Kuhn (and their colleagues). In both, the frustration-level placements led to more learning than the instructional-level ones. In the Morgan study, she used paired reading, and the scaffolding was provided by untrained 7-year-olds (though they were the relatively better readers). I suspect most teachers can scaffold as well as a second-grader, and I don’t find paired reading interventions to be beyond most teachers’ skill levels. I asked Melanie Kuhn directly about this criticism. She was surprised. Teachers in the original study had used their teaching routines so easily that Kuhn and company decided to collect data for an additional year. I reject the idea that only the most elite teachers can provide this kind of teaching.

So you totally reject the instructional level idea for anyone but beginners?
            No, I’ve come to believe that the instructional level would be a great goal to aim at for the completion of a lesson. If, when you are finishing up with a text, the kids know 75% or more of the ideas and can read 95% or more of the words, then you have done a terrific job. One of Linnea Ehri’s studies found that the kids who did best ended up with 98% accuracy, for instance. Of course, if you kept starting with texts at those levels, then you would have little to teach. Start kids out with complex texts that they cannot read successfully; then teach them to read those texts well.

Should all the texts that we teach from be at the levels that Common Core set?
            No, I would argue (based on very little direct evidence—so I’m stretching a bit here) that students should read several texts across their school days and school years. This reading should vary greatly in difficulty, from relatively easy texts that afford students extensive reading with little teacher support, to very demanding texts that could be read successfully only with a great deal of rereading and teacher scaffolding. I believe that much is learned from that kind of varied practice.