
Saturday, February 22, 2014

First-Grade Close Reading

I've been looking for online and workshop information on close reading, and everything I've seen and heard has recommended doing close reading with material that is well above kids' independent reading level. Your post talks about the futility of doing a close read on preprimer material, which I completely agree with. What do you think about using higher text, say second grade, with second-semester first graders in a teacher-supported group lesson?

I recently tried a bit of close reading with my first graders (see the second section of this post if you have time to read it: http://firstgradecommoncore.freeforums.net/thread/4/close-reading - if not, I completely understand). While I found it valuable, I'm struggling with not having enough hours in the day and with prioritizing the needs of my students.


I challenged close reading with young children because of the lack of depth in the texts that are appropriate for them to read. Close reading requires a deep or analytical reading that considers not just what a text says, but how it works as a text (e.g., examining layers of meaning, recognizing the effectiveness of literary devices, interpreting symbolism). Beginning reading texts simply lack this depth of meaning (or the ones that have it are usually too hard for kids to read).

Your email and the YouTube link included in it imply that the idea of close reading is simply to read a challenging text with comprehension (challenging in this case meaning hard rather than complex—a very important distinction). For example, the video shows students interpreting word meanings in a hard text. A good lesson, yes indeed, but not really a close read.

I definitely would not assign second-grade texts to second-semester first-graders unless they were reading at a second-grade level (that is not uncommon, so if your kids are reading that well, go for it). For more typical first-graders (and those who are struggling), I would not do this. You can definitely engage kids in close listening activities with richer texts read by the teacher (a lot of the reading in the video you included, by the way, seemed to be done by the teacher), but that should not take the place of the children's reading.

I agree with the idea that phonological awareness, phonics, oral reading fluency, writing, and reading comprehension (not close reading) should be the real priorities in grade one… so should oral language, of course, and close listening fits that idea nicely. You’ll have plenty of time to ramp this up when students are reading at a second-grade level.


Friday, January 18, 2013

Q & A On All Things Common Core

Recently, I participated in a webinar for McGraw-Hill about teaching with the common core standards. Participants sent in some questions and I have provided answers to those questions. Thought you might be interested in the wide-ranging conversation. Here is a link to the webinar itself in case you want to start there.

http://www.shanahanonliteracy.com/2012/11/mcgraw-hill-webinar.html

Any suggestions as to how raising text levels will work for students who are learning English? Are the same ideas relevant? I suspect that it isn't that different across languages in terms of how this works generally or how well it will work. What needs to be scaffolded might differ, however. Usually second language learners will need more vocabulary or grammar support than will be needed by native speakers (but there can be a lot of individual variation in this). Second language experts have long expressed concerns about text placements that undershot ELL students' intellectual capacities; that problem will definitely be improved by this approach. For more info on English learners and the common core, visit http://ell.stanford.edu/

With the huge emphasis on increased text level, it seems that the amount of reading done will decrease significantly. What are your thoughts on this? That is a real possibility and it could be a problem. I think it is something we will need to be vigilant about. I continue to stress that NOT all student reading needs to be in the common core ranges, and the importance of varied reading difficulty across the school day and school year. Obviously, when one is dealing with very hard text, it makes sense to work with smaller doses of it (because it takes longer to figure out); with easier text the doses can be bigger. By working with a mix of texts, it is possible to get both the intensity and the extensiveness of practice needed to increase student reading levels and reading stamina.

David Coleman suggests reading 50% informational and 50% literary text. When we present students with "reach" texts, would you suggest we put more informational than literary texts in their hands? No, I generally wouldn't say that, though in practice it might turn out that way. Kids will need experience in handling a wide variety of more challenging texts. However, I've been looking at the texts that elementary teachers report using with kids. The informational texts that they use tend to be harder than the literary texts… so if the harder texts that are available in your classroom are the informational texts, then these texts might very well be the ones that you use as reach texts.

If the vast majority of students in a classroom are reading two grade levels below their current grade, and the teacher is exposing the students to grade-level shared text, is this enough? Should the shared text be ABOVE current grade level in this case? I don't think there is a specific match of text to students (in terms of text difficulty) that facilitates learning. There will always be three variables: how well the student reads now, how hard the text is, and how much thoughtful support the teacher provides to help the student figure the text out. Working with materials two years harder than we would have used in the past is likely a sufficient distance to allow learning; now it is up to the teacher to provide enough support to encourage learning.

What would be the accuracy percentage you'd recommend when you suggest students read at their frustration level/"reach" level? See the previous question. There is no set level. William Powell's work suggests that these accuracy percentages might vary by grade level, but they were often in the mid-80 percent range for the students who made the greatest gains (which is much lower than we would have encouraged in the past).
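For those who want to see the arithmetic behind these accuracy criteria, here is a minimal sketch (in Python, with invented numbers). The 95% and mid-80s cutoffs are the ones discussed above; the function names and the example passage are mine, and nothing here is a validated placement procedure.

```python
def word_reading_accuracy(total_words: int, errors: int) -> float:
    """Percent of words read correctly in an oral reading sample."""
    return 100.0 * (total_words - errors) / total_words

def meets_criterion(accuracy: float, criterion: float) -> bool:
    """Check an accuracy score against an instructional-level criterion."""
    return accuracy >= criterion

# Example: a 100-word passage read with 12 errors -> 88% accuracy.
acc = word_reading_accuracy(100, 12)
print(f"Accuracy: {acc:.0f}%")
print("Meets the traditional 95% criterion?", meets_criterion(acc, 95.0))  # False
print("Meets Powell's ~85% criterion?", meets_criterion(acc, 85.0))        # True
```

In other words, a reader at 88% accuracy would be in a "frustration" text by the traditional standard but within the range Powell associated with the greatest gains.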

What is the role of literary nonfiction? If you want to prepare students to read well you should give them opportunities to work with a wide variety of text types—so they gain experience dealing with different language, text features, purposes, structures, etc. Literary nonfiction—essays, biographies, speeches, criticism—is wonderful and important. However, literature and non-literary informational text (science, history, etc.) are important, too. I fear that many schools will increase literary nonfiction, but will not increase the reading of non-literary informational text. (I also fear the pressure in some schools for the English Department to take on science and history reading—which makes no sense to me).

Can you put a percent on the maximum amount of time allowed for out-of-level reading? No. We definitely don’t know what the best mix of challenging and less challenging might be.

Do these shifts also apply to early intervention reading programs in all grade levels? Early intervention programs focus on learners in preschool, kindergarten, and grade one. I don't think it would be a good idea to ramp text difficulty up for these students. Stay with the kinds of materials and student-text matches that we have traditionally used at these levels. (For later interventions, I like the idea of the highly skilled intervention teacher in an advantaged situation—smaller groups of children, for instance—working with harder text. Remember, to learn from such text a lot more support is needed, so shifting to difficult text in the high-support situation makes greater sense.)

If this is true for grades 2-12, is it the role of grades K-1 to teach ALL students to the point of being on grade-level expectations of the CCSS? Grades PreK-1 have a lot to accomplish. The reason we don't ramp up the difficulty level of texts there is to ensure that students develop their beginning reading and writing skills (e.g., phonological awareness, decoding, fluency, comprehension). Let's not try to hurry past that part of the process (by raising text levels), but let's give kids the skills that will allow them to benefit from the more challenging texts they will face later.

Using grade-level texts (not a steady diet of out-of-level ones) is a big shift in thinking. As a literacy coach, how do I convince teachers that what we have been telling them to do is not the CCSS way anymore? I can feel a revolt coming on! However, it makes good sense to me. Are there studies out there about how this shift impacts students' achievement?
AND this one: 
During the webinar, I asked about research that supported asking students to read above their instructional levels. Dr. Shanahan indicated that there were a few studies. Could you give me the names of some of those researchers?

Here are a couple of past blogs that provide this information.
I work in a small district in Cedar City, Utah, as a school literacy specialist. Our district does not even have a core reading program that it requires all schools to use. (I used to work in the Granite School District in Salt Lake City.) My teachers want new curriculum in order to teach these new standards. Any suggestions on how to get the district to realize that new material is a real need with new standards?

The Common Core is requiring the use of more challenging texts than has been common in the past. It is requiring substantially greater attention to informational text and literary non-fiction. It is requiring greater attention to connections across texts, and to the use of texts that have sufficient intellectual depth to support close readings. I can't imagine schools reaching the common core without making changes to their texts (how big those changes will need to be will depend on what is in place now, of course).

I would like to ask Dr. Shanahan if the three reads (first for key ideas/details, second for craft/structure, and third for integration of knowledge/ideas) work for informational text as well as literary? AND Can you briefly describe what a close reading in science might look like?

Yes, attention to those three kinds of thinking makes sense with both kinds of reading, though the specifics may differ a bit (a key idea in one type of text is not necessarily a key idea in another). Early on, a close reading of science is not that different from other close readings, but as students move up through the grades—and science texts get more specialized—it can look pretty different. However, the structure of close reading can be pretty similar even when some of the specifics change. Thus, initially, it is important that students be able to identify the main idea and key details. This means students have to learn to focus on the key scientific information that would allow them to summarize the text adequately (so far, not that different from literary reading, and yet what kind of information matters most differs even at this point—character motive is pretty important in literary reading, while material cause, or causation without motive, is essential to science). A deeper stab at reading science will then require attention to the nature of the author's language and the structure of the text: this might include teaching students to understand the structure of an experiment or the kind of sentence-to-sentence analysis of text illustrated in Reading in Secondary Content Areas. Then, to push even deeper, students can analyze the connections among the parts of the text (such as the connections of data-communication devices, tables and the like, to the prose) or compare one scientific account with another.

What are your thoughts about using gradated texts (texts on a variety of levels) as a scaffold? I think reading multiple texts on a topic written at different levels of difficulty is a terrific scaffold for dealing with harder text. In the past, if a text was hard for students, reading teachers would have encouraged using a different text "instead of." The idea here is not to flee from the hard text, but to read some easier "in addition to" texts on the same topic and to climb these easier texts like stair-steps.

Where do learning disabled students fit with regard to these shifts? I think teachers who work with these students may rely less on simply putting kids in easier texts as their response to these students’ needs, and more on trying to help them to deal with whatever they are struggling with.

What recommendations do you have for getting a student, who may be reading 1-2 years below their grade level, to read at their grade level in the shortest amount of time? I would make sure the student had about 3 hours per day of reading and writing work, and this should engage the student in reading every day: something relatively easy and something challenging. The work with the challenging text needs guidance and support from a teacher, with a lot of attention and explicit work on vocabulary. I would also argue for substantial fluency work (that could be with the same challenging text—repeated oral reading of some form or other). Depending on the age and skill level, I might push for explicit decoding instruction. I would encourage/require a lot of writing, too.

Sunday, August 21, 2011

Rejecting Instructional Level Theory

A third bit of evidence on the complex text issue has to do with the strength of evidence on the other side of the ledger. In my two previous posts, I have indicated why the common core is embracing the idea of teaching reading with much more complex texts. But what about the evidence that counters this approach?

Many years ago, when I was a primary grade teacher, I was struggling to teach reading. I knew I was supposed to have groups for different levels of kids, but in those days information about how to make those grouping decisions was not imparted to mere undergraduates. I knew I was supposed to figure out which books would provide the optimal learning experience, but I had no technology to do this.

So, I enrolled in a master’s degree program and started studying to be a reading specialist. During that training I learned how to administer informal reading inventories (IRI) and cloze tests and what the criteria were for independent, instructional, and frustration levels. Consequently, I tested all my students, and matched books to IRI levels using the publisher’s readability levels. I had no doubt that it improved my teaching and students’ learning.

I maintained my interest in this issue when I went off for my doctorate. I worked with Jack Pikulski. Jack had written about informal reading inventories (he'd studied with Johnson and Kress), and as a clinical psychologist he was interested in the validity of these measures. He even sent a group of grad students to an elementary school to test a bunch of kids, but nothing ever came of that study. Nevertheless, I learned a lot from Jack about that issue.

He had (has) a great clinical sense, and he was skeptical of my faith in the value of those instructional level results. He recognized that informal reading inventories were far from perfect instruments and that at best they had general accuracy. They might be able to specify a wide range of materials for a student (say, from grade 2 to 4), but they couldn't do better than that. (Further complicating things were the readability estimates, which had about the same level of accuracy.)

For Jack, the combination of two such rough guestimates was very iffy stuff. I liked the certainty of it, though, and clung to that for a while (until my own clinical sense grew more sophisticated).

Early in my scholarly career, I tracked down the source of the idea of independent, instructional, and frustration levels. It came from Emmett Betts' textbook. He attributed the scheme to a study conducted by one of his doctoral students. I tracked down that dissertation, and to my dismay it was evident that they had just made up those designations without any empirical evidence, something I wrote about 30 years ago!

Since then, readability measures have improved quite a bit, but our technologies for setting reading levels have not. Studies by William Powell in the 1960s, 70s, and 80s showed that the criteria we were using did not identify optimum levels for student learning. He suggested more liberal placement criteria, particularly for younger students. More liberal criteria would mean that instead of accepting 95% word reading accuracy, as Betts had suggested, Powell identified 85% as the better predictor of learning—which would mean putting kids in relatively more difficult books.

Consequently, I have sought studies that would support the original contention that we could facilitate student learning by placing kids in the right levels of text. Of course, guided reading and leveled books are so widely used it would make sense that there would be lots of evidence as to their efficacy.

Except that there is not. I keep looking and I keep finding studies that suggest that kids can learn from text written at very different levels (like the studies cited below by Morgan and O’Connor).

How can that be? Well, basically we have put way too much confidence in an unproven theory. The model of learning underlying that theory is too simplistic. Learning to read is an interaction between a learner, a text, and a teacher. Instructional level theory posits that the text difficulty level relative to the student reading level is the important factor in learning. But that ignores the guidance, support, and scaffolding provided by the teacher.

If the teacher is doing little to support the students’ transactions with text then I suspect more learning will accrue with somewhat easier texts. However, if reasonable levels of instructional support are available then students are likely to thrive when working with harder texts.

The problem with guided reading and similar schemes is that they are focused on helping kids to learn with minimal amounts of teaching (something Pinnell and Fountas have stated explicitly in at least some editions of their textbooks). But that switches the criterion. Instead of trying to get kids to optimum levels, that is, the levels that would allow them to learn most, they have striven to get kids to levels where they will likely learn best with minimal teacher support.

The common core standards push back against the notion that students learn best when they receive the least teaching. The standards people want to know what it takes for kids to learn most, even if the teacher has to be deeply involved. For them, challenging text is the right ground to maximize learning… but the only way that will work is if kids are getting substantial teaching support in the context of that hard text.

P.S. Lexiles have greatly improved readability assessment (shrinking standard errors of measurement and increasing the amount of comprehension variance that can be explained by text difficulty), and yet we are in no better shape than before, since there are no studies indicating that if you teach students at particular Lexile levels more learning will accrue. (I suspect that if future studies go down this road, they will still find that the answer is variable; it will depend on the amount and quality of instructional support.)
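For readers curious about what a readability formula actually computes, here is a small illustration. The Lexile algorithm is proprietary, so this sketch uses the classic, public Flesch-Kincaid grade-level formula instead; the syllable counter is a crude heuristic and the sample text is invented, so treat this as a demonstration of the idea, not a substitute for any validated measure.

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group heuristic; real formulas use pronunciation dictionaries."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "The cat sat on the mat. It was a very happy cat."
print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```

Note what such a formula does and does not measure: sentence length and word length stand in for difficulty, which is exactly why two texts at the same estimated level can demand very different amounts of instructional support.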

Betts, E. A. (1946). Foundations of reading instruction. New York: American Book Company.

Morgan, A., Wilcox, B. R., & Eldredge, J. L. (2000). Effect of difficulty levels on second-grade delayed readers using dyad reading. Journal of Educational Research, 94, 113–119.

O’Connor, R. E., Swanson, H. L., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102, 1–19.

Pinnell, G. S., & Fountas, I. C. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.

Powell, W. R. (1968). Reappraising the criteria for interpreting informal inventories. Washington, DC: ERIC 5194164.

Shanahan, T. (1983). The informal reading inventory and the instructional level: The study that never took place. In L. Gentile, M. L. Kamil, & J. Blanchard (Eds.), Reading research revisited, (pp. 577–580). Columbus, OH: Merrill.

Monday, July 11, 2011

More Evidence Supporting Hard Text

The past couple of blogs have dealt with the challenging text demands required by the new common core standards. Teachers who have been used to moving students to easier texts are in for a rude awakening since the new standards push to have students taught at particular Lexile levels that match grade levels rather than "reading levels."

Last week, I explained the evidence about the importance of text difficulty that was provided by the ACT. This week, I want to expand upon that explanation to show some of the other evidence that the authors of the common core depended upon, evidence that has been persuasively described and summarized by Marilyn Jager Adams in an article published in the American Educator (2010-2011).

Adams synthesized the information from various studies of textbook difficulty and learning to demonstrate that textbook readabilities for Grades 4–12 have significantly and steadily grown easier since 1919; that the difficulty of what adults are expected to read increased during that same time; and that there is a relationship between the easing of text difficulty and students' lower performance on the SAT. Obviously, if these things are true, one would want to ratchet the difficulty of textbooks back up (as the common core does) so that students would be better prepared for the actual reading demands beyond school.

Chall and her colleagues (Chall, Conard, & Harris, 1991) found that even though SAT passages had been getting easier, scores were declining anyway. They also found that textbooks were getting easier even faster than the SAT, and that reading these easier books appeared to provide poor preparation for dealing with the SAT. Even more convincing was a much larger study (Hayes, Wolfer, & Wolfe, 1996) that examined the readabilities of 800 elementary, middle school, and high school textbooks published between 1919 and 1991. Hayes and his team correlated the trends in text simplification with student performance on the SAT and found a good fit, concluding that "Long-term exposure to simpler texts may induce a cumulating deficit in the breadth and depth of domain-specific knowledge, lowering reading comprehension and verbal achievement." Also, the texts used in high school have been found to be significantly easier than the texts students confront after they leave high school; in fact, young people make bigger reading gains during the years following high school than during it (Kirsch & Jungeblut, 1991).

Thus, these correlational data suggest that students will learn more from working with challenging texts than from the so-called “low readability, high interest” books that have become an educational staple. This approach is similar to that taken by athletes: To get stronger, you need to use more physical resistance than your muscles are used to; the more you do, the more you will be capable of doing, so it is essential to increase the workload.

The counter-argument to this heavier-books approach is the widespread belief that there is an optimum difficulty level for texts used to teach students to read. According to instructional level theory, if a text is written at a level that is too difficult for students, then they will become frustrated and discouraged and will not learn. Instructional level theory not only rejects the idea that learning comes from working with hard books, but claims that little or no learning will accrue if the books are too hard relative to student performance levels.

The challenging-text approach obviously has some research support, but that support is correlational in nature. Students seem to do better when they get a steady diet of more challenging text, but I would feel much better about this evidence if it were experimental and if there weren't such a long-cherished counterargument. Given that, the next installment will weigh the evidence that supports the idea of there being an optimum level of text difficulty that fosters learning.

Tuesday, July 5, 2011

Common Core Standards versus Guided Reading, Part II

So why is the common core making such a big deal out of having kids read hard text?

One of the most persuasive pieces of evidence they considered was a report, “Reading: Between the Lines,” published by American College Testing (ACT; 2006). This report shows the primacy of text in reading and the value of having students spend time reading challenging text in the upper grades.

https://www.act.org/research/policymakers/reports/reading.html

Virtually every reading comprehension test and instructional program makes a big deal out of the different kinds of questions that can be asked about a text. You'd be hard-pressed these days to find teachers or principals who don't know that literal recall questions, which require a reader to find or remember what an author wrote, are supposed to be easier than inferential questions (the ones that require readers to make judgments and recognize the implications of what the author wrote).

Similarly, in our fervor to use data and to facilitate better test performance, it has become common practice to analyze student test performance by question type, and then to try to teach the specific skills required by those questions. There are even commercial programs that you can buy that emphasize practice with main ideas, drawing conclusions, specific details, and the like.

There is only one problem with these schemes, according to ACT: they don't work. In Reading: Between the Lines, ACT demonstrates that student performance cannot be differentiated in any meaningful way by question type. Students do not perform differently if they are answering literal recall items or inferential items (or other question types, like main idea or vocabulary, either). Test performance, according to ACT, is driven by text rather than questions. Thus, if students are asked to read a hard passage, they may answer only a few questions correctly, no matter what types of questions they may be. On the other hand, with an easy enough text, students may answer almost any question right, again with no differences by question type.

Thus, the ACT report shows that while different question types make no difference in performance outcomes, text difficulty matters quite a bit (and this conclusion is based on an analysis of data drawn from 563,000 students). One can ask any kind of question about any text — without regard to text difficulty.

What are reading comprehension standards? They tend to be numbered lists of cognitive processes or question types. Standards require students “to quote accurately from text,” to “determine two or more main ideas of a text,” or to “explain how main ideas are supported by key details,” and so on. But if question types (or standards) don’t distinguish reading performance and text difficulty does, then standards should make the ability to interpret hard texts a central requirement.

And, this is exactly what the common core standards have done. They make text difficulty a central feature of the standards. In the reading comprehension standards at every grade level and for every type of comprehension (literary, informational, social studies/history, science/technology), there is a standard that says something along the lines of, by the end of the year, students will be able to independently read and comprehend texts written in a specified text complexity band.

The ACT report goes on to describe features that made some texts harder to understand, including the complexity of the relationships among characters and ideas, the amount and sophistication of the information detailed in the text, how the information is organized, the author's style and tone, the vocabulary, and the author's purpose. ACT concluded that, based on these data, "performance on complex texts is the clearest differentiator in reading between students who are likely to be ready for college and those who are not" (pp. 16–17).

Wednesday, June 29, 2011

Common Core Standards versus Guided Reading, Part I

The new common core standards are challenging widely accepted instructional practices. Probably no ox has been more impressively gored by the new standards than the widely-held claim that texts of a particular difficulty level have to be used for teaching if learning is going to happen.

Reading educators going back to the 1930s, including me, have championed the idea of there being an instructional level. That basically means that students would make the greatest learning gains if they are taught out of books that are at their "instructional" level – meaning that the text is neither so hard that the students can't make sense of it nor so easy that there is nothing left to learn.

These days the biggest proponents of that idea have been Irene Fountas and Gay Su Pinnell, at Ohio State. Their “guided reading” notion has been widely adopted by teachers across the country. The basic premises of guided reading include the idea that children learn to read by reading, that they benefit from some guidance and support from a teacher during this reading, and, most fundamentally, that this reading has to take place in texts that are “just right” in difficulty level. A major concern of the guided-readingistas has been the fear that “children are reading texts that are too difficult for them.”

That's the basic idea, and the different experts have proposed a plethora of methods for determining student reading levels and text difficulty levels, for matching kids to books, and for guiding or scaffolding student learning. Schemes like Accelerated Reader, Read 180, informal reading inventories, leveled books, high-readability textbooks, and most core or basal reading programs all adhere to these basic ideas, even though there are differences in how they go about it.

The common core is based upon a somewhat different set of premises. Its authors don't buy that there is an optimum student-text match that facilitates learning. Nor are they as hopeful that students will learn to read just from reading (with the slightest assists from a guide); they believe instead that real learning comes from engagement with very challenging text and a lot of scaffolding. The common core discourages lots of out-of-level teaching and the use of particularly high-readability texts. In other words, it champions approaches to teaching that run counter to current practice.

How could the common core put forth such a radical plan that contradicts so much current practice?

The next few entries in this blog will consider why common core is taking this provocative approach and why that might be a very good thing for children’s learning.

Stay tuned.