
Sunday, June 26, 2016

Further Explanation of Teaching Students with Challenging Text

Last week I pointed out that from grades 2-12 it wasn’t necessary to match students to text for instruction to proceed effectively. Research has not been kind to the idea of mechanical “instructional level” criteria like 90-95% accuracy (e.g., Jorgenson, Klein, & Kumar, 1977; Kuhn et al., 2006; Morgan, Wilcox, & Eldredge, 2000; O’Connor, Swanson, & Geraghty, 2010; Powell & Dunkeld, 1971; Stahl & Heubach, 2005; Stanley, 1986).

            Language learning doesn’t work that way.

            That got lots of response, online and off. Some of it quite angry, too. Although I answered many queries and shout outs, I thought a little more formal response this week might be in order. Here are some key ideas when thinking about teaching kids to read with more complex text than we might have dared to use in the past:
           
1. No, easier text is not more motivating.
            Several respondents thought it only common sense that students would be frustrated by harder texts and stimulated by easier ones. I know that feeling. I shared it much of my career until I analyzed the evidence.
            One thing researchers have found repeatedly is that student readers tend to select books at their frustration levels for independent reading (e.g., Donovan, Smolkin, & Lomax, 2000). Of course, with really low readers, what else could they choose? But this appears to be the case for the better readers, too. I guess their curiosity about the content of the harder materials outweighs their fear of failure. Looking back, I did a lot of that kind of frustration level reading myself as a boy—not always fully understanding what I read, but learning much from the struggle.
            Researchers thought students would lose motivation when reading harder texts (Fulmer & Tulis, 2013). Reality has been more complicated than that. Readers’ motivation does vary across a text reading—but degree of difficulty doesn’t seem to be the source of that variation.
            And the idea that we want students to be challenged, but not too much—they can miss some specific number of words, but only that number and no more—just hasn’t panned out. When learning and book placement have been studied, there has usually been either no connection at all, or the harder placements have led to more learning (in other words, our relatively easy book matches may be holding kids back, depriving them of exposure to more challenging features of language and meaning).
            If we are going to make these decisions based on our imaginings of how children must feel, then not only should we think of how frustrating it might be to struggle with a text that contains many words you don’t know, but we should consider how boring it must be to always deal with content aimed at younger kids who already read as well as you can.

2. No, not all texts need to be at an instructional level.
            If one challenges the idea of placing kids in instructional level books to facilitate learning (e.g., guided reading, Accelerated Reader), why is the alternative to only place kids in frustration level texts? The idea that all reading should be at the instructional level is wrong in part because of the inherent notion that all reading experience should be at any particular level. Text difficulty should vary; kids should move across a range of texts from easy to difficult.
            In the teaching of most skilled activities (e.g., foreign language, dancing, bicycle racing), the idea is not to protect the learners from harder applications of those skills, but to vary the routines between relatively easy challenges and those that scare and potentially embarrass the learner. If you have any doubt, go learn to do something.

3. No, text level is not the only feature of the learning situation that can be varied.
            Not only should texts vary in difficulty, but the amount of help, guidance, explanation, and scaffolding ought to vary, too. When kids are placed in frustration level texts they need greater support than when they are reading instructional level or independent level texts—just the opposite of what many of our instructional routines provide.
            I should intentionally place kids in easier or harder text and should add or withdraw support based upon need. When kids are in easy texts, the training wheels can be taken off. When they are in harder texts, as a teacher I need to be prepared to offer greater guidance and support. That means easier texts when reading with 30 kids, and harder texts—certainly beyond the normally prescribed levels—when I’m sitting closely with 6-8 kids and can monitor more closely and intervene more easily.
            If your teaching skills are so limited that the only way to protect kids from failure is to keep them always in the shallow water, then so be it. But for most of us, there is a greater range of pedagogical response available that would allow kids to swim often in deeper water without drowning.

4. No, more challenging text will not disrupt kids’ development of decoding skills.
            I heard from some last week that if you placed kids in more challenging texts then they just guessed at words. That might be true if you were to do this with beginning readers, but grade 2 is not beginning reading. Kids should be placed in relatively easy texts initially (grades K-1), texts that have clearly decodable or consistent spelling patterns.
            Then when they start taking on a greater range of texts—when they can read a second grade text, you will usually not see that kind of guessing based only on context. In any event, whatever patterns of reading behavior are elicited by such challenging text matches at that point, they have not been found to slow kids’ reading development or to disrupt their growth in decoding ability from that point. In fact, O’Connor and her colleagues (2010) have not even found it to be an issue with our most struggling readers—those older learning-disabled students who might still be trying to master many of those beginning reading skills.
            I understand the concerns and discomfort in putting kids in frustration level materials, given all the reading authorities that have told you not to do that. But a careful review of that advice reveals a shocking neglect of studies of doing just that. No one, however, is saying just throw kids into hard text and hope they make it. One wouldn’t do that with beginning readers, and when kids are ready for such immersion tactics, teachers have to teach—it isn’t like those routines where you hope the text is easy enough for kids to learn with a minimum of teacher help. And, finally, much learning comes from practice under varied levels of complication and difficulty. Just because you were traditionally told that all reading instruction should be at the instructional level doesn’t mean that, when teaching with more complex text, you should aspire to such uniformity.

References 
Donovan, C. A., Smolkin, L. B., & Lomax, R. G. (2000). Beyond the independent-level text: Considering the reader-text match in first graders’ self-selections during recreational reading. Reading Psychology, 21, 309-333.

Fulmer, S. M., & Tulis, M. (2013). Changes in interest and affect during a difficult reading task: Relationships with perceived difficulty and reading fluency. Learning and Instruction, 27, 11-20.

Jorgenson, G. W., Klein, N., & Kumar, V. K. (1977). Achievement and behavioral correlates of matched levels of student ability and materials difficulty. Journal of Educational Research, 71, 100-103.

Kuhn, M. R., Schwanenflugel, P. J., Morris, R. D., Morrow, L. M., Woo, D. G., Meisinger, E. B., Sevcik, R. A., Bradley, B. A., & Stahl, S. A. (2006). Teaching children to become fluent and automatic readers. Journal of Literacy Research, 38, 357-387.

Morgan, A., Wilcox, B. R., & Eldredge, J. L. (2000). Effect of difficulty levels on second-grade delayed readers using dyad reading. Journal of Educational Research, 94, 113-119.

O’Connor, R. E., Swanson, L. H., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102, 1-19.

Powell, W. R., & Dunkeld, C. G. (1971). Validity of the IRI reading levels. Elementary English, 48, 637-642.

Stahl, S. A., & Heubach, K. M. (2005). Fluency-oriented reading instruction. Journal of Literacy Research, 37, 25-60.

Stanley, N. V. (1986). A concurrent validity study of the emergent reading level. Unpublished doctoral dissertation, University of Florida.

Wednesday, February 3, 2016

Are Oral Reading Norms Accurate with Complex Text?

Teacher Question:  
          A question has come up that I don't know how to address and I would love your input. For years, we have used the Hasbrouck/Tindal fluency norms as one of the ways we measure our students' reading progress. For example, the 4th grade midyear 50th percentile is 112 CWPM. The fourth grade team has chosen a mid-year running record passage and is finding that many of our students have gone down instead of up in their CWPM. One teacher said that is because the common-core aligned texts are more challenging and that the passage is really the equivalent of what used to be considered a 5th grade passage. She said that the norms were done using text that is easier than what the students are now expected to read. I know that the texts are more complex and challenging and therefore more difficult for the students to read, and that this particular text may not be a good choice to use for an assessment, but it does raise the larger question--are these fluency norms still applicable?

Shanahan response:
         This is a great question, and one that I must admit I hadn’t thought about before you raised it. If average fourth-graders read texts at about 112 words correct per minute by mid-fourth-grade, one would think that their accuracy and/or speed would be affected if they were then asked to read texts that in the past would have been in the fifth-grade curriculum.

         However, while that assumption seems to make sense, it would depend on how those norms were originally established. Were kids asked to read texts characteristic of their grade levels at particular times of the year or was the text agenda wider than that? If the latter, then the complex text changes we are going through would not necessarily matter very much.

         So what’s the answer to your question? I contacted Jan Hasbrouck, the grand lady herself, and put your question to her. Here is her response:

         I guess the most honest answer is "who knows?" I hope that we may actually have an answer to that question by this spring or summer because Jerry Tindal and I are in the process of collecting ORF data to create a new set of norms, which should reflect more current classroom  practice.

         My prediction is that the new ORF norms won't change much from our 2006 norms (or our 1992 norms). My prediction is based on the fact that ORF is, outside of expected measurement error (which Christ & Coolong-Chaffin, 2007 suggest is in the range of 5 wcpm for grades 1 and 2 and 9 wcpm in grades 3-8+), fairly stable. You can see evidence of this on our 2006 norms when looking at the spring 50th %iles for grade 6 (150), grade 7 (150), and grade 8 (151). When you consider that these three scores represent approximately 30,000 students reading a variety of grade level passages, that's pretty darn stable. Other studies of older readers (high school; college) also find that 150 wcpm is a common "average."

         Of course this stability assumes that the ORF scores were obtained correctly, using the required standardized procedures, which unfortunately is too often not the case. Standardized ORF procedures require that students read aloud for 60 seconds from unpracticed samples of grade level passages, and the performance is scored using the standardized procedures for counting errors. In my experience most educators are doing these required steps correctly. However, I see widespread errors being made in another step in the required ORF protocol: Students must try to do their best reading (NOT their fastest reading)!  In other words, in an ORF assessment the student should be attempting to read the text in a manner that mirrors normal, spoken speech (Stahl & Kuhn, 2002) and with attention to the meaning of the text. 

         What I witness in schools (and hear about from teachers, specialists, and administrators in the field) is that students are being allowed and even encouraged to read as fast as they can during ORF assessments, completely invalidating the assessment. The current (2006) Hasbrouck & Tindal norms were collected before the widespread and misguided push to ever faster reading.  It remains to be seen if students are in fact reading faster. Other data, including NAEP data, suggests that U.S. students are not reading "better."

         And yes, of course the number of words read correctly per minute (wcpm) would be affected if students were asked to read text that is very easy for them or very difficult, but again, ORF is a standardized measure that can serve as an indicator of reading proficiency. 
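The scoring arithmetic Jan describes—count the words read aloud in a timed sample, subtract the errors, and normalize to a one-minute rate—is simple enough to sketch. The function below is my own illustration of that arithmetic (the name and defaults are mine, not part of any assessment toolkit):

```python
def words_correct_per_minute(words_read: int, errors: int,
                             seconds: float = 60.0) -> float:
    """Compute a WCPM score from a timed oral reading sample.

    Standard ORF scoring: total words read minus errors,
    normalized to a one-minute rate (samples are usually 60
    seconds, but longer readings can be normalized the same way).
    """
    if seconds <= 0:
        raise ValueError("reading time must be positive")
    return (words_read - errors) * 60.0 / seconds


# A hypothetical mid-year fourth grader who reads 118 words with
# 6 errors in 60 seconds lands exactly at the 50th-percentile
# figure cited in the question:
print(words_correct_per_minute(118, 6))  # 112.0
```

Normalizing by time is also why the longer 2-3 minute samples mentioned later in this post remain comparable to standard 60-second ones.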

         Given Jan's response, I assume the norms won’t change much. The reason is that the data collection wasn't tightly controlled; reading procedures and texts varied across sites (not surprising with data on 250,000 readers). That means that the current norms do not necessarily reflect the reading of a single level of difficulty, and I suspect that the future norms determinations won’t have such tight control either.

         The norms are averages and they still will be; that suggests using them as rough estimates rather than exact statistics (a point worth remembering when trying to determine if students are sufficiently fluent readers). 

          Last point: your fourth-grade teachers are correct that the texts they are testing with may not be of equivalent difficulty, which makes it difficult to determine whether or not there are real gains (or losses) being made. We've known for a long time that text difficulty varies a great deal from passage to passage. Just because you take a selection from the middle of a fourth-grade textbook doesn't mean that passage is a good representation of appropriate text difficulty. That is true even if you know the Lexile rating of the overall chapter or article that you have drawn from (since difficulty varies across a text). The only ways to be sure would be to do what Hasbrouck and Tindal did—use a lot of texts and assume the average is correct—or to measure the difficulty of each passage used for assessment. The use of longer texts (having kids read for 2-3 minutes instead of 1) can improve your accuracy, too.


Sunday, January 10, 2016

Close Reading and the Reading of Complex Text Are Not the Same Thing

          Recently, I was asked to make some presentations. I suggested a session on close reading and another on teaching with complex text. The person who invited me said, “But that’s just one subject… the close reading of complex text. What else will you talk about?”

          Her response puzzled me, but since then I’ve been noting that many people are confounding those two subjects. They really are two separate and separable constructs. That means that many efforts to implement the so-called Common Core standards may be missing an important beat.

          Close reading refers to an approach to text interpretation that focuses heavily not just on what a text says, but on how it communicates that message. The sophisticated close reader carefully sifts what an author explicitly expresses and implies, but he/she also digs below the surface, considering rhetorical features, literary devices, layers of meaning, graphic elements, symbolism, structural elements, cultural references, and allusions to grasp the meaning of a text. Close readers take text as a unity—reflecting on how these elements magnify or extend the meaning.

         Complex text includes those “rhetorical features, literary devices, layers of meaning, graphic elements, symbolism, structural elements, cultural references, and allusions.” (Text that is particularly literal or straightforward is usually not a great candidate for close reading). But there is more to text complexity than that—especially for developing readers.

          Text complexity also includes all the other linguistic elements that might make one text more difficult than another. That includes the sophistication of the author’s diction (vocabulary), sentence complexity (syntax or grammar), cohesion, text organization, and tone.

          A close reader might be interested in the implications of an author’s grammar choices. For example, interpretations of Faulkner often suggest that his use of extended sentences with lots of explicit subordination and interconnection reveals a world that is nearly fully determined… in other words, the characters (like the readers) do not necessarily get to make free choices.

          And, while that might be an interesting interpretation of how an author’s style helps convey his meaning (prime close reading territory), there is another, more basic issue inherent in Faulkner’s sentence construction: reading comprehension. Readers have to determine what in the heck Faulkner is saying or implying in his sentences. Grasping the meaning of a sentence that goes on for more than a page requires a feat of linguistic analysis and memory that has nothing to do with close reading. It is a text complexity issue. Of course, if you are a fourth-grader, you don’t need a page-long sentence to feel challenged by an author’s grammar.

          Text complexity refers to both the sophisticated content and the linguistic complexity of texts. A book like “To Kill a Mockingbird” is a good example of sophisticated content with little linguistic complexity. It is a good candidate for a close reading lesson, but it won’t serve to extend most kids’ language. A book like “The Turn of the Screw,” by contrast, could be a good candidate for close reading, but only if a teacher is willing to teach students to negotiate its linguistic challenges.

         The standards are asking teachers to do just that: to teach kids to comprehend linguistically complex texts and to carry out close reads. They definitely are not the same thing.

Sunday, August 30, 2015

More on the Instructional Level and Challenging Text

Teacher question:
I’ve read your posts on the instructional level and complex texts and I don’t think you understand guided reading. The point of guided reading placements is to teach students with challenging text. That’s why it is so important to avoid texts that students can read at their independent level; to make sure they are challenged. The Common Core requires teaching students with challenging texts—not frustration level texts.

Shanahan response: 
I’m having déjà vu all over again. I feel like I’ve covered this ground before, but perhaps not quite in the way that this question poses the issue.

Yes, indeed, the idea of teaching students at their instructional level is that some texts could be too easy or too hard to facilitate learning. By placing students in between these extremes, it has been believed that more learning would take place. In texts that students find easy (your independent level), there would be little for students to learn—since they could likely recognize all or most of the words and could understand the text fully without any teacher help. Similarly, texts that pose too much challenge might overwhelm or frustrate students so they could not learn. Thus, placing them in instructional level materials would be challenging (there would be something to learn), but not so challenging as to be discouraging.
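The three-way placement scheme described above, with the traditional 90-95% accuracy band from the first post in this series, can be sketched as a tiny classifier. The function and its cutoffs are illustrative only; exact thresholds vary by author, and the whole point of these posts is that the scheme itself lacks research support:

```python
def reading_level(words_read: int, errors: int,
                  instructional_band=(0.90, 0.95)) -> str:
    """Classify a text placement by word-reading accuracy.

    Uses the traditional cutoffs this post critiques:
    accuracy at or above the band -> "independent",
    within the band -> "instructional", below it -> "frustration".
    """
    if words_read <= 0:
        raise ValueError("words_read must be positive")
    accuracy = (words_read - errors) / words_read
    low, high = instructional_band
    if accuracy >= high:
        return "independent"
    if accuracy >= low:
        return "instructional"
    return "frustration"


# 8 errors per 100 words (92% accuracy) falls inside the
# traditional instructional band; 12 errors (88%) falls below it:
print(reading_level(100, 8))   # instructional
print(reading_level(100, 12))  # frustration
```

Seeing the scheme laid out this starkly makes the post's objection concrete: a few words' difference in one sample flips the label, yet the label is supposed to dictate what a child is allowed to read.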

Or, at least that’s the theory.

So, I do get that the way you seem to be placing kids in books is meant to be challenging. But please don’t confuse this level of challenge with what your state standards are requiring. Those standards are asking that you teach students to read texts of specified levels of difficulty—levels of difficulty that for most kids will exceed what you think of as challenging.

This means that everyone wants kids to be challenged. The argument is about how much challenge. You may think that a student will do best if the text used for teaching is only so challenging that he/she would make no more than 5 errors per 100 words of reading, and your state may think the appropriate challenge level is grade level texts that represent a progression that would allow the students to graduate from high school with a particular level of achievement. That means in many circumstances the state would say kids need to read book X, and you’d say, “no way, my kids make too many errors with book X to allow me to teach it successfully.”

The Lexile levels usually associated with particular grade levels are not the ones that the standards have assigned to the grades. The Lexile grade-designations from the past were an estimate of the level of text that the average students could read with 75-89% comprehension. Those levels weren’t claiming that all kids in a particular grade could read such texts successfully, but that the average ones could. Thus, you’d test the individual kids and place them in books with higher or lower Lexiles to try to get them to that magical instructional level.

The new standards, however, have assigned higher Lexile bands to each grade level. That means that even the average kids will not be able to read those texts at an instructional level; some kids might be able to at those grade levels, but not the majority. That means teachers would need to teach students to read books more challenging than what have typically been at their instructional levels. In other words, plenty of kids will need to be taught at their frustration level to meet the standards.

I do get the idea that instructional level is meant to be challenging. But for the majority of kids, teaching kids at their instructional level will not meet the standards. That degree of challenge undershoots the level of challenge established by your state (and that they will test your students at). Perhaps you can take solace in the fact that research has not been able to validate the idea that there is an instructional level; that is, kids can be taught to read successfully with texts more challenging than you’ve apparently used in the past.