
Wednesday, February 3, 2016

Are Oral Reading Norms Accurate with Complex Text?

Teacher Question:  
          A question has come up that I don't know how to address, and I would love your input. For years, we have used the Hasbrouck/Tindal fluency norms as one of the ways we measure our students' reading progress. For example, the 4th grade midyear 50th percentile is 112 CWPM. The fourth grade team has chosen a mid-year running record passage and is finding that many of our students have gone down instead of up in their CWPM. One teacher said that is because the Common Core-aligned texts are more challenging and that the passage is really the equivalent of what used to be considered a 5th grade passage. She said that the norms were established using text that is easier than what the students are now expected to read. I know that the texts are more complex and challenging and therefore more difficult for the students to read, and that this particular text may not be a good choice to use for an assessment, but it does raise the larger question--are these fluency norms still applicable?

Shanahan response:
         This is a great question, and one that I must admit I hadn’t thought about before you raised it. If average fourth-graders read texts at about 112 words correct per minute by mid-fourth-grade, one would think that their accuracy and/or speed would be affected if they were then asked to read texts that in the past would have been in the fifth-grade curriculum.

         However, while that assumption seems to make sense, it would depend on how those norms were originally established. Were kids asked to read texts characteristic of their grade levels at particular times of the year, or was the range of texts wider than that? If the latter, then the complex text changes we are going through would not necessarily matter very much.

         So what’s the answer to your question? I contacted Jan Hasbrouck, the grand lady herself, and put your question to her. Here is her response:

         I guess the most honest answer is "who knows?" I hope that we may actually have an answer to that question by this spring or summer because Jerry Tindal and I are in the process of collecting ORF data to create a new set of norms, which should reflect more current classroom practice.

         My prediction is that the new ORF norms won't change much from our 2006 norms (or our 1992 norms). My prediction is based on the fact that ORF is, outside of expected measurement error (which Christ & Coolong-Chaffin, 2007, suggest is in the range of 5 wcpm for grades 1 and 2 and 9 wcpm for grades 3-8+), fairly stable. You can see evidence of this in our 2006 norms when looking at the spring 50th percentiles for grade 6 (150), grade 7 (150), and grade 8 (151). When you consider that these three scores represent approximately 30,000 students reading a variety of grade-level passages, that's pretty darn stable. Other studies of older readers (high school; college) also find that 150 wcpm is a common "average."

         Of course this stability assumes that the ORF scores were obtained correctly, using the required standardized procedures, which unfortunately is too often not the case. Standardized ORF procedures require that students read aloud for 60 seconds from unpracticed samples of grade-level passages, and the performance is scored using the standardized procedures for counting errors. In my experience most educators are doing these required steps correctly. However, I see widespread errors being made in another step in the required ORF protocol: students must try to do their best reading (NOT their fastest reading)! In other words, in an ORF assessment the student should be attempting to read the text in a manner that mirrors normal, spoken speech (Stahl & Kuhn, 2002) and with attention to the meaning of the text.

         What I witness in schools (and hear about from teachers, specialists, and administrators in the field) is that students are being allowed and even encouraged to read as fast as they can during ORF assessments, completely invalidating the assessment. The current (2006) Hasbrouck & Tindal norms were collected before the widespread and misguided push toward ever-faster reading. It remains to be seen if students are in fact reading faster. Other data, including NAEP data, suggest that U.S. students are not reading "better."

         And yes, of course the number of words read correctly per minute (wcpm) would be affected if students were asked to read text that is very easy for them or very difficult, but again, ORF is a standardized measure that can serve as an indicator of reading proficiency. 
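         To put numbers to Jan's point about measurement error, here is a minimal sketch in Python of how a standardized wcpm score is computed and how the Christ & Coolong-Chaffin error bands might temper the interpretation of a score change. The function names and example numbers are mine, invented for illustration; they are not part of the Hasbrouck & Tindal protocol.

```python
# A minimal sketch of wcpm scoring and measurement-error interpretation.
# Helper names and example numbers are hypothetical, for illustration only.

def wcpm(words_read: int, errors: int, seconds: float = 60) -> float:
    """Words correct per minute from a timed, unpracticed oral reading."""
    return (words_read - errors) / (seconds / 60)

def sem_for_grade(grade: int) -> int:
    """Approximate measurement error per Christ & Coolong-Chaffin (2007):
    roughly 5 wcpm for grades 1-2, roughly 9 wcpm for grades 3-8+."""
    return 5 if grade <= 2 else 9

# Hypothetical fourth-grader: 116 words with 4 errors in the fall,
# 112 words with 4 errors at midyear (both 60-second samples).
fall = wcpm(116, 4)      # 112.0
winter = wcpm(112, 4)    # 108.0
change = winter - fall   # -4.0

band = sem_for_grade(4)
if abs(change) <= band:
    print(f"{change:+.0f} wcpm is within the ~{band} wcpm error band: "
          "likely noise (or a harder passage), not a real decline.")
else:
    print(f"{change:+.0f} wcpm exceeds the ~{band} wcpm error band "
          "and may reflect a real change.")
```

         By this arithmetic, a midyear dip of a few wcpm, like the one the fourth-grade team observed, sits comfortably inside the error band.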

         Given Jan's response, I assume the norms won't change much. The reason is that the data collection is not tightly controlled—reading procedures and texts vary across sites (not surprising with data on 250,000 readers). That means that the current norms do not necessarily reflect the reading of a single level of text difficulty, and I suspect that future norming efforts won't have such tight control either.

         The norms are averages, and they still will be; that suggests using them as rough estimates rather than exact statistics (a point worth remembering when trying to determine whether students are sufficiently fluent readers).

          Last point: your fourth-grade teachers are correct that the texts they are testing with may not be of equivalent difficulty, which makes it hard to determine whether real gains (or losses) are being made. We've known for a long time that text difficulty varies a great deal from passage to passage. Just because you take a selection from the middle of a fourth-grade textbook doesn't mean that passage is a good representation of appropriate text difficulty. That is true even if you know the Lexile rating of the overall chapter or article that you have drawn from (since difficulty varies across a text). The only ways to be sure would be to do what Hasbrouck and Tindal did--use a lot of texts and assume the average is correct--or measure the difficulty of each passage used for assessment. The use of longer texts (having kids read for 2-3 minutes instead of 1) can improve your accuracy, too.
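          For teams that want to act on those last two suggestions, here is one way the arithmetic might look. This is a sketch with invented numbers, not a standardized procedure: normalize a longer (2-3 minute) read to a per-minute rate, and average wcpm across several passages rather than trusting any single selection.

```python
# Sketch: two ways to steady a wcpm estimate, per the suggestions above.
# Passage counts and scores are invented for illustration.

def wcpm(words_read: int, errors: int, seconds: float) -> float:
    """Words correct per minute, normalized for reads longer than a minute."""
    return (words_read - errors) / (seconds / 60)

# 1) A longer sample: a 3-minute read normalized to a per-minute rate
#    smooths out bumpiness within a single selection.
longer_read = wcpm(words_read=350, errors=14, seconds=180)   # 112.0

# 2) Multiple passages: average across several grade-level selections,
#    as the Hasbrouck & Tindal norming did, instead of trusting one passage.
passages = [(118, 6), (105, 2), (121, 7)]   # (words read, errors) in 60 s each
average = sum(wcpm(w, e, 60) for w, e in passages) / len(passages)

print(f"3-minute estimate: {longer_read:.0f} wcpm; "
      f"multi-passage average: {average:.0f} wcpm")
```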


Sunday, January 10, 2016

Close Reading and the Reading of Complex Text Are Not the Same Thing

          Recently, I was asked to make some presentations. I suggested a session on close reading and another on teaching with complex text. The person who invited me said, “But that’s just one subject… the close reading of complex text. What else will you talk about?”

          Her response puzzled me, but since then I’ve been noting that many people are confounding those two subjects. They really are two separate and separable constructs. That means that many efforts to implement the so-called Common Core standards may be missing an important beat.

          Close reading refers to an approach to text interpretation that focuses heavily not just on what a text says, but on how it communicates that message. The sophisticated close reader carefully sifts what an author explicitly expresses and implies, but he/she also digs below the surface, considering rhetorical features, literary devices, layers of meaning, graphic elements, symbolism, structural elements, cultural references, and allusions to grasp the meaning of a text. Close readers take text as a unity—reflecting on how these elements magnify or extend the meaning.

         Complex text includes those “rhetorical features, literary devices, layers of meaning, graphic elements, symbolism, structural elements, cultural references, and allusions.” (Text that is particularly literal or straightforward is usually not a great candidate for close reading.) But there is more to text complexity than that—especially for developing readers.

          Text complexity also includes all the other linguistic elements that might make one text more difficult than another. That includes the sophistication of the author’s diction (vocabulary), sentence complexity (syntax or grammar), cohesion, text organization, and tone.

          A close reader might be interested in the implications of an author’s grammar choices. For example, interpretations of Faulkner often suggest that his use of extended sentences with lots of explicit subordination and interconnection reveals a world that is nearly fully determined… in other words, the characters (like the readers) do not necessarily get to make free choices.

          And, while that might be an interesting interpretation of how an author’s style helps convey his meaning (prime close reading territory), there is another, more basic issue inherent in Faulkner’s sentence construction: reading comprehension. Readers have to determine what in the heck Faulkner is saying or implying in his sentences. Grasping the meaning of a sentence that goes on for more than a page requires a feat of linguistic analysis and memory that has nothing to do with close reading. It is a text complexity issue. Of course, if you are a fourth-grader, you don’t need a page-long sentence to feel challenged by an author’s grammar.

          Text complexity refers to both the sophisticated content and the linguistic complexity of texts. A book like “To Kill a Mockingbird” is a good example of sophisticated content with little linguistic complexity. It is a good candidate for a close reading lesson, but it won’t serve to extend most kids’ language. A book like “The Turn of the Screw,” on the other hand, could be a good candidate for close reading, but only if a teacher is willing to teach students to negotiate its linguistic challenges.

         The standards are asking teachers to do just that: to teach kids to comprehend linguistically complex texts and to carry out close reads. They definitely are not the same thing.

Monday, May 18, 2015

An Argument About Matching Texts to Students

A reader wrote:
My main response is toward your general notion of the research surrounding teaching kids "at their level."

First, I think the way you're describing instructional/skill levels obfuscates the issue a bit. Instructional level, by definition, means the level at which a child can benefit from instruction, including with scaffolding. Frustrational, by definition, means the instruction won't work. Those levels, like the terms "reinforcement & punishment" for example, are defined by their outcomes, not intentions. If a child learned from the instruction, the instruction was on the child's "instructional" level.

Where we may be getting confused is that I think you actually are referring to teaching reading comprehension using material that is in a child's instructional level with comprehension, but on a child's frustrational level with reading fluency. This is a much different statement than what I think most teachers are getting from your messages about text complexity, to the point that I think they're making mistakes in terms of text selection.

More generally, I'd argue that there is copious research supporting using "instructional material" to teach various reading skills. Take, for example, all of the research supporting repeated readings. That intervention, by definition, uses material that is on a child's "instructional" level with reading fluency, and there is great support that it works. So, the idea that somehow "teaching a child using material on his/her instructional level is not research supported" just doesn't make sense to me.

In terms of this specific post about how much one can scaffold, I think it largely depends on the child and specific content, as Lexiles and reading levels don't fully define a material's "instructional level" when it comes to comprehension. I know many 3rd graders, for example, who could be scaffolded with material written on an 8th grade level, because when the content isn't very complex, scaffolding is much easier.

The broad point here, Dr. Shanahan, is that we're over-simplifying, therefore confusing, the issue by trying to argue that kids should be taught with reading material on their frustrational level, or on grade level despite actual skill level. People are actually hearing you say that we should NOT attempt to match a child with a text - that skill level or lexile is completely irrelevant - when I believe you know you're saying that "instructional level" is just a bit more nuanced than providing all elements of reading instruction only on a child's oral reading fluency instructional range.

My reply:

First, you are using the terms “instructional level” and “frustration level” in idiosyncratic ways. These terms are not used in the field of reading education as you claim, nor have they ever been. These levels are used as predictions, not as post-instruction evaluations. If they were used in the manner you suggest, there would be little or no reason for informal reading inventories and running records. One would simply start teaching everyone with grade-level materials, and if a student was found to make no progress, we would lower the text difficulty over time.

Of course, that is not what is done at all. Students are tested, instructional levels are determined, instructional groups are formed, and books are assigned based on this information.

The claim has been that if you match students to text appropriately (the instructional level) that you will maximize the amount of student learning. This definition of instructional level does allow for scaffolding—in fact, that’s why students are discouraged from trying to read instructional level materials on their own, since there would be no scaffold available.

Fountas and Pinnell, for example, are quite explicit that even with sound book matching it is going to be important to preteach vocabulary, discuss prior knowledge, and engage children in picture walks so that they will be able to read the texts with little difficulty. And programs like Accelerated Reader limit what books students are allowed to read.

You are also claiming that students have different instructional levels for fluency and comprehension. Informal reading inventories and running records measure both fluency AND reading comprehension—they measure them separately. But there is no textbook or commercial IRI that suggests teachers should use different levels of texts to teach these different skills or contents. How accurately the students read the words and how well they answer the questions are combined to make a single instructional text placement—not multiple text placements.

If we accept your claim that any text that leads to learning is at the “instructional level,” then pretty much any match will do. Students, no matter how they are taught, tend to make some learning gains in reading, as annual Title I evaluations have shown again and again. These kids might have gained only 0.8 years in reading this year (the average is 1.0), but they were learning, and by your lights that means we must have placed them appropriately.

Repeated reading has been found to raise reading achievement, as measured by standardized reading comprehension tests, but as Steve Stahl and Melanie Kuhn have shown, such fluency instruction works best—that is, leads to greater learning gains—when students work with books identified as being at their frustration levels rather than at their so-called instructional levels. That’s why, in their large-scale interventions, they teach students with grade-level texts rather than trying to match students to texts based on an invalid construct (the instructional level).

You write: “People are actually hearing you say that we should NOT attempt to match a child with a text -- that skill level or Lexile is completely irrelevant - when I believe you know you're saying that "instructional level" is just a bit more nuanced than providing all elements of reading instruction only on a child's oral reading fluency instructional range.”

In fact, I am saying that beyond beginning reading, teachers should NOT attempt to match students with text. I am also saying that students should be reading multiple texts and that these should range from easy (for the child) to quite difficult. I am saying that the more difficult a text is, the more scaffolding and support the teacher needs to provide—and that such scaffolding should not include reading the text to the student or telling the student what the text says.


I am NOT saying that skill level or Lexile is irrelevant, or that “instructional level” is simply a bit more nuanced than people think. It is useful to test students and to know how hard the texts are for a given student; that will allow you to be ready to provide sufficient amounts of scaffolding (and to know when you can demand greater effort and when more effort just will not pay off).

Saturday, February 22, 2014

First-Grade Close Reading

I've been looking for online and workshop information on close reading, and everything I've seen and heard has recommended doing close reading with material that is well above kids' independent reading level. Your post talks about the futility of doing a close read on preprimer material, which I completely agree with. What do you think about using higher-level text, say second grade, with second-semester first-graders in a teacher-supported group lesson?

I recently tried a bit of close reading with my first graders (see the second section of this post if you have time to read it: http://firstgradecommoncore.freeforums.net/thread/4/close-reading - if not, I completely understand). While I found it valuable, I'm struggling with there not being enough hours in the day and with prioritizing the needs of my students.


The reason I challenged close reading with young children is the lack of depth in texts that are appropriate for them to read. Close reading requires a deep or analytical reading that considers not just what a text says, but how it works as a text (e.g., examining layers of meaning, recognizing the effectiveness of literary devices, interpreting symbolism). Beginning reading texts simply lack this depth of meaning (or, when they have it, they are usually too hard for kids to read).

Your email and the YouTube link included in it imply that the idea of close reading is simply to read a challenging text with comprehension (challenging in this case meaning hard rather than complex—a very important distinction). For example, the video shows students interpreting word meanings in a hard text. A good lesson, yes indeed, but not really a close read.

I definitely would not assign second-grade texts to second-semester first-graders unless they were reading at a second-grade level (that is not uncommon, so if your kids are reading that well, go for it). For more typical first-graders (and those who are struggling), I would not do this. You can definitely engage kids in close listening activities with richer texts read by the teacher (a lot of the reading, by the way, seemed to be done by the teacher in the video that was included here), but that should not take the place of the children’s reading.

I agree with the idea that phonological awareness, phonics, oral reading fluency, writing, and reading comprehension (not close reading) should be the real priorities in grade one… so should oral language, of course, and close listening fits that idea nicely. You’ll have plenty of time to ramp this up when students are reading at a second-grade level.