Showing posts with label challenging text. Show all posts

Monday, November 4, 2013

Who's Right on Text Complexity?

It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits to teaching children using text where their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet, your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective towards student learning.
  
What a great question. In my blog post, I cited particular studies and Dick Allington’s focused on a completely different set of studies. This is what teachers find so confusing. 

The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.

Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.

The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning. 

The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).

I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning). Dick, by contrast, looked at studies that revealed a relationship between these variables, omitting all mention of these contradictory direct tests and of any correlational evidence that didn’t support his claims.

There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.

One possibility is that there were other differences that weren’t measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn’t the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separate from other variables, such as the experimental studies that I cited.

A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. In the Ehri study, the kids who made the biggest gains were in even easier materials than that; materials that should have afforded little opportunity to learn (which makes my point—there is no magic level that kids have to be placed in text to allow them to learn).

Another important point to remember: Allington’s article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point, too, suggesting that grouping students for instruction in this way damages children more than it helps them).

Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.



Thursday, August 8, 2013

Powerpoints from Summer Speeches on CCSS

This has been a very busy summer with lots of projects, research analysis, article writing, and, of course, many presentations around the country. These talks have focused on the shifts or changes required by Common Core, the foundational skills preserved by CCSS in the primary grades, disciplinary literacy, challenging text, and close reading. The Powerpoints from those presentations are all now available at this site.

https://sites.google.com/site/summer2013ccss/home/summer-2013-presentations

Tuesday, February 5, 2013

A Question on Text Complexity


I am looking for some clarification on the guided reading discussion. It would seem by many that you are saying that students do not need to work at their “instructional” level while learning reading skills and strategies. What I think you are saying is that once they are beyond decoding text up to a second grade reading level it is no longer necessary to do this. Sounds like once they get here they are reading and they can then move on to being taught comprehension using more complex text with teacher guidance. Is this what you mean? Or are you in fact saying that leveled guided reading of all sorts is not effective? Should students in K and 1 not worry about comprehension at their instructional level and just work with guidance and support from a teacher through higher more complex text?  If so, at what point do we begin to understand what they can actually do on their own?

 Good questions. I hope I can clarify. Let’s try this:

Text difficulty matters a lot with beginning readers. They have to figure out the decoding system and much of this knowledge comes from abstracting patterns from the words they read. By keeping text relatively easy early on, we make it easier for them to figure out the code. Thus, guided reading and other schemes for nurturing beginning readers and bringing them along step-by-step through increasingly difficult text levels make a lot of sense. Definitely use them with beginners.

Once kids reach about a beginning second-grade decoding level, we don’t need to be as scrupulous about text difficulty. Students can learn from a pretty wide range of difficulty levels, and text difficulty is not a reliable predictor of student learning.

If the issue is teaching reading, then matching text complexity with student reading levels is NOT the issue. That’s where guided reading and similar schemes go wrong.

Placing students in more challenging books is a good idea because it increases opportunity to learn (there is more to figure out in challenging texts). This is important since our kids do not read effectively at high enough levels.  

But just placing students in more challenging text makes the same error that guided reading did; it just replaces an over-reliance on one kind of text-student match with another. Increases in text difficulty levels need to be coordinated with increases in the amounts and quality of scaffolding, support, encouragement, and explanation provided by the teacher. If a text is relatively easy for students, as with a traditional guided reading match, then they won’t require much instructional support with that text (though there won’t be much to learn from such texts either). But if the text is relatively difficult for students, teachers will need to be a lot more energetic in their teaching responses.

There is more to be learned from challenging texts, but this means that there needs to be a lot more teaching with such texts. Instead of asking what book level to teach someone at, teachers should ask, “If I place a student in a book this challenging, how much support will I need to provide to enable him/her to learn from this text?"  

Friday, February 1, 2013

How Can I Teach with Books that are Two Years above Student Reading Levels?


I teach 4th grade general education. I have read several of your articles the last few days because I have a growing frustration regarding guided reading. I believe a lot of your ideas about what does not work are correct, but I don't understand what you believe we SHOULD be doing. I am confused about how to give students difficult text books to read without reading it to them. I thought I was doing what I was supposed to be doing. I do not know how to scaffold science or social studies text for students that are 2 years behind without reading it to them. I also feel pressure in these subjects to read it to them because I thought it was more important for them to understand the information thoroughly by reading the text aloud, having thoughtful discussions, and follow up activities. Every time I think I know what I should be doing, I read another article and realize that I am doing that wrong too. So, please give me guidance on how to best teach nonfiction and fiction text to my class whole group. What strategies and types of activities are the best?

I feel your pain. What would it look like to scaffold a fourth-grade lesson from a social studies book with children reading at what formerly we would have referred to as a second-grade level? I think there are a number of possibilities.

First, I would “level” (pun intended) with the kids. That is, I would not try to hide from them that I was going to ask them to read a book that we would in the past have said was too hard for them. The point here is motivation. People like a challenge and kids are people. When you ask them to take on something really hard, let them in on the secret so they know to be proud of themselves when they meet the challenge.

Second, and here I have to be a bit experimental, I would try some choices that might turn out not to work, or, more likely, that turn out to be less efficient than some of the other choices. My first attempt would be to read the chapter we were going to work with, trying to identify anything that might trip the kids up: specific ideas that I thought were especially complicated or subtle or abstract, key background information that they might not know yet, essential vocabulary, sentences that might confuse, cohesive links among the words that could be hard to track, organizational structure that might require highlighting, and so on. Basically, what makes this text hard to comprehend? With that information, I would make a decision: is the difficulty something to be prevented or monitored?

Sometimes, I will think that a problem is so big that I must get out in front of it. If there is something that you are certain the kids can’t figure out that might discourage them or that wouldn’t be worth the time, then by all means intervene early. If I think the key to understanding this page is a particular vocabulary word, I very well might explain that word before having the kids attempt the page. But often, I would rather have the students give it a try; there is nothing wrong with trying something and failing the first time. I can monitor their success with questions aimed at revealing whether they got that point or not, and I can follow up with assistance. So, if the students don't connect a particular concept and process appropriately because of a confusing cohesive link (like not recognizing that “it” referred to the planetary ring and not gravity), I will get the kids involved in trying to connect the various references throughout the text.

Third, the scaffolding described above will likely require some rereading, whether of the troublesome passages or of the whole chapter (fourth-grade science and social studies chapters are surprisingly short, so rereading the entire chapter is usually not that big a deal). Thus, they try to read it; I question them and help them work through the problems; and then they reread it (perhaps more than once), to see if they can figure it out the second or third time.

Fourth, let’s say I have tried that and the process has been really slow and labored or the kids are being tripped up, not by the ideas, but by their struggle to recognize and read the words. If this is the case, before I even get to the reading and scaffolding and rereading described above, I would have the students do fluency work. For example, I would have the students mumble read the text (or a part of the text) at their desks. Or, I would partner them up and have them engage in paired reading, taking turns reading one page aloud to their partner, and then listening and helping as the partner tries the next page. That kind of oral reading practice with repetition can be a big help in raising the students’ ability to work with that text. Once they have read it like that once or twice, you’d be surprised at how much better they can read it for comprehension. Thus, they would then be ready for step two above. As I said, you have to be experimental—trying out different combinations and orders of fluency work, reading, scaffolding, and rereading.

This can be painstaking. But, in the end, the students will have read the material that formerly you would have protected them from. They will have gained the science or social studies knowledge, and it will have come about because of their own interactions with the text, rather than because you read it to them or told them what it said. By engaging in such efforts (and this is a real effort—it involves a lot of teacher planning, modeling, explaining, etc.), the students become better able to handle harder text than they could at the start. Over time they build the strength to handle more challenging language with less teacher guidance.

       

Friday, January 18, 2013

Q & A On All Things Common Core

Recently, I participated in a webinar for McGraw-Hill about teaching with the common core standards. Participants sent in some questions and I have provided answers to those questions. Thought you might be interested in the wide-ranging conversation. Here is a link to the webinar itself in case you want to start there.

http://www.shanahanonliteracy.com/2012/11/mcgraw-hill-webinar.html

Any suggestions as to how raising text levels will work for students that are learning English? Are the same ideas relevant? I suspect that it isn’t that different across languages in terms of how this works generally or how well it will work. What needs to be scaffolded might differ, however. Usually second language learners will need more vocabulary support or grammar support than will be needed by native speakers (but there can be a lot of individual variation in this). Second language experts have long expressed concerns about text placements that undershot ELL students’ intellectual capacities; that problem will definitely be improved by this approach. For more info on English learners and common core visit http://ell.stanford.edu/

With the huge emphasis on increased text level, it seems that the amount of reading done will decrease significantly. What are your thoughts on this? That is a real possibility and it could be a problem. I think it is something we will need to be vigilant about. I continue to stress the idea that NOT all student reading needs to be in the common core ranges and the importance of varied reading difficulty across the school day and school year. Obviously when one is dealing with very hard text, it makes sense to work with smaller doses of that (because it takes longer to figure it out)… with easier text the doses can be bigger. By working with a mix of texts, it is possible to get practice with both the intensity and extensiveness to increase student reading levels and reading stamina.

  David Coleman suggests reading 50% informational and 50% literary text. When we present students with "reach" texts, would you suggest we put more informational than literary texts in their hands? No, I generally wouldn’t say that, though in practice it might turn out that way. Kids will need experience in handling a wide variety of more challenging texts. However, I’ve been looking at the texts that elementary teachers report using with kids. The informational texts that they use tend to be harder than the literary texts… so if the harder texts that are available in your classroom are the informational texts, then these texts might very well be the ones that you use as reach texts.

If the vast majority of students in a classroom are reading two grade levels below current grade level, and the teacher is exposing the students to grade level shared text, is this enough? Should the shared text be ABOVE current grade level in this case? I don’t think there is a specific match of text to students (in terms of text difficulty) that facilitates learning. It will always be three variables: how well the student reads now, how hard the text is, and how much thoughtful support the teacher provides to help the student figure the text out. Working with materials two years harder than we would have used in the past is likely a sufficient distance to allow learning – now it is up to the teacher to provide enough support to encourage learning.

What would be the accuracy percentage you'd recommend when you suggest students read at their frustration level/"reach" level? See previous question. There is no set level. William Powell’s work suggests that these accuracy percentages might vary by grade level, but they were often in the mid-80 percent range for the students who made the greatest gains (which is much lower than we would have encouraged in the past).
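For readers unfamiliar with running records, the accuracy percentage discussed here is simple arithmetic: the share of words read correctly. A minimal sketch (the function name is mine, for illustration):

```python
def accuracy_rate(total_words: int, errors: int) -> float:
    """Running-record accuracy: percentage of words read correctly."""
    if total_words <= 0:
        raise ValueError("total_words must be positive")
    return 100.0 * (total_words - errors) / total_words

# A 100-word passage read with 14 errors is 86% accurate: in the
# mid-80s range Powell associated with the greatest gains, but well
# below a 95-98% running-record expectation.
print(accuracy_rate(100, 14))
```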

What is the role of literary nonfiction? If you want to prepare students to read well you should give them opportunities to work with a wide variety of text types—so they gain experience dealing with different language, text features, purposes, structures, etc. Literary nonfiction—essays, biographies, speeches, criticism—is wonderful and important. However, literature and non-literary informational text (science, history, etc.) are important, too. I fear that many schools will increase literary nonfiction, but will not increase the reading of non-literary informational text. (I also fear the pressure in some schools for the English Department to take on science and history reading—which makes no sense to me).

Can you put a percent on the maximum amount of time allowed for out-of-level reading? No. We definitely don’t know what the best mix of challenging and less challenging might be.

Do these shifts also apply to early intervention reading programs in all grade levels? Early intervention programs focus on learners in preschool, kindergarten, and grade one. I don’t think it would be a good idea to ramp text difficulty up for these students. Stay with the kinds of materials and student-text matches that we have traditionally used at these levels. (For later interventions, I like the idea of the highly skilled intervention teacher in an advantaged situation—smaller groups of children, for instance—working with harder text. Remember, to learn from such text a lot more support is needed, so shifting to difficult text in a high-support situation makes greater sense.)

If this is true for grades 2-12, is it the role of grades K-1 to teach ALL students to the point of being on grade level expectations of CCSS? Grades PreK-1 have a lot to accomplish. The reason why we don’t ramp up the difficulty level of texts is to ensure that students develop their beginning reading and writing skills (e.g., phonological awareness, decoding, fluency, comprehension). Let’s not try to hurry past that part of the process (by raising the text levels), but let’s give kids the skills that will allow them to benefit from the more challenging texts they will face later.

Using grade level texts (not a steady diet of out of level) is a big shift in thinking. As a literacy coach, how do I convince teachers that what we have been telling them to do is not the CCSS way anymore? I can feel a revolt coming on! However, it makes good sense to me. Are there studies there about how this shift impacts students' achievement? 
AND this one: 
During the webinar, I asked about research that supported asking students to read above their instructional levels. Dr. Shanahan indicated that there were a few studies. Could you give me the names of some of those researchers?

Here are a couple of past blogs that provide this information.





I work in a small district in Cedar City Utah as a school literacy specialist. Our district does not even have a core reading program that it requires all schools to use. (I used to work in Granite School District in Salt Lake City.) My teachers want new curriculum in order to teach these new standards. Any suggestions on how to get the district to realize that new material is a real need with new standards?

 The Common Core is requiring the use of more challenging texts than has been common in the past. It is requiring substantially greater attention to informational text and literary non-fiction. It is requiring greater attention to connections across texts, and to the use of texts that have sufficient intellectual depth to support close readings. I can’t imagine schools reaching the common core without making changes to their texts (how big those changes will need to be will depend on what is in place now, of course).

I would like to ask Dr. Shanahan if the three read, first for key ideas/details, second for craft/structure, and third for integration of knowledge/ideas works for informational text as well as literary? AND Can you briefly describe what a close reading in science might look like?

Yes, attention to those three kinds of thinking makes sense with both kinds of reading though the specifics may differ a bit (a key idea in one type of text is not necessarily a key idea in another). Early on a close reading of science is not that different from other close readings, but as students move up through the grades – and science texts get more specialized—it can look pretty different. However, the structure of close reading can be pretty similar even when some of the specifics change. Thus, initially, it is important that students be able to identify the main idea and key details. This means students have to learn to focus on the key scientific information that would allow them to summarize the text adequately (so far, not that different from literary reading, and yet what kind of information matters most differs even at this point—character motive is pretty important in literary reading, while material cause or causation without motive is essential to science). A deeper stab at reading science will then require attention to the nature of the author’s language and the structure of the text: this might include teaching students to understand the structure of an experiment or the kind of sentence-to-sentence analysis of text illustrated in Reading in Secondary Content Areas. Then, to push even deeper, students can analyze the connections among the parts of the text (such as the connections of the data-communication devices, tables and the like, to the prose) or compare one scientific account with another.

What are your thoughts about using gradated texts? Texts on a variety of levels as a scaffold? I think reading multiple texts on a topic written at different levels of difficulty is a terrific scaffold for dealing with harder text. In the past, if a text was hard for students, reading teachers would have encouraged a different text to be used “instead of.” The idea here is not to flee from the hard text, but to read some easier “in addition to” texts on the same topic and to climb these easier texts like stair-steps.

Where do learning disabled students fit with regard to these shifts? I think teachers who work with these students may rely less on simply putting kids in easier texts as their response to these students’ needs, and more on trying to help them to deal with whatever they are struggling with.

What recommendations do you have for getting a student, who may be reading 1-2 years below their grade level, to read at their grade level in the shortest amount of time? I would make sure the student had about 3 hours per day of reading and writing work and this should engage the student in reading every day; reading something relatively easy and something challenging. The work with the challenging text needs guidance and support from a teacher with a lot of attention and explicit work on vocabulary. I would also argue for substantial fluency work (that could be with the same challenging text—repeated oral reading of some form or other). Depending on the age and skill level, I might push for explicit decoding instruction. I would encourage/require a lot of writing, too.

Tuesday, October 16, 2012

New Presentation on Scaffolding Challenging Text

Here is the link to my presentation on scaffolding challenging text. Hope it is useful.

https://sites.google.com/site/tscommoncore/text-complexity

Monday, September 10, 2012

CCSS Allows More than Lexiles


When I was working on my doctorate, I had to conduct a historical study for one of my classes. I went to the Library of Congress and calculated readabilities for books that had been used to teach reading in the U.S. (or in the colonies that became the U.S.). I started with the Protestant Tutor and the New England Primer, the first books used for reading instruction here. From there I examined Webster’s Blue-Backed Speller and its related volumes and the early editions of McGuffey’s Readers.

Though the authors of those books left no record of how they were created, it is evident that they had sound intuitions as to what makes text challenging. Even in the relatively brief single volume Tutor and Primer, the materials got progressively more difficult from beginning to end. These earliest books ramped up in difficulty very quickly (you read the alphabet on one page, simple syllables on the next, which was followed by a relatively easy read, but then challenge levels would jump markedly).

By the time we get to the three-volume Webster, the readability levels adjust more slowly from book to book with the speller (the first volume) being by far the easiest, and the final book (packed with political speeches and the like) being all but unreadable (kind of like political speeches today).

By the 1920s, psychologists began searching for measurement tools that would allow them to describe the readability or comprehensibility of texts. In other words, they wanted to turn these intelligent intuitions about text difficulty into tools that anyone could use. That work has proceeded by fits and starts over the past century, and has resulted in the development of a plethora of readability measurements.

Readability research has usually focused on the reading comprehension outcome. Thus, researchers have readers do something with a bunch of texts (e.g., answer questions, do maze/cloze tasks) and then try to predict these performance levels by counting easy-to-measure characteristics of the texts (words and sentences). The idea is to use easily measured or counted text features and then to place the texts on a scale from easy to hard that agrees with how readers did with the texts.
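To make the "counting text features" idea concrete, here is a rough sketch of one well-known formula of this type, the Flesch-Kincaid grade level (0.39 × words per sentence + 11.8 × syllables per word − 15.59). The syllable counter below is a crude vowel-group heuristic of my own, not part of the formula itself:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels (min 1)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short words and short sentences yield a low (even negative) grade estimate; longer words and longer sentences push the estimate up, which is exactly the "easy to measure characteristics" approach described above.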

Educators stretched this idea of readability to one of learnability. Instead of trying to predict how well readers would understand a text, educators wanted to use readability to predict how well students would learn from such texts. Thus, the idea of “instructional level”: if you teach students with books that appropriately matched their reading levels, the idea was that students would learn more. If you placed them in materials that were relatively easier or harder, there would be less learning. This theory has not held up very well when empirically tested. Students seem to be able to learn from a pretty wide range of text difficulties, depending on the amount of teacher support.

The Common Core State Standards (CCSS) did not buy into the instructional level idea. Instead of accepting the claim that students needed to be taught at “their levels,” the CCSS recognizes that students will never reach the needed levels by the end of high school unless harder texts are used for teaching; not only harder in terms of students’ instructional levels, but harder also in terms of which texts are assigned to which grade levels. Thus, for Grades 2-12, CCSS assigned higher Lexile levels to each grade than in the past (the so-called stretch bands).

Lexiles is a relatively recent scheme for measuring readability. Initially, it was the only readability measure accepted by the Common Core. That is no longer the case. CCSS now provides guidance for how to match books to grade levels using several formulas. This change does not take us back to using easier texts for each grade level. Nor does it back down from encouraging teachers to work with students at levels higher than their so-called instructional levels. It does mean that it will be easier for schools to identify appropriate texts using any of six different approaches, many of which are already widely used by schools.

Of course, there are many other schemes that could have been included by CCSS (there are at least a couple of hundred readability formulas). Why aren’t they included? Will they be going forward?

From looking at what was included, it appears to me that CCSS omitted two kinds of measures. First, they omitted those schemes that have not been used often (few publishers still use Dale-Chall or the Fry Graph to specify text difficulties, so there would be little benefit in connecting them to the CCSS plan). Second, they omitted widely used measures that were not derived from empirical study (Reading Recovery levels, Fountas & Pinnell levels, etc.). Such levels are not necessarily wrong—remember educators have intuitively identified text challenge levels for hundreds of years.

These schemes are especially interesting for the earliest reading levels (CCSS provides no guidance for K and 1). For the time being, it makes sense to continue to use such approaches for sorting out the difficulty of beginning reading texts, but then to switch to approaches that have been tested empirically in grades 2 through 12. [There is very interesting research underway on beginning reading texts involving Freddie Hiebert and the Lexile people. Perhaps in the not-too-distant future we will have stronger sources of information on beginning texts].    

Here is the new chart for identifying text difficulties for different grade levels:





Common Core Band    ATOS           Degrees of Reading Power®
2nd-3rd             2.75-5.14      42-54
4th-5th             4.97-7.03      52-60
6th-8th             7.00-9.98      57-67
9th-10th            9.67-12.01     62-72
11th-CCR            11.20-14.10    67-74

Common Core Band    Flesch-Kincaid   The Lexile Framework®
2nd-3rd             1.98-5.34        420-820
4th-5th             4.51-7.73        740-1010
6th-8th             6.51-10.34       925-1185
9th-10th            8.32-12.12       1050-1335
11th-CCR            10.34-14.2       1185-1385

Common Core Band    Reading Maturity   SourceRater
2nd-3rd             3.53-6.13          0.05-2.48
4th-5th             5.42-7.92          0.84-5.75
6th-8th             7.04-9.57          4.11-10.66
9th-10th            8.41-10.81         9.02-13.93
11th-CCR            9.57-12.00         12.30-14.50
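To illustrate how a chart like this gets used in practice, here is a sketch that maps a book's Lexile measure onto the CCSS stretch bands (the band boundaries come from the Lexile column of the chart above; the function name is mine). Note that the bands deliberately overlap, so one measure can fall in two bands:

```python
# CCSS "stretch" Lexile bands, per the chart above.
LEXILE_BANDS = [
    ("2nd-3rd", 420, 820),
    ("4th-5th", 740, 1010),
    ("6th-8th", 925, 1185),
    ("9th-10th", 1050, 1335),
    ("11th-CCR", 1185, 1385),
]

def bands_for_lexile(measure: int) -> list[str]:
    """Return every grade band whose stretch range contains the measure."""
    return [name for name, low, high in LEXILE_BANDS
            if low <= measure <= high]
```

For example, a 950L book sits in both the 4th-5th and 6th-8th stretch bands, which is part of the point: there is no single "correct" grade placement for a given text.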



For more information: