Thursday, October 27, 2016

Oral Reading Fluency is More than Speed

Letter I received:

I found these troubling quotes in the Report of the National Reading Panel:

"Fluency, the ability to read a text quickly, accurately, and with proper expression..."

"Fluent readers can read text with speed, accuracy, and proper expression..."

My dismay is due to (a) listing rate first in both statements, and (b) using "quickly" and "with speed" rather than "rate" (or "appropriate rate" as in the CCSS fluency standard). I wonder if this wording may have encouraged folks who now embrace the notion that "faster is better" (e.g. "better readers have higher DIBELS scores--wcpm")

In my own work I often refer to Stahl & Kuhn (2002) who stated that "fluent reading sounds like speech"-- smooth, effortless, but not "as fast as you can."

Who’s right?

Shanahan response:

            Well, first off, let me take full responsibility for the wordings that you found troubling. I took the lead in writing that portion of the report, and so I probably wrote it that way. Nevertheless, I doubt that my inapt wording was what triggered the all too prevalent emphasis on speed over everything else in fluency; that I’d pin on misinterpretations of DIBELS.

            I, too, have seen teachers guiding kids to read as fast as they can, trying to inflate DIBELS scores in meaningless ways. What a waste of time.

            But, that said, the importance of speed/quickness/rate in fluency cannot be overstated—though it obviously can be misunderstood.

            The fundamental idea that I was expressing in those quotes was that students must get to the point where they can recognize/decode words with enough facility that they will be able to read the author's words with something like the speed and prosody of language. 

            Old measures of fluency—like informal reading inventories--looked at accuracy alone, which is only adequate with beginning readers. The problem with accuracy measures is that they overrate the plodders who can slowly and laboriously get the words right (as if they were reading a meaningless list of random words). 

            DIBELS was an important advance over that because it included rate and accuracy--which is sufficient in the primary grades, but which overrates the hurried readers who can speed through texts without appropriate expression. Studies are showing that prosody is not particularly discriminating in the earlier grades, but as kids progress it gains in importance (probably because the syntax gets more complex and prosody or expression is an indicator of how well kids are sorting that out—rather than just decoding quickly enough to allow comprehension).

            Fluency instruction and monitoring are very important, and I agree with your complaint that fluency is often poorly taught and mis-assessed by teachers. I think there are a couple of reasons for that.

            First, I think many teachers don’t have a clear fluency concept, and stating its components (accuracy, rate, and prosody) in their order of development won’t fix that. Fluency is not a distinct skill as much as it is an amalgam of skills. It is part decoding, part comprehension.

            Kids cannot read if they can’t decode and recognize words, translating print into pronunciation. That’s why we teach things like sight words, phonological awareness, and phonics.

            However, recognizing words in a list is a very different task than reading them horizontally, organized into sentences, with all the distraction that implies. Speed (or rate or quickness) doesn’t really matter when reading a list of words. But when reading sentences, it is critical that you move it along. Slow word reading indicates that a student is devoting a lot of cognitive resources to figuring out the words, and that means cognitive resources will not be available for thinking about the ideas. That’s why speed of word reading is so important; it is an indicator of how much a reader will be able to focus on a text’s meaning.

            But fluency is not just fast word reading. It includes some aspects of reading comprehension, too. For instance, fluent readers tend to pronounce homographs (heteronyms)—desert, affect, intimate—correctly without needing to slow down or try alternatives. Fluent readers may have no advantage in thinking deeply about the ideas in a text, but they do when it comes to this kind of immediate interpretation while reading.

            Another aspect of comprehension that is part of fluency is the ability to parse sentences so that they sound like sentences. Someone listening to your oral reading should be able to understand the message, because you would have grouped the words appropriately into phrases and clauses. To read in that way, you, again, have to be quickly interpreting the sentences—using punctuation and meaning as you go.  

            Teachers who think that fluency is just reading the right words, or just reading the right words really fast, are missing the point. Stahl and Kuhn are right: fluency has to go, not necessarily fast, but at the speed of normal language.

             Second, I think many teachers don’t understand assessment. Reading assessments of all kinds try to estimate student performance based on small samples of behavior. Accordingly, the assessment tasks usually differ from the overall behavior in important ways. With fluency that means measuring some aspects of the concept—speed and accuracy—while not measuring others—prosody.

            Given the imperfect nature of these predictor tasks, it is foolish, and even damaging, to teach the tasks rather than the ability we are trying to estimate. It is like teaching kids to answer multiple-choice questions rather than teaching them to think about the ideas in text.

            As long as teachers try to teach facets of tests rather than reading, we're going to see this kind of problem. The following guidance might help.

1.    Tell students to read the text aloud as well as they can—not as fast as they can.
2.    Tell them that they will be expected to answer questions about the text when they finish—so they will read while trying to understand the text.
3.    Pay attention not just to the wcpm (words correct per minute), but to whether the reading sounds like language.
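Here is the arithmetic behind item 3, as a minimal sketch (the function and the example numbers are mine and purely illustrative; a one-minute DIBELS-style probe is assumed):

```python
def wcpm(words_read: int, errors: int, seconds: float) -> float:
    """Words correct per minute: (words read - errors) divided by minutes read.

    Note what this captures: rate and accuracy only. The number says nothing
    about whether the reading sounded like language (prosody).
    """
    minutes = seconds / 60.0
    return (words_read - errors) / minutes

# Example: 112 words attempted with 4 errors in a one-minute reading -> 108.0 wcpm
print(wcpm(words_read=112, errors=4, seconds=60))
```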



Saturday, May 7, 2016

What doesn’t belong here? On Teaching Nonsense Words


            Obviously you shouldn’t wear an especially short skirt to work, though it might be fine for a night of bar hopping. It would just be out of place. Lil Wayne can do rap, but he’d definitely be out of place at a Gospel Convention, sort of like a love affair with a happy ending in a Taylor Swift lyric.




            So what’s out of place in reading education?

            My nominee is the act of teaching kids to read nonsense words. Don’t do it. It doesn’t belong (it may even be worse than orange and green).

            Why, you might ask, would anyone teach nonsense words? I attribute this all-too-common error to a serious misunderstanding of tests and testing.

            Many years ago researchers were interested in determining how well kids could decode. They decided upon lists of words that were graded in difficulty. The more words the students could read accurately, the better we assumed their decoding must be.

            But, then they started to think: It’s possible for kids to memorize a bunch of words. In fact, we tell kids to memorize certain high-frequency words. If I flash the word “of” to Johnny and he reads it correctly, that might not be due to better phonics skills, but just because he had that one drilled into long-term memory.

            That means with word tests we can never be sure of how well kids can decode.
           
            The solution: nonsense word tests. If we give kids lists of nonsense words, that is, combinations of letters that fit English spelling patterns but that aren’t real words, then students who can read them must have decoding skills, because no one in their right mind would teach these made-up letter combinations to children.
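If it helps to picture what such items look like, here is a toy sketch (this is not how DIBELS items are actually constructed; the letters and the tiny stand-in word list are my own illustrative assumptions). It builds consonant-vowel-consonant strings that follow English spelling patterns and screens them against a word list so they are not real words:

```python
import random

CONSONANTS = "bdfgklmnprst"
VOWELS = "aeiou"
# Tiny stand-in dictionary; a real screen would check against a full word list.
REAL_WORDS = {"bat", "dog", "sit", "pan", "top", "nut"}

def make_nonsense_word(rng: random.Random) -> str:
    """Return a decodable CVC letter string that isn't in the word list."""
    while True:
        candidate = rng.choice(CONSONANTS) + rng.choice(VOWELS) + rng.choice(CONSONANTS)
        if candidate not in REAL_WORDS:
            return candidate

rng = random.Random(0)
print([make_nonsense_word(rng) for _ in range(5)])  # five made-up, decodable strings
```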

            Enter tests like the DIBELS decoding measure: tests designed to determine quickly who needs more help with decoding. These aren’t tests aimed at evaluating programs or teachers; they are diagnostic.

            These tests work pretty well, too. Studies show a high correlation between performance on nonsense words and real words, and some of the time the nonsense word scores are more closely related to reading achievement than the word test scores!

            But many schools are now using these to make judgments about teachers.

            And, the teachers’ reaction has been to teach nonsense words to the kids. Not just any nonsense words either; the specific nonsense words that show up on DIBELS. That means these teachers are making the test worthless. If kids are memorizing pronunciations for those nonsense words, then the tests no longer can tell how well the kids can decode.

            We can do better. Please do not use these kinds of tests to make judgments about teachers; it just encourages foolish responses on their part. And, please do not teach these nonsense words to the kids. It is harmful to them. It definitely doesn’t belong here.

           

Thursday, August 20, 2009

Yes, Virginia, You Can DIBEL Too Much!

I visited schools yesterday that used to DIBEL. You know what I mean: the teachers used to give kids the DIBELS assessments to determine how they were doing in fluency, decoding, and phonemic awareness. DIBELS has been controversial among some reading experts, but I’ve always been supportive of such measures (including PALS, TPRI, AIMSweb, etc.). I like that they can be given quickly to provide a snapshot of where kids are.

I was disappointed that they dropped the tests and asked why. “Too much time,” they told me, and when I heard their story I could see why. This was a district that liked the idea of such testing, but their consultants had pressured them into repeating it every week for at-risk kids. I guess the consultants were trying to be rigorous, but eventually the schools gave up on it altogether.

The problem isn’t the test, but the silly testing policies. Too many schools are doing weekly or biweekly testing, and it just doesn’t make any sense. It’s as foolish as checking your stock portfolio every day or climbing on the scale daily during a diet. Experts in those fields understand that too much assessment can do harm, so they advise against it.

Frequent testing is misleading and it leads to bad decisions. Investment gurus, for example, suggest that you look at your portfolio only every few months. Too many investors look at a day’s stock losses and sell in a panic, because they don’t understand that such losses happen often—and that long term such losses mean nothing. 

The same kind of thing happens with dieting. You weigh yourself and see that you’re down 2 pounds, so what the heck, you can afford to eat that slice of chocolate cake. But your weight varies through the day as you work through the nutrition cycle (you don’t weigh 130, but someplace between 127 and 133). So, when your weight drops from 130 to 128, you think “bring on the dessert” when your real weight hasn’t actually changed since yesterday.

And the same kind of thing happens with DIBELS. Researchers investigated the standard error of measurement (SEM) of tests like DIBELS (Poncy, Skinner, & Axtell, 2005, in the Journal of Psychoeducational Assessment) and found standard errors of 4 to 18 points with oral reading fluency. That’s the amount that the test scores jump around.

They found that you could reduce the standard error by testing with multiple passages (something that DIBELS recommends, but most schools ignore). But, testing with multiple passages only got the SEM down to 4 to 12 points.

What does that mean? Well, for example, second graders improve in words correct per minute (WCPM) in oral reading about 1 word per week. That means it would take 4 to 12 weeks of average growth for the youngster to improve more than a standard error of measurement.

If you test Bobby at the beginning of second grade and he gets a 65 wcpm in oral reading, then you test him a week later and he has a 70, has his score improved? That looks like a lot of growth, but it is within a standard error so it may just be test noise. If you test him again in week 3, he might get a 68, and week 4 he could reach 70 again, and so on. Has his reading improved, declined, or stagnated? Frankly, you can’t tell in this time frame because on average a second grader will improve about 3 words in that time, but the test doesn’t have the precision to identify reliably a 3-point gain. The scores could be changing because of Bobby’s learning, or because of the imprecision of the measurement. You simply can't tell.
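To make that concrete, here is a small simulation of the Bobby scenario (my sketch, not part of the Poncy et al. study; the assumed values are roughly 1 wcpm of true growth per week and an SEM of 5 wcpm, both only for illustration):

```python
import random

TRUE_GROWTH_PER_WEEK = 1.0   # assumed average wcpm gained per week by a second grader
SEM = 5.0                    # assumed standard error of measurement (within the 4-12 range)
START_SCORE = 65.0           # starting "true" oral reading score, as in the Bobby example

rng = random.Random(2016)

def observed_score(week: int) -> float:
    """True score plus measurement noise on the order of the SEM."""
    true_score = START_SCORE + TRUE_GROWTH_PER_WEEK * week
    return true_score + rng.gauss(0, SEM)

# Weekly testing: the noise swamps the ~1 wcpm of real weekly growth, so the
# scores bounce up and down even though the child is improving steadily.
for week in range(5):
    print(f"week {week}: {observed_score(week):.0f} wcpm")

# Roughly how many weeks of average growth are needed just to exceed one SEM?
print(f"weeks of growth needed to exceed one SEM: {SEM / TRUE_GROWTH_PER_WEEK:.0f}")
```

Under these assumptions, a weekly series like 65, 70, 68, 70 tells you almost nothing, while a gain of 12 to 16 points after three or four months is a far more trustworthy signal.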

Stop the madness. Let’s wait 3 or 4 months, still a little quick, perhaps, but since we use multiple passages to estimate reading levels, it is probably okay. In that time frame, Bobby should gain about 12-16 words correct per minute if everything is on track. If the new testing reveals gains that are much lower than that, then we can be sure there is a problem, and we can make some adjustment to instruction. Testing more often can’t help, but it might hurt!

Sunday, August 31, 2008

Which Reading First Idea Has the Least Research Support?

Reading First is the federal education program that encourages teachers to follow the research on how best to teach reading. The effort requires that teachers teach phonemic awareness (grades K-1), phonics (grades K-2), oral reading fluency (grades 1-3), vocabulary (grades K-3), and reading comprehension strategies (grades K-3). Reading First emphasizes such teaching because so many studies have shown that the teaching of each of these particular things improves reading achievement.

Reading First also requires that kids get 90 minutes of uninterrupted reading instruction each day, because research overwhelmingly shows that the amount of teaching provided makes a big difference in kids’ learning.
It requires that kids who are struggling be given extra help in reading through various interventions. Again, an idea supported by lots of research. Early interventions get a big thumbs up from the research studies.

It requires that teachers and principals receive lots of professional development in reading, the idea being that if they know how to teach reading effectively, higher reading achievement will result. The research clearly supports this idea, too.

It requires that kids be tested frequently using monitoring tests to identify which kids need extra help and to do this early, before they have a chance to fall far behind. Sounds pretty sensible to me, but where’s the research?

Truth be told, there is a very small amount of research on the learning benefits of “curriculum-based measurement” and “work sampling,” but beyond these meager—somewhat off-point—demonstrations, there is little empirical evidence supporting such big expenditures of time and effort.

This isn’t another rant against DIBELS (the tests that have been used most frequently for this kind of monitoring). Replace DIBELS with any monitoring battery you prefer (e.g., PALS, AIMSweb, ISEL, TPRI) and you have the same problem. What do research studies reveal about the use of these tests to improve achievement? Darned little!

There is research showing that these tests are valid and reliable, that is, they tend to measure what they claim to measure and they do this in a stable manner. In other words, the quality of these tests in terms of measurement properties isn’t the problem.

The real issue is how would you use these tests appropriately to help improve kids’ performance? For instance, do we really need to test everyone or are there kids who so clearly are succeeding or failing that we would be better off saving the testing time and simply stipulating that they will or will not get extra help?

Or, are the cut scores really right for these tests? I know when I reviewed DIBELS for Buros I found that the cut scores (the scores used to identify who is at risk) hadn’t been validated satisfactorily. Since then my experiences in Chicago suggest to me that the scores aren’t sufficiently rigorous; that means many kids who need help don’t get it because the tests fail to identify them as being in need.

Perhaps the monitoring test schemes (and the tests themselves) are adequate, but in practice you can’t make them work. I have personally seen teachers subverting these plans by having kids memorize nonsense words, or having kids read as fast as possible (rather than reading for meaning). Test designers can’t be held accountable for such misuse of their tests, but such aberrations cannot be ignored in determining the ultimate value of these testing plans.

There are few aspects of Reading First that make more sense than checking up on the students’ reading progress, and providing extra help to those who are not learning… unfortunately, we don’t have much evidence showing that such schemes—as actually carried out in classrooms—work the way logic says they should. I think it is worth continuing to try to make such approaches pay off for kids, but given the lack of research support, I think real prudence is needed here:

1. Administer these tests EXACTLY in the way the manuals describe.

2. Limit the amount of testing to what is really needed to make a decision (if a teacher is observing everyday and believes that a child is struggling with some aspect of reading, chances are pretty good that extra help is needed).

3. Examine the results of your testing over time. Perhaps if you systematically adjust the cut scores, you can improve student learning. It is usually best to err on the side of giving kids more help than they might need.

4. Don’t neglect aspects of reading instruction that can’t be measured as easily (such as vocabulary or reading comprehension). Monitoring tests do a reasonably good job of helping teachers to sort out performance of “simple skills.” They do not, nor do they purport to, assess higher-level processes; these still need to be taught, and taught thoroughly and well. Special effort may be needed to ensure that they are adequately addressed, given the lack of direct testing information.

Monday, December 24, 2007

Fluency -- Not Hurrying


          Oral reading fluency has become a hot topic in the past few years. Of all aspects of reading, it still may be the most neglected, but we seem to be doing somewhat better in providing fluency instruction than we were when the Report of the National Reading Panel concluded that fluency instruction improved reading achievement. That surprised many people; the idea that practicing oral reading could do more than improve the oral reading seemed strange. Usually we get better at what we practice, so it would make sense to have kids doing a lot of silent reading rather than oral reading, since we want them to get good at silent reading.

           But the research is pretty clear that oral reading practice, when done appropriately, not only makes kids sound better but helps them comprehend better, too—including on silent reading tests. One reason that oral reading practice can do more for silent reading than silent reading practice is that when students are asked to read silently, they may not even be reading, or their reading might be flawed and labored; who would know, if it was done silently? Oral reading makes reading more physical and less mental, so it is easier to keep on task, to notice miscues, and to deal with them.

          Of course, if we are going to teach oral reading (in order to make kids better comprehenders) it is reasonable to monitor their progress. That’s where oral reading tests, like DIBELS, come in. Teachers can listen to kids read, get a pretty good idea of their progress, and pick out who may need more help. Sadly, I’m starting to see teachers doing silly things like asking kids to read as fast as they can so that they can get good DIBELS scores. The problem with that is that kids are supposed to read faster as a result of becoming more skilled at decoding and interpreting text, not because they are hurrying. I have no doubt that fluency instruction can have a powerful impact on reading comprehension. I also have no doubt that hurrying kids through texts is a bad idea that won’t lead to that kind of learning. By all means use DIBELS (and DIBELS-like) oral reading tests. But make sure they are tests of reading--rather than hurrying.