
Thursday, August 20, 2009

Yes, Virginia, You Can DIBEL Too Much!

I visited schools yesterday that used to DIBEL. You know what I mean: the teachers used to give kids the DIBELS assessments to determine how they were doing in fluency, decoding, and phonemic awareness. DIBELS has been controversial among some reading experts, but I’ve always been supportive of such measures (including PALS, TPRI, AIMSweb, etc.). I like that they can be given quickly to provide a snapshot of where kids are.

I was disappointed that they dropped the tests and asked why. “Too much time,” they told me, and when I heard their story I could see why. This was a district that liked the idea of such testing, but their consultants had pressured them into repeating it every week for at-risk kids. I guess the consultants were trying to be rigorous, but eventually the schools gave up on it altogether.

The problem isn’t the test, but the silly testing policies. Too many schools are doing weekly or biweekly testing, and it just doesn’t make any sense. It’s as foolish as checking your stock portfolio every day or climbing on the scale daily during a diet. Experts in those fields understand that too much assessment can do harm, so they advise against it.

Frequent testing is misleading, and it leads to bad decisions. Investment gurus, for example, suggest that you look at your portfolio only every few months. Too many investors look at a day’s stock losses and sell in a panic, because they don’t understand that such losses happen often, and that in the long term such losses mean nothing. The same kind of thing happens with dieting. You weigh yourself and see that you’re down 2 pounds, so what the heck, you can afford to eat that slice of chocolate cake. But your weight varies through the day as you work through the nutrition cycle (you don’t weigh 130, but someplace between 127 and 133). So, when your scale reading drops from 130 to 128, you think “bring on the dessert” when your real weight hasn’t actually changed since yesterday.

The same kind of thing happens with DIBELS. Researchers investigated the standard error of measurement (SEM) of tests like DIBELS (Poncy, Skinner, & Axtell, 2005, in the Journal of Psychoeducational Assessment) and found standard errors of 4 to 18 points with oral reading fluency. That’s the amount that the test scores jump around. They found that you could reduce the standard error by testing with multiple passages (something that DIBELS recommends, but most schools ignore). But even testing with multiple passages only got the SEM down to 4 to 12 points.

What does that mean? Well, for example, second graders improve in oral reading by about 1 word correct per minute (WCPM) per week. That means it would take 4 to 12 weeks of average growth for a youngster to improve by more than a standard error of measurement.
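To make that arithmetic concrete, here is a minimal sketch (mine, not from any DIBELS manual) using the figures above: roughly 1 WCPM of true growth per week against a multi-passage SEM of 4 to 12 points.

```python
# A minimal sketch, assuming the post's figures: ~1 WCPM of true growth
# per week and a multi-passage SEM of 4 to 12 points. Names are mine.

def weeks_to_exceed_sem(sem_points: float, weekly_growth: float = 1.0) -> float:
    """Weeks of average growth needed before a gain exceeds one SEM."""
    return sem_points / weekly_growth

for sem in (4, 12):
    print(f"SEM = {sem}: about {weeks_to_exceed_sem(sem):.0f} weeks of growth "
          "before a change stands out from test noise")
```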

If you test Bobby at the beginning of second grade and he gets a 65 WCPM in oral reading, then you test him a week later and he has a 70, has his score improved? That looks like a lot of growth, but it is within a standard error, so it is probably just test noise. If you test him again in week 3, he might get a 68, and in week 4 he could reach 70 again, and so on. Has his reading improved, declined, or stagnated? Frankly, you can’t tell in this time frame, because on average a second grader will improve about 3 words in that time, but the test doesn’t have the precision to reliably identify a 3-point gain. The scores could be changing because of Bobby’s learning, or because of the imprecision of the measurement. You simply can't tell.
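You can see the problem in a quick simulation (a hypothetical one; the weekly scores here are generated, not real data). Even when Bobby’s true score is climbing a steady 1 WCPM per week, the week-to-week numbers wander all over:

```python
import random

# Hypothetical simulation: Bobby's true score starts at 65 WCPM and grows
# 1 word per week, but each weekly test adds noise with a standard
# deviation equal to the SEM (here 5 points, near the low end above).
random.seed(1)
true_start, weekly_growth, sem = 65, 1.0, 5.0

for week in range(1, 6):
    true_score = true_start + weekly_growth * (week - 1)
    observed = true_score + random.gauss(0, sem)
    print(f"week {week}: true {true_score:.0f}, observed {observed:.0f}")
```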

Stop the madness. Let’s wait 3 or 4 months between tests. That’s still a little quick, perhaps, but since we use multiple passages to estimate reading levels, it is probably okay. In that time frame, Bobby should gain about 12-16 words correct per minute if everything is on track. If the new testing reveals gains that are much lower than that, then we can be sure there is a problem, and we can make some adjustment to instruction. Testing more often can’t help, but it might hurt!
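Put as a decision rule, the logic looks something like this (a rough sketch under my own assumptions: ~1 WCPM per week of expected growth and the 4-point low-end SEM; nothing here is an official DIBELS cutoff):

```python
# Rough sketch of the "wait, then compare" rule. The thresholds are
# illustrative assumptions, not a validated or official decision rule.

def progress_concern(score_then: float, score_now: float, weeks: int,
                     weekly_growth: float = 1.0, sem: float = 4.0) -> bool:
    """Flag a concern only when the observed gain falls short of expected
    growth by more than one SEM, so noise alone can't trigger it."""
    expected_gain = weeks * weekly_growth
    observed_gain = score_now - score_then
    return observed_gain < expected_gain - sem

print(progress_concern(65, 70, weeks=14))  # True: 5-point gain vs. ~14 expected
print(progress_concern(65, 77, weeks=14))  # False: within one SEM of expected
```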

Sunday, August 31, 2008

Which Reading First Idea Has the Least Research Support?

Reading First is the federal education program that encourages teachers to follow the research on how best to teach reading. The effort requires that teachers teach phonemic awareness (grades K-1), phonics (grades K-2), oral reading fluency (grades 1-3), vocabulary (grades K-3), and reading comprehension strategies (grades K-3). Reading First emphasizes such teaching because so many studies have shown that the teaching of each of these particular things improves reading achievement.

Reading First also requires that kids get 90 minutes of uninterrupted reading instruction each day, because research overwhelmingly shows that the amount of teaching provided makes a big difference in kids’ learning.

It requires that kids who are struggling be given extra help in reading through various interventions. Again, an idea supported by lots of research. Early interventions get a big thumbs up from the research studies.

It requires that teachers and principals receive lots of professional development in reading, the idea being that if they know how to teach reading effectively, higher reading achievement will result. The research clearly supports this idea, too.

It requires that kids be tested frequently using monitoring tests to identify which kids need extra help and to do this early, before they have a chance to fall far behind. Sounds pretty sensible to me, but where’s the research?

Truth be told, there is a very small amount of research on the learning benefits of “curriculum-based measurement” and “work sampling,” but beyond these meager (somewhat off-point) demonstrations, there is little empirical evidence supporting such big expenditures of time and effort.

This isn’t another rant against DIBELS (the tests that have been used most frequently for this kind of monitoring). Replace DIBELS with any monitoring battery you prefer (e.g., PALS, AIMSweb, ISEL, TPRI) and you have the same problem. What do research studies reveal about the use of these tests to improve achievement? Darned little!

There is research showing that these tests are valid and reliable; that is, they tend to measure what they claim to measure, and they do this in a stable manner. In other words, the quality of these tests in terms of measurement properties isn’t the problem.

The real issue is how to use these tests appropriately to help improve kids’ performance. For instance, do we really need to test everyone, or are there kids who so clearly are succeeding or failing that we would be better off saving the testing time and simply stipulating that they will or will not get extra help?

Or, are the cut scores really right for these tests? I know when I reviewed DIBELS for Buros I found that the cut scores (the scores used to identify who is at risk) hadn’t been validated satisfactorily. Since then my experiences in Chicago suggest to me that the scores aren’t sufficiently rigorous; that means many kids who need help don’t get it because the tests fail to identify them as being in need.

Perhaps the monitoring test schemes (and the tests themselves) are adequate, but in practice you can’t make them work. I have personally seen teachers subverting these plans by doing things like having kids memorize nonsense words, or having kids read as fast as possible (rather than reading for meaning). Test designers can’t be held accountable for such misuse of their tests, but such aberrations cannot be ignored in determining the ultimate value of these testing plans.

There are few aspects of Reading First that make more sense than checking up on the students’ reading progress, and providing extra help to those who are not learning… unfortunately, we don’t have much evidence showing that such schemes—as actually carried out in classrooms—work the way logic says they should. I think it is worth continuing to try to make such approaches pay off for kids, but given the lack of research support, I think real prudence is needed here:

1. Administer these tests EXACTLY in the way the manuals describe.

2. Limit the amount of testing to what is really needed to make a decision (if a teacher is observing every day and believes that a child is struggling with some aspect of reading, chances are pretty good that extra help is needed).

3. Examine the results of your testing over time. Perhaps if you systematically adjust the cut scores, you can improve student learning. It is usually best to err on the side of giving kids more help than they might need (see the sketch after this list).

4. Don’t neglect aspects of reading instruction that can’t be measured as easily (such as vocabulary or reading comprehension). Monitoring tests do a reasonably good job of helping teachers sort out performance on “simple skills.” They do not, nor do they purport to, assess higher-level processes; those still need to be taught, and taught thoroughly and well. Special effort may be needed to ensure that these are adequately addressed given the lack of direct testing information.
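To illustrate point 3 above, here is one way a conservative screening rule might look (a hypothetical sketch; the cut score and margin are made-up numbers, not validated DIBELS benchmarks):

```python
# Hypothetical illustration of erring toward more help: screen with a
# published cut score plus a safety margin. The 44 WCPM cut and 5-point
# margin below are invented for the example, not validated benchmarks.

def needs_extra_help(wcpm: float, cut_score: float, margin: float = 5.0) -> bool:
    """Flag anyone at or below the cut score plus a safety margin."""
    return wcpm <= cut_score + margin

for score in (40, 47, 55):
    print(score, needs_extra_help(score, cut_score=44))
```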

Monday, December 24, 2007

Fluency -- Not Hurrying

Oral reading fluency has become a hot topic in the past few years. Of all aspects of reading, it still may be the most neglected, but we seem to be doing somewhat better in providing fluency instruction than we were when the National Reading Panel (http://www.nationalreadingpanel.org/) concluded that fluency instruction improved reading achievement. That conclusion surprised many people; the idea that practicing oral reading could do more than improve the oral reading itself seemed strange. Usually we get better at what we practice, so it would seem to make sense to have kids do a lot of silent reading rather than oral reading, since we want them to get good at silent reading.

But the research is pretty clear that oral reading practice, when done appropriately, not only makes kids sound better but comprehend better, too, including on silent reading tests. One reason oral reading can do more for silent reading than silent reading itself is that when students are asked to read silently, they may not even be reading, or their reading might be flawed and labored; who would know if it were done silently? Oral reading makes reading more physical and less purely mental, so it is easier to keep kids on task and to notice miscues and deal with them.

Of course, if we are going to teach oral reading (in order to make kids better comprehenders), it is reasonable to monitor their progress. That’s where oral reading tests, like DIBELS (http://dibels.uoregon.edu/), come in. Teachers can listen to kids read, get a pretty good idea of their progress, and pick out who may need more help. Sadly, I’m starting to see teachers doing silly things like asking kids to read as fast as they can so that they can get good DIBELS scores. The problem with that is that kids are supposed to read faster as a result of becoming more skilled at decoding and interpreting text, not because they are hurrying. I have no doubt that fluency instruction can have a powerful impact on reading comprehension. I also have no doubt that hurrying kids through texts is a bad idea that won’t lead to that kind of learning. By all means use DIBELS (and DIBELS-like) oral reading tests. But make sure they are tests of reading, rather than hurrying.
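One way to see why speed-coaching games the metric: WCPM counts only the words read correctly per minute, so a hurried, error-filled reading can still post a higher number than a careful one, even as comprehension (which WCPM never measures) falls apart. A toy illustration with made-up numbers:

```python
# Toy illustration with invented numbers, not DIBELS data. WCPM is just
# correct words divided by minutes, so racing through text can inflate
# the score even while accuracy and comprehension suffer.

def wcpm(words_attempted: int, errors: int, minutes: float = 1.0) -> float:
    """Words correct per minute: correct words divided by reading time."""
    return (words_attempted - errors) / minutes

print(wcpm(words_attempted=90, errors=2))    # careful reader: 88.0 WCPM
print(wcpm(words_attempted=120, errors=25))  # hurried reader: 95.0 WCPM
```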