
Sunday, January 8, 2017

Further Arguments about Too Much Testing

I hear you.

            Last week I posted a blog challenging the amount of testing and test preparation in American reading classes. I got smacked, metaphorically, by friend and foe alike. Some posted their concerns, many more sent them to me directly.

            The grumbles from past foes are the easiest to reply to. They often expressed—in passive-aggressive tones—exasperation that I have “finally” woken up to the idea that testing companies are evil and that testing is a conspiracy against kids and teachers. They know because they follow Diane Ravitch’s “research.”

            The thing is—and I’m sure this is true since I’ve reread last week’s posting—I didn’t really come out against testing. Just against over-testing and test prep generally. The politicians have imposed some testing—and I think they have overdone it—but teachers and principals are also devoting too much time to testing, and that's on us.

            Dr. Ravitch seems to be quite upset about accountability testing, which she herself helped impose on educators, overriding the critics who grounded their arguments in research. (Ravitch is an educational historian, and quite a good one, but she ignores—then and now—psychological and educational research.)

            I’m not even against accountability testing, as long as the amount of testing is commensurate with the information that one is collecting. To find out how well a school or district is doing, do we really need to test every year? Do they change that fast? Do we really need to test everyone? Anyone ever hear of random sampling? Come onnnnnn!
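
            Since the sampling point often draws blank stares, here is a minimal sketch of the idea (plain Python; the district size, score distribution, and sample size are all made-up numbers, purely for illustration). It shows that a modest random sample estimates a district-wide average about as well as testing every student:

import random

random.seed(1)

# A hypothetical district: 10,000 students with simulated reading scores.
population = [random.gauss(200, 15) for _ in range(10_000)]

# The "test everyone" answer.
census_mean = sum(population) / len(population)

# The "test a random sample of 400 kids" answer.
sample = random.sample(population, 400)
sample_mean = sum(sample) / len(sample)

print(f"Census mean:         {census_mean:.1f}")
print(f"Sample mean (n=400): {sample_mean:.1f}")

# The two estimates typically land within a point of each other, which is
# the statistical case for testing far fewer kids when the question is
# about a school or district rather than an individual child.

If the question is about the group, the answer doesn’t require testing everyone in it.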

            If Dr. Ravitch’s minions spent more time in schools, they’d know the heaviest testing commitments are the ones the districts (and, sometimes, even individual principals and teachers) have taken on themselves. We may blame those misguided efforts on the accountability testing—we all want to look good for the picture—but it is a bad choice nevertheless. And it is a choice.

            I do find the critics’ vexation with me a little surprising. For example, when I was director of reading in the Chicago Public Schools (15 years ago), I was ordered by then-Mayor Daley to emphasize test prep in my teacher education efforts in the city. Unlike some of the critics who these days are so noisy about over-testing, I had skin in the game, and I refused.

            It might be worth noting that my refusal led to two outcomes that matter: (1) the Chicago Public Schools engaged in the least test prep—before or since; and (2) Chicago kids made their biggest measured gains in reading. Not a research study, but a policy dispute affecting nearly a half million kids.

            Of course, those who appreciated my past candor were now chagrined at my remarks. They weren’t necessarily upset by what I had to say about accountability testing (many of them concur that it is over the top), but they were scared to death by my comments on the various screening, monitoring, and diagnostic tests that are so much of the daily lives of primary grade classrooms.

            Again, I think I was clear, despite the concerns. The typical complaint: “I understand you, but no one else will.” That is, they get that I am not opposed to all classroom assessment, but they are sure no one else will appreciate the subtlety of what they see as a complex position.

            For example, one dear friend, a grandmother, told me she appreciates that her grandkids are given a standardized test in reading and math every year. The reason? She doesn’t trust teachers or schools to actually tell her how kids are doing.

            The fact is too often teachers don’t tell parents how their kids are doing. For all kinds of reasons: What if a child isn’t doing well and I don’t know what to tell the parent—why raise a question I can’t answer? What if I don’t think there is anything that can be done—it’s a minority child without economic resources whose family is a wreck? What if I only notice effort and not achievement? What if I just don’t want the argument (often parents don’t like to hear that junior isn’t succeeding)?

            An annual test isn’t perfect, but it doubles the amount of information that most parents have and that isn’t a bad thing. I’m not against that kind of testing.

            One reader thought I was smacking DIBELS, but I wasn’t. I was tough on the notion that tests like DIBELS can profitably be given to ANYBODY every week or two through a school year. But not because I was anti-DIBELS.

            Twice a year I go to my dentist. She takes x-rays every fourth visit. Why doesn’t she do it every time? For two reasons: first, dental health doesn’t change that fast, so there is no point in testing more often than would help; and second, x-rays can cause damage, so the best balance between help and harm is struck by testing once every four checkups rather than at every visit, however rigorous that might seem.

            DIBELS-like instruments won’t do physical damage, like x-rays, but they do reduce the amount of teaching and they might shape that teaching in bizarre ways. That is harmful.

            My advice:
1.     Reduce accountability testing to the minimum amounts required to accomplish the goal. Research is clear that we can test much less and still find out how states, districts, and schools are doing, without any loss of information.

2.     Test individual kids annually to ensure parents have alternative information to that provided by teachers.

3.     Limit diagnostic testing in reading to no more than 2-3 times per school year. Studies do not find that more frequent testing is beneficial, and no research supports reducing the amount of teaching to enable such over-testing.

4.     Give most test prep a pass. It doesn’t really help, and it reduces the amount of essential instruction that kids should be getting. One practice test, given a week or two ahead so kids will feel comfortable with the testing, should be plenty.


Thursday, October 27, 2016

Oral Reading Fluency is More than Speed

Letter I received:

I found these troubling quotes in the Report of the National Reading Panel:

"Fluency, the ability to read a text quickly, accurately, and with proper expression..."

"Fluent readers can read text with speed, accuracy, and proper expression..."

My dismay is due to (a) listing rate first in both statements, and (b) using "quickly" and "with speed" rather than "rate" (or "appropriate rate" as in the CCSS fluency standard). I wonder if this wording may have encouraged folks who now embrace the notion that "faster is better" (e.g. "better readers have higher DIBELS scores--wcpm")

In my own work I often refer to Stahl & Kuhn (2002) who stated that "fluent reading sounds like speech"-- smooth, effortless, but not "as fast as you can."

Who’s right?

Shanahan response:

            Well, first off, let me take full responsibility for the wordings that you found troubling. I took the lead in writing that portion of the report, and so I probably wrote it that way. Nevertheless, I doubt that my inapt wording was what triggered the all too prevalent emphasis on speed over everything else in fluency; that I’d pin on misinterpretations of DIBELS.

            I, too, have seen teachers guiding kids to read as fast as they can, trying to inflate DIBELS scores in meaningless ways. What a waste of time.

            But, that said, the importance of speed/quickness/rate in fluency cannot be overstated—though it obviously can be misunderstood.

            The fundamental idea that I was expressing in those quotes was that students must get to the point where they can recognize/decode words with enough facility that they will be able to read the author's words with something like the speed and prosody of language. 

            Old measures of fluency—like informal reading inventories—looked at accuracy alone, which is only adequate with beginning readers. The problem with accuracy measures is that they overrate the plodders who can slowly and laboriously get the words right (as if they were reading a meaningless list of random words).

            DIBELS was an important advance over that because it included rate and accuracy—which is sufficient in the primary grades, but which overrates the hurried readers who can speed through texts without appropriate expression. Studies are showing that prosody is not particularly discriminating in the earlier grades, but as kids progress it gains in importance (probably because the syntax gets more complex and prosody or expression is an indicator of how well kids are sorting that out—rather than just decoding quickly enough to allow comprehension).

            Fluency instruction and monitoring are very important, and I agree with your complaint that it is often poorly taught and mis-assessed by teachers. I think there are a couple of reasons for that.

            First, I think many teachers don’t have a clear fluency concept—and stating its components—accuracy, rate, and prosody—in their order of development won’t fix that. Fluency is not a distinct skill as much as it is an amalgam of skills. It is part decoding, part comprehension.

            Kids cannot read if they can’t decode and recognize words, translating from print to pronunciation. That’s why we teach things like sight words, phonological awareness, and phonics.

            However, recognizing words in a list is a very different task than reading them horizontally, organized into sentences, with all the distraction that implies. Speed (or rate or quickness) doesn’t really matter when reading a list of words. But when reading sentences, it is critical that you keep things moving. Slow word reading indicates that a student is devoting a lot of cognitive resources to figuring out the words, which means those resources will not be available for thinking about the ideas. That’s why speed of word reading is so important; it is an indicator of how much a reader will be able to focus on a text’s meaning.

            But fluency is not just fast word reading. It includes some aspects of reading comprehension, too. For instance, fluent readers tend to pronounce homographs (heteronyms)—desert, affect, intimate—correctly without needing to slow down or try alternatives. Fluent readers may have no advantage in thinking deeply about the ideas in a text, but they do when it comes to this kind of immediate interpretation while reading.

            Another aspect of comprehension that is part of fluency is the ability to parse sentences so that they sound like sentences. Someone listening to your oral reading should be able to understand the message, because you would have grouped the words appropriately into phrases and clauses. To read in that way, you, again, have to be quickly interpreting the sentences—using punctuation and meaning as you go.  

            Teachers who think that fluency is just reading the right words, or just reading the right words really fast, are missing the point. Stahl and Kuhn are right: fluent reading has to go, not necessarily fast, but at the speed of normal language.

             Second, I think many teachers don’t understand assessment. Reading assessments of all kinds try to estimate student performance based on small samples of behavior. Accordingly, the assessment tasks usually differ from the overall behavior in important ways. With fluency that means measuring some aspects of the concept—speed and accuracy—while not measuring others—prosody.

            Given the imperfect nature of these predictor tasks, it is foolish, and even damaging, to teach the tasks rather than the ability we are trying to estimate. It is like teaching kids to answer multiple-choice questions rather than teaching them to think about the ideas in text.

            As long as teachers try to teach facets of tests rather than reading, we’re going to see this kind of problem. The following guidance might help.

1.    Tell students to read the text aloud as well as they can—not as fast as they can.
2.    Tell them that they will be expected to answer questions about the text when they finish—so they will read while trying to understand the text.
3.    Pay attention not just to the wcpm (words correct per minute), but to whether the reading sounds like language.
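
            For teachers who want to see exactly what that wcpm number does and does not capture, here is a minimal sketch of the standard calculation (the function name and the sample numbers are mine, not from any DIBELS materials):

def words_correct_per_minute(words_read: int, errors: int, seconds: float) -> float:
    # The standard oral reading fluency arithmetic: words read
    # correctly, scaled to a one-minute rate.
    correct = words_read - errors
    return correct / (seconds / 60.0)

# A hypothetical one-minute timing: 112 words attempted, 5 errors.
print(words_correct_per_minute(112, 5, 60))  # 107.0

The arithmetic is simple on purpose; the point is that prosody never enters it, so whether the reading sounds like language has to be judged by ear.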



Saturday, May 7, 2016

What doesn’t belong here? On Teaching Nonsense Words


            Obviously you shouldn’t wear an especially short skirt to work, though it might be fine for a night of bar hopping. It would just be out of place. Lil Wayne can do rap, but he’d definitely be out of place at a Gospel convention, sort of like a love affair with a happy ending in a Taylor Swift lyric.

            So what’s out of place in reading education?

            My nominee is the act of teaching kids to read nonsense words. Don’t do it. It don’t belong (it may even be worse than orange and green).

            Why, you might ask, would anyone teach nonsense words? I attribute this all-too-common error to a serious misunderstanding of tests and testing.

            Many years ago researchers were interested in determining how well kids could decode. They decided upon lists of words that were graded in difficulty. The more words a student could read accurately, the better we assumed his or her decoding must be.

            But, then they started to think: It’s possible for kids to memorize a bunch of words. In fact, with certain high frequency words we tell kids to memorize them. If I flash the word “of” to a student and he/she reads it correctly, that might not be due to better phonics skills, but just because Johnny had that one drilled into long-term memory.

            That means with word tests we can never be sure of how well kids can decode.
           
            The solution: nonsense word tests. If we give kids lists of nonsense words, that is, combinations of letters that fit English spelling patterns but that aren’t really words, then if students can read them they must have decoding skills, because no one in their right mind would teach these made-up letter combinations to children.
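
            To make that concrete, here is a little sketch of the idea (my own illustration, not how any actual test builds its items): assemble consonant-vowel-consonant strings that follow English spelling patterns and discard any that happen to be real words. The tiny word list is a stand-in for a full dictionary:

import random

random.seed(7)

consonants = "bdfgklmnprst"
vowels = "aeiou"

# A tiny stand-in for a real dictionary; a serious version would check
# candidates against a full English word list.
real_words = {"bat", "bad", "bag", "dog", "sun", "pat", "lip", "mop", "fin", "rug"}

def make_pseudoword() -> str:
    # Build a decodable CVC string that is not an actual English word.
    while True:
        candidate = (random.choice(consonants)
                     + random.choice(vowels)
                     + random.choice(consonants))
        if candidate not in real_words:
            return candidate

print([make_pseudoword() for _ in range(8)])
# Pronounceable, rule-following, and meaningless: exactly the point.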

            Enter tests like the DIBELS decoding measure, designed to help determine quickly who needs more help with decoding. These aren’t tests aimed at evaluating programs or teachers; they are diagnostic.

            These tests work pretty well, too. Studies show a high correlation between performance on nonsense words and real words, and some of the time the nonsense word scores are more closely related to reading achievement than the real word scores are!

            But many schools are now using these to make judgments about teachers.

            And the teachers’ reaction has been to teach nonsense words to the kids. Not just any nonsense words, either: the specific nonsense words that show up on DIBELS. That means these teachers are making the test worthless. If kids are memorizing pronunciations for those nonsense words, then the tests can no longer tell how well the kids can decode.

            We can do better. Please do not use these kinds of tests to make judgments about teachers; it just encourages foolish responses on their part. And please do not teach these nonsense words to the kids. It is harmful to them, and it definitely doesn’t belong here.