Our core reading program includes weekly assessments. We are all supposed to give those tests every week, but to tell the truth, it is kind of hit or miss. The tests don’t seem to be linked to our accountability test, but our reading coordinator says that we have to do weekly testing if we are going to use the program “with fidelity.” What do you think?
We are testing too much. Weighing the pig more frequently doesn’t make him any fatter! And there is no such thing as “insta-tests” (tests that can be given in an instant without some sacrifice of instructional time).
There are basically two kinds of tests that we give in schools: accountability tests and instructional assessments (e.g., screening, monitoring, diagnosis).
The former are not necessarily aimed at improving teaching in any specific way. They are there so that parents, taxpayers, and other interested parties know how we’re doing.
I have no problem with that kind of accountability testing as long as the tests are valid (reading tests need to test reading ability) and reliable (there is consistency and stability in how children perform)—and as long as we limit the time devoted to these instruments. One big problem in schools that I visit is that while the accountability tests themselves may be relatively brief, the teachers and administrators waste large amounts of instructional time trying to artificially make themselves look good on these tests.
The kinds of assessments you are asking about fall into the second category—the tests that are aimed at improving teaching more directly. For instance, many schools use instructional tests to determine who will get extra help with phonics or fluency. Or teachers will quiz students to give feedback to them or their parents about how they are doing, like the weekly spelling test.
These kinds of tests will typically take up more class time than the accountability instruments, but that can be okay—at least if they help teachers to improve learning.
Unfortunately, there is not a lot of evidence that this kind of testing actually improves things much in reading. There are several such studies in math and some other subjects, but not so many in reading. That doesn’t mean that we shouldn’t try to use such tests to improve the teaching of reading, but it does place an extra burden on us to be particularly careful to ensure that they work.
The kinds of tests you are asking about—the weekly tests that are often included in commercial core reading programs—are pretty limited in terms of what they can tell you.
First, these tests are usually pretty brief, which makes sense if you are going to give them weekly; no one wants to sacrifice a day of teaching just to find out how well the kids did on the first four days of the week. But that brevity means these tests are likely to be unreliable; no professional in their right mind would want to make consequential decisions on the basis of these tests.
Second, these weekly tests typically don’t purport to provide a valid sampling of the skills and abilities of reading. Their focus is limited to the words, spelling patterns, and other skills taught earlier in the week. That’s why a youngster who does well on one of those tests will not necessarily do well on the end-of-year accountability measure (or even on monitoring tests like DIBELS—just because you learned particular words this week doesn’t necessarily mean that you can read a lot more words).
Given all of that, what good are these kinds of weekly tests?
Teachers and principals often want such tests so they can use them to give grades. I don’t have any big problem with that, but I do wonder why that can’t be done with the copious assignments that should be included in such programs. Weekly tests aren’t necessary if kids are engaged in daily decoding exercises, oral reading fluency practice, writing, and reading comprehension—under the supervision of a teacher who is paying attention. Why not collect information along the way from that kind of work and use that Friday time for, wait for it, teaching?
For example, in the schools that I work with, I strongly encourage teachers to listen to the reading of 4-6 kids each day during fluency instruction—and to jot the results down as they observe. Over a report card marking period, that would provide nine observations of each child’s oral reading fluency (and while none of those observations alone would be sufficiently reliable to predict results on something like a state accountability test, the collection of them in aggregate certainly would).
Similar data can be collected from writing assignments, worksheets, and observations of discussion groups.
The real benefit of these brief weekly tests is that they should give the teacher a clue as to whether the kids have mastered that week’s skills… which should trigger an instructional response.
Poor performance on such tests should lead to re-teaching.
Your question makes me think that is not the case in your school. Why bother to identify a problem if you aren’t going to address it anyway?
Copyright © 2017 Shanahan on Literacy. All rights reserved.