Shanahan response:
I haven’t written much about PARCC or SBAC—or the other new tests that other states are taking on—in part because they are not out yet. There are some published prototypes, and I was one of several people asked to examine the work product of these consortia. Nevertheless, the information available is very limited, and I fear that almost anything I may write could be misleading (the prototypes are not necessarily what the final product will turn out to be).
However, let me also say that, unlike many who strive for school literacy reform and who support higher educational standards, I’m not all that enthused about the new assessments.
Let me explain why.
1. I don't think the big investment in testing is justified.
I’m a big supporter of teaching phonics and phonological awareness because research shows that to be an effective way to raise beginning reading achievement. I have no commercial or philosophical commitment to such teaching, but trust the research. There is also strong research on the teaching of vocabulary, comprehension, and fluency, and expanding the amount of teaching is a powerful idea, too.
I would gladly support high-stakes assessment if it had a similarly strong record of stimulating learning, but that isn't the case.
Test-centered reform is expensive, and it has not been proven to be effective. The best studies of it that I know reveal either extremely slight benefits or somewhat larger losses (on balance, it is, at best, a draw). Having test-based accountability does not lead to better reading achievement.
(I recognize that states like Florida have raised achievement while using high-stakes testing. The testing may have been part of what made such reforms work, but you can't tell whether the benefits were really due to the other changes made at the same time: professional development, curriculum, instructional materials, amount of instruction.)
2. I doubt that new test formats—no matter how expensive—will change teaching for the good.
In the early 1990s, P. David Pearson, Sheila Valencia, Robert Reeve, Karen Wixson, Charles Peters, and I were involved in helping Michigan and Illinois develop innovative tests: tests that included entire texts and used multiple-response question formats that did away with the one-correct-answer notion. The idea was that if we had tests that looked more like “good instruction,” then teachers who tried to imitate the tests would do a better job. Neither Illinois nor Michigan saw learning gains as a result of these brilliant ideas.
That makes me skeptical about both PARCC and SBAC. Yes, they will ask some different types of questions, but that doesn’t mean the teaching that results will improve learning. I doubt that it will.
I might be more excited if I didn’t expect companies and school districts to copy the formats, but miss the ideas. Instead of teaching kids to think deeply and to reason better, I think they’ll just put a lot of time into two-part answers and clicking.
3. Longer tests are not really a good idea.
We should be trying to maximize teaching and minimize testing (minimize, not do away with). We need to know how states, school districts, and schools are doing. But this can be figured out with much less testing. We could easily estimate performance on the basis of samples of students—rather than entire student bodies—and we don’t need annual tests; with samples of reliable sizes, the results just don’t change that frequently.
Similarly, no matter how cool a test format may seem, it is probably not worth the extra time needed to administer it. I suspect the results of these tests will correlate highly with the tests they replace. If that's the case, will you really get any more information from these tests? And, if not, then why not use those testing days to teach kids instead? Anyone interested in closing poverty gaps or international achievement gaps is simply going to have to bite the bullet: more teaching, not more testing, is the key to catching up.
4. The new reading tests will not provide evidence on skills ignored in the past.
The new standards emphasize some aspects of reading neglected in the past. However, these new tests are not likely to provide any information about these skills. Reading tests don't work that way (math tests do, to some extent). We should be able to estimate the Lexile levels that kids are attaining, but we won’t be able to tell if they can reason better or are more critical thinkers (they may be, but these tests won’t reveal that).
Reading comprehension tests—such as those used by all 50 states for accountability purposes—can tell us how well kids can comprehend. They cannot tell which skills the students have (or even if reading comprehension actually depends on such a collection of discrete skills). Such tests, if designed properly, should provide clues about the level of language difficulty that students can negotiate successfully, but beyond that we shouldn’t expect any new info from the items.
On the other hand, we should expect some new information. The new tests are likely to have different cut scores or criteria of success. That means these tests will probably report much lower scores than in the past. Given the large percentage of boys and girls who “meet or exceed” current standards, graduate from high school, and enter college, but who lack basic skills in reading, writing, and/or mathematics, it would only be appropriate that their scores be lower in the future.
However, I predict that when those low test scores arrive, there will be a public outcry, which some politicians will blame on the new standards. Instead of recognizing that the new tests are finally offering honest information about how their kids are doing, the public will believe that the low scores are the result of poor standards, and there will be a strong negative reaction. Instead of agitating for better schools, the public will be stimulated to support lower standards.
The new tests will only help if we treat them differently than the old tests. I hope that happens, but I'm skeptical.