
Sunday, January 25, 2015

Concerns about Accountability Testing

Why don’t you write more about the new tests?

I haven’t written much about PARCC or SBAC, or the other new tests that some states are adopting, in part because they are not out yet. There are some published prototypes, and I was one of several people asked to examine the work product of these consortia. Nevertheless, the information available is very limited, and I fear that almost anything I write could be misleading (the prototypes are not necessarily what the final products will turn out to be).

However, let me also say that, unlike many who strive for school literacy reform and who support higher educational standards, I’m not all that enthused about the new assessments. 

Let me explain why.

1. I don't think the big investment in testing is justified. 

I’m a big supporter of teaching phonics and phonological awareness because research shows that to be an effective way to raise beginning reading achievement. I have no commercial or philosophical commitment to such teaching, but trust the research. There is also strong research on the teaching of vocabulary, comprehension, and fluency, and expanding the amount of teaching is a powerful idea, too.

I would gladly support high-stakes assessment if it had a similarly strong record of stimulating learning, but that isn't the case.

Test-centered reform is expensive, and it has not been proven to be effective. The best studies of it that I know reveal either extremely slight benefits or somewhat larger losses (on balance, it is at best a draw). Test-based accountability does not lead to better reading achievement.

(I recognize that states like Florida have raised achievement while using high-stakes testing. The testing may have been part of what made those reforms work, but you can't tell whether the benefits were really due to the other changes, such as professional development, curriculum, instructional materials, and amount of instruction, that were made simultaneously.)

2. I doubt that new test formats—no matter how expensive—will change teaching for the good.

In the early 1990s, P. David Pearson, Sheila Valencia, Robert Reeve, Karen Wixson, Charles Peters, and I were involved in helping Michigan and Illinois develop innovative tests: tests that included entire texts and used multiple-response question formats that did away with the one-correct-answer notion. The idea was that if we had tests that looked more like “good instruction,” then teachers who tried to imitate the tests would do a better job. Neither Illinois nor Michigan saw learning gains as a result of these brilliant ideas.

That makes me skeptical about both PARCC and SBAC. Yes, they will ask some different types of questions, but that doesn’t mean the teaching that results will improve learning. I doubt that it will.

I might be more excited if I didn’t expect companies and school districts to copy the formats, but miss the ideas. Instead of teaching kids to think deeply and to reason better, I think they’ll just put a lot of time into two-part answers and clicking. 

3. Longer tests are not really a good idea.

We should be trying to maximize teaching and minimize testing (minimize, not do away with). We need to know how states, school districts, and schools are doing. But this can be figured out with much less testing. We could easily estimate performance on the basis of samples of students—rather than entire student bodies—and we don’t need annual tests; with samples of reliable sizes, the results just don’t change that frequently.
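To make the sampling point concrete, here is a minimal sketch of how precisely a district's average score can be estimated from a random sample of students rather than a census. All the numbers here (the score standard deviation, the sample sizes) are hypothetical, chosen only to illustrate how quickly the margin of error shrinks:

```python
import math

def margin_of_error(sd: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample mean under simple random sampling.

    sd: assumed standard deviation of student scale scores (hypothetical here)
    n:  number of students sampled
    z:  critical value (1.96 for a 95% confidence interval)
    """
    return z * sd / math.sqrt(n)

sd = 35.0  # hypothetical score SD, for illustration only
for n in (100, 400, 1600):
    moe = margin_of_error(sd, n)
    print(f"sample of {n:4d} students: mean estimate within ±{moe:.1f} points")
```

Under these assumed numbers, sampling 1,600 students pins down the average to within about two scale-score points, which is why a well-designed sample can monitor a district or state without testing every child every year.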

Similarly, no matter how cool a test format may seem, it is probably not worth the extra time needed to administer it. I suspect the results of these tests will correlate highly with those of the tests they replace. If that's the case, will you really get any more information from these tests? And, if not, then why not use those testing days to teach kids instead? Anyone interested in closing poverty gaps, or international achievement gaps, is simply going to have to bite the bullet: more teaching, not more testing, is the key to catching up.

4. The new reading tests will not provide evidence on skills ignored in the past.

The new standards emphasize some aspects of reading neglected in the past. However, these new tests are not likely to provide any information about these skills. Reading tests don't work that way (math tests do, to some extent). We should be able to estimate the Lexile levels that kids are attaining, but we won’t be able to tell if they can reason better or are more critical thinkers (they may be, but these tests won’t reveal that).

Reading comprehension tests—such as those used by all 50 states for accountability purposes—can tell us how well kids can comprehend. They cannot tell which skills the students have (or even if reading comprehension actually depends on such a collection of discrete skills). Such tests, if designed properly, should provide clues about the level of language difficulty that students can negotiate successfully, but beyond that we shouldn’t expect any new info from the items.

On the other hand, we should expect some new information. The new tests are likely to have different cut scores or criteria of success. That means these tests will probably report much lower scores than in the past. Given the large percentage of boys and girls who “meet or exceed” current standards, graduate from high school, and enter college, but who lack basic skills in reading, writing, and/or mathematics, it would only be appropriate that their scores be lower in the future.


However, I predict that when those low test scores arrive, there will be a public outcry, and some politicians will blame the new standards. Instead of recognizing that the new tests are finally offering honest information about how their kids are doing, people will believe that the low scores are the result of poor standards, and there will be a strong negative reaction. Instead of militating for better schools, the public will be stimulated to support lower standards.

The new tests will only help if we treat them differently than the old tests. I hope that happens, but I'm skeptical.

Friday, December 20, 2013

Are the SBAC and PARCC Technology Requirements Fair?

I am a 4th grade math teacher, and I love CC standards. I’ve been teaching to them and my students are making HUGE gains in math.  My question is about PARCC. I have looked online at the protocol questions and cannot figure out what students will really be expected to do. It looks like they will need to cut, paste, and type. My fear is that the online component of the test is going to skew the results and students will be unnecessarily frustrated trying to show their thinking using "tools". It seems the test is automatically biased towards wealthier schools with more technology, technology teachers, and parents that buy technology for the children as "toys". How can we be sure that PARCC is assessing their reading and math, not their technology skills? Also, how can we help prepare our students for the types of technology skills they will be required to perform with PARCC?

Like you, I’m nervous about the technology of the new tests. We’re in a tech revolution, and yet I don’t see as much of that technology in schools as is widely presumed. Even schools that have lots of iPads or computers often don’t have the bandwidth needed or the onsite tech support. There are definitely home and school disparities when it comes to tech availability.

Another issue has to do with whether tech is really necessary, in an academic sense, in the testing. Looking at the available prototypes for the tests, I would say yes and no. For example, students have traditionally marked answers on tests and worksheets simply by checking off an item or filling in a bubble grid; there is nothing particularly academic in those skills. The new assessments will have them doing “drag-and-drop” and the like instead. Is that really an advance?

But there are items in which students must access webpages and identify sentences in text, and of course, there is writing and revising with these tools. All of these examples seem, to me, to be authentic academic tasks. There is nothing wrong with drag-and-drop items, but if they weren’t there, the assessments would tell us pretty much the same thing. That’s not true of these other skills. In all of these latter cases, students are asked to negotiate tasks that are common in college and the workplace, and as such kids should be able to handle them.

I suspect that when the feds required that these new tests be tech-based, they thought NCLB would be reauthorized. That might have allowed the federal government to incent school districts to upgrade their technology. Unfortunately, that hasn’t happened. Many schools are now scrambling to upgrade their technology (often these efforts seem aimed only at the test; one hopes schools will soon figure out that they have to use these tools for instruction as well).

In any event, your question is a good one: the technology disadvantage of some kids will affect performance. Kids who can read well may nonetheless score poorly because of unfamiliarity with keyboards, data screens, and the like. That might not be misleading, however. Reading in the 21st century is more than reading a book or magazine; it really does require critical reading of multiple texts available on the Internet, just as writing usually involves typing on a computer or other device. Monitoring whether our kids can do these tasks successfully is appropriate. The side benefit, one hopes, is that schools will move more quickly to make such tools more widely available.