
Tuesday, July 30, 2013

More on the Lindsay Lohan Award: Or Why Some Days I Should Stay in Bed

Not surprisingly, my entry about the PARCC decision to read the reading test to some students received a big response from readers. Some notes from parents of special education students were wonderfully supportive (though they struggled to be, torn as they are between wanting their kids to forgo these tests and wanting them to experience full inclusion, which includes taking these tests). Those letters were not posted to my site, and given their personal nature I'm not putting them up here either.

Other responses weren't as supportive. In fact, some respondents were pretty heated up about my view. I usually respond to such letters online as they come in, but these were too many, and too thoughtful, to answer each one that way. Accordingly, I accepted each of those comments without reply (you can read them in the comments section linked to that last blog entry), and I am replying to them collectively here in this entry. I hope that you find it useful.

My response to comments on the Lindsay Lohan Award:

The first problem with these responses has to do with the purpose of the test. These comments assume that the purpose of PARCC is to provide individual student evaluations. But that isn’t why the test is given, nor how it will be used. These tests are aimed at satisfying federal law (NCLB). States must test students and report the scores in various ways to meet accountability requirements. It is the states and schools that are being evaluated, not the individual kids. These are not college entry exams; they are not tests to determine special education status; and they are not diagnostic tests to identify learning needs. They are for accountability purposes. States can stretch them to serve some other purposes of course, but most basically they need to be able to show how students are doing in terms of meeting the common core standards (and the pertinent ones to this discussion are the reading standards—not the listening ones). NCLB requires that we test not just the general population, but special populations too, so that we can monitor how states and schools are doing in serving these particular boys and girls.

As the respondents point out, these pretend “reading” scores will carry an annotation so that no one will make reading decisions about such students based on their listening skills, but that won’t prevent such scores from being aggregated for state and school comparisons. That means states will be rewarded for testing as many students as they can with a listening test instead of a reading test. Although learning disabilities are normally distributed throughout the population, I suspect we’ll see very different incidences of learning disability in the various states as this policy unfolds.  

Similarly irrelevant is the idea that students may use assistive technologies in college or the workplace (e.g., text-to-speech software). That idea is an argument that reading skills no longer matter in our society because we can find workarounds to literacy to accomplish some goals. Taken to its extreme, it is an argument against teaching literacy to anyone—certainly not to students whose cognitive skills or life circumstances make them expensive to teach (those kids can just buy Kurzweil software). The problem is that we have a lot invested in a society that expects individuals to attain relatively high levels of literacy, and as such we care whether students have that skill or not. There are certainly workarounds to literacy demands, but ultimately we still value literacy attainment and want to know how well our children—all of our children—are doing.

I, too, don’t like making students who cannot read sit through painful exams year in and year out. If we already know they cannot read (because, presumably, we have data showing this), then why test them yet again? Perhaps we should not test such students for accountability purposes. Nevertheless, states should not be rewarded for loading up their special education rosters to “earn” higher performance levels; instead, when a school exempts children from the burden of testing, the students’ scores should be counted as being at chance levels—the score that would be expected if someone randomly went through and just marked answers without doing the reading (on a test of four-option items, that would be about 25 percent correct). In my experience, when that choice is available, schools often prefer to test everyone (on the off chance that these students might end up with a somewhat higher score than the chance levels).

Another issue raised by these letters is the idea of the unfairness of testing these students “who through no fault of their own” cannot read. That sounds reasonable on the face of it, but if it makes sense with regard to special education students, it should make equal sense for children who grow up in abject poverty. The impacts of being raised in poverty are devastating in terms of student language and reading development (perhaps a listening test wouldn't be fair for those kids either--maybe we could measure other skills, like how fast they can run or how well they can sing), and these deficits are certainly not due to any fault of the children so raised. Of course, if we exempt all the children purported to have learning disabilities, and all who are raised in poverty, and all who come from homes in which a language other than English is spoken… then there probably isn’t much reason to test at all. That’s the problem with an accountability test: if you start opting kids out, you’ll eventually need to opt all of them out—just to be fair. (Many states have bad histories when it comes to using such subterfuge with the National Assessment and with their own state accountability tests.)

As for my “mocking tone,” thank you for noticing. It was (and is) mocking when it comes to this decision. The reason is that accommodations should not be given when they change the central purpose of the test. PARCC’s guidelines allow for accommodations when they “minimize/eliminate features of the assessment that are irrelevant to what is being measured.” Being able to decode the words seems a pretty central feature of reading to me. I have absolutely no problem with reading a math test to kids as an accommodation, but reading a reading test to them changes the basic nature of what you are going to find out. The idea that decoding is no longer an essential part of reading in PARCC states is much deserving of my mockery--and of yours.

Sunday, July 21, 2013

The Lindsay Lohan Award for Poor Judgment or Dopey Doings in the Annals of Testing

Lindsay Lohan is a model of bad choices and poor judgments. Her crazy decisions have undermined her talent, wealth, and most important relationships. She is the epitome of bad decision making (type “ridiculous behavior” or “dopey decisions” into Google and see how fast her name comes up). Given that, it is fitting to name an award for bad judgment after her.

Who is the recipient of the Lindsay? I think the most obvious choice would be PARCC, one of the multi-state consortium test developers. According to Education Week, PARCC will allow its reading test to be read to struggling readers. I assume if students suffer from dyscalculia they’ll be able to bring a friend to handle the multiplication for them, too.

Because some students suffer from disabilities it is important to provide them with tests that are accessible. No one in their right mind would want blind students tested with traditional print; Braille text is both necessary and appropriate. Similarly, students with severe reading disabilities might be able to perform well on a math test, but only if someone read the directions to them. In other cases, magnification or extended testing times might be needed.

However, there is a long line of research and theory demonstrating important differences between reading and listening. Most studies have found that for children, reading skills are rarely as well developed as listening skills. By eighth grade, the reading skills of proficient readers can usually match their listening skills. However, half the kids who take PARCC won’t have reached eighth grade, and not everyone who is tested will be proficient at reading. Being able to decode and comprehend at the same time is a big issue in reading development.

I have no problem with PARCC transforming their accountability measures into a diagnostic battery—including reading comprehension tests, along with measures of decoding and oral language. But if the point is to find out how well students read, then you have to have them read. If for some reason they will not be able to read, then you don’t test them on that skill and you admit that you couldn’t test them. But to test listening instead of reading with the idea that they are the same thing for school age children flies in the face of logic and a long history of research findings. (Their approach does give me an idea: I've always wanted to be elected to the Baseball Hall of Fame, despite not having a career in baseball. Maybe I can get PARCC to come up with an accommodation that will allow me to overcome that minor impediment.)  

The whole point of the CCSS standards was to make sure that students would be able to read, write, and do math well enough to be college- and career-ready. Now PARCC has decided reading isn’t really a college- or career-ready skill. No reason to get a low reading score just because you can't read. I think you will agree with me that PARCC is a very deserving recipient of the Lindsay Lohan Award for Poor Judgment; now pass that bottle to me, I've got to drive home soon.