Friday, December 20, 2013

Are the SBAC and PARCC Technology Requirements Fair?

I am a 4th grade math teacher, and I love the CC standards. I’ve been teaching to them and my students are making HUGE gains in math. My question is about PARCC. I have looked online at the prototype questions and cannot figure out what students will really be expected to do. It looks like they will need to cut, paste, and type. My fear is that the online component of the test is going to skew the results and students will be unnecessarily frustrated trying to show their thinking using "tools". It seems the test is automatically biased towards wealthier schools with more technology, technology teachers, and parents who buy technology for their children as "toys". How can we be sure that PARCC is assessing their reading and math, not their technology skills? Also, how can we help prepare our students for the kinds of technology skills they will be required to use on PARCC?

Like you, I’m nervous about the technology of the new tests. We’re in a tech revolution, and yet I don’t see as much of that technology in schools as is widely presumed. Even schools that have lots of iPads or computers often don’t have the bandwidth needed or the onsite tech support. There are definite disparities, both at home and at school, when it comes to tech availability.

Another issue has to do with whether tech is really necessary, in an academic sense, in the testing. Looking at the available prototypes for the tests, I would say yes and no. For example, students have traditionally marked answers on tests and worksheets simply by checking off an item or filling in a bubble grid; there is nothing particularly academic in those skills. The new assessments will have them doing “drag-and-drop” and the like instead. Is that really an advance?

But there are items in which students must access webpages and identify sentences in text, and, of course, there is writing and revising with these tools. All of these seem to me to be authentic academic tasks. There is nothing wrong with drag-and-drop items, but if they weren’t there, the assessments would tell us pretty much the same thing. That isn’t true of these other skills. In all of these latter cases, students are asked to negotiate tasks that are common in college and the workplace, and as such, kids should be able to handle them.

I suspect that when the feds required that these new tests be tech-based, they thought NCLB would be reauthorized. That might have allowed the federal government to give school districts incentives to upgrade their technology. Unfortunately, that hasn’t happened. Many schools are now scrambling to upgrade their technology (often these efforts seem aimed only at the test; one hopes they’ll soon figure out that they have to use these tools for instruction as well).

In any event, your question is a good one: the technology disadvantage of some kids will affect performance. Kids who read well may nonetheless score poorly because of unfamiliarity with keyboards, data screens, and the like. That might not be misleading, however. Reading in the 21st century is more than reading a book or magazine; it really does require critical reading of multiple texts available on the Internet, just as writing usually involves typing on a computer or other device. Monitoring whether our kids can do these tasks successfully is appropriate. The side benefit, one hopes, is that schools will move more quickly to make such tools widely available.

Sunday, July 21, 2013

The Lindsay Lohan Award for Poor Judgment or Dopey Doings in the Annals of Testing

Lindsay Lohan is a model of bad choices and poor judgment. Her crazy decisions have undermined her talent, wealth, and most important relationships. She is the epitome of bad decision making (type “ridiculous behavior” or “dopey decisions” into Google and see how fast her name comes up). Given that, it is fitting to name an award for bad judgment after her.

Who is the recipient of the Lindsay? I think the most obvious choice would be PARCC, one of the multi-state consortia developing the new tests. According to Education Week, PARCC will allow its reading test to be read aloud to struggling readers. I assume that if students suffer from dyscalculia, they’ll be able to bring a friend to handle the multiplication for them, too.

Because some students suffer from disabilities, it is important to provide them with tests that are accessible. No one in their right mind would want blind students tested with traditional print; Braille text is both necessary and appropriate. Similarly, students with severe reading disabilities might be able to perform well on a math test, but only if someone read the directions to them. In other cases, magnification or extended testing times might be needed.

However, there is a long line of research and theory demonstrating important differences between reading and listening. Most studies have found that, for children, reading skills are rarely as well developed as listening skills. By eighth grade, the reading skills of proficient readers can usually match their listening skills. However, half the kids who take PARCC won’t have reached eighth grade, and not everyone who is tested will be a proficient reader. Being able to decode and comprehend at the same time is a big issue in reading development.

I have no problem with PARCC transforming their accountability measures into a diagnostic battery that includes reading comprehension tests along with measures of decoding and oral language. But if the point is to find out how well students read, then you have to have them read. If for some reason they will not be able to read, then you don’t test them on that skill, and you admit that you couldn’t test them. But to test listening instead of reading, on the theory that the two are the same thing for school-age children, flies in the face of logic and a long history of research findings. (Their approach does give me an idea: I've always wanted to be elected to the Baseball Hall of Fame, despite not having a career in baseball. Maybe I can get PARCC to come up with an accommodation that will allow me to overcome that minor impediment.)

The whole point of the CCSS was to make sure that students would be able to read, write, and do math well enough to be college- and career-ready. Now PARCC has decided reading isn’t really a college- or career-ready skill. No reason to get a low reading score just because you can't read. I think you will agree with me that PARCC is a very deserving recipient of the Lindsay Lohan Award for Poor Judgment; now pass that bottle to me, I've got to drive home soon.

Wednesday, August 22, 2012

Thank Goodness the Writing Scores are Going to Drop

Okay, so you’re thinking: “This guy is even more nuts than I thought. How can he root for kids to write poorly?”

I hope I’m not nuts. One of the major new tests that will be used to monitor student performance against the common core state standards is well designed (truth in advertising: I serve on the English Language Arts Technical Work Groups for that test). However, those new designs are almost certain to lower student writing scores, which I hope will be good for kids, at least in the long run.

PARCC is a 23-state consortium that is designing new English language arts assessments (mostly for states east of the Mississippi River). Earlier this week, PARCC released item and task prototypes, and I hope that you’ll take a careful look at them, even if you are not in a PARCC state:

http://www.parcconline.org/parcc-assessment

How can I be so sure writing scores are going to drop with PARCC? I’ve been studying this topic for more than three decades, and one thing that I’ve learned is that reading and writing are not perfectly related or aligned. The correlations between reading and writing are lower than one would expect, a finding that angered many people when I first started reporting it in the early 1980s.

That means that while there are a lot of students who read and write poorly or who read and write well, there are also surprising numbers who read well and write poorly and vice versa.

Traditional state writing assessments were designed so that students did not have to read to do the writing. Students who wrote well, but read poorly, did well on past tests.

PARCC is going to have students read texts, answer reading comprehension questions, and then write about those texts (summarizing or synthesizing, according to the prototypes). Students who express themselves well but who struggle with reading will be at a marked disadvantage on the writing assessment. Such students will fail to write well not because of weaknesses in composition but because of weaknesses in comprehension.

That’s why the scores are going to drop. But why would I cheer for this?

Two reasons, really. Research shows that literacy improves when students write about what they read. In recent years, there has been little emphasis on coordinating reading and writing instruction, and PARCC’s test design will push many teachers to combine the two. That’s a real plus for kids.

Also, past measures provided a purer assessment of “writing,” but not of the writing that allows individuals to succeed academically and economically. Writing about reading is not as pure a measure of writing, but it is a much better measure of writing about reading, which has greater value to our children.

So, the writing scores are going to drop, but students are more likely to end up with higher real proficiency, especially in the skills that we most want them to have. The drop will look bad, but it is a real benefit for the kids.

Thursday, May 3, 2012

Here We Go Again


For years, I’ve told audiences that one of my biggest fantasies (not involving Heidi Klum) was that we would have a different kind of testing and accountability system. In my make-believe world, teachers and principals would never get to see the tests – under penalty of death.

They wouldn’t be allowed within miles of a school on testing days, and they would only be given general information about the results (e.g., “your class was in the bottom quintile of fourth grades in reading”). Telling a teacher about the kinds of test questions or the formatting would be punished severely, too.

In that fantasy, teachers would be expected to try to improve student reading scores by… well, by teaching kids to read, without regard to how it might be measured later. I have even mused that it would be neat if the test format changed annually, to discourage teachers from even thinking about teaching to a test format.

In some ways, because of common core, my fantasy is coming true (maybe Heidi K. isn’t far behind?).

Principals and teachers aren’t sure what these tests look like right now. The whole system has been reset, and the only sensible solution is… teaching.

And yet, I am seeing states hold back on rolling out the common core until they can see the test formats.

Last week, Cyndie (my wife; yes, she knows all about Heidi and me, and, surprisingly, she doesn’t seem nervous about it) was contacted by a state department of education wanting to know whether she had any inside dope on the PARCC test.

This is crazy. We finally have a chance to raise achievement, and these test-chasing bozos are working hard to put us back in the ditch. There is no reason to believe that you will make appreciable or reliable gains by teaching kids to respond to certain kinds of test questions or to particular test formats (you can look it up). The people who push such plans know very little about education (can they show you studies of their “successful” test-teaching approaches?). I am very pleased with the unsettled situation in which teachers and principals don’t know how the children’s reading is going to be evaluated; it is a great opportunity for teachers and kids to show what they can really do.