I hope and pray that you write about or repost regarding state reading assessments. I just received a call from a frantic academic coach stating that her principal has told her teachers to look at our state test’s achievement level descriptors and create test-based questions aligned to those levels to ask when immersing students in literature and informational texts. Is this a good use of their time? Isn’t it really all about the text as well as students’ knowledge of the subject matter, vocabulary, and sentence complexity? Please help!
You’re right. It’s been a while since I’ve gotten up on this particular soapbox.
Many consider this “the season to be jolly,” but for schools the kickoff for heavy test prep is soon to begin. Bah, humbug.
That principal has probably been told to “use your data” or to create “data driven classrooms,” with the idea being to shine on the annual accountability tests.
While I appreciate the hopefulness behind this practice, I have one small concern: it doesn’t actually work.
These so-called test score improvement experts who promulgate these ideas don’t seem to mind that their recommendations contradict both the research and successful educational policy and practice.
Their “theory”—and it is just a theory—is that one can raise reading scores through targeted teaching of particular comprehension skills. Teachers are to use the results of their state accountability tests to look for fine-grained weaknesses in reading achievement—or to try to identify which educational standards the kids aren’t meeting.
This idea makes sense, perhaps, in mathematics. If kids perform well on the addition and subtraction problems but screw up on the multiplication ones, then focusing more heavily on multiplication MIGHT make sense.
But reading comprehension questions are a horse of a different color. There is no reason to think that practicing answering particular types of comprehension questions would improve test performance.
Question types are not skills (e.g., main idea, supporting details, drawing conclusions, inferencing). In math, 3x9 is going to be 27 every doggone time. But the main idea of a short story? That is going to depend upon the content of the story and how the author constructed the tale. In other words, the answer is going to be different with each text.
Practicing skills is fine, but if what you are practicing is not repeatable, then it is not a skill.
The test makers know this. Look at any of the major tests (e.g., SBAC, PARCC, AIR, SAT, ACT). They will tell you that their test is based upon the educational standards or that their questions are consistent with those standards. But when they report student performance, they provide an overall reading comprehension score, with no sub-scores based on the various question types.
Why do they do it that way?
Because it is impossible to come up with a valid and reliable score for any of these question types. ACT studied it closely and found that question types didn’t determine reading performance. Texts mattered but question types didn’t. In fact, they concluded that if the questions were complex and the texts were simple, readers could answer any kind of question successfully; but if the questions were simple and the texts were hard, the readers couldn’t answer any kind of question.
Reading comprehension tests measure how well students can read a collection of texts—not the types of questions they can answer.
If this principal really wants to see better test performance, there is a trick that I’m ready to reveal here.
The path to better reading scores? Teach kids to read.
It works like magic.
Devote substantial time to teaching phonemic awareness (preK-1), phonics (preK-2), oral reading fluency, vocabulary, reading comprehension, and writing. Make sure kids are being taught to read grade-level texts—not just texts at the kids’ supposed “reading levels”—in grades 2 and up.
Copyright © 2023 Shanahan on Literacy. All rights reserved.