Blast from the Past: This was first posted on January 29, 2009 and reposted on August 9, 2018. As we get ready to start a new school year, teachers would be wise to dedicate themselves to avoiding wasted time having kids practice answering certain kinds of questions. Reading is not the ability to answer particular kinds of questions, but the ability to make sense of text. Many more research studies have shown this since this posting first appeared.
Here’s a big idea that can save your school district a lot of money and teachers and kids a lot of time: reading comprehension tests cannot be used to diagnose reading problems.
This isn’t a traditional educator complaint about reading tests; I’m pro reading test. The typical reading comprehension test (e.g., Gates-MacGinitie, Stanford, Metropolitan, Iowa, state accountability assessment) is valid, reliable, reasonably respectful of students from varied cultures… and yet, those tests cannot be used diagnostically.
The problem isn’t with the tests; it’s a fact rooted in the nature of reading ability. Reading is complicated. It involves a bunch of skills that must be used either simultaneously or in amazingly rapid sequence. Reading comprehension tests do a great job of identifying who has trouble with reading, but they can’t sort out why students struggle. Is it a comprehension problem, or did the student fail to decode? Maybe the youngster decoded the words just fine but didn’t know the word meanings. Or could she read the text fluently, with the pauses in the right places within sentences? Of course, none of those might be problems: maybe the student really had trouble thinking about the ideas.
Because reading is a hierarchy of skills that must be used simultaneously, failures with low-level skills necessarily undermine higher-level ones (like interpreting ideas in the text). Because every comprehension question has to be answered on the basis of decoding, interpretation of word meanings, use of prior knowledge, analysis of sentence syntax, and so on, it is impossible to find patterns of student performance on a typical reading comprehension test that can tell you which specific skill is the source of the trouble.
That is also a reason why items are so highly intercorrelated in reading comprehension tests.
The companies that offer to analyze kids’ test results to provide you with an instructional map of their comprehension needs are offering something of no value. If a main idea question happens to be hard, it will look as if all your kids need help with main ideas. If several inferential questions are bunched at the end of the test and some of your kids don’t finish all the items, you’ll find out that most of your kids need help with inferencing.
No scheme for analyzing item responses on comprehension tests is reliable and none has been validated empirically. Those schemes simply don’t work, except to separate schools from their money.