Blast from the Past: This entry first posted on September 7, 2019, and reappeared on February 10, 2024. It seems to be that time of the year again. Principals are being encouraged by central administrations to put on the full-court press for higher test scores this year. I know that not because of Mardi Gras or Groundhog Day, but because I'm starting to get those questions from teachers. Here is one that just came in: "Dr. Shanahan, you have stated that STANDARDIZED reading test item analysis is a 'fool's errand.' My district requires me to complete an item analysis of a standardized reading test to 'narrow my instructional focus on specific skills and question types,' per my admin. You stated, 'It is not the question type. It is the text.' How can I convince my supervisors to move away from this practice? Please help." Almost five years separates these questions, and the answer to them has not changed a bit.
RELATED: I want my students to comprehend, am I teaching the wrong kind of strategies?
Teacher question:
What are your thoughts on standards-based grading in ELA, which is used in many districts? For example, teachers may be required to assign a number from 1 to 4 (4 being mastery) that indicates a student's proficiency level on each ELA standard. Teachers need to provide evidence to document how they determined the level of mastery. Oftentimes tests are created with items that address particular standards. If students get those items correct, that is taken as evidence of mastery. What do you recommend?
Shanahan response:
Oh boy… this answer is going to make me popular with your district administration!
The honest answer is that this kind of standards-based grading makes no sense at all.
It is simply impossible to reliably or meaningfully measure performance on the individual reading standards. Consequently, I would not encourage teachers to try to do that.
If you doubt me on this, contact your state department of education and ask them why the state reading test doesn’t provide such information.
Or better yet, see if you can get those administrators who are requiring this kind of testing and grading to make the call.
You (or they) will find out that there is a good reason for that omission, and it isn’t that the state education officers never thought of it themselves.
Better yet, check with the agencies that designed the tests for your state. Call AIR, Educational Testing Service, or ACT, or the folks who designed PARCC and SBAC, or any of the rest of the alphabet soup of accountability-monitoring devices.
What you’ll find out is that no one has been able to come up with a valid or reliable way of providing scores for individual reading comprehension “skills” or standards.
Those companies hired the best psychometricians in the world and have collectively spent billions of dollars designing tests, and they haven't been able to do what your administration wants. And if those guys can't, why would you assume that Mrs. Smith in second grade can do it in her spare time?
Studies have repeatedly shown that standardized reading comprehension tests measure a single factor, not a list of separate skills represented by the various types of questions asked.
What should you do instead?
Test kids' ability to comprehend a text of a target readability level. For instance, in third grade you might test kids with passages at appropriate levels for each report card marking (475L, 600L, 725L, and 850L). What you want to know is whether kids can make sense of such texts through silent reading.
You can still ask questions about these passages based on the “skills” that seem to be represented in your standards—you just can’t score them that way.
What you want to know is whether kids can make sense of such texts with 75% comprehension.
In other words, it’s the passages and text levels that should be your focus, not the question types or individual standards.
If kids can read such passages successfully, they'll be able to answer your questions. And if they can't, then you need to focus on increasing their ability to read such texts. That means teaching things like vocabulary, text structure, and cohesion, and having the kids read sufficiently challenging texts, not practicing answering certain types of questions.
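For readers who want to see the arithmetic of this kind of record-keeping laid out, here is a minimal sketch in Python. The passage levels and the 75% criterion come from the suggestion above; the data, names, and question types are entirely hypothetical and are shown only to illustrate scoring by passage level rather than by question type.

from collections import defaultdict

MASTERY_THRESHOLD = 0.75  # the 75% comprehension criterion described above

# Hypothetical item results: (passage_lexile, question_type, answered_correctly)
responses = [
    (475, "main idea", True), (475, "inference", True),
    (475, "vocabulary", True), (475, "key detail", False),
    (600, "main idea", True), (600, "inference", False),
    (600, "vocabulary", False), (600, "key detail", True),
]

# Tally correct/total per passage level; question types are recorded but never scored separately.
tallies = defaultdict(lambda: [0, 0])  # lexile -> [correct, total]
for lexile, _question_type, correct in responses:
    tallies[lexile][1] += 1
    if correct:
        tallies[lexile][0] += 1

# Report comprehension by passage level, the unit that matters here.
for lexile in sorted(tallies):
    correct, total = tallies[lexile]
    share = correct / total
    verdict = "meets the 75% criterion" if share >= MASTERY_THRESHOLD else "needs more work with texts at this level"
    print(f"{lexile}L passages: {correct}/{total} correct ({share:.0%}) -- {verdict}")

Run on the made-up data above, this would report that the student handled the 475L passages (3 of 4 correct) but not the 600L passages (2 of 4), which is the kind of text-level information the post recommends, with no per-standard scores anywhere.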
Sorry, administrators, but you're sending teachers on a fool's errand, one that will not lead to higher reading achievement. It will just produce misleading information for parents and kids and waste your teachers' effort.
References
ACT. (2006). Reading between the lines. Iowa City, IA: American College Testing.
Davis, F. B. (1944). Fundamental factors in comprehension in reading. Psychometrika, 9(3), 185–197.
Kulesz, P. A., Francis, D. J., Barnes, M. A., & Fletcher, J. M. (2016). The influence of properties of the test and their interactions with reader characteristics on reading comprehension: An explanatory item response study. Journal of Educational Psychology, 108(8), 1078–1097. https://doi.org/10.1037/edu0000126
Muijselaar, M. M. L., Swart, N. M., Steenbeek-Planting, E., Droop, M., Verhoeven, L., & de Jong, P. F. (2017). The dimensions of reading comprehension in Dutch children: Is differentiation by text and question type necessary? Journal of Educational Psychology, 109(1), 70–83. https://doi.org/10.1037/edu0000120
Spearritt, D. (1972). Identification of subskills of reading comprehension by maximum likelihood factor analysis. Reading Research Quarterly, 8(1), 92–111. https://doi.org/10.2307/746983
Thorndike, R. (1973). Reading as reasoning. Reading Research Quarterly, 9(2), 135–147. https://doi.org/10.2307/747131
To examine the comments and discussion that responded to the original posting, click here:
https://www.shanahanonliteracy.com/blog/should-we-grade-students-on-the-individual-reading-standards
LISTEN TO MORE: Shanahan On Literacy Podcast