On Progress Monitoring, Maze Tests, and Reading Comprehension Assessment

  • 29 November, 2015
  • 8 Comments
Teacher question:
I am looking for some insight on the use of mazes to progress monitor reading comprehension.  I teach in a middle school (6-8) and am struggling with using this to measure reading comprehension with fluent readers. So much of their reading comprehension in class is measured by determining main idea, recalling basic facts, inferencing, and analyzing the use of literary elements. It seems that when the maze is used to monitor reading comprehension, it doesn’t offer much information about the reader. Often students rush through it and circle words just to complete it in the time allotted and score exactly the same as students who are reading and choosing the correct word, but do not complete it in the allotted time. It seems like student motivation is a critical component of the accuracy of these scores.
Is the maze an effective way to measure passage comprehension, or is it simply a way to measure sentence comprehension? Do you have any suggestions on what else could be used? I appreciate your help with this and look forward to your response.
Shanahan responds:
            John Guthrie developed maze in the 1970s to determine how well students could read particular texts. Let’s say you have a 7th-grade science book and want to know who in your class is likely to struggle with that book. 
            To figure this out you'd test students on several passages from that science book. According to Guthrie, students who score 50% or higher on maze should be able to handle this book. 
            The benefit of maze is that it is easy to construct, administer, and score, and its results are reasonably accurate and reliable. (To design a maze test, you select a passage of 150-200 words, delete a word from the second sentence, and then every 5th or 7th word after that. Provide the students with three word choices in random order: the correct word, a word that is the same part of speech but incorrect, and a word that is the wrong part of speech.)
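            That construction procedure is mechanical enough to script. The sketch below (in Python, written for this post rather than drawn from any published maze tool) leaves the first sentence intact, marks every 5th word after it for deletion, and builds a three-choice item for each; the distractor strings are placeholders, since a real maze item pairs the answer with a same-part-of-speech distractor and a different-part-of-speech distractor, which would require a tagger or hand-built word lists.

```python
import random

def build_maze_items(passage, nth=5):
    """Sketch of maze construction: leave the first sentence intact, then
    turn every `nth` word of the remaining text into a three-choice item.
    Distractors here are placeholders, not POS-matched words."""
    sentences = passage.split(". ")
    first_sentence = sentences[0]
    remaining_words = ". ".join(sentences[1:]).split()

    items = []
    for position, word in enumerate(remaining_words, start=1):
        if position % nth == 0:
            # Placeholder distractors; a real test would use a same-POS
            # word and a different-POS word drawn from the passage's level.
            choices = [word, "SAME_POS_DISTRACTOR", "DIFF_POS_DISTRACTOR"]
            random.shuffle(choices)
            items.append({"position": position, "answer": word, "choices": choices})
    return first_sentence, items
```

            Scoring is then just the proportion of items on which the student circled the correct word; by Guthrie’s criterion, 50% or higher suggests the student can handle texts like the one sampled.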
            As you point out, maze tells you nothing about what comprehension skills students have or how well they can answer certain kinds of questions. However, question-and-answer comprehension tests can’t tell you that either, so switching tests won't solve that problem for you.
            I was at the University of Delaware during the 1970s, where John Guthrie was working at the time. He told the late Aileen Tobin, my office mate, a funny thing about maze: they had tried it out with individual sentences and with passages (as described above) and it didn’t make any difference. Even when sentences were presented in random order, students seemed to perform equally well.
            We laughed a lot about that. It just didn't make sense to us. We wondered if that was also true of other popular measures such as cloze tests. (Cloze is similar to maze, but harder to administer because instead of multiple-choice it requires students to fill in the blanks.)
            Our banter over this issue ended up in a series of research studies that I carried out. We found just what you surmised. Students performed as well on passages in their original order as on passages in which we had scrambled the order of the sentences. Imagine reading Moby Dick, starting with sentence 16, then 5, then 32, then 1, etc. (Randomizing sentence order doesn't hurt maze or cloze performance, but it wreaks havoc on summary writing.)    
            I also found that cloze correlated best with multiple-choice reading comprehension tests that asked questions based on information from single sentences. Correlations were lower if students had to synthesize information across the passages.
            Cloze and maze tests provide reasonable predictions of reading comprehension, but they do this based on how well students interpret single sentences. For most readers, the prediction works because it is unusual that someone develops the ability to read sentences without developing the ability to read texts.  
            If you want to know who is going to struggle with your literature anthology, maze can be a tool that will help you to accomplish that. If you want to identify specific reading comprehension skills so you can provide appropriate practice, maze won’t help, but neither will the testing alternatives that you could consider. 
            You say you want to monitor your students’ reading comprehension. I suspect that means you need a way of determining at various points during the year whether your students are reading better. For this, I would suggest that you use a collection of graded passages (using Lexiles or some other text evaluation method to put these on a difficulty continuum). Identify the levels of difficulty your students can handle successfully (this could be done with maze tests of those passages), and then later in the year, check to see if the students can now handle passages that are even harder. 
          Monitoring comprehension means tracking not which specific skills have been accomplished, but what complexity of text language students can negotiate. Perhaps early in the year, your students will be able to score 50% or higher with texts written at 800 Lexiles. By mid-year you'd want them to score like that with harder passages (e.g., 900L-950L). That kind of testing regimen would allow you to identify who is improving and who is not.
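          To make that regimen concrete, here is a small sketch (again in Python; the data layout, band values, and function name are mine, for illustration only) of how you might record the hardest Lexile band at which each student met the 50% maze criterion and compare checkpoints across the year.

```python
def highest_passing_band(maze_scores, criterion=0.5):
    """Return the hardest Lexile band at which the student met the criterion.

    `maze_scores` maps a Lexile band (int) to the student's maze score
    (proportion correct) on a passage drawn from that band."""
    passing = [band for band, score in maze_scores.items() if score >= criterion]
    return max(passing) if passing else None

# Illustrative fall vs. winter checks on the same difficulty continuum.
fall   = {700: 0.72, 800: 0.55, 900: 0.38}
winter = {800: 0.68, 900: 0.54, 950: 0.41}
print(highest_passing_band(fall))    # 800 -> handles roughly 800L texts
print(highest_passing_band(winter))  # 900 -> improvement by mid-year
```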

Comments

See what others have to say about this topic.

Karen Burrows Apr 10, 2017 08:58 PM


What about the use of MAZE-type exercises for instruction, not assessment? Our school uses the computerized reading program, Lexia Core 5, which uses these activities to teach fluency.

11/30/15

Timothy Shanahan Apr 10, 2017 08:58 PM

Karen--

I know of no research on the instructional use of maze. However, the notion of having students reading sentences and trying to figure out which word makes sense should be beneficial. I guess you could overdo it, but used wisely it seems like a reasonable exercise to engage kids in.

11/30/15

Kristen Hull Apr 10, 2017 08:59 PM

I'm curious to know how you feel about using DIBELS as our universal screener and progress monitoring tool for benchmark testing and also RTI monitoring. My school is using Reading Wonders, and the struggling first graders who need more reinforcement in applying phonics skills struggle to perform on and pass the DIBELS testing. The students who do not pass the oral reading fluency passages in DIBELS testing lack phonics decoding skills: vowel teams, long vowels, silent e, and the inflections -ed and -ing.

I have also noticed that the same students struggle with the Reading Wonders assessment when reading the passage independently, which in turn leaves them struggling with the reading comprehension questions.

How can I help my students who lack these skills in small groups using the Reading Wonders curriculum (first grade)? I just feel like I have hit a wall and need to beef up the phonics portion of this new program. Any suggestions to help my struggling readers?

2/25/16

Timothy Shanahan Apr 10, 2017 08:59 PM

I have no problem with using DIBELS as a screener or progress monitor if it is used correctly. For example, many teachers tell kids to read as fast as they can--which is not the purpose of the fluency test and is not the appropriate guidance. Similarly, DIBELS requires two one-minute samples to evaluate ORF, but many schools only use a single passage. Not a good choice. If it is administered properly, DIBELS can be very useful and if kids are struggling to read words and nonsense words, you definitely should be concerned about their decoding skills.

2/25/16

laurin Folts Nov 14, 2019 06:35 PM

If one is using a maze CBM for high schoolers or college students, what would an appropriate benchmark be?

Rachelle Parrs Oct 27, 2021 02:38 AM

Hello - the newest edition of DIBELS has moved to one passage. In your comment from 2017 you state, “Similarly, DIBELS requires two one-minute samples to evaluate ORF, but many schools only use a single passage. Not a good choice.” Can you elaborate or share evidence that supports this change in their measure? Do your concerns about a single passage still stand, or have there been updates since 2017 that alleviate that concern?

Timothy Shanahan Oct 27, 2021 02:16 PM

Rachelle-
The article that you want is Valencia, S.W., Smith, A.T., Reece, A.M., Li, M., Wixson, K.K., & Newman, H. (2010). Oral reading fluency assessment: Issues of construct, criterion, and consequential validity. Reading Research Quarterly, 45(3), 270-291.
https://deepblue.lib.umich.edu/bitstream/handle/2027.42/88060/RRQ.45.3.1.pdf;sequence=1
Basically, they found significant differences between 1-minute and 3-minute reads in their ability to predict reading success for kids.

My concern still stands, and this new practice increases my concerns about the reliability of the procedures (especially given the frequency with which those tests are sometimes administered).

tim

nicole Jan 06, 2023 08:28 PM

It's good.
