How to Analyze or Assess Reading Comprehension

  • 09 November, 2019
  • 21 Comments

Teacher question:

I've attached a Student Work Analysis tool that we are using. I have read that you oppose attempts to grade students on the individual reading standards. Although this tool is not used for grading students, it is a standard-by-standard analysis of the students’ work, and I wonder what you think of it? [The form that was included provided spaces for teachers to analyze student success with each of their state’s math standards].

Shanahan response: 

In the blog entry that you refer to, I spoke specifically about evaluating reading comprehension standards (not math or even the more foundational or skills-oriented decoding, vocabulary, or morphology).

A common error in reading education is to treat reading comprehension as if it were a skill or a collection of discrete skills.

Skills tend to be highly repeatable things…

Many of the items listed as comprehension skills are not particularly repeatable. All these standards or question types aimed at main idea, central message, key details, supporting details, inferencing, application, tone, comparison, purpose, and so on are fine, but none is repeatable in real reading situations.

Each of these actions is unique, or at least highly particularized. Each time one of these tasks comes up, it occurs in a completely different context, and executing it requires different steps from instance to instance.

Not only does each text have its own main ideas, but because the expression of each text is so different, what it takes to locate, identify, or construct a main idea will vary greatly from text to text. Contrast this with forming the appropriate phoneme for sh or ph, computing the product of 3 X 3, or defining photosynthesis.

Another problem is that these supposed comprehension skills aren’t individually measurable.

My point isn’t that teachers can’t ask questions that would require students to figure out particular things about a text—of course they can—but performance on such questions is startlingly unreliable. Today, Johnny might answer the tone question like a champ, but tomorrow he won’t—since that is a different story, and the author revealed tone in a totally different way.

Also, comprehension questions asked about a particular text aren’t independent of each other (and item independence is imperative in assessment). The reason little Johnny struggled with tone on the day after wasn’t that he forgot what he knew about tone, nor even that tone was handled more subtly in text two… but that his reading was deeply affected by that text’s more challenging vocabulary, complex sentences, or complicated time sequence—none of which are specifically tone issues.

That means that when teachers try to suss out how well Johnny can meet Standard 6 by asking tone questions, his answers will reveal how well he could make sense of tone in one particular text, but they won’t likely indicate how well he’ll handle tone in any other. (Not at all what one would expect to see with math, decoding, or vocabulary assessments.)

Reading comprehension is so affected by the reader’s prior knowledge of the subject matter being read about and by the language used to express those ideas (e.g., vocabulary, sentence structure, cohesion, text organization, literary devices, graphics) that focusing one’s attention on which kinds of questions the kids could answer is a fool’s errand.

If I were trying to assess reading comprehension to determine who might need more help, the kind of help to provide, or who I should worry about concerning the end-of-year testing, then I wouldn’t hesitate to ask questions that seemed to reflect the standards… but the information I’d use for assessment would ignore how well the kids could answer particular types of questions.

My interest would be in how well students did with particular types of texts.

Keep track of their overall comprehension with different types of text. I’d record the following information (a rough sketch of how such a log might be kept appears after the list):

  1. How the student did on each overall text (the percentage of questions answered correctly, or an estimate of the percentage of key information the student could include in a summary).
  2. The topics of the texts (with, perhaps, some rating of each child’s familiarity with those topics).
  3. An estimate of the text difficulty (in terms of Lexiles or another readability estimate).
  4. The lengths of the texts (in numbers of words, preferably).
  5. Whether the text was Literary (narrative or poetry), or Informational (expository or argumentative).

Thus, a student record may look something like this:

| Week | Comprehension | Lexile | Familiarity | Text Type | Length |
|------|---------------|--------|-------------|-----------|--------|
| Week 1 | 90% | 400L | 4 | Fiction/Narrative | 300 words |
| Week 2 | 60% | 570L | 2 (habitats) | Info/Exposition | 550 words |
| Week 3 | 75% | 500L | 2 | Fiction/Narrative | 575 words |
| Week 4 | 75% | 570L | 4 (robots) | Info/Exposition | 500 words |
| Week 5 | 80% | 490L | 4 | Fiction/Narrative | 400 words |
| Week 6 | 65% | 580L | 3 (climate) | Info/Exposition | 500 words |
| Week 7 | 85% | 525L | 3 | Fiction/Narrative | 250 words |
Over time, you’ll get some sense that Junior does great with texts below 500L, but not so well with texts harder than 550L (unless they’re about robots).

Or, perhaps over the report-card marking period you may notice a difference in performance on the literary versus the informational texts (which you can see in my example above). But you also need to notice that the informational texts were relatively harder here, so it isn’t certain that the student would struggle more with content than with literature (though one might make an effort to sort this out and see whether there is a consistent pattern). Likewise, the student seemed able to handle the silent reading demands of the shorter texts, but comprehension tended to fall off with the longer ones. That might lead me to do more to build stamina with this student.

And so on.
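If the log lives in a spreadsheet or a short script, these patterns can be pulled out with a few lines of code. Here is a minimal sketch, assuming Python and the made-up numbers from the example table above; the 550L cut point is just the illustrative threshold mentioned earlier, not a standard.

```python
# Hypothetical weekly log, using the numbers from the example table above.
records = [
    {"week": 1, "pct": 90, "lexile": 400, "type": "Fiction/Narrative", "words": 300},
    {"week": 2, "pct": 60, "lexile": 570, "type": "Info/Exposition",   "words": 550},
    {"week": 3, "pct": 75, "lexile": 500, "type": "Fiction/Narrative", "words": 575},
    {"week": 4, "pct": 75, "lexile": 570, "type": "Info/Exposition",   "words": 500},
    {"week": 5, "pct": 80, "lexile": 490, "type": "Fiction/Narrative", "words": 400},
    {"week": 6, "pct": 65, "lexile": 580, "type": "Info/Exposition",   "words": 500},
    {"week": 7, "pct": 85, "lexile": 525, "type": "Fiction/Narrative", "words": 250},
]

def average(values):
    """Mean of a list of numbers; 0 if the list is empty."""
    return sum(values) / len(values) if values else 0.0

# Average comprehension by text type.
for text_type in ("Fiction/Narrative", "Info/Exposition"):
    scores = [r["pct"] for r in records if r["type"] == text_type]
    print(f"{text_type}: {average(scores):.0f}%")

# Average comprehension by difficulty band (550L is an arbitrary, illustrative cut point).
easier = [r["pct"] for r in records if r["lexile"] < 550]
harder = [r["pct"] for r in records if r["lexile"] >= 550]
print(f"Below 550L: {average(easier):.0f}%   550L and above: {average(harder):.0f}%")
```

A summary like this won’t diagnose anything by itself, but it makes the kinds of contrasts described above (text type, difficulty, length) easy to see at a glance.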

Basically, the information that you are collecting should describe how well the student does with particular types of texts (in terms of discourse types, length, topic familiarity, and difficulty), rather than trying to figure out which comprehension skills the individual question responses may reveal.

If a student does well with many of the passages, then he or she will likely do well with the comprehension standards—as long as these weekly dipsticks are reflective of the difficulty, lengths, and types of texts that will appear on the end-of-year tests.  

And, if students perform poorly with many of the passages, then their performance on all question types will be affected.

Comments


Nancy Myers Nov 09, 2019 06:17 PM

How valuable do you think Informal Reading Inventories (IRI) are for assessing comprehension and word analysis skills?

Michelle Nov 09, 2019 07:24 PM

How would you suggest using this information to inform instruction?

Tiffany Nov 09, 2019 08:06 PM

What are your thoughts on using a DRA 2 assessment on 1st graders to determine their reading level? My entire team uses it to decide if a student is on grade level with me being the only teacher that does not use this assessment. I do not believe that this is the most accurate way to make this determination. Thoughts?

Rob Nov 09, 2019 08:31 PM

What should schools do for report cards in elementary school to let parents know how well their children are reading?

Angela Nov 09, 2019 08:40 PM

I often wondered how students could score above grade level on the RI and perform poorly on a reading comprehension assessment. I know the RI looks at syllable length and sentence length and provides a readability level. I’m assuming we are seeing the potential of what a student can achieve in decoding and comprehending shorter text. However, maybe it is also important for the student to read within that text complexity level in order to comprehend longer text at that readability level. Is this correct? How can we explain a student getting an above-grade-level RI score but scoring low on a grade-level comprehension assessment?

Tim Shanahan Nov 10, 2019 03:35 AM

Nancy— I do believe IRIs can provide a useful sense of how well a student can read texts of a particular level of difficulty. They are not particularly informative with regard to specifics about comprehension or word analysis skills.

Tim

Tim Shanahan Nov 10, 2019 03:40 AM

Michelle— it should identify the students who are struggling with comprehension. These students should be provided with additional instruction. It is sort of like the decoding and fluency screeners—they identify who needs more attention and support. Depending on the results, I might give some kids more work with informational text, or we might work on increasing endurance by gradually lengthening the amount of text assigned prior to discussion. Or, depending on how far below level the student may be, I should have a better sense of the amount of scaffolding this student would need to work with grade-level text.

Tim

Tim Shanahan Nov 10, 2019 03:43 AM

Tiffany—
I don’t have any problem with the DRA2 for determining how well a student can read text... but at grade 1 it would be wise to screen PA and decoding skills, too.

Tim

Erica Nov 10, 2019 12:26 PM

How do you measure a student’s familiarity with a topic?

Lisa DeRoss Nov 10, 2019 04:09 PM

Tim, I was part of a research group in California in 2005-08 where we attempted to measure exactly what you describe here. We looked at teaching & assessing language acquisition & comprehension and text comprehension using a genre-focused approach. While results were mixed, my team found that explicit, systematic instruction in each genre yielded important information about reading behaviors and somewhat predicted student performance toward reclassification as fluent English proficient, particularly in grades 6-8. Perhaps we need to stop quantifying RC as a single construct, and instead track and report student growth both by genre and text complexity?

Tim Shanahan Nov 11, 2019 02:47 AM

Erica—
The way I used in this example was to provide a rubric to the students... letting them self-evaluate their familiarity with the specifics of the text—from no familiarity at all, to "I’ve heard of it but don’t know much about it," to "I knew some of the information," to "I’m very familiar."

Tim

Diana Castañón Nov 11, 2019 04:41 AM

Reading comprehension in a second language is more complex than recognizing the genre of a text and its aim. A lack of vocabulary and prior knowledge makes it hard for students to grasp the general idea. Many students think they have to know all the words in a text; consequently they panic and try to copy from another student without attempting to read. This observation comes from students at the high school level.

Now, thinking about a higher level: when students do not know the language well and have only a slight grasp of it, it is hard for them to comprehend the genres specific to their field. Their limited knowledge of the target language confuses them instead of helping.
Why is it so difficult to guide the student to a critical level?

Marybeth Nov 11, 2019 07:03 AM

Have you ever heard of IRLA? My district uses it and I'm not sure I see the positives.

Tim Shanahan Nov 12, 2019 02:23 AM

MaryBeth
I have seen it, but have not analyzed it closely. I am skeptical about the usefulness of its sequence.

Tim

Richard L. Trower Nov 20, 2019 04:39 PM

I think that the strategy you put forth here is worth a try! So naturally I will give it a try, and after I have data on several different texts I will be able to see for myself what the data show and how I should proactively move forward to best help each student based on those findings.

Oscar Nov 24, 2019 04:58 PM

Thanks for sharing. I really had a hard time assessing my students because they read so slowly, which leads to poor comprehension because they can't form the ideas right away. After assessment, any tips to improve their reading comprehension skills?

Mary Pendleton Dec 17, 2019 01:58 PM

Even if you're looking at overall performance on a particular type of text, how would you explain to teachers why students do overwhelmingly well on some questions, but show a trend in not doing well with other questions (Ex: understand the theme, but struggle with character relationships in a particular text)?

Andrei Apr 06, 2020 02:34 PM

Is analysis higher than comprehension?

Andrei Apr 06, 2020 02:36 PM

Is analysis more important or higher than comprehension?

Naomi Nelson Jul 15, 2021 02:28 AM

Hello Tim,
I am interested to know your thoughts on the assessment of reading comprehension using multiple-choice questions, particularly high-stakes entry or gate-keeping tests.

Kind regards,
Naomi Nelson

Kelly Parrott Mar 16, 2021 11:27 AM

Would you recommend using DRAs for below-grade-level 3rd grade students as a starting point?

What are your thoughts?

