Saturday, April 12, 2014

How Much In-Class Reading?

I am wondering what you think are the acceptable ways to read text in class in grades 3-8. Obviously, round robin or popcorn reading is not one of them -- and yet these are still options we see too often. Independent reading is desirable, and some degree of teacher read-aloud to the whole class to model fluency and dramatic reading is appropriate as well. What other ways do you think are effective? How much time would you say we should push teachers to devote to each (e.g., 60% independent, 20% teacher read-aloud, etc.)?

I’m with those who believe that students need to read a lot during their school days. Yes, they should read at home, but students also need lots of opportunities and requirements to read within their schoolwork in class.

The most powerful of such reading (in terms of stimulating student learning) seems to be oral reading with feedback from a teacher. I would discourage popcorn or round robin but not because the reading practice that they provide is so bad—just that they provide so little practice. When one student is reading, many more are just sitting waiting for their turn. The students who are reading are learning, and the others, not so much.

Research suggests that techniques like paired reading (in which kids read and reread texts to each other), reading while listening, echo reading, radio reading, etc. can all be good choices. In all of these techniques, many students are able to practice simultaneously, they read relatively challenging materials, and then they reread these in an effort to improve the quality. If students can read texts (8th grade or higher) orally at about 150 words correct per minute, I wouldn’t bother with this kind of practice, and if they could not, I would provide about 30 minutes of it each school day.
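That 150-words-correct-per-minute benchmark is easy to check with a timed oral reading. A minimal sketch of the calculation (in Python; the 150 WCPM cutoff comes from the paragraph above, and the sample student numbers are hypothetical):

```python
def words_correct_per_minute(words_read: int, errors: int, seconds: float) -> float:
    """Words correct per minute (WCPM) from a timed oral reading:
    subtract errors from the words attempted, then scale to one minute."""
    return (words_read - errors) * 60 / seconds

def needs_fluency_practice(wcpm: float, threshold: float = 150) -> bool:
    """If a student reads grade-level text below the threshold,
    schedule the daily oral reading practice described above."""
    return wcpm < threshold

# Hypothetical student: 162 words attempted, 6 errors, in 60 seconds.
wcpm = words_correct_per_minute(162, 6, 60)
print(wcpm, needs_fluency_practice(wcpm))  # 156.0 False
```

A one-minute sample is the common convention, but the function accepts any timing so a longer passage can be used.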

As powerful as oral reading is at stimulating student reading ability, we can’t ignore the fact that most reading that we engage in will be silent, and students need to practice this as well. I would strongly encourage teachers to have students read silently those texts that they are to write about or that are going to be the focus of group or class discussion. When I assign such reading in classrooms, kids often tell me that I’m doing it wrong (because their teachers have them read the texts round robin). Teachers do this to make sure kids read the text and to monitor their reading. By doing the fluency work noted above, I do away with the need to monitor their fluency progress (I’m already doing that), and teachers can make sure students have read from the discussions and writing that ensue. I would usually have students reading their literature selection and their social studies or science chapters silently. If students struggle with this, divide the assignments into shorter chunks (even 1 page at a time), and then stretch this out over time. I would suggest that students should be engaging in as much silent reading as oral reading in these grades (and if students are fully fluent as described above, then the silent reading should be almost 100% of what students read).

I would argue not only for minutes to be dedicated to fluency practice, but for another 30-45 minutes to focus on reading comprehension daily—and a lot of this time would entail silent reading. However, silent reading is also going to come up during science, social studies, and other subjects and this counts, too. Thus, having students spend as much as 15 minutes reading aloud (paired reading for 30 minutes would allow each student 15 minutes of such practice), and having students read for 20-30 minutes of a 45 minute comprehension lesson and reading another 10-20 minutes a day in other subjects would give kids a substantial amount of oral and silent reading practice.

Even in the silent reading context, there should be at least some oral reading. Most prominently: students should read aloud during discussions to provide evidence supporting their claims or refuting someone else’s.

It is a good idea to encourage kids to read on their own, but this has such a small impact on student learning that I would make such opportunities available only in ways that would not appreciably reduce the instructional doses suggested above. Getting kids to read on their own beyond the school day, while providing them with heavy involvement in reading across the school day, will be the most powerful combination for getting students to high performance levels.





Thursday, April 3, 2014

Apples and Oranges: Comparing Reading Scores across Tests

I get this kind of question frequently from teachers who work with struggling readers, so I decided to respond publicly. What I say about these two tests would be true of others as well.

I am a middle school reading teacher and have an issue that I'm hoping you could help me solve. My students' placements are increasingly bound to their standardized test results. I administer two types of standardized tests to assess the different areas of student reading ability. I use the Woodcock Reading Mastery Tests and the Terra Nova Test of Reading Comprehension. Often, my students' WRMT subtest scores are within the average range, while their Terra Nova results fall at the lower end of the average range or below. How can I clearly explain these discrepant results to my administrators? When they see average scores on one test they believe these students are no longer candidates for remedial reading services.

Teachers are often puzzled by these kinds of testing discrepancies, but they can happen for a lot of reasons.

Reading tests tend to be correlated with each other, but this kind of general performance agreement between two measures doesn’t mean that they would categorize student performance identically. Performing at the 35th percentile might earn a below-average designation on one test but an average one on the other. It is probably better to stay away from those designations and to use NCE scores or something else that is comparable across tests.
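NCE (normal curve equivalent) scores put both tests on the same equal-interval scale: a percentile rank is converted with the standard formula NCE = 50 + 21.06 × z, where z is the normal deviate for that percentile. A quick sketch using only the Python standard library (the percentile values here are illustrative):

```python
from statistics import NormalDist

def percentile_to_nce(percentile: float) -> float:
    """Convert a national percentile rank (1-99) to a Normal Curve
    Equivalent. NCEs have a mean of 50 and a standard deviation of
    21.06, which makes them comparable across different tests."""
    z = NormalDist().inv_cdf(percentile / 100)  # normal deviate for the percentile
    return 50 + 21.06 * z

# A student at the 35th percentile gets the same NCE regardless of
# which test produced that percentile.
print(round(percentile_to_nce(35), 1))  # about 41.9
print(round(percentile_to_nce(50), 1))  # 50.0
```

Note that NCEs assume the underlying percentiles come from nationally representative norms, so the norming-sample caveats below still apply.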

An important issue in test comparison is the norming samples that the tests use, and that is certainly the case with these two. The Terra Nova has a very large and diverse nationally representative norming sample (about 200,000 kids), while the WRMT is based on a much smaller group that may be skewed a bit toward struggling students (only 2,600 kids). When you say that someone is average or below average, you are comparing that student's performance with the norming group's. Because of its extensiveness, I would trust the Terra Nova norms more than the WRMT ones; the Terra Nova would likely give me a more accurate picture of where my students stand compared with the national population. The WRMT is useful because it provides greater information about how well kids are doing in particular skill areas, and it would help me track growth in those skills.

Another thing to think about is reliability. Find out the standard error of measurement for the tests you are giving and calculate 95% confidence intervals for the scores. Scores should be stated in terms of the range of performance that the score represents. Often you will find that the confidence intervals of the two tests are so wide that they overlap, which means that though the score difference looks big, the scores may not really differ. Let’s say that the standard error of one of the tests is 5 points (you need to look up the actual standard error in the manual), and that your student received a standard score of 100 on that test. The 95% confidence interval for this score would then be roughly 90-110 (in other words, if the student took this test over and over, about 95% of his scores would fall within that range). Now say that the standard error of the other test was 8 and that the student’s score on that test was 120. That looks pretty discrepant, but the confidence interval for that score is roughly 104-136. Because 90-110 (the confidence interval for the first test) overlaps with 104-136 (the confidence interval for the second), these scores look very different and yet they may actually be the same.
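The arithmetic above is simple enough to script. A minimal sketch in Python (the standard errors of 5 and 8 and the scores of 100 and 120 are just the illustrative numbers from the example; look up the real SEMs in your test manuals):

```python
def confidence_interval(score: float, sem: float, z: float = 1.96):
    """95% confidence interval for an observed standard score, given
    the test's standard error of measurement (SEM)."""
    margin = z * sem
    return (score - margin, score + margin)

def intervals_overlap(ci_a, ci_b) -> bool:
    """Two score ranges overlap when the larger of the lower bounds
    does not exceed the smaller of the upper bounds."""
    return max(ci_a[0], ci_b[0]) <= min(ci_a[1], ci_b[1])

# Test 1: score 100, SEM 5  ->  roughly 90-110
# Test 2: score 120, SEM 8  ->  roughly 104-136
ci_1 = confidence_interval(100, 5)
ci_2 = confidence_interval(120, 8)
print(ci_1, ci_2, intervals_overlap(ci_1, ci_2))
```

When `intervals_overlap` returns `True`, the two observed scores are consistent with the same underlying ability, which is exactly the explanation to give an administrator.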

There are also big differences in the tasks included in the two tests, and these can definitely make a difference in performance. Since the WRMT is given so often to lower-performing students, that test wouldn’t require especially demanding tasks to spread out performance, while the Terra Nova, given to a broader audience, needs a mix of easier and harder tasks (such as longer and more complex reading passages) to spread out student performance. These harder tasks push your kids lower in the group and may be so hard that it would be difficult to see short-term gains or improvements with such a test. The WRMT is often used to monitor gains, so it tends to be more sensitive to growth.

You didn’t mention which editions of the tests you were administering, but these tests are revised from time to time and the revisions matter. Studies comparing editions of such tests reveal big differences in performance from one edition to the next, despite the same test items being used: the revised versions changed their norming samples, and that alone altered test performance quite a bit (5-9 points). I think you would find the Terra Nova to have more stable scores, and yet comparing them across editions might reveal similar score inflation.

My advice is that when you want to show where students stand in the overall norm group, use only the Terra Nova data. Then use the WRMT to show where the students’ relative strengths and weaknesses are and to monitor growth in those skills. That means your message might be something like: “Tommy continues to perform at or near the 15th percentile when he is compared with his age mates across the country. Nevertheless, he has improved during the past three months in vocabulary and comprehension, though not enough to improve his overall position in the distribution.” In other words, his reading is improving and yet he remains behind 85% of his peers in these skills.


Monday, March 31, 2014

To Play or Not to Play (in K and Pre), That is the Question

During both my childhood and the early years of my teaching career “reading readiness” dominated. The idea was that if you taught kids reading too early, you would do damage. My kindergarten teacher warned Mom not to try to teach me anything, and we were still stalling when I taught first grade.

Recently, a study at the University of Virginia found that we now live in a different world. Most kindergarten teachers believe that they should teach reading, and that belief is pretty common in preschools, too. The headline in Education Week says it all: “Study Finds Reading Lessons Edging Out Kindergarten Play.”

I’ve been a big cheerleader for early reading instruction, and why not? The research is overwhelming. Despite theories that teaching reading early would damage kids, there is no empirical evidence supporting those claims. As Head Start kids have ramped up their literacy knowledge over the past several years, their emotional health has improved along with it. Hundreds of studies now show benefits to teaching kids early.

However, that doesn’t mean that kids shouldn’t be playing or that the preschool and kindergarten environments shouldn’t be encouraging and supportive. Too often I see kindergarten reading instruction that doesn’t match well with the research findings.

I would strongly encourage the kinds of play/literacy lessons that Susan Neuman has long championed. Have restaurants, newspaper publishers, post offices, and libraries set up in these classrooms and engage children in literacy play.

Of course, phonological awareness and phonics should be taught explicitly, but the research is very clear that this should be small-group work—engaging and interactive. (None of the studies with young kids in which decoding instruction was effective presented the lessons to whole classes.) Kids can respond in a variety of ways as well. If you are quizzing kids on whether they hear the same sounds at the beginnings of two words, they can jump or clap or rub their tummies to respond. Movement fits into such lessons really well, and various songs and language games can be used, too.

Encourage pretend reading and pretend writing and use techniques like language experience approach to introduce kids to text (and to encourage them to do their own writing). Label everything in classrooms, but involve kids in doing that.


My point is simply this: We should teach literacy in preschool and kindergarten, but play can be the basis of effective literacy lessons. Make literacy more playful in the early grades, and avoid turning kindergarten into a fourth-grade class for young’uns. It is not an either-or (despite the Ed Week headline); kids can play more and get more literacy instruction.

Tuesday, March 25, 2014

Indiana Drops Common Core

Yesterday, Indiana became the fifth state to choose not to teach to the Common Core standards (CCSS). Opponents of these shared standards have complained less about their content than about how they were adopted. Critics claim the federal government forced states to adopt these standards by advantaging them in the Race to the Top competition. Two problems with those claims: (1) Indiana didn’t compete for Race to the Top—so there was no federal gun to its head, and (2) states, like Indiana, that don’t adopt Common Core face absolutely no federal penalty.

Ironic. Indiana’s governor claims he’s regaining Indiana’s sovereignty, while his action itself reveals that its sovereignty was never at risk. It is a deft and subtle act of political courage when a politician stands up to someone who hasn’t challenged him. (President Obama could learn from this. Perhaps he would look better on the Ukrainian front if he would issue stern warnings to Canada or Bermuda. That’ll show them who’s boss!)

Why did Governor Pence pull the trigger on Common Core? He doesn’t seem to know. “By signing this legislation, Indiana has taken an important step forward in developing academic standards that are written by Hoosiers, for Hoosiers, and are uncommonly high, and I commend members of the General Assembly for their support,” Pence said in a press release. The tortured grammar aside—is it the standards or the Hoosiers who were uncommonly high?—this seems pretty clear.

But like many a “bold” politician of yore, the Guv went on to say, “Where we get those standards, where we derive them from to me is of less significance than we are actually serving the best interests of our kids. And are these standards going to be, to use my often used phrase, uncommonly high?” (I sure hope the new Indiana standards include grammar.)

In other words, Governor Pence dropped the CCSS standards because Hoosiers didn’t write them, but he doesn’t care where standards come from or who writes them. Maybe that’s why he has turned to a lifelong Kennedy-Democrat (Sandra Stotsky, not a Hoosier) to help him shape Indiana’s new educational standards. We all cheer for bipartisanship, but it is always startling to see Tea Party Conservatives and Massachusetts Liberals bedded down together.

What did the Guv get for his trouble? Dr. Stotsky publicly denounced the Hoosier draft for being too consistent with the CCSS standards. She wants Indiana teachers to teach different phonics, grammar, reading comprehension, and writing skills than those taught in the 49 other states (good luck with that).

Dr. Stotsky notes that the Indiana draft had a 70% overlap with the CCSS standards… but seemed to be silent about how much overlap there was among CCSS and the standards in Texas, Nebraska, Virginia, or Alaska; or with the previous and clearly inferior Indiana standards that she apparently advised on; or with the previous Massachusetts standards that she has championed. I guess that just shows that academics can be as slippery as politicians when they think they have a spotlight.


I support the CCSS standards because they are the best reading standards I’ve ever seen (and, yes, I am aware of their limitations and flaws). But if anyone comes up with better standards, I’d gladly support those, too (no matter how uncommonly high the Hoosiers might have been who wrote them).

Saturday, March 8, 2014

Is Amount of Reading Instruction a Panacea?

Recently, Education Week published an interesting piece about a Florida program aimed at extending the school days of children in the 100 lowest-performing elementary schools in the state. These schools were mandated to add an extra hour of reading instruction to their days. The result: 75% of the schools improved their reading scores, 70 of them coming off the lowest-performing list.


Duh!

Those who know my work in the schools are aware that amount of instruction is always the first thing that I look at. When I was the director of reading in the Chicago Public Schools it was one of my major mandates. Research overwhelmingly shows that more instruction tends to lead to more learning, and many supposedly research-proven programs obtain their advantages from, you guessed it, offering more teaching than kids will get in the control group.  

But the Ed Week article went on to point out that most of these extra-hour schools were still underperforming demographically matched schools and that 30 of them were still low performing.

Why doesn’t added time always work if it is such a no-brainer?

There are at least a few reasons.

First, time is not a variable. It is a measure or a dosage. Scientists abhor the idea of treating time as a variable. Long ago, the best minds thought iron rusted because of time. Eventually, they figured out that rust is due to exposure to moisture, and that time was a measure of how much moisture the iron was touching. More time meant more moisture.

In education, time is a measure of the amount of curriculum—explanation and practice—that children are exposed to. It is the curriculum and how it is taught that make the difference; time is simply a measure of that.

What if a curriculum is not sound? That is, what if being exposed to it does not usually lead someone to read, or it repeats lessons students have already mastered, or it fails to offer sufficient practice? An extra hour of something that doesn’t work won’t improve things. Time is just a measure, right? An hour of low-quality teaching is an hour wasted.

Another problem is whether a mandated hour is actually an hour. Reading First, a federal initiative under No Child Left Behind, required that teachers provide 90 minutes per day of reading instruction. But classroom observers found a lot less than that in Reading First classrooms. Kids in those classrooms spent a lot of time waiting for instruction rather than being instructed.

Teachers don’t always appreciate how powerful their time with kids can be, so they are wasteful of the minutes. Do some self-observation of this and you’ll see what I mean. Thus, the schools stay open. The buses pick kids up an hour later. The teachers and kids are in the classroom. But reading instruction, not so much.

Finally, an extra hour may not equalize performance simply because it may be insufficient. We don’t know how much instruction and practice in reading anyone is getting. How much time is devoted to teaching reading during the school day? How much reading do students do in math, social studies, and science classes? Research studies show big differences in amount of reading instruction in school-to-school and even classroom-to-classroom comparisons.

How much do students read at home? How much time do they spend on the kinds of homework that make a difference? How much language development opportunity do they get before they come to school? What kinds of activities do they engage in through their libraries, parks, churches, synagogues, scouts, etc.?


The fact is that some students receive thousands of hours of instruction and practice in language and literacy each year, while others receive considerably less. An extra hour per day is precious (thank you, Florida), but it simply may be insufficient to overcome the huge differences that exist.