Sunday, April 10, 2011

What is the Biggest Literacy Teaching Myth in 2011?

While in graduate school, I worked with Jack Pikulski and became interested in the theory of instructional level: the idea that text has a particular level of difficulty and that students learn best when they are matched with text in a particular way. If text is too hard, they won’t learn to read; if it is too easy, they won’t make any progress. The difficulty levels in between those extremes (usually a range of levels rather than a single level) are thought to be the levels at which instructional progress is optimal.

It makes logical sense. If text is too easy, there is nothing to be learned from it, and if it is too hard, it would be like trying to catch knives.

And yet, I was surprised to find that text difficulty is hard to measure exactly (our measures have improved a bit since I was in grad school), and that our measures of readers’ proficiency were pretty approximate too (this hasn’t improved much). The biggest surprise was the lack of clear research evidence showing the benefits of matching texts to kids (Jack tried such a study when I was there, but it fell apart over reliability issues and was never published).

As a young professor, I wrote about how instructional level theory had entered the field seemingly through research (at least that was the claim), but I revealed that research base to be a chimera.

In the 1980s, whole-language-influenced school books emerged. The state of California required the use of previously published literature as the basis of reading instruction (no research supporting that idea either) and banned any adaptation of such literature. So publishers couldn’t adjust the readabilities of reading books, as they had with high school textbooks, and text levels got hard for a while. So hard, in fact, that kids had trouble learning to read, especially first-graders. Teachers met the challenge by reading the books to the kids rather than having them do the reading themselves. Parents and grandparents rebelled: their older children had learned to read books that hadn’t first been read to them, so why couldn’t this younger group?

One offshoot of this debacle was the growth of “guided reading” as an approach to teaching. Teachers certainly have preferred it to throwing kids in the deep end while fervently hoping mom and dad had already taught them to swim (a pretty good summary of the whole language ideology of that time). Fountas and Pinnell came up with a weakly validated measure of text difficulty and claimed that kids had to be matched to it to succeed. They counseled minimizing explicit teaching and encouraged teachers simply to have children read texts at the correct level, claiming that learning would happen for most as they read those matched books (to their credit, they did support providing explicit help when progress did not ensue automatically).

Given how widely used guided reading is, and how much sense it makes, particularly for beginning readers, one would think we would have many studies showing the benefits of such an approach. In fact, the data are murkier than when I was in graduate school. It is not that various studies (such as those by Alissa Morgan, Renata O’Connor, and William Powell) haven’t pointed to optimum book-student matches, but that they have all pointed in different directions.

Now, the common core standards are insisting that text difficulties be stiffened and that teachers not just move kids to easier books when the going gets tough. My fear, of course, is that such a fiat could simply lead us back to the 1980s, with teachers reading hard books to kids (guided reading is obviously preferable to that).

First, the common core is probably setting levels that are too hard for beginners. There is a lot to be figured out by those kids with regard to decoding, and overwhelming them with really hard books is not going to facilitate their phonics progress. I hope we can persuade publishers and school districts to allow the path to be smoothed a bit for the little ones (I think they’ll progress faster under those circumstances). Second, for older students, the common core highlights some pretty important ideas: (1) that there is no particular level of text difficulty that has been consistently identified by research as being optimum; (2) that always having students read text on their so-called reading level is like relegating them to training wheels forever; and (3) that most teachers don’t have a clue as to how to scaffold children’s learning from hard books. Mandate whatever you want; it won’t make teachers any better at implementing it.

Later entries to this blog will pursue this idea, as teachers are going to have to grow new wings if they are going to make this flight successfully.

Monday, September 28, 2009

Putting Students into Books for Instruction

This weekend, there was a flurry of discussion on the National Reading Conference listserv about how to place students in books for reading instruction. This idea goes back to Emmett Betts in 1946. Despite that long history, there hasn’t been a great deal of research into the issue, so there are lots of opinions and insights. I tend to lurk on these listservs rather than participate, but this one really intrigued me as it explored a lot of important ideas. Here are a few.

Which ways of indicating book difficulty work best?
This question came up because the inquirer wondered if it mattered whether she used Lexiles, Reading Recovery, or Fountas and Pinnell levels. The various responses suggested a whiff of bias against Lexiles (or, actually, against traditional measures of readability including Lexiles).

So are all the measures of book difficulty the same? Well, they are and they’re not. It is certainly true that historically most measures of readability (including Lexiles) come down to two measurements: word difficulty and sentence difficulty. These factors are weighted and combined to predict some criterion. Although Lexiles include the same components as traditional readability formulas, they predict different criteria: Lexiles are lined up with an extensive database of test performance, while most previous formulas predict the levels of subjectively sequenced passages. Lexiles have also been more recently normed. One person pointed out that Lexiles and other traditional measures of readability tend to come out about the same (correlations of .77), which I think is correct, but because Lexiles use recent student reading as the criterion, I usually go with them when the estimates differ much.
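To make the “weighted and combined” part concrete: the Lexile formula itself is proprietary, but a classic public formula like Flesch-Kincaid shows the general shape, with one sentence-length factor and one word-difficulty factor (here, syllables per word). This is only a rough Python sketch for illustration; the syllable counter is a crude vowel-group heuristic, not anything used in real readability tools.

    import re

    def count_syllables(word):
        # Crude estimate: count runs of consecutive vowels, minimum one.
        # Real tools use dictionaries; this is just for illustration.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        # Two factors, weighted and combined -- the same basic shape as
        # most traditional readability formulas. Assumes non-empty text.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        words_per_sentence = len(words) / len(sentences)
        syllables_per_word = syllables / len(words)
        return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

Run something like this on a primer sentence and then on a paragraph from a high school textbook and it will spread the two texts across many grade levels on sentence length and syllable counts alone, which is both the power of such formulas and their grossness as measures.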

Over the years, researchers have challenged readability because it is such a gross index of difficulty (obviously there is more to difficulty than sentences and words), but theoretically sound descriptions of text difficulty (such as those of Walter Kintsch and Arthur Graesser) haven’t led to appreciably better text difficulty estimates. Readability usually explains about 50% of the variation in text difficulty, and these more thorough and cumbersome measures don’t do much better.

One does see a lot of Fountas and Pinnell and Reading Recovery levels these days. Readability estimates are usually only accurate within about a year, and that is not precise enough to help a first-grade teacher match her kids with books. So these schemes claim to make finer distinctions in text difficulty early on, but those levels of accuracy are open to question (I only know of one study of this, and it was moderately positive), and there is no evidence that using such fine distinctions actually matters for student learning (there is some evidence of this with more traditional measures of readability).

If anything, I think these new schemes tend to put kids into more levels than necessary. They probably correlate reasonably well with readability estimates, and their finer-grained results probably are useful for early first grade, but I’d be hard pressed to say they are better than Lexiles or other readability formulas even at these levels (and they probably lead to over-grouping).

Why does readability work so poorly for this?
I’m not sure that it really does work poorly, despite the bias evident in the discussion. If you buy the notion that reading comprehension is a product of the interaction between the reader and the text (as most reading scholars do), why would you expect text measures alone to explain much more than half the variance in comprehension? In the early days of readability formula design, lots of text measures were tried, but those fell away as it became apparent that they were redundant and that two or three measures would suffice. The rest of the variation lies in children’s interests, knowledge of the topics, and the like (and in our ability to measure student reading levels).

Is the right level the one that students will comprehend best at?
One of the listserv participants wrote that the only point to all of this leveling was to get students into texts that they could understand. I think that is a mistake. Often that may be the reason for using readability, but it isn’t necessarily what teachers need to do. What a teacher wants to know is, “At what level will a child make optimum learning gains in my class?” If the child will learn better from something hard to comprehend, then, of course, we’d rather have them in that book.

The studies on this are interesting in that they suggest that sometimes you want students practicing with challenging text that may seem too hard (as during oral reading fluency practice) and other times you want them practicing with materials that are somewhat easier (as when you are teaching reading comprehension). That means we don’t necessarily want kids reading books at only one level: we should do something very different with a guided reading group that will discuss a story, with a paired reading activity in which kids do repeated reading, and with an independent reading recommendation for what a child might enjoy reading at home.

But isn’t this just a waste of time if it is this complicated?
I don’t think it is a waste of time. The research certainly supports the idea that students do better with some adjustment and book matching than they do when the whole class works at the same level.

However, the limitations in testing kids and testing texts should give one pause. It is important to see such data as a starting point only. By all means, test kids and use measures like Lexiles to make the best matches that you can. But don’t end up with too many groups (meaning that some kids will intentionally be placed in harder or easier materials than you might prefer), move kids if a placement turns out to be easier or harder day to day than the data predicted, and find ways to give kids experiences with varied levels of texts (from easy to challenging). Even when a student is well placed, there will still be selections that turn out to be too hard or too easy, and the amount of scaffolding and support will need adjusting. That means teachers need to pay attention to how kids are doing and respond to those needs to make sure the student makes progress (i.e., improves in what we are trying to teach).

If you want to know more about this kind of thing, I have added a book to my recommended list (at the right here). It is a book by Heidi Mesmer on how to match texts with kids. Good luck.