
Saturday, May 7, 2016

What doesn’t belong here? On Teaching Nonsense Words


            Obviously you shouldn’t wear an especially short skirt to work, though it might be fine for a night of bar hopping. It would just be out of place. Lil Wayne can do rap, but he’d definitely be out of place at a Gospel convention, sort of like a love affair with a happy ending in a Taylor Swift lyric.




            So what’s out of place in reading education?

            My nominee is the act of teaching kids to read nonsense words. Don’t do it. It doesn’t belong (it may even be worse than orange and green).

            Why, you might ask, would anyone teach nonsense words? I attribute this all-too-common error to a serious misunderstanding of tests and testing.

            Many years ago, researchers were interested in determining how well kids could decode. They settled on lists of words that were graded in difficulty. The more words a student could read accurately, the better we assumed his or her decoding must be.

            But then they started to think: it’s possible for kids to memorize a bunch of words. In fact, with certain high-frequency words we tell kids to memorize them. If I flash the word “of” to a student and he or she reads it correctly, that might not be due to better phonics skills, but just because Johnny had that one drilled into long-term memory.

            That means with word tests we can never be sure of how well kids can decode.
           
            The solution: nonsense word tests. If we give kids lists of nonsense words, that is, combinations of letters that fit English spelling patterns but that aren’t real words, then students who can read them must have decoding skills, because no one in their right mind would teach these made-up letter combinations to children.

            Enter tests like the DIBELS decoding measure: tests designed to help determine quickly who needs more help with decoding. These aren’t tests aimed at evaluating programs or teachers; they are diagnostic.

            These tests work pretty well, too. Studies show a high correlation between performance on nonsense words and real words, and some of the time the nonsense word scores are more closely related to reading achievement than the word test scores!

            But many schools are now using these to make judgments about teachers.

            And, the teachers’ reaction has been to teach nonsense words to the kids. Not just any nonsense words either; the specific nonsense words that show up on DIBELS. That means these teachers are making the test worthless. If kids are memorizing pronunciations for those nonsense words, then the tests no longer can tell how well the kids can decode.

            We can do better. Please do not use these kinds of tests to make judgments about teachers; it just encourages foolish responses on their part. And please do not teach these nonsense words to the kids. It is harmful to them. It definitely doesn’t belong here.

           

Monday, November 9, 2015

RtI: When Things Don't Work as You Expected

          When I arose today I saw lots of Twitter and Facebook entries about a new U.S. Department of Education study (the IES Study on RtI). Then I started getting emails from folks in the schools and in the state departments of education.

          “What’s going on here?” was the common refrain.

          Basically, the study looked at RtI programs in Grades 1 through 3. The reports say that the RtI interventions were lowering reading achievement in Grade 1, and that while they weren’t hurting the older kids, they weren’t helping them to read better either.

          The idea of RtI is a good one, but the bureaucratization of it was predictable. You can go back and look at the PowerPoint on this topic that I posted years ago.

           I’m not claiming that I predicted the failure of RtI programs. You might expect us to be surprised that research-based interventions aimed at struggling readers, with lots of assessment monitoring, harmed rather than helped kids. But I’m not.

          In fairness, this kind of thing can go either way. On the one hand, giving kids targeted instruction generally should improve achievement. On the other hand, that assumption depends on schools accurately identifying the kids and their reading problems, providing additional instruction aimed at helping these kids catch up, offering quality teaching of the needed skills (teaching that usually produces positive learning outcomes), and on the identification itself not causing damage (if kids feel marked as poor readers, that can become a self-fulfilling prophecy for 6-year-olds who are just trying to figure things out).

          When RtI was a hot topic I used to argue, somewhat tongue-in-cheek, for a 9-tier model; the point was that a more flexible and powerful system was going to be needed to make a real learning difference. If the identification of student learning needs is sloppy, or the “Tier 2” reading instruction just replaces an equivalent amount of “Tier 1” teaching, or the quality and intensity of instruction are not there… why would anyone expect RtI to be any better than what it replaced?

          Unfortunately, in a lot of schools that I visit, RtI has just been a new bureaucratic system for getting kids into special education. Instead of giving kids a plethora of IQ and reading tests in search of a discrepancy, we now find struggling readers, send them down the hall for part of their instructional day, test the hell out of them with tests that can’t possibly identify whether growth or learning is taking place, and move them lockstep through “research-based” instructional programs.

          In other words, the programs emphasize compliance rather than encouraging teachers to solve a problem.

          First, there is too much testing in RtI programs. These tests are not fine-grained enough to allow growth to be measured effectively more than 2-4 times per year (in some places I’m seeing the tests administered weekly, biweekly, and monthly, which is a real waste of time).

          Second, the tests are often not administered according to the standardized instructions (telling kids to read as fast as possible on a fluency test is stupid).

          Third, skills tests are very useful, but they can only reveal information about skills performance. Teaching only what can be tested easily is a foolish way to attack reading problems. Definitely use these tests to determine whether to offer extra teaching in phonological awareness, phonics, and oral reading fluency. But kids need work on reading comprehension and language as well, and those are not easily monitored. I would argue for a steady dose of teaching in the areas that we cannot test easily, and a variable amount of teaching of those skills that we can monitor.

          Fourth, the Tier 2 instruction should increase the amount of teaching that kids get. If a youngster is low in fluency or decoding, he should get additional fluency or decoding instruction. That means students should get the entire allotment of Tier 1 reading instruction, and then should get an additional dose of teaching on top of that.

          Fifth, it is a good idea to use programs that have worked elsewhere (“research based”). But that doesn’t mean the program will work for you. Teach that program like crazy with a lot of focus and intensity, just like in the schools/studies where it worked before—in fact, that’s likely why it worked elsewhere. Research-based doesn’t mean that it will work automatically; you have to make such programs work.

          Sixth, don’t put kids in an intervention and assume the problem is solved. The teacher should also beef up Tier 1 teaching, should steal extra instructional moments for these students in class, and should involve parents in their programs as well. What I’m suggesting is a full-court press aimed at making these struggling students successful—rather than a discrete, self-contained, narrow effort to improve things; Tier 2 interventions can be effective, but by themselves they can be a pretty thin strand for struggling readers to hang onto.


          I hope schools don’t drop RtI because of these research findings. But I also hope that they ramp up the quantity and quality of instruction to ensure that these efforts are successful.

Tuesday, September 22, 2015

Does Formative Assessment Improve Reading Achievement?

                        Today I was talking to a group of educators from several states. The focus was on adolescent literacy. We were discussing the fact that various programs, initiatives, and documents—all supposedly research-based efforts—were promoting the idea that teachers should collect formative assessment data.

            I pointed out that there wasn’t any evidence that it actually works at improving reading achievement with older students.

            I see the benefit of such assessment or “pretesting” when dealing with the learning of a particular topic or curriculum content. Testing kids on what they already know about a topic may allow a teacher to skip some topics or to identify topics that may require more extensive classroom coverage than originally assumed.

            It even seems to make sense with certain beginning reading skills (e.g., letter names, phonological awareness, decoding, oral reading fluency). Various tests of these skills can help teachers to target instruction so no one slips by without mastering these essential skills. I can’t find any research studies showing that this actually works, but I myself have seen the success of such practices in many schools. (Sad to say, I’ve also seen teachers reduce the amount of teaching they provide in skills that aren’t so easily tested, like comprehension and writing, in favor of these more easily assessed topics.)

            However, “reading” and “writing” are more than those specific skills—especially as students advance up the grades. Reading Next (2004), for example, encourages the idea of formative assessment with adolescents to promote higher literacy. I can’t find any studies that support (or refute) the idea of using formative assessment to advance literacy learning at these levels, and unlike with the specific skills, I’m skeptical about this recommendation.

            I’m not arguing against teachers paying attention… “I’m teaching a lesson and I notice that many of my students are struggling to make sense of the chemistry book, so I change up my upcoming lessons, providing a greater amount of scaffolding to ensure that they are successful.” Or, even more likely… I’m delivering a lesson and can see that the kids aren’t getting it, so tomorrow we revisit the lesson.

            Those kinds of observations and on-the-fly adjustments may be all that is implied by the idea of “formative assessment.” If so, it is obviously sensible, and it isn’t likely to garner much research evidence.

            However, I suspect the idea is meant to be more sophisticated and elaborate than that. If so, I wouldn’t encourage it. It is hard for me to imagine what kinds of assessment data would be collected about reading in these upper grades, and how content teachers would ever use that information productively in a 42-minute period with a daily case load of 150 students.

            A lot of what seems to be promoted these days as formative assessment is getting a snapshot of a school’s overall reading performance, so that teachers and principals can see how much gain the students make in the course of the school year (in fact, I heard several of these examples today). That isn’t really formative assessment by any definition that I’m aware of. That is just a kind of benchmarking to keep the teachers focused. Nothing wrong with that… but you certainly don’t need to test 800 kids to get such a number (a random sample would provide the same information a lot more efficiently).
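            To illustrate that sampling point, here is a minimal, purely hypothetical sketch in Python. The score distribution, school size, and sample size are invented for illustration only; they don’t come from any particular assessment program. It simply compares the mean from testing every student with the mean from a random sample of 100 students.

    import random
    import statistics

    # Hypothetical data: simulated reading scores for a school of 800 students.
    # The distribution (mean 220, SD 15) is made up purely for illustration.
    random.seed(42)
    school_scores = [random.gauss(220, 15) for _ in range(800)]

    # Benchmark estimate from a random sample of 100 students
    # instead of testing all 800.
    sample = random.sample(school_scores, 100)

    print(f"Mean from testing all 800 students: {statistics.mean(school_scores):.1f}")
    print(f"Mean from a random sample of 100:   {statistics.mean(sample):.1f}")

            Run this and the two means typically differ by only a point or two, which is why a random sample can give a school its benchmark number without putting every student through the test.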

            Of course, many of the computer instruction programs provide a formative assessment placement test that supposedly identifies the skills that students lack so they can be guided through the program lessons. Thus, a test might have students engaged in a timed task of filling out a cloze passage. Then the instruction has kids practicing this kind of task. Makes sense to align the assessment and the instruction, right? But cloze has a rather shaky relationship with general reading comprehension, so improving student performance on that kind of task doesn’t necessarily mean that these students are becoming more college and career ready. Few secondary teachers and principals are savvy about the nature of reading instruction, so they get mesmerized by the fact that “formative assessment” (a key feature of quality reading instruction) is being provided, and the “gains” that they may see are encouraging. That these gains may reflect nothing that matters would likely never occur to them; if it looks like reading instruction, it must be reading instruction.

            One could determine the value of such lessons by using other outcome measures that are more in line with the kinds of literacy one sees in college, as well as in the civic, familial, and economic lives of adults. And, one could determine the value of the formative assessments included in such programs by having some groups use the program following the diagnostic guidance based on the testing, and having other groups use the program following a set grade-level sequence of practice. I haven’t been able to find any such studies on reading, so I guess we have to take the value of this pretesting on faith.

            Testing less—even for formative purposes—and teaching more seems to me to be the best way forward in most situations.