
Sunday, May 22, 2016

How Can Reading Coaches Raise Reading Achievement?

Teacher question:

I have just been hired as a reading coach in a school where I have been a third-grade teacher. My principal wants me to raise reading achievement and he says that he’ll follow my lead. I think I’m a good teacher, but what does it take to raise reading achievement in a whole school (K-5) with 24 teachers?

Shanahan's answer:

            It’s easy. ☺ Just do the following 9 things:

1.    Improve leadership.
            Literacy leadership matters. You and your principal will need to be a team. The more the two of you know and agree upon the better. Over the next few years, your principal will be hiring and evaluating teachers, making placement and purchasing decisions, and communicating with the community. You need to be in on some of those things and you need to influence all of them. Your principal should tell the faculty that you speak for him on literacy matters and you both need to devote some time to increasing his literacy knowledge so he can understand and support your recommendations. I’d get on his calendar at least a couple of times per week to discuss strategy and debrief on what you are both doing, but also for professional development time for him.

2.    Increase the amount of literacy instruction.

            How much reading and writing instruction and practice kids get is critical. Take a close look at how much of this kids are getting. Observe, talk to teachers, survey… find out how much teaching is being provided and how much reading the kids do within this teaching. Be on the lookout for lost time. Mrs. Smith may schedule two hours of ELA, but she doesn’t start class until 9:12 most mornings due to late bus drop-offs, milk money collection, the Pledge of Allegiance, morning announcements, and so on. And her class takes a 7-minute bathroom break at about 10 each morning. She isn’t teaching for 2 hours, but only 1 hour 41 minutes (and the actual amount of instruction may be even less). That’s nearly 60 hours less instruction per year than what she schedules! Try to get everyone up to 2-3 hours per day of reading and writing instruction, with a large percentage of that devoted to kids reading and writing within instruction (and, yes, a student reading aloud to the group only counts as one student reading).
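
            If it helps to see where that figure comes from, here is the quick arithmetic (the 180-day school year is my assumption, not a number from the example):

```python
# Back-of-the-envelope check on the lost-time figure.
# Assumes a 180-day school year (an assumption, not a given).
late_start_minutes = 12      # class starts at 9:12 instead of 9:00
bathroom_break_minutes = 7   # mid-morning break
days_per_year = 180

lost_hours = (late_start_minutes + bathroom_break_minutes) * days_per_year / 60
print(lost_hours)  # 57.0 -- nearly 60 hours of scheduled ELA time gone each year
```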

3.    Focus instruction on essential curriculum elements.

            ELA time often is used for wonderful things that don’t make much difference in kids’ learning. I watched a “phonics lesson” recently in which most of the time was spent on cutting out pictures and pasting them to a page. The amount of sounding and matching letters to sounds could have been accomplished within about 30 seconds of this 20-minute diversion. You definitely can send kids off to read on their own, but not much learning is usually derived from this. Instead, make a commitment to providing substantial instruction in each of the following research-proven components for every child.

(a) Teach students to read and understand the meanings of words and parts of words (decoding and word meaning): Dedicate time to teaching students phonological awareness (K-1, and strugglers low in those skills); phonics or decoding (K-2, or again the strugglers); sight vocabulary (high frequency words, K-2); spelling (usually linked to the decoding or word meanings); word meanings; and morphology (meaningful parts of words).   

(b) Teach students to read text aloud with fluency so that it sounds like language (accuracy—reading the author’s words as written; appropriate speed—about the speed one talks normally; and proper prosody or expression—pausing appropriately, etc.).

(c) Teach students to read with understanding and the ability to learn from text. With beginning readers this, like fluency practice, needs to be oral reading. However, by the end of Grade 1 and from then on, most reading for comprehension should be silent reading. Such instruction should teach students about text (like how it is organized, how authors put themes in stories, or how history books differ from science books), about the kinds of information that are important (like main ideas or inferences), and about ways to think about texts that will increase understanding (like summarizing along the way, or asking oneself questions about a text).

(d) Teach students to write effectively. This would include training students in various means of getting their ideas onto paper (printing, handwriting, and keyboarding), but also teaching them to write for various purposes (narration, exposition, argument), to negotiate the writing process effectively (planning, drafting, revising, editing), to write for a range of audiences, and to write powerful pieces (with interesting introductions, strong organization, sufficient amounts of accurate information, etc.).

            All four of those are detailed in your state standards, no matter where you live, but make sure that kids get lots of teaching in each. (I’d strive to put roughly 25% of the instructional time into each of those baskets—that comes out to approximately 90-135 hours per year of instruction in each of those 4 things.)
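
            And here is the same sort of quick check on the 90-135 hour figure, again assuming a 180-day school year:

```python
# 2-3 hours of ELA per day, split evenly across the four components.
# The 180-day school year is an assumption.
days_per_year = 180
for daily_hours in (2, 3):
    hours_per_component = daily_hours * days_per_year * 0.25
    print(daily_hours, "hours/day ->", hours_per_component, "hours per component per year")
# prints 90.0 and 135.0, matching the range above
```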

4.    Provide focused professional development.

            I suspect this will be where much of your time is focused: making sure your teachers know how to teach those four essentials well. This might take the form of professional development workshops on particular topics, organizing teacher reading groups to pursue particular instructional issues, observing teachers and giving them feedback on their lessons, co-planning lessons with one or more teachers, providing demonstration lessons, and so on. You need to make sure that every one of your teachers knows what needs to be taught and how to teach it well.

5.    Make sure sound instructional programs are in place.

            It is possible to teach reading effectively without a commercial program, but there are serious drawbacks to that approach. First, there’s the fairness issue. Programs that are shared by school staff will not make all teachers equal in their ability to teach reading, but they sure can reduce the amount of difference that exists (especially when there is adequate supervision and professional development—see numbers 1 and 4 above). Second, programs can ensure that kids get instruction in key areas of reading, even when teachers aren’t comfortable providing such teaching. Basically, we want to ensure that every teacher has an adequate set of lessons for productive instruction in those four key components for sufficient amounts of time. If your teachers are skilled enough to improve upon the lessons in the shared core program, then by all means support these improvements and make sure they’re shared widely.

6.    Align assessments.

            It can be helpful to monitor kids’ learning, at least in basic skills areas that are amenable to easy assessment. It is reasonable, depending on the tests and the skills, to evaluate decoding or fluency formally 2-4 times per year. Of course, teachers can collect such information within instruction much more often than that. For instance, if a teacher is going to teach fluency for several minutes per day, why not take notes on how well individuals do with this practice and keep track of that over weeks? In any event, if we recognize that some students are not making adequate progress in these basic skills, then increasing the amount of teaching they get within class or beyond class can be sensible. The amount of testing needs to be kept to an absolute minimum so that the time can be used to improve reading.

7.    Target needs of special populations.

            Often there are particular groups of kids who struggle more than others within your ELA program. Two obvious groups are second-language learners (who may struggle with academics because they are still learning English) and kids with disabilities (who struggle to learn written language). Making sure that they get extra assistance within class when possible, and beyond class (through special classes, afterschool and summer programs, etc.), would make great sense. If you are making sure that everyone in the school benefits from 2 hours per day of real reading and writing instruction, then why not try to build programs that would ensure that these strugglers and stragglers get even more? I know one coach who runs an afterschool fluency program, for instance.

8.    Get parent support and help.

            Research says parents can help and that they often do. I suggest trying to enlist their help from the beginning. Many coaches do hold parent workshops about how to read to their kids, how to listen effectively to their children’s reading, how to help with homework, etc. Lots of times teachers tell me that those workshops are great, but that the parents they most wish would attend don’t show up. Don’t be discouraged. Sometimes those parents don’t get the notices (perhaps you could call them), or they work odd schedules (sometimes meetings during the school day are best for them—perhaps close to the time they have to pick their kids up from school), or they need babysitting support or translation (those can be worked out, too).

9.    Motivate everybody.

            Just like leadership (#1 above) is necessary to get any of these points accomplished, so is motivation. You have to be the number one cheerleader for every teacher’s reading instruction, for every parent’s involvement, and for every student’s learning gains. Information about what your school is up to has to be communicated to the community so that everyone can take part. Some coaches hold reading parades in their neighborhoods, others have regular reading nights where kids in pajamas come to school with mom and dad to participate in reading activities, and there are young author events, lunchtime book clubs, million minute reading challenges, and so on. You know, whatever it takes to keep everyone’s head in the game.

            Like I said, raising reading achievement is easy. You just have to know everything, get along with everybody, work like a horse, and keep smiling.  

Saturday, May 7, 2016

What doesn’t belong here? On Teaching Nonsense Words

            Obviously you shouldn’t wear an especially short skirt to work, though it might be fine for a night of bar hopping. It would just be out of place. Lil Wayne can do rap, but he’d definitely be out of place at a Gospel convention, sort of like a love affair with a happy ending in a Taylor Swift lyric.

            So what’s out of place in reading education?

            My nominee is the act of teaching kids to read nonsense words. Don’t do it. It don’t belong (it may even be worse than orange and green).

            Why, you might ask, would anyone teach nonsense words? I attribute this all-too-common error to a serious misunderstanding of tests and testing.

            Many years ago researchers were interested in determining how well kids could decode. They decided upon lists of words that were graded in difficulty. The more words a student could read accurately, the better we assumed his or her decoding must be.

            But, then they started to think: It’s possible for kids to memorize a bunch of words. In fact, with certain high frequency words we tell kids to memorize them. If I flash the word “of” to a student and he/she reads it correctly, that might not be due to better phonics skills, but just because Johnny had that one drilled into long-term memory.

            That means that with word tests we can never be sure of how well kids can decode.

            The solution: nonsense word tests. If we give kids lists of nonsense words, that is, combinations of letters that fit English spelling patterns but that aren’t really words, then if students can read them they must have decoding skills, because no one in their right mind would teach these made-up letter combinations to children.

            Enter tests like the DIBELS decoding measure: tests designed to quickly determine who needs more help with decoding. These aren’t tests aimed at evaluating programs or teachers; they are diagnostic.

            These tests work pretty well, too. Studies show a high correlation between performance on nonsense words and real words, and some of the time the nonsense word scores are more closely related to reading achievement than the real word scores!

            But many schools are now using these to make judgments about teachers.

            And, the teachers’ reaction has been to teach nonsense words to the kids. Not just any nonsense words either; the specific nonsense words that show up on DIBELS. That means these teachers are making the test worthless. If kids are memorizing pronunciations for those nonsense words, then the tests no longer can tell how well the kids can decode.

            We can do better. Please do not use these kinds of tests to make judgments about teachers; it just encourages foolish responses on their part. And please do not teach these nonsense words to the kids. It is harmful to them. It definitely doesn’t belong here.


Sunday, November 29, 2015

On Progress Monitoring, Maze Tests, and Reading Comprehension Assessment

Teacher question:
I am looking for some insight on the use of mazes to progress monitor reading comprehension.  I teach in a middle school (6-8) and am struggling with using this to measure reading comprehension with fluent readers. So much of their reading comprehension in class is measured by determining main idea, recalling basic facts, inferencing, and analyzing the use of literary elements. It seems that when the maze is used to monitor reading comprehension, it doesn’t offer much information about the reader. Often students rush through it and circle words just to complete it in the time allotted and score exactly the same as students who are reading and choosing the correct word, but do not complete it in the allotted time. It seems like student motivation is a critical component of the accuracy of these scores.

Is the maze an effective way to measure passage comprehension, or is it simply a way to measure sentence comprehension? Do you have any suggestions on what else could be used? I appreciate your help with this and look forward to your response.

Shanahan responds:
            John Guthrie developed maze in the 1970s to determine how well students could read particular texts. Let’s say you have a 7th grade science book and want to know who in your class is likely to struggle with that book. 

            To figure this out you'd test students on several passages from that science book. According to Guthrie, students who score 50% or higher on maze should be able to handle this book. 

            The benefit of maze is that it is easy to construct, administer, and score, and maze results are reasonably accurate and reliable. (To design a maze test, you select a passage of 150-200 words in length, delete a word from the second sentence and every 5th or 7th word after that, and provide the students with three word choices in random order: the correct word, a word that is the same part of speech but incorrect, and a word that is the wrong part of speech.)
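
            If a concrete picture helps, here is a minimal sketch of that construction in Python. It is not anyone's official maze generator; the two distractor functions are hypothetical placeholders you would have to supply yourself (from word lists, a part-of-speech tagger, or by hand):

```python
import random

def build_maze(passage, same_pos_distractor, diff_pos_distractor, gap=5):
    """Sketch of maze construction as described above: keep the first
    sentence intact, then turn a word in the second sentence and every
    gap-th word after it into a three-choice item (the original word
    plus two caller-supplied distractors), with choices in random order."""
    words = passage.split()
    # First word after the end of the first sentence = start of the second sentence.
    start = next(i + 1 for i, w in enumerate(words) if w[-1] in '.!?')
    display, items = [], []
    for i, word in enumerate(words):
        if i >= start and (i - start) % gap == 0:
            choices = [word, same_pos_distractor(word), diff_pos_distractor(word)]
            random.shuffle(choices)
            items.append({'answer': word, 'choices': choices})
            display.append('(____)')
        else:
            display.append(word)
    return ' '.join(display), items

def maze_score(responses, items):
    """Percent correct; the 50% criterion mentioned above is the usual cutoff."""
    correct = sum(r == item['answer'] for r, item in zip(responses, items))
    return 100 * correct / len(items)
```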

            As you point out, maze tells you nothing about what comprehension skills students have or how well they can answer certain kinds of questions. However, question-and-answer comprehension tests can’t tell you that either, so switching tests won't solve that problem for you.

            I was at the University of Delaware during the 1970s where John Guthrie was working at the time. He'd told the late Aileen Tobin, my office mate, a funny thing about maze. He told her that they had tried it out with individual sentences and with passages (as described above) and it didn’t make any difference. Even when sentences were presented randomly students seemed to perform equally well.

            We laughed a lot about that. It just didn't make sense to us. We wondered if that was also true of other popular measures such as cloze tests. (Cloze is similar to maze, but harder to administer because instead of multiple-choice it requires students to fill in the blanks.)

            Our banter over this issue ended up in a series of research studies that I carried out. We found just what you surmised. Students performed as well on passages in their original order as on passages in which we had scrambled the order of the sentences. Imagine reading Moby Dick, starting with sentence 16, then 5, then 32, then 1, etc. (Randomizing sentence order doesn't hurt maze or cloze performance, but it wreaks havoc on summary writing.)

            I also found that cloze correlated best with multiple-choice reading comprehension tests that asked questions based on information from single sentences. Correlations were lower if students had to synthesize information across the passage.

            Cloze and maze tests provide reasonable predictions of reading comprehension, but they do this based on how well students interpret single sentences. For most readers, the prediction works because it is unusual that someone develops the ability to read sentences without developing the ability to read texts.  

            If you want to know who is going to struggle with your literature anthology, maze can be a tool that will help you to accomplish that. If you want to identify specific reading comprehension skills so you can provide appropriate practice, maze won’t help, but neither will the testing alternatives that you could consider.

            You say you want to monitor your students’ reading comprehension. I suspect that means you need a way of determining at various points during the year whether your students are reading better. For this, I would suggest that you use a collection of graded passages (using Lexiles or some other text evaluation method to put these on a difficulty continuum). Identify the levels of difficulty your students can handle successfully (this could be done with maze tests of those passages), and then later in the year, check to see if the students can now handle passages that are even harder. 

          Monitoring comprehension in this sense means tracking not which specific skills have been accomplished, but what complexity of text language students can negotiate. Perhaps early in the year, your students will be able to score 50% or higher with texts written at 800 Lexiles. By mid-year you'd want them to score like that with harder passages (e.g., 900L-950L). That kind of testing regimen would allow you to identify who is improving and who is not.
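
          A small sketch of what that record-keeping might look like (the band labels and scores below are made up for illustration; the 50% cutoff is Guthrie's criterion mentioned earlier):

```python
def hardest_passable_band(maze_scores, criterion=50):
    """Given maze scores (percent correct) keyed by passage difficulty
    (e.g., Lexile bands), return the hardest band the student handled
    at or above the criterion, or None if no band was passed."""
    passed = [band for band, score in maze_scores.items() if score >= criterion]
    return max(passed) if passed else None

# Hypothetical student, tested in the fall and again at mid-year.
fall = {700: 80, 800: 55, 900: 40}
midyear = {800: 70, 900: 52, 1000: 35}
print(hardest_passable_band(fall), hardest_passable_band(midyear))  # 800 900
```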


Tuesday, September 22, 2015

Does Formative Assessment Improve Reading Achievement?

                        Today I was talking to a group of educators from several states. The focus was on adolescent literacy. We were discussing the fact that various programs, initiatives, and documents—all supposedly research-based efforts—were promoting the idea that teachers should collect formative assessment data.

            I pointed out that there wasn’t any evidence that it actually works at improving reading achievement with older students.

            I see the benefit of such assessment or “pretesting” when dealing with the learning of a particular topic or curriculum content. Testing kids on what they know about a topic may allow a teacher to skip some topics or to identify topics that may require more extensive classroom coverage than originally assumed.

            It even seems to make sense with certain beginning reading skills (e.g., letter names, phonological awareness, decoding, oral reading fluency). Various tests of these skills can help teachers to target instruction so no one slips by without mastering these essential skills. I can’t find any research studies showing that this actually works, but I myself have seen the success of such practices in many schools. (Sad to say, I’ve also seen teachers reduce the amount of teaching they provide in skills that aren’t so easily tested—like comprehension and writing—in favor of these more easily assessed topics.)

            However, “reading” and “writing” are more than those specific skills—especially as students advance up the grades. Reading Next (2004), for example, encourages the idea of formative assessment with adolescents to promote higher literacy. I can’t find any studies that support (or refute) the idea of using formative assessment to advance literacy learning at these levels, and unlike with the specific skills, I’m skeptical about this recommendation.

            I’m not arguing against teachers paying attention… “I’m teaching a lesson and I notice that many of my students are struggling to make sense of the chemistry book, so I change my upcoming lessons, providing a greater amount of scaffolding to ensure that they are successful.” Or, even more likely… I’m delivering a lesson and can see that the kids aren’t getting it, so tomorrow we revisit the lesson.

            Those kinds of observations and on-the-fly adjustments may be all that is implied by the idea of “formative assessment.” If so, it is obviously sensible, and it isn’t likely to garner much research evidence.

            However, I suspect the idea is meant to be more sophisticated and elaborate than that. If so, I wouldn’t encourage it. It is hard for me to imagine what kinds of assessment data would be collected about reading in these upper grades, and how content teachers would ever use that information productively in a 42-minute period with a daily case load of 150 students.

            A lot of what seems to be promoted these days as formative assessment is getting a snapshot of a school’s reading performance level, so that teachers and principals can see how much gain the students make in the course of the school year (in fact, I heard several of these examples today). That isn’t really formative assessment by any definition that I’m aware of. That is just a kind of benchmarking to keep the teachers focused. Nothing wrong with that… but you certainly don’t need to test 800 kids to get such a number (a randomized sample would provide the same information a lot more efficiently).
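
            For what it’s worth, the sampling point is easy to demonstrate. The numbers below are invented, but a modest random sample gets you essentially the same school-wide average as testing everyone:

```python
import random
import statistics

random.seed(0)
# Hypothetical reading scores for all 800 students (made-up data).
all_scores = [random.gauss(650, 100) for _ in range(800)]

# A random sample of 100 estimates the school-wide mean quite closely.
sample = random.sample(all_scores, 100)
print(round(statistics.mean(all_scores)), round(statistics.mean(sample)))
```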

            Of course, many of the computer instruction programs provide a formative assessment placement test that supposedly identifies the skills that students lack so they can be guided through the program lessons. Thus, a test might have students engaged in a timed task of filling out a cloze passage. Then the instruction has kids practicing this kind of task. Makes sense to align the assessment and the instruction, right? But cloze has a rather shaky relationship with general reading comprehension, so improving student performance on that kind of task doesn’t necessarily mean that these students are becoming more college and career ready. Few secondary teachers and principals are savvy about the nature of reading instruction, so they get mesmerized by the fact that “formative assessment”—a key feature of quality reading instruction—is being provided, and the “gains” that they may see are encouraging. That these gains may reflect nothing that matters would likely never occur to them: if it looks like reading instruction, it must be reading instruction.

            One could determine the value of such lessons by using other outcome measures that are more in line with the kinds of literacy one sees in college, as well as in the civic, familial, and economic lives of adults. And one could determine the value of the formative assessments included in such programs by having some groups use the program while following the diagnostic guidance based on the testing, and having other groups just use the program following a set grade-level sequence of practice. I haven’t been able to find any such studies on reading, so I guess we have to take the value of this pretesting on faith.

            Testing less—even for formative purposes—and teaching more seems to me to be the best way forward in most situations.