Tuesday, April 22, 2014

Razing Standards?

I've been working (as a visiting research professor at Queen's University Belfast) and vacationing in Ireland for the past few weeks. From the Emerald Isle I've been keeping tabs on the ongoing, embarrassing political mischief aimed at keeping America firmly entrenched in the middle educational ranks ("We're 25th, we're 25th!").

I certainly understand those who oppose the CCSS standards because of fears that they might cost some money to accomplish or that they might require us -- the students, teachers, parents, and political leaders -- to work harder, the way people have worked harder in all those countries that have sped past us on the education interstate during the past decade or two. I mean, who wants to invest when you can spend? And who wants to work at making things better when you can sit on your duff and collect a government paycheck? Let's face it, there are no prizes for being the hardest-working governor. Keep standards low and your state will be sure to reach them... hell, you've probably reached them already.

That's why Indiana is regaining jobs so fast after the 2008 downturn. Not. If your kids can't read or do math as well as the kids in China, Singapore, South Korea, Hong Kong, Finland, Massachusetts, etc. you can't expect employers to flock to your state to set up businesses. But who'd want businesses in places like Indiana?

No, I don't have any problems understanding that kind of opposition, because it is self-interested. Immediate self-interest, no matter how callow, is always understandable.

I have more trouble with the looney tunes who have decided that, no matter how bad the educational status quo is, they are for it. Conservatives who have screamed for years about the need for privatization because government schools aren't getting the job done are now pontificating on the importance of maintaining our current low educational standards in government-supported schools (I mean, you either think government programs--like public education--are a boondoggle, or you don't). I almost suffer from whiplash when I hear political conservatives shouting about the need to maintain the status quo when it comes to public education.

I'm just as amazed about the cartoon figures on the left as well. You know the ones I mean (the ones who are arguing that unemployment is a problem, but the 1 million unfilled jobs in America is not). They want equality for all sexual persuasions, races, ethnicities, languages, and legal statuses--until someone tries to do anything to shrink the educational differences among those groups. According to these geniuses, if you set high educational standards, you are doing it to emphasize existing differences.
The best thing I've read about CCSS since coming here is David Brooks' recent column in the NY Times. It is a must read. Mr. Brooks rightly blames kooks on both the left and right for these harmful political shenanigans. Here's the link:
When the Circus Descends by David Brooks

Brooks' notion that the circus has come to town is a good one. In fact, I've carried a similar image for the past few months. Imagine a brightly-colored Volkswagen. A clown emerges who looks remarkably like Glenn Beck, and then various grease-painted governors and leaders of special interest groups follow in their turn.

Of course, the question you find yourself asking is, "How many anti-CCSS clowns can you get into a Volkswagen?"

But the real question should be, "Why? Why would so many clowns fight so hard to maintain the status quo of low educational standards?"  

Thursday, April 17, 2014

Grading Reading Performance Under Common Core

I have a question that many teachers have asked, and I would like your help in thinking through the grading process for the Common Core. How might children receive grades on the many standards without being given a test? Teachers are doing a lot of processing of text together as a class or in partners, so they are wondering about accountability for the students and how to arrive at a grade that measures their knowledge.

Good question.

Remember there are lots of parts of Common Core, so if you are an elementary teacher and you are teaching foundational skills (e.g., phonological awareness, phonics, oral reading fluency), then using one of the many test instruments (e.g., PALS, DIBELS, AIMS-WEB) still might be a useful way to go to get a sense of where your kids stand.

However, we don’t have good tests of reading comprehension that can be given quickly and that provide that kind of information, so teacher judgment will certainly be necessary. That doesn’t mean that you shouldn’t use the unit tests in your core program—those might help inform your decisions—but ultimately you are going to have to depend on your evaluation of student performance when students are writing about or discussing text.

I would strongly urge you NOT to try to give students scores on each of the standards. That wouldn’t make much sense, and I don’t believe that you could do it reliably (nor can any existing tests). I would suggest that you pay attention to how well students do with texts of varying difficulty (so keep track of the Lexile levels, etc.). You might recognize patterns such as: “Johnny reads well when he is trying to understand texts at 400 Lexile, but he struggles when they get to 500 Lexile.” You could track this kind of thing yourself based on the texts that you teach, or you could test the kids more formally with an informal reading inventory or something like Amplify.

You also might consider tracking how kids do with different parts of the standards. Again, an example might be that throughout a grading period you ask students questions that get at Key Ideas and Details, Craft and Structure, and Integration of Knowledge and Ideas. I wouldn’t expect big performance differences among these tasks, but there might be some patterns there, and you could report on them (and make grading decisions accordingly).

To do any of this you will need a system of observation. Maybe something like this: For each group that you do guided reading with, keep a list of students. Then record the date and Lexile level of the text being read for each student. Keep track of how many questions you ask them and whether they did well. You could break these down by category or just keep track overall. Another possibility would be a multi-point rubric that describes how accurate, thorough, and incisive the students’ answers were.
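A minimal sketch of the kind of record keeping described above, assuming you track questions per guided-reading session (the class and field names here are my own inventions, not part of any existing tool):

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    date: str              # e.g., "2014-04-17"
    lexile: int            # Lexile level of the text read that day
    questions_asked: int
    questions_correct: int

@dataclass
class StudentRecord:
    name: str
    observations: list = field(default_factory=list)

    def log(self, date, lexile, asked, correct):
        self.observations.append(Observation(date, lexile, asked, correct))

    def accuracy_at(self, lexile):
        """Proportion of questions answered well on texts at this Lexile level."""
        obs = [o for o in self.observations if o.lexile == lexile]
        asked = sum(o.questions_asked for o in obs)
        correct = sum(o.questions_correct for o in obs)
        return correct / asked if asked else None

# Johnny handles 400 Lexile texts well but struggles at 500:
johnny = StudentRecord("Johnny")
johnny.log("2014-04-10", 400, 8, 7)
johnny.log("2014-04-17", 500, 8, 4)
print(johnny.accuracy_at(400), johnny.accuracy_at(500))  # prints 0.875 0.5
```

The same log works whether you tally overall accuracy or break questions down by standards category.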

Of course, CCSS stresses the idea of students writing about texts. You could have students write about the texts that they read several times during a report-card marking period and use an average of your ratings of these responses to determine how well each student was doing. Again, I don’t think you will be able to come up with anything highly specific (“Johnny is doing well with standard 3, but he struggles on standard 5, so I’m giving him a B-”), but you should be able to say something like, “Students at this point of the year should be able to read a text at 450 Lexile with at least 75% understanding, and Johnny can only do this with texts at 350 Lexile.”

Saturday, April 12, 2014

How Much In-Class Reading?

I am wondering what you think are the acceptable ways to have students read text in class in grades 3-8. Obviously, round robin or popcorn reading is not one of them -- and these are still options we see too often. Independent reading is desired, and some degree of teacher read-aloud to the whole class to model fluency and dramatic reading is appropriate as well. What other ways do you think are effective? How much time would you say we should push teachers to devote to each (e.g., 60% independent, 20% teacher read-aloud, etc.)?

I’m with those who believe that students need to read a lot during their school days. Yes, they should read at home, but in class, students need lots of opportunities and requirements to read.

The most powerful of such reading (in terms of stimulating student learning) seems to be oral reading with feedback from a teacher. I would discourage popcorn or round robin but not because the reading practice that they provide is so bad—just that they provide so little practice. When one student is reading, many more are just sitting waiting for their turn. The students who are reading are learning, and the others, not so much.

Research suggests that techniques like paired reading (in which kids read and reread texts to each other), reading while listening, echo reading, radio reading, etc. can all be good choices. In all of these techniques, many students are able to practice simultaneously; they read relatively challenging materials and then reread them in an effort to improve the quality. If students can read texts (8th grade or higher) orally at about 150 words correct per minute, I wouldn’t bother with this kind of practice; if they cannot, I would provide about 30 minutes of it each school day.
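That decision rule is simple enough to state in a few lines of code (the function name and defaults are mine; the 150 words-correct-per-minute threshold and 30-minute dose come from the paragraph above):

```python
def daily_fluency_minutes(wcpm: float, threshold: float = 150, dose: int = 30) -> int:
    """Students at or above ~150 words correct per minute on grade-level
    text skip dedicated oral-reading practice; everyone else gets about
    30 minutes per school day."""
    return 0 if wcpm >= threshold else dose

print(daily_fluency_minutes(160))  # prints 0
print(daily_fluency_minutes(120))  # prints 30
```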

As powerful as oral reading is at stimulating student reading ability, we can’t ignore the fact that most of the reading we engage in will be silent, and students need to practice this as well. I would strongly encourage teachers to have students silently read the texts that they are to write about or that will be the focus of group or class discussion. When I assign such reading in classrooms, kids often tell me that I’m doing it wrong (because their teachers have them read the texts round robin). Teachers do this to make sure kids read the material and to monitor their reading. The fluency work noted above does away with the need to monitor fluency during these assignments (I’m already doing that), and teachers can make sure students have read from the discussions and writing that ensue.

I would usually have students read their literature selections and their social studies or science chapters silently. If students struggle with this, divide the assignments into shorter chunks (even 1 page at a time), and then stretch this out over time. I would suggest that students engage in as much silent reading as oral reading in these grades (and if students are fully fluent as described above, then silent reading should make up almost 100% of what they read).

I would argue not only for minutes dedicated to fluency practice, but for another 30-45 minutes daily focused on reading comprehension—and a lot of this time would entail silent reading. However, silent reading is also going to come up during science, social studies, and other subjects, and this counts, too. Thus, having students spend as much as 15 minutes reading aloud (paired reading for 30 minutes would allow each student 15 minutes of such practice), reading for 20-30 minutes of a 45-minute comprehension lesson, and reading another 10-20 minutes a day in other subjects would give kids a substantial amount of oral and silent reading practice.

Even in the silent reading context, there should be at least some oral reading. Most prominently: students should read aloud during discussions to provide evidence supporting their claims or refuting someone else’s.

It is a good idea to encourage kids to read on their own, but this has such a small impact on student learning that I would make such opportunities available only in ways that would not appreciably reduce the instructional doses suggested above. Getting kids to read on their own beyond the school day, while providing them with heavy involvement in reading across the school day, will be the most powerful combination for getting students to high performance levels.

Thursday, April 3, 2014

Apples and Oranges: Comparing Reading Scores across Tests

I get this kind of question frequently from teachers who work with struggling readers, so I decided to respond publicly. What I say about these two tests would be true of others as well.

I am a middle school reading teacher and have an issue that I'm hoping you could help me solve. My students' placements are increasingly bound to their standardized test results. I administer two types of standardized tests to assess the different areas of student reading ability. I use the Woodcock Reading Mastery Tests and the Terra Nova Test of Reading Comprehension. Often, my students' WRMT subtest scores are within the average range, while their Terra Nova results fall at the lower end of the average range or below. How can I clearly explain these discrepant results to my administrators? When they see average scores on one test, they believe these students are no longer candidates for remedial reading services.

Teachers are often puzzled by these kinds of testing discrepancies, but they can happen for a lot of reasons.

Reading tests tend to be correlated with each other, but this kind of general performance agreement between two measures doesn’t mean that they would categorize student performance identically. Performing at the 35th percentile might give you a below-average designation with one test, but an average one with the other. It is probably better to stay away from those designations and to use NCE scores or something else that is comparable across the tests.
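For readers who want to do that conversion themselves, here is a minimal sketch. The function name is my own; the constants are just the standard NCE definition (a normal-curve scale with mean 50 and standard deviation 21.06), which is what makes NCEs comparable across tests in a way that percentile ranks are not:

```python
from statistics import NormalDist

def percentile_to_nce(percentile_rank: float) -> float:
    """Convert a percentile rank (0-100, exclusive) to a Normal Curve
    Equivalent: an equal-interval scale with mean 50 and SD 21.06."""
    z = NormalDist().inv_cdf(percentile_rank / 100)  # z-score for that rank
    return 50 + 21.06 * z

# The scales agree at the mean; a 35th-percentile score maps below it:
print(round(percentile_to_nce(50), 1))  # prints 50.0
print(round(percentile_to_nce(35), 1))  # prints 41.9
```

Because NCEs are equal-interval, they can be averaged and compared across tests more sensibly than percentile ranks, which bunch up in the middle of the distribution.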

An important issue in test comparison is the norming samples that the tests use, and that is certainly the case with these two. Terra Nova has a very large and diverse nationally representative norming sample (about 200,000 kids), while the WRMT is based on a much smaller group that may be skewed a bit toward struggling students (only 2,600 kids). When you say that someone is average or below average, you are comparing that student's performance with the norming group's. Because of their extensiveness, I would trust the Terra Nova norms more than the WRMT's; Terra Nova would likely give me a more accurate picture of where my students are compared to the national population. The WRMT is useful because it provides greater information about how well the kids are doing in particular skill areas, and it would help me to track growth in these skills.

Another thing to think about is reliability. Find out the standard error of the tests that you are giving and calculate 95% confidence intervals for the scores. Scores should be stated in terms of the range of performance that each score represents. Lots of times you will find that the confidence intervals of the two tests are so wide that they overlap. This would mean that though the score differences look big, the scores may not really be different. Let’s say that the standard error of one of the tests is 5 points (you need to look up the actual standard error in the manual), and that your student received a standard score of 100 on the test. The 95% confidence interval for this score would be roughly 90-110 (in other words, if the student took this test over and over, about 95% of his scores would be expected to fall within that range). Now say that the standard error of the other test was 8 and that the student’s score on that test was 120. That looks pretty discrepant, but the confidence interval for that one is roughly 104-136. Because 90-110 (the confidence interval for the first test) overlaps with 104-136 (the confidence interval for the second), these scores look very different and yet we can’t conclude that they really are.
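The arithmetic above can be checked with a short sketch (the function names are mine; like the example in the text, it rounds 1.96 standard errors to 2):

```python
def confidence_interval(score, standard_error, z=1.96):
    """Confidence interval for an observed test score: score +/- z
    standard errors (z = 1.96 for 95%; the text rounds this to 2)."""
    margin = z * standard_error
    return (score - margin, score + margin)

def intervals_overlap(a, b):
    """True if two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

# The two scores from the example: 100 (SE = 5) and 120 (SE = 8).
ci_a = confidence_interval(100, 5, z=2)   # (90, 110)
ci_b = confidence_interval(120, 8, z=2)   # (104, 136)
print(intervals_overlap(ci_a, ci_b))      # prints True
```

Since the intervals overlap, the 20-point gap between the observed scores is within the range that measurement error alone could produce.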

You mention the big differences in the tasks included in the two tests. These can definitely make a difference in performance. Since the WRMT is given so often to lower-performing students, that test wouldn’t require especially demanding tasks to spread out performance, while the Terra Nova, given to a broader audience, would need a mix of easier and harder tasks (such as longer and more complex reading passages) to spread out student performance. These harder tasks push your kids lower in the group and may be so hard that it would be difficult to see short-term gains or improvements with such a test. The WRMT is often used to monitor gains, so it tends to be more sensitive to growth.

You didn’t mention which edition of the tests you were administering. But these tests are revised from time to time, and the revisions matter. The WRMT has been revised in recent years, and studies of previous versions reveal big differences in performance from one edition to the next (despite the fact that the same test items were being used). The different versions changed their norming samples, and that altered the tests' performance levels quite a bit (5-9 points). I think you would find the Terra Nova to have more stable scores, and yet comparing it across editions might reveal similar score inflation.

My advice is that when you want to show where students stand in the overall norm group, use only the Terra Nova data. Then use the WRMT to show where the students’ relative strengths and weaknesses are and to monitor growth in those skills. That means your message might be something like: “Tommy continues to perform at or near the 15th percentile when he is compared with his age mates across the country. Nevertheless, he has improved during the past three months in vocabulary and comprehension, though not enough to improve his overall position in the distribution.” In other words, his reading is improving and yet he remains behind 85% of his peers in these skills.

Monday, March 31, 2014

To Play or Not to Play (in K and Pre), That is the Question

During both my childhood and the early years of my teaching career “reading readiness” dominated. The idea was that if you taught kids reading too early, you would do damage. My kindergarten teacher warned Mom not to try to teach me anything, and we were still stalling when I taught first grade.

Recently, a study at the University of Virginia found that we now live in a different world. Most kindergarten teachers believe that they should teach reading, and that belief is pretty common in preschools, too. The headline in Education Week says it all: “Study Finds Reading Lessons Edging Out Kindergarten Play.”

I’ve been a big cheerleader for early reading instruction, and why not? The research is overwhelming. Despite theories that teaching reading early would damage kids, there is no empirical evidence supporting those claims. As Head Start kids have ramped up their literacy knowledge over the past several years, their emotional health has improved along with it. Hundreds of studies now show benefits to teaching kids early.

However, that doesn’t mean that kids shouldn’t be playing or that the preschool and kindergarten environments shouldn’t be encouraging and supportive. Too often I see kindergarten reading instruction that doesn’t match well with the research findings.

I would strongly encourage the kinds of play/literacy lessons that Susan Neuman has long championed. Have restaurants, newspaper publishers, post offices, and libraries set up in these classrooms and engage children in literacy play.

Of course, phonological awareness and phonics should be taught explicitly, but the research is very clear that this should be small-group work—engaging and interactive. (None of the studies with young kids in which decoding instruction was effective presented the lessons to whole classes.) Kids can respond in a variety of ways as well. If you are quizzing kids on whether they hear the same sounds at the beginnings of two words, they can jump or clap or rub their tummies to respond. Movement fits into such lessons really well, and various songs and language games can be used, too.

Encourage pretend reading and pretend writing and use techniques like language experience approach to introduce kids to text (and to encourage them to do their own writing). Label everything in classrooms, but involve kids in doing that.

My point is simply this: We should teach literacy in preschool and kindergarten, but play can be the basis of effective literacy lessons. Make literacy more playful in the early grades and avoid seeming like a fourth-grade class for young’uns. It is not an either/or (despite the Ed Week headline); kids can play more and get more literacy instruction.

Tuesday, March 25, 2014

Indiana Drops Common Core

Yesterday, Indiana became the fifth state to choose not to teach to the Common Core standards (CCSS). Opponents of these shared standards have complained less about their content than about how they were adopted. Critics claim the federal government forced states to adopt these standards by advantaging adopters in the Race to the Top competition. Two problems with those claims: (1) Indiana didn’t compete for Race to the Top—so there was no federal gun to its head, and (2) states, like Indiana, that don’t adopt Common Core face absolutely no federal penalty.

Ironic. Indiana’s governor claims he’s regaining Indiana’s sovereignty, while his action itself reveals that its sovereignty was never at risk. It is a deft and subtle act of political courage when a politician stands up to someone who hasn’t challenged him. (President Obama could learn from this. Perhaps he would look better on the Ukrainian front if he would issue stern warnings to Canada or Bermuda. That’ll show them who’s boss!)

Why did Governor Pence pull the trigger on Common Core? He doesn’t seem to know. “By signing this legislation, Indiana has taken an important step forward in developing academic standards that are written by Hoosiers, for Hoosiers, and are uncommonly high, and I commend members of the General Assembly for their support,” Pence said in a press release. The tortured grammar aside—is it the standards or the Hoosiers who were uncommonly high?—this seems pretty clear.

But like many a “bold” politician of yore, the Guv went on to say, “Where we get those standards, where we derive them from to me is of less significance than we are actually serving the best interests of our kids. And are these standards going to be, to use my often used phrase, uncommonly high?” (I sure hope the new Indiana standards include grammar.)

In other words, Governor Pence dropped the CCSS standards because Hoosiers didn’t write them, but he doesn’t care where standards come from or who writes them. Maybe that’s why he has turned to a lifelong Kennedy-Democrat (Sandra Stotsky, not a Hoosier) to help him shape Indiana’s new educational standards. We all cheer for bipartisanship, but it is always startling to see Tea Party Conservatives and Massachusetts Liberals bedded down together.

What did the Guv get for his trouble? Dr. Stotsky publicly denounced the Hoosier draft for being too consistent with the CCSS standards. She wants Indiana teachers to teach different phonics, grammar, reading comprehension, and writing skills than those taught in the 49 other states (good luck with that).

Dr. Stotsky notes that the Indiana draft had a 70% overlap with the CCSS standards… but seemed to be silent about how much overlap there was among CCSS and the standards in Texas, Nebraska, Virginia, or Alaska; or with the previous and clearly inferior Indiana standards that she apparently advised on; or with the previous Massachusetts standards that she has championed. I guess that just shows that academics can be as slippery as politicians when they think they have a spotlight.

I support the CCSS standards because they are the best reading standards I’ve ever seen (and, yes, I am aware of their limitations and flaws). But if anyone comes up with better standards, I’d gladly support those, too (no matter how uncommonly high the Hoosiers might have been who wrote them).