Thursday, April 17, 2014
I have a question that many teachers have asked, and I would like your help in thinking through the grading process for the Common Core. How might children receive grades for the many standards without being given a test? The teachers are doing a lot of processing of texts together as a class or in partners, so they are wondering about accountability for the students and how to arrive at a grade that measures their knowledge.
Remember there are lots of parts of Common Core, so if you are an elementary teacher and you are teaching foundational skills (e.g., phonological awareness, phonics, oral reading fluency), then using one of the many test instruments (e.g., PALS, DIBELS, AIMS-WEB) still might be a useful way to go to get a sense of where your kids stand.
However, we don’t have good tests of reading comprehension that can be given quickly and that provide that kind of information, so teacher judgment will certainly be necessary. That doesn’t mean that you shouldn’t use the unit tests in your core program; those might help inform your decisions, but ultimately you are going to have to depend on your evaluation of student performance when students are writing about or discussing text.
I would strongly urge you to NOT try to give students scores in each of the standards. That wouldn’t make much sense and I don’t believe that you could do that reliably (nor can any existing tests). I would suggest that you pay attention to how well students do with texts of varying difficulty (so keep track of the Lexile levels, etc.). You might recognize patterns such as: “Johnny reads well when he is trying to understand texts at 400 Lexile, but he struggles when they get to 500 Lexile.” You could track this kind of thing yourself based on the texts that you teach, or you could test the kids more formally with an informal reading inventory or something like Amplify.
You also might consider tracking how kids do with different parts of the standards. Again, an example might be that throughout a grading period you ask students questions that get at Key Ideas and Details, Craft and Structure, and Integration of Knowledge and Ideas. I wouldn’t expect big performance differences among these tasks, but there might be some patterns there, and you could report on them (and make grading decisions accordingly).
To do any of this you will need a system of observation. Maybe something like this: For each group that you do guided reading with, keep a list of students. Then record the date and Lexile level of the text being read for each student. Keep track of how many questions you ask them and whether they did well. You could break these down by category or just keep track overall. Another possibility would be a multi-point rubric that describes how accurate, thorough, and incisive the students’ answers were.
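The observation system described above could be kept as a simple digital log. Here is a minimal sketch in Python; the student names, dates, and scores are all invented for illustration, and nothing about this format is prescribed by the post:

```python
from collections import defaultdict

# Hypothetical observation log for guided-reading groups.
# Each record: student, date, Lexile of the text, questions asked/answered well.
observations = [
    {"student": "Johnny", "date": "2014-04-01", "lexile": 400, "asked": 5, "correct": 4},
    {"student": "Johnny", "date": "2014-04-08", "lexile": 500, "asked": 5, "correct": 2},
    {"student": "Maria",  "date": "2014-04-01", "lexile": 500, "asked": 4, "correct": 4},
]

def success_by_lexile(records, student):
    """Percent of questions answered well, grouped by the Lexile of the text."""
    asked = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        if r["student"] == student:
            asked[r["lexile"]] += r["asked"]
            correct[r["lexile"]] += r["correct"]
    return {lex: round(100 * correct[lex] / asked[lex]) for lex in asked}

print(success_by_lexile(observations, "Johnny"))  # → {400: 80, 500: 40}
```

A teacher could add a category field (e.g., Key Ideas and Details) to each record and group by that instead, or replace the asked/correct counts with a multi-point rubric score. A spreadsheet would serve equally well; the point is simply that a consistent record makes the patterns visible.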
Of course, CCSS stresses the idea of students writing about texts. You could have students write about the texts that they read several times during a report card marking period and use an average of your ratings of these responses to determine how well each student is doing. Again, I don’t think you will be able to come up with anything highly specific (“Johnny is doing well with standard 3, but he struggles with standard 5, so I’m giving him a B-”), but you should be able to say, “Students by this point of the year should be able to read a text at 450 Lexile with at least 75% understanding, and he can only do this with texts at 350 Lexile.”
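The averaging idea is simple arithmetic, but a toy sketch makes the mechanics concrete. The ratings, rubric scale, and benchmark value below are all hypothetical:

```python
# Illustrative only: average multi-point rubric ratings of writing-about-text
# responses over a marking period, then compare against a grade-level benchmark.
ratings = [3, 4, 3, 4, 4]   # five responses, each scored on a 1-4 rubric
benchmark = 3.0             # hypothetical "meets expectations" cutoff

average = sum(ratings) / len(ratings)
meets = average >= benchmark
print(f"average rating: {average:.1f}; meets benchmark: {meets}")
```

The benchmark itself is the judgment call; the averaging just keeps one strong or weak response from dominating the grade.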
Tuesday, March 25, 2014
Yesterday, Indiana became the fifth state to choose not to teach to the Common Core standards (CCSS). Opponents of these shared standards have complained less about their content than about how they were adopted. Critics claim the federal government forced states to adopt these standards by advantaging them in the Race to the Top competition. Two problems with those claims: (1) Indiana didn’t compete for Race to the Top—so there was no federal gun to its head, and (2) states, like Indiana, that don’t adopt Common Core face absolutely no federal penalty.
Ironic. Indiana’s governor claims he’s regaining Indiana’s sovereignty, while his action itself reveals that its sovereignty was never at risk. It is a deft and subtle act of political courage when a politician stands up to someone who hasn’t challenged him. (President Obama could learn from this. Perhaps he would look better on the Ukrainian front if he would issue stern warnings to Canada or Bermuda. That’ll show them who’s boss!)
Why did Governor Pence pull the trigger on Common Core? He doesn’t seem to know. “By signing this legislation, Indiana has taken an important step forward in developing academic standards that are written by Hoosiers, for Hoosiers, and are uncommonly high, and I commend members of the General Assembly for their support,” Pence said in a press release. The tortured grammar aside—is it the standards or the Hoosiers who were uncommonly high?—this seems pretty clear.
But like many a “bold” politician of yore, the Guv went on to say, “Where we get those standards, where we derive them from to me is of less significance than we are actually serving the best interests of our kids. And are these standards going to be, to use my often used phrase, uncommonly high?” (I sure hope the new Indiana standards include grammar.)
In other words, Governor Pence dropped the CCSS standards because Hoosiers didn’t write them, but he doesn’t care where standards come from or who writes them. Maybe that’s why he has turned to a lifelong Kennedy-Democrat (Sandra Stotsky, not a Hoosier) to help him shape Indiana’s new educational standards. We all cheer for bipartisanship, but it is always startling to see Tea Party Conservatives and Massachusetts Liberals bedded down together.
What did the Guv get for his trouble? Dr. Stotsky publicly denounced the Hoosier draft for being too consistent with the CCSS standards. She wants Indiana teachers to teach different phonics, grammar, reading comprehension, and writing skills than those taught in the 49 other states (good luck with that).
Dr. Stotsky notes that the Indiana draft had a 70% overlap with the CCSS standards… but seemed to be silent about how much overlap there was among CCSS and the standards in Texas, Nebraska, Virginia, or Alaska; or with the previous and clearly inferior Indiana standards that she apparently advised on; or with the previous Massachusetts standards that she has championed. I guess that just shows that academics can be as slippery as politicians when they think they have a spotlight.
I support the CCSS standards because they are the best reading standards I’ve ever seen (and, yes, I am aware of their limitations and flaws). But if anyone comes up with better standards, I’d gladly support those, too (no matter how uncommonly high the Hoosiers might have been who wrote them).
Thursday, February 27, 2014
I’ve been receiving queries about the CCSS from teachers, principals, and consultants trying to figure out the standards. They don’t always like my responses—in fact, some have argued back that I must be wrong. I’m not (he said, modestly).
But I’m getting ahead of myself. First, the questions:
One of the differences between the writing standard 1a in grades 9-10 and 11-12 is that students “introduce precise claims” (9-10) while in 11-12 students “introduce precise, knowledgeable claims.” I’m working with a group of teachers in clarifying the difference. It seems as though a precise claim would also be grounded in knowledge rather than intuition or guesses or… Can you clarify?
Our team is now debating the differences between recount and retell. We have found definitions of recount/retell, but we can’t seem to find credible resources that will clarify the differences. Since the Core uses retell in the K and 1st grade Core standards, and switches to recount in the 2nd grade standards, we feel it is critical that we are clear in explaining the differences. Can you help us to clarify the differences, or point us to a credible source to cite as we clarify the difference?
My response is that these well-meaning educators are not approaching these standards appropriately. They are looking for a narrow precision of meaning in a document not intended to provide that. I know that close reading is in right now, but a close reading of the standards—trying to make these fine distinctions by analyzing the words and structure closely—will undermine successful educational efforts rather than supporting them.
We aren’t lawyers and these aren’t legal documents.
Grant Wiggins has argued that the verbs in the standards need to be much more precise if they are going to provide a good roadmap for assessment (see his blog entry).
But I’d argue back that it’s more important for the standards to support quality instruction rather than a spiffy test design.
These standards, because they are from the “fewer, bigger, better” school of standard writing, are intentionally not so precise. They leave a lot out, leaving many important choices and decisions to teachers and curriculum makers.
If the standards say students need to “summarize text,” then those who try to formulate a very precise conception of summarization are going to undermine, rather than facilitate, student learning.
Instead of that kind of hermeneutic verb analysis, it would be better to brainstorm in the other direction. That is, try to be inductive rather than deductive:
Think of all the kinds of texts and information sources that might be appropriate for students to summarize (consider different lengths of texts too, and any features that could make them difficult to summarize). Ponder, too, all the subskills entailed in summarization, such as recognizing and omitting unimportant information, identifying main ideas, creating generalization statements to replace lists of ideas, paraphrasing, and so on.
That is what the standards are asking us to teach, and those who try to serve such rich dishes of learning are likely to be successful. I’d want my kids to dine at their table—the dishes sound nutritious and delicious. But those who try to split hairs between recounting and retelling—trying to make sure that kids are served one but not the other—will be serving leftovers long past the date of expiration. No, thanks.
Friday, December 20, 2013
I am a 4th grade math teacher, and I love CC standards. I’ve been teaching to them and my students are making HUGE gains in math. My question is about PARCC. I have looked online at the protocol questions and cannot figure out what students will really be expected to do. It looks like they will need to cut, paste, and type. My fear is that the online component of the test is going to skew the results and students will be unnecessarily frustrated trying to show their thinking using "tools". It seems the test is automatically biased towards wealthier schools with more technology, technology teachers, and parents that buy technology for the children as "toys". How can we be sure that PARCC is assessing their reading and math, not their technology skills? Also, how can we help prepare our students for the types of technology skills they will be required to perform with PARCC?
Like you, I’m nervous about the technology of the new tests. We’re in a tech revolution, and yet I don’t see as much of that technology in schools as is widely presumed. Even schools that have lots of iPads or computers often don’t have the bandwidth needed or the onsite tech support. There are definitely home and school disparities when it comes to tech availability.
Another issue has to do with whether tech is really necessary—in an academic sense—in the testing. Looking at the available prototypes for the tests, I would say yes and no. For example, students have traditionally marked answers on tests and worksheets simply by checking off an item or filling in a bubble grid; nothing particularly academic in those skills. The new assessments will have them doing “drag-and-drop” and the like instead. Is that really an advance?
But there are items in which students must access webpages and identify sentences in text, and of course, there is writing and revising with these tools. All of these examples seem, to me, to be authentic academic tasks. There is nothing wrong with drag-and-drop items, but if they weren’t there, the assessments would tell us pretty much the same thing. That’s not true of these other skills. In all of these latter cases, students are asked to negotiate tasks that are common in college and the workplace, and, as such, kids should be able to handle them.
I suspect when the feds required that these new tests be tech-based, they thought NCLB would be reauthorized. That might have allowed the federal government to incent school districts to upgrade their technology. Unfortunately, that hasn’t happened. Many schools are now scrambling to upgrade their technology (often these efforts seem aimed only at the test—one hopes they’ll soon figure out that they have to use these for instruction as well).
In any event, your question is a good one: will the technology disadvantage of some kids affect performance? It could mean that kids who read well may nonetheless score poorly because of unfamiliarity with keyboards, data screens, etc. That might not be misleading, however. Reading in the 21st century is more than reading a book or magazine; it really does require critical reading of multiple texts available on the Internet, just as writing usually involves typing on a computer or other device. Monitoring whether our kids can do these tasks successfully is appropriate. The side benefit, one hopes, is that schools will move more quickly toward making such tools widely available.