Thursday, February 27, 2014
I’ve been receiving queries about the CCSS from teachers, principals, and consultants trying to figure out the standards. They don’t always like my responses—in fact, some have argued back that I must be wrong. I’m not (he said, modestly).
But I’m getting ahead of myself, first the questions:
One of the differences between the writing standard 1a in grades 9-10 and 11-12 is that students “introduce precise claims” (9-10) while in 11-12 students “introduce precise, knowledgeable claims.” I’m working with a group of teachers to clarify the difference. It seems as though a precise claim would also be grounded in knowledge rather than intuition or guesses or… Can you clarify?
Our team is now debating the differences between recount and retell. We have found definitions of recount/retell, but we can’t seem to find credible resources that will clarify the differences. Since the Core uses retell in the K and 1st grade Core standards, and switches to recount in the 2nd grade standards, we feel it is critical that we are clear in explaining the differences. Can you help us to clarify the differences, or point us to a credible source to cite as we clarify the difference?
My response is that these well-meaning educators are not approaching these standards appropriately. They are looking for a narrow precision of meaning in a document not intended to provide that. I know that close reading is in right now, but a close reading of the standards—trying to make these fine distinctions by analyzing the words and structure closely—will undermine successful educational efforts rather than supporting them.
We aren’t lawyers and these aren’t legal documents.
Grant Wiggins has argued that the verbs in the standards need to be much more precise if they are going to provide a good roadmap for assessment. Grant Wiggins Blog Entry
But I’d argue back that it’s more important for the standards to support quality instruction rather than a spiffy test design.
These standards, because they are from the “fewer, bigger, better” school of standard writing, are intentionally not so precise. They leave a lot out, leaving many important choices and decisions to teachers and curriculum makers.
If the standards say students need to “summarize text,” then those who try to formulate a very precise conception of summarization are going to undermine, rather than facilitate, student learning.
Instead of that kind of hermeneutic verb analysis, it would be better to brainstorm in the other direction. That is, try to be inductive rather than deductive:
Think of all the kinds of texts and information sources that might be appropriate for students to summarize (consider different lengths of texts too, and any features that could make them difficult to summarize). Ponder, too, all the subskills entailed in summarization, such as recognizing and omitting unimportant information, identifying main ideas, creating generalization statements to replace lists of ideas, paraphrasing, and so on.
That is what the standards are asking us to teach, and those who try to serve such rich dishes of learning are likely to be successful. I’d want my kids to dine at their table—the dishes sound nutritious and delicious. But those who try to split hairs between recounting and retelling—trying to make sure that kids are served one but not the other—will be serving leftovers long past the date of expiration. No, thanks.
Friday, December 20, 2013
I am a 4th grade math teacher, and I love CC standards. I’ve been teaching to them and my students are making HUGE gains in math. My question is about PARCC. I have looked online at the protocol questions and cannot figure out what students will really be expected to do. It looks like they will need to cut, paste, and type. My fear is that the online component of the test is going to skew the results and students will be unnecessarily frustrated trying to show their thinking using "tools". It seems the test is automatically biased towards wealthier schools with more technology, technology teachers, and parents that buy technology for the children as "toys". How can we be sure that PARCC is assessing their reading and math, not their technology skills? Also, how can we help prepare our students for the types of technology skills they will be required to perform with PARCC?
Like you, I’m nervous about the technology of the new tests. We’re in a tech revolution, and yet, I don’t see as much of that technology in schools as is widely presumed. Even schools that have lots of iPads or computers often don’t have the bandwidth needed or the onsite tech support. There are definitely home and school disparities when it comes to tech availability.
Another issue has to do with whether tech is really necessary—in an academic sense—in the testing. Looking at the available prototypes for the tests, I would say yes and no. For example, students have traditionally marked answers on tests and worksheets simply by checking off an item or filling in a bubble grid; nothing particularly academic in those skills. The new assessments will have them doing “drag-and-drop” and the like instead. Is that really an advance?
But there are items in which students must access webpages and identify sentences in text, and of course, there is writing and revising with these tools. All of these examples seem, to me, to be authentic academic tasks. There is nothing wrong with drag-and-drop items, but if they weren’t there, the assessments would tell us pretty much the same thing. That’s not true of these other skills. In all of these latter cases, students are asked to negotiate tasks that are common in college and the workplace, and as such kids should be able to handle them.
I suspect when the feds required that these new tests be tech-based, they thought NCLB would be reauthorized. That might have allowed the federal government to incent school districts to upgrade their technology. Unfortunately, that hasn’t happened. Many schools are now scrambling to upgrade their technology (often these efforts seem aimed only at the test—one hopes they’ll soon figure out that they have to use these for instruction as well).
In any event, your question is a good one: will the technology disadvantage of some kids affect their performance? Kids who can read well may still score poorly because of unfamiliarity with keyboards, data screens, and the like. That might not be misleading, however. Reading in the 21st century is more than reading a book or magazine; it really does require critical reading of multiple texts available on the Internet, just as writing usually involves typing on a computer or other device. Monitoring whether our kids can do these tasks successfully is appropriate. The side benefit, one hopes, is that schools will move more quickly to make such tools widely available.
Tuesday, December 3, 2013
My friends at the Thomas Fordham Institute asked that I weigh in on the controversy over the close reading lessons being touted by Student Achievement Partners. I wrote a blog for their site and have included a link to it here. You might be interested in my assessment of those lessons and of some of their claims about close reading. Here it is:
Commentary on Gettysburg Address Close Reading Lessons
Since I was posting that article, I thought it would be a good time to provide a couple of other links. This fall, I had an article in American Educator about how Common Core is changing reading lessons:
American Educator article on Reading Lessons and Common Core
I also published an article in Educational Leadership on the emphasis on informational text in the classroom.
Educational Leadership Article on Informational Text
I hope you find these links useful. I appreciate the generosity of the Thomas Fordham Institute, the American Federation of Teachers, and the ASCD for making these available to you.
Wednesday, November 13, 2013
What is the biggest educational change promoted by the Common Core?
There are so many choices: kids will be reading more challenging texts; close reading will revolutionize reading lessons; high school English, science, and social studies teachers will teach disciplinary literacy; there will be greater attention to argument, multiple texts, informational text, and writing from sources; and so on.
So which is the biggest change? Perhaps one that you haven’t even thought of…
Past standards were long lists of skills, knowledge, and strategies; lists so endless that they were less standards than curriculum guides. Until CCSS, the typical standards looked like a scope and sequence chart rather than a list of outcomes.
In fact, the lists were so long that most of the young people who have become teachers since 1991 have no idea what the difference is between standards and curricula. When you have such complete lists of outcomes, you end up with an extensive list of lessons rather than learning goals.
Standards are goals; they are the outcomes that we want our children to accomplish. Standards tell you what the point is, but they really don’t tell you what needs to be taught.
Example: the standards require that students be able to write/compose high-quality narratives, expositions, and arguments. However, the standards do not expressly require schools to teach students to use manuscript handwriting, cursive writing, or keyboarding.
That has some critics in a tizzy, but it is as it should be. The standard tells you the outcome that must be accomplished, but not everything that a student may need to learn to reach the goal is specified. That's where the teacher comes in… what do we need to teach to accomplish these standards? That is up to us.
Just try to teach kids to compose without making it possible for them to express their ideas in printed, written, or typed words… that wouldn’t make any sense, and I assume most schools and publishers will eventually figure out the reason for this "omission" and kids will still be taught to put their words on paper (even though CCSS doesn’t even mention it).
The same can be said about teaching students to comprehend text. The standards don't require you to teach comprehension strategies, but research suggests that if you do you will be more likely to get the students to the standard.
The standards say teach students to summarize… but they don’t specify all of the possible subskills, pre-skills, or types of texts that students should be able to summarize. Try teaching summarization by just having students practice summarizing and you won’t be likely to succeed.
So the big change? The CCSS takes us back to a time when the educational goals were separated from the curriculum, which puts teachers back in charge of the curriculum.
Now if we could just get teachers to see tests as something separate from goals and curriculum.
Monday, November 4, 2013
It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits to teaching children using text where their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet, your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective towards student learning.
What a great question. In my blog post, I cited particular studies and Dick Allington’s focused on a completely different set of studies. This is what teachers find so confusing.
The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways, which allows a direct comparison of the impact of these methods—and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.
Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers will tend to be placed in relatively harder texts and that they tend to make the least gains or to be the least motivated.
The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning.
The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).
I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning). Dick, by contrast, looked at studies that revealed a relationship between these variables, omitting any mention of these contradictory direct tests or of the correlational evidence that didn’t support his claims.
There were two experimental studies in his review, but neither of them manipulated this particular variable, so these results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and how they matched children to books; the kids who did a lot of reading of easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.
One possibility is that there were other differences that weren’t measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels since they were gaining so fast. That would mean that it wasn’t the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically test the impact of book placement separately from other variables, such as the experimental studies that I cited.
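The reverse-causation worry described above can be made concrete with a toy simulation (the numbers are purely illustrative and are not drawn from the Ehri study or any other study discussed here). In this sketch, text difficulty has zero causal effect on gains; teachers simply place fast gainers in easier texts. The correlation between "easy placement" and gains still comes out strongly positive:

```python
import random

random.seed(0)

# Hypothetical setup: each student's gain is driven only by an unmeasured
# learning rate. Easy-text placement has ZERO causal effect on gains here.
n = 1000
gains, easiness = [], []
for _ in range(n):
    rate = random.gauss(0, 1)           # unmeasured learning rate
    gain = rate + random.gauss(0, 0.5)  # gains depend only on the rate
    # Teachers respond to fast gainers by undershooting their levels,
    # i.e., placing them in easier texts (reverse causation).
    ease = 0.8 * rate + random.gauss(0, 0.5)
    gains.append(gain)
    easiness.append(ease)

# Pearson correlation between easy placement and gains
mg = sum(gains) / n
me = sum(easiness) / n
cov = sum((g - mg) * (e - me) for g, e in zip(gains, easiness)) / n
sg = (sum((g - mg) ** 2 for g in gains) / n) ** 0.5
se = (sum((e - me) ** 2 for e in easiness) / n) ** 0.5
r = cov / (sg * se)
print(round(r, 2))  # positive, despite no causal effect of placement
```

A correlational study of this classroom would find that kids in easier texts made the biggest gains, and yet assigning a child to easier texts would do nothing for that child. Only an experiment that manipulates placement directly can tell the two stories apart.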
A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. In the Ehri study, the kids who made the biggest gains were in even easier materials than that; materials that should have afforded little opportunity to learn (which makes my point—there is no magic level that kids have to be placed in text to allow them to learn).
Another important point to remember: Allington’s article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, and his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point, too, suggesting that grouping students for instruction in this way harms children more than it helps them).
Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.