
Thursday, July 2, 2015

Why Research-Based Reading Programs Alone Are Not Enough

Tim,

Every teacher has experienced this. While the majority of the class is thriving with your carefully planned, research-supported instructional methods, there is often one kid who is significantly less successful. We work with them individually in class, help them after school, sometimes change things up to see what will work, and bring them to the attention of the RtI team, which is also using research-supported instructional methods. But what if the methods research supports for the majority of kids don't work for this kid?

Several months ago I read an article in Discover magazine called "Singled Out" by Maggie Koerth-Baker. Although it concerns medicine rather than education, the article is about using N of 1 experiments to find out whether an individual patient responds well to a particular research-backed treatment. http://discovermagazine.com/2014/nov/17-singled-out

"But even the gold standard isn't perfect. The controlled clinical trial is really about averages, and averages don't necessarily tell you what will happen to an individual."

Ever since I read the article, I've been wondering what an N of 1 experiment would look like in the classroom. This would be much easier to implement with the controlled numbers of a special education classroom, but we do so much differentiation in the regular classroom now that I'd like a way to tell objectively whether what we do for individuals is effective in the short term, rather than waiting for the high-stakes testing that the whole class takes. Formative assessment is helpful, but I suspect we need something more finely tuned to tease out what made the difference. We gather tons of data to report at RtI meetings, but at least at my school, it's things like sight word percentages, reading levels, and fluency samples, not clear indicators of, say, whether a particular change we made for a child is what actually helped. As a researcher, how would you set up an N of 1 experiment in an elementary classroom?

My response:
This letter points out an important fact about experimental research and its offshoots (e.g., quasi-experiments, regression discontinuity designs): when we say a treatment was effective, that doesn't mean everyone who got the special whiz-bang teaching approach did better than everyone who didn't. It just means one group, on average, did better than the other group, on average.

For example, Reading First was a federal program that invested heavily in trying to use research-based approaches to improve beginning reading achievement in Title I schools. At the end of the study, the RF schools weren't doing much better than the control schools overall. But that doesn't mean there weren't individual schools that used the extra funding well to improve their students' achievement, just that there weren't enough of those schools to make a group difference.

The same happens when we test the effectiveness of phonics instruction or comprehension strategies. A study may find that the average score for the treatment group was significantly higher than that obtained by the control group, but there would be kids in the control group who would outperform those who got the treatment, and students in the successful treatment group who weren't themselves so successful.

That means that even if you were to implement a particular procedure perfectly and with all of the intensity of the original effort (which is rarely the case), you'd still have students who were not very successful with the research-based training.

A while back, Atul Gawande wrote in the New Yorker about the varied results obtained in medicine with research-based practices ("The Bell Curve"). Dr. Gawande noted that particular hospitals, although they followed the same research-based protocols as everyone else, were so scrupulous and vigorous in their application of those methods that they obtained better results.

For example, in the treatment of cystic fibrosis, it's a problem when a patient's breathing capacity falls below a certain level. If lung capacity reaches that benchmark, standard practice is to hospitalize the patient to try to regain breathing capacity. However, in the particularly effective hospitals, doctors didn't wait for the problem to become manifest. As soon as things started going wrong for a patient, as soon as breathing capacity started to decline, they intervened.

It is less about formal testing (since our measures usually lack the reliability of those used in medicine) or about studies with Ns of 1, than about thorough and intensive implementation of research-based practices and careful and ongoing monitoring of student performance within instruction.

Many educators and policymakers seem to think that once research-based programs are selected, we no longer need to worry about learning. That neglects the fact that our studies tell us less about what works than about what may work under some conditions. Our studies tell us about practices that have been used successfully, but people are so complex that you can't guarantee such programs will always work that way. It is a good idea to use practices that have been successful--for someone--in the past, but such practices do not have automatically positive outcomes. In the original studies, teachers would have worked hard to implement them successfully; later, teachers may be misled into thinking that if they just take kids through the program, the same levels of success will automatically be obtained.

Similarly, in our efforts to make sure that we don't lose some kids, we may impose testing regimes aimed at monitoring success, such as DIBELing kids several times a year… but such instruments are inadequate for such intensive monitoring and can end up being misleading.

I’d suggest, instead, that teachers use those formal monitors less frequently—two or three times a year—but observe the success of their daily lessons more carefully. For example, suppose a teacher is having students practice hearing differences in the endings of words. Many students are able to implement the skill successfully by the end of the lesson, but some are not. If that's the case, supplement that lesson with more practice rather than just going on to the next prescribed lesson (or do this alongside continued progress through the program). If the lesson was supposed to make it possible for kids to hear particular sounds, then do whatever you can to enable them to hear those sounds.

To monitor ongoing success this carefully, the teacher does have to plan lessons that give students many opportunities to demonstrate whether or not they can implement the skill. The teacher also has to have a sense of what success may look like (e.g., the students don't know these 6 words well enough if they can't name them in 10 seconds or less; the students can't spell these particular sounds well enough if they can't get 8 out of 10 correct; the student isn't blending well enough if they… etc.).
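
For teachers who like to keep such records electronically, here is a minimal sketch of what a per-lesson criterion check could look like. Everything in it is hypothetical and only illustrative: the skill names, the cutoffs (modeled on the examples above, such as 8 of 10 correct), and the student results.

```python
# Minimal sketch of a per-lesson mastery check (hypothetical criteria, skills, and data).
lesson_criteria = {
    "sight_words_named_in_10s": 6,   # must name all 6 target words within 10 seconds
    "sound_spellings_correct": 8,    # must spell at least 8 of 10 target sounds correctly
}

# Hypothetical results from today's lesson: student -> observed counts
todays_results = {
    "Ava":  {"sight_words_named_in_10s": 6, "sound_spellings_correct": 9},
    "Ben":  {"sight_words_named_in_10s": 4, "sound_spellings_correct": 7},
    "Cara": {"sight_words_named_in_10s": 6, "sound_spellings_correct": 8},
}

def needs_more_practice(results, criteria):
    """Return, for each student, the skills that fell short of the criterion."""
    flagged = {}
    for student, scores in results.items():
        short = [skill for skill, cutoff in criteria.items() if scores.get(skill, 0) < cutoff]
        if short:
            flagged[student] = short
    return flagged

print(needs_more_practice(todays_results, lesson_criteria))
# e.g., {'Ben': ['sight_words_named_in_10s', 'sound_spellings_correct']}
```

The script isn't the point; the point is deciding in advance what counts as success and noticing, lesson by lesson, which students haven't reached it yet.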


If a program of instruction can be successful, and you make sure that students do well with the program—actually learning what is being presented by the program—then you should have fewer kids failing to progress.

Sunday, January 25, 2015

Concerns about Accountability Testing

Why don’t you write more about the new tests?

I haven’t written much about PARCC or SBAC—or the other new tests that some states are adopting—in part because they are not out yet. There are some published prototypes, and I was one of several people asked to examine the work product of these consortia. Nevertheless, the information available is very limited, and I fear that almost anything I might write could be misleading (the prototypes are not necessarily what the final products will turn out to be).

However, let me also say that, unlike many who strive for school literacy reform and who support higher educational standards, I’m not all that enthused about the new assessments. 

Let me explain why.

1. I don't think the big investment in testing is justified. 

I’m a big supporter of teaching phonics and phonological awareness because research shows that to be an effective way to raise beginning reading achievement. I have no commercial or philosophical commitment to such teaching, but trust the research. There is also strong research on the teaching of vocabulary, comprehension, and fluency, and expanding the amount of teaching is a powerful idea, too.

I would gladly support high-stakes assessment if it had a similarly strong record of stimulating learning, but that isn't the case.

Test-centered reform is expensive, and it has not been proven to be effective. The best studies of it that I know reveal either extremely slight benefits or somewhat larger losses (on balance, it is—at best—a draw). Having test-based accountability does not lead to better reading achievement.

(I recognize that states like Florida raised achievement while they had high-stakes testing. The testing may have been part of what made such reforms work, but you can't tell whether the benefits were really due to the other changes (e.g., professional development, curriculum, instructional materials, amount of instruction) that were made simultaneously.)

2. I doubt that new test formats—no matter how expensive—will change teaching for the good.

In the early 1990s, P. David Pearson, Sheila Valencia, Robert Reeve, Karen Wixson, Charles Peters, and I were involved in helping Michigan and Illinois develop innovative tests: tests that included entire texts and multiple-response question formats that did away with the one-correct-answer notion. The idea was that if we had tests that looked more like "good instruction," then teachers who tried to imitate the tests would do a better job. Neither Illinois nor Michigan saw learning gains as a result of these brilliant ideas.

That makes me skeptical about both PARCC and SBAC. Yes, they will ask some different types of questions, but that doesn’t mean the teaching that results will improve learning. I doubt that it will.

I might be more excited if I didn’t expect companies and school districts to copy the formats, but miss the ideas. Instead of teaching kids to think deeply and to reason better, I think they’ll just put a lot of time into two-part answers and clicking. 

3. Longer tests are not really a good idea.

We should be trying to maximize teaching and minimize testing (minimize, not do away with). We need to know how states, school districts, and schools are doing. But this can be figured out with much less testing. We could easily estimate performance on the basis of samples of students—rather than entire student bodies—and we don’t need annual tests; with samples of reliable sizes, the results just don’t change that frequently.
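
To put some rough numbers on the sampling argument, here is a small sketch using the standard margin-of-error formula for a proportion. The proficiency rate and sample sizes are invented for illustration, and a real sampling design would also have to account for students being clustered within schools.

```python
# Rough sketch: how precisely a random sample of students can estimate a
# "percent proficient" rate, instead of testing every student. Numbers are invented.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n students."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 0.55  # suppose 55% of sampled students score proficient
for n in (400, 1000, 2500):
    me = margin_of_error(p_hat, n)
    print(f"n = {n:4d}: estimate {p_hat:.0%} proficient, margin of error +/- {me * 100:.1f} points")
```

Even a moderate random sample pins down a district's or state's proficiency rate within a few points, which is why testing every student every year buys little extra information for accountability purposes.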

Similarly, no matter how cool a test format may seem, it is probably not worth the extra time needed to administer it. I suspect the results of these tests will correlate highly with those of the tests they replace. If that's the case, will you really get any more information from these tests? And if not, then why not use those testing days to teach kids instead? Anyone interested in closing poverty gaps, or international achievement gaps, is simply going to have to bite the bullet: more teaching, not more testing, is the key to catching up.

4. The new reading tests will not provide evidence on skills ignored in the past.

The new standards emphasize some aspects of reading neglected in the past. However, these new tests are not likely to provide any information about these skills. Reading tests don't work that way (math tests do, to some extent). We should be able to estimate the Lexile levels that kids are attaining, but we won’t be able to tell if they can reason better or are more critical thinkers (they may be, but these tests won’t reveal that).

Reading comprehension tests—such as those used by all 50 states for accountability purposes—can tell us how well kids can comprehend. They cannot tell which skills the students have (or even if reading comprehension actually depends on such a collection of discrete skills). Such tests, if designed properly, should provide clues about the level of language difficulty that students can negotiate successfully, but beyond that we shouldn’t expect any new info from the items.

On the other hand, we should expect some new information. The new tests are likely to have different cut scores or criteria of success. That means these tests will probably report much lower scores than in the past. Given the large percentage of boys and girls who “meet or exceed” current standards, graduate from high school, and enter college, but who lack basic skills in reading, writing, and/or mathematics, it would only be appropriate that their scores be lower in the future.


However, I predict that when those low test scores arrive, there will be a public outcry, and some politicians will blame the scores on the new standards. Instead of recognizing that the new tests are finally offering honest information about how their kids are doing, people will believe that the low scores are the result of poor standards, and there will be a strong negative reaction. Instead of agitating for better schools, the public will be stimulated to support lower standards.

The new tests will only help if we treat them differently than the old tests. I hope that happens, but I'm skeptical.

Sunday, May 18, 2014

IRA 2014 Presentations

I made four presentations at the meetings of the International Reading Association in New Orleans this year. One of these was the annual research review address, in which I explained the serious problems inherent in the "instructional level" in reading and in associated approaches like "guided reading," which have certainly outlived their usefulness.

IRA Talks 2014



Thursday, April 3, 2014

Apples and Oranges: Comparing Reading Scores across Tests

I get this kind of question frequently from teachers who work with struggling readers, so I decided to respond publicly. What I say about these two tests would be true of others as well.

I am a middle school reading teacher and have an issue that I'm hoping you could help me solve. My students' placements are increasingly bound to their standardized test results. I administer two types of standardized tests to assess the different areas of student reading ability. I use the Woodcock Reading Mastery Tests and the Terra Nova Test of Reading Comprehension. Often, my students' WRMT subtest scores are within the average range, while their Terra Nova results fall at the lower end of the average range or below. How can I clearly explain these discrepant results to my administrators? When they see average scores on one test, they believe these students are no longer candidates for remedial reading services.

Teachers are often puzzled by these kinds of testing discrepancies, but they can happen for a lot of reasons.

Reading tests tend to be correlated with each other, but this kind of general agreement between two measures doesn't mean that they categorize student performance identically. Performing at the 35th percentile might earn a below-average designation on one test but an average one on the other. It's probably better to stay away from those designations and use NCE scores or something else that is comparable across tests.
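
For readers who want to see what that conversion involves, here is a minimal sketch based on the standard percentile-to-NCE relationship (NCE = 50 + 21.06 × z, where z is the normal-curve score for the percentile). The percentile ranks shown are just examples.

```python
# Sketch: converting national percentile ranks to Normal Curve Equivalents (NCEs).
# NCEs are an equal-interval rescaling of percentiles: NCE = 50 + 21.06 * z.
from statistics import NormalDist

def percentile_to_nce(percentile):
    z = NormalDist().inv_cdf(percentile / 100)  # z-score corresponding to the percentile
    return 50 + 21.06 * z

for pct in (1, 15, 35, 50, 65, 99):
    print(f"Percentile {pct:2d} -> NCE {percentile_to_nce(pct):5.1f}")
# The 35th percentile, for instance, works out to an NCE of about 42,
# no matter which test produced the percentile.
```

Because NCEs sit on an equal-interval scale, the same NCE means the same relative standing regardless of which test produced the underlying percentile, which makes them easier to compare and average than percentile ranks.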

An important issue in test comparison is the norming samples the tests use, and that is certainly the case with these two. Terra Nova has a very large, diverse, nationally representative norming sample (about 200,000 kids), while the WRMT is based on a much smaller group that may be skewed a bit toward struggling students (only about 2,600 kids). When you say that someone is average or below average, you are comparing their performance with that of the norming group. Because of their extensiveness, I would trust the Terra Nova norms more than the WRMT ones; Terra Nova would likely give me a more accurate picture of where my students stand compared to the national population. The WRMT is useful because it provides more information about how well kids are doing in particular skill areas, and it would help me track growth in those skills.

Another thing to think about is reliability. Find out the standard error of the tests that you are giving and calculate 95% confidence intervals for the scores. Scores should be stated in terms of the range of performance that the score represents. Often you will find that the confidence intervals of the two tests are so wide that they overlap, which means that though the score differences look big, they may not be real differences at all. Let's say that the standard error of one of the tests is 5 points (you need to look up the actual standard error in the manual), and that your student received a standard score of 100 on that test. The 95% confidence interval for that score would be roughly 90-110 (in other words, if the student took the test over and over, about 95% of the scores would fall within that range). Now say that the standard error of the other test is 8 and that the student's score on that test was 120. That looks pretty discrepant, but the confidence interval for that one is roughly 104-136. Because 90-110 (the confidence interval for the first test) overlaps with 104-136 (the confidence interval for the second), the scores look very different and yet may not be meaningfully different at all.
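
Here is the same arithmetic as a short sketch in code, using the illustrative scores and standard errors from the example above; in practice the standard errors come from each test's technical manual.

```python
# Sketch of the confidence-interval comparison described above.
# Scores and standard errors are the illustrative values from the text;
# real standard errors come from each test's technical manual.

def confidence_interval(score, standard_error, z=1.96):
    """Approximate 95% confidence interval around an observed standard score."""
    half_width = z * standard_error
    return (score - half_width, score + half_width)

low_a, high_a = confidence_interval(score=100, standard_error=5)   # about 90 to 110
low_b, high_b = confidence_interval(score=120, standard_error=8)   # about 104 to 136

print(f"Test A: {low_a:.0f} to {high_a:.0f}")
print(f"Test B: {low_b:.0f} to {high_b:.0f}")

if high_a >= low_b and high_b >= low_a:
    print("The intervals overlap: the apparent gap may not be a real difference.")
else:
    print("The intervals do not overlap: the gap is probably real.")
```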

You mention the big differences in the tasks included in the two tests. These can definitely make a difference in performance. Since the WRMT is given so often to lower-performing students, that test doesn't require especially demanding tasks to spread out performance, while the Terra Nova, given to a broader audience, needs a mix of easier and harder tasks (such as longer and more complex reading passages) to spread students out. Those harder tasks push your kids lower in the group and may be so hard that it would be difficult to see short-term gains or improvements on such a test. The WRMT is often used to monitor gains, so it tends to be more sensitive to growth.

You didn’t mention which editions of the tests you were administering, but these tests are revised from time to time and the revisions matter. Studies of previous versions of such tests reveal big differences in performance from one edition to the next, despite the fact that the same test items were being used: the different versions changed their norming samples, and that altered performance quite a bit (5-9 points). I think you would find the Terra Nova to have more stable scores, and yet comparing scores across its editions might reveal similar inflation.

My advice is that when you want to show where students stand in the overall norm group, use only the Terra Nova data. Then use the WRMT to show where the students' relative strengths and weaknesses are and to monitor growth in those skills. That means your message might be something like: "Tommy continues to perform at or near the 15th percentile when he is compared with his age mates across the country. Nevertheless, he has improved during the past three months in vocabulary and comprehension, though not enough to improve his overall position in the distribution." In other words, his reading is improving and yet he remains behind 85% of his peers in these skills.


Thursday, February 6, 2014

To Special Ed or not to Special Ed: RtI and the Early Identification of Reading Disabilities

My question centers on identifying students for special education. Research says identify students early, avoid the IQ-discrepancy model formula for identification, and use an RTI framework for identification and intervention. 

That said, I have noticed that as a result of high-stakes accountability linked to teacher evaluations, there seems to be a bit of a shuffle around identifying students for special education. While we are encouraged to "identify early," the Woodcock-Johnson rarely finds deficits that warrant special education identification. Given current research on constrained skills theory (Scott Paris) and late-emerging reading difficulties (Rollanda O'Connor), how do we make sure we are indeed identifying students early?

If a student has been with me for two years (Grades 1 and 2) and the instructional trajectory shows minimal progress toward meeting benchmarks (despite quality research-based literacy instruction), but a special education evaluation using the Woodcock-Johnson shows skills that fall within norms, how do we service these children? Title I is considered a regular education literacy program. Special Education seems to be pushing back on servicing these students, saying they need to "stay in Title I." Or worse, it is suggested that these students be picked up in SPED for phonics instruction and continue to be serviced in Title I for comprehension.

I am wondering what your thoughts are on this. The "duplication of services" issue of being serviced by both programs aside, how does a school system justify such curriculum fragmentation for its most needy students? Could you suggest some professional reading or research that could help me make the case for early identification of students at risk for late-emerging reading difficulties, and that addresses the issue of duplication of services when both Title I and SPED service a student?

This is a great question, but one that I didn't feel I could answer. As I've done in the past with such questions, I sent it along to someone in the field better able to respond. In this case, I contacted Richard Allington, past president of the International Reading Association and a professor at the University of Tennessee. This question is right in his wheelhouse, and here is his answer:

I know of no one who advocates early identification of kids as pupils with disabilities (PWDs). At this point in time we have at least 5 times as many kids identified as PWDs [as is merited]. The goal of RTI, as written in the background paper that produced the legislation, is a 70-80% decrease in the number of kids labeled as PWDs. The basic goal of RTI is to encourage schools to provide kids with more expert and intensive reading instruction. As several studies have demonstrated, we can reduce the proportion of kids reading below grade level to 5% or so by the end of 1st grade. Once on level by the end of 1st grade, about 85% of kids remain on grade level at least through 4th grade with no additional intervention. Or, as two other studies show, we could provide 60 hours of targeted professional development to every K-2 teacher to develop their expertise sufficiently to accomplish this. In the studies that have done this, fewer kids were reading below grade level than when daily 1-1 tutoring was provided in K and 1st. Basically, what the research indicates is that LD, dyslexic, and ADHD kids are largely identified by inexpert teachers who don't know what to do. If Pianta and colleagues are right, only 1 of 5 primary teachers currently has both the expertise and the personal sense of responsibility for teaching struggling readers. (It doesn't help that far too many states have allowed teachers to avoid responsibility for the reading development of PWDs by removing PWDs from value-added computations of teacher effectiveness.)

I'll turn to senior NICHD scholars who noted that, "Finally, there is now considerable evidence, from recent intervention studies, that reading difficulties in most beginning readers may not be directly caused by biologically based cognitive deficits intrinsic to the child, but may in fact be related to the opportunities provided for children learning to read." (p. 378)

In other words, most kids who fail to learn to read are victims of inexpert or nonexistent teaching. Or, put another way, they are teacher disabled, not learning disabled. Only when American school systems and American primary-grade teachers realize that they are the source of the reading problems that some kids experience will those kids be likely to get the instruction they need from their classroom teachers.

As far as "duplication of services" this topic has always bothered me because if a child is eligible for Title i services I believe that child should be getting those services. As far as fragmentation of instruction this does not occur when school districts have a coherent systemwide curriculum plan that serves all children. But most school districts have no such plan and so rather than getting more expert and more intensive reading lessons based on the curriculum framework that should be in place, struggling readers get a patchwork of commercial programs that result in the fragmentation. Again, that is not the kids as the problem but the school system as the problem. Same is true when struggling readers are being "taught" by paraprofessionals. That is a school system problem not a kids problem. In the end all of these school system failures lead to kids who never becomes readers.

Good answer, Dick. Thanks. Basically, the purpose of these efforts shouldn't be to identify kids who will qualify for special education, but to address the needs of all children from the beginning. Once children show that they are not responding adequately to high-quality and appropriate instruction, then intensification of instruction—whether through special education, Title I, or improvements to regular classroom teaching—should be provided. Quality and intensity are what need to change, not placements. Early literacy is an amalgam of foundational skills that allow one to decode from print to language and language skills that allow one to interpret that language. If students are reaching average levels of performance on foundational skills, they are attaining skill levels sufficient to allow most students to progress satisfactorily. If they are not progressing, then you need to look at the wider range of skills needed to read with comprehension. The focus, intensity, and quality of instruction should be altered when students are struggling; the program placement or labels, not so much.

Sunday, July 21, 2013

The Lindsay Lohan Award for Poor Judgment or Dopey Doings in the Annals of Testing


Lindsay Lohan is a model of bad choices and poor judgments. Her crazy decisions have undermined her talent, wealth, and most important relationships. She is the epitome of bad decision making (type “ridiculous behavior” or “dopey decisions” into Google and see how fast her name comes up). Given that, it is fitting to name an award for bad judgment after her.

Who is the recipient of the Lindsay? I think the most obvious choice would be PARCC, one of the multi-state consortium test developers. According to Education Week, PARCC will allow its reading test to be read to struggling readers. I assume if students suffer from dyscalculia they’ll be able to bring a friend to handle the multiplication for them, too.

Because some students suffer from disabilities it is important to provide them with tests that are accessible. No one in their right mind would want blind students tested with traditional print; Braille text is both necessary and appropriate. Similarly, students with severe reading disabilities might be able to perform well on a math test, but only if someone read the directions to them. In other cases, magnification or extended testing times might be needed.

However, there is a long line of research and theory demonstrating important differences in reading and listening. Most studies have found that for children, reading skills are rarely as well developed as listening skills. By eighth grade, the reading skills of proficient readers can usually match their listening skills. However, half the kids who take PARCC won’t have reached eighth grade, and not everyone who is tested will be proficient at reading. Being able to decode and comprehend at the same time is a big issue in reading development. 

I have no problem with PARCC transforming their accountability measures into a diagnostic battery—including reading comprehension tests, along with measures of decoding and oral language. But if the point is to find out how well students read, then you have to have them read. If for some reason they will not be able to read, then you don’t test them on that skill and you admit that you couldn’t test them. But to test listening instead of reading with the idea that they are the same thing for school age children flies in the face of logic and a long history of research findings. (Their approach does give me an idea: I've always wanted to be elected to the Baseball Hall of Fame, despite not having a career in baseball. Maybe I can get PARCC to come up with an accommodation that will allow me to overcome that minor impediment.)  


The whole point of the CCSS was to make sure that students would be able to read, write, and do math well enough to be college- and career-ready. Now PARCC has decided reading isn't really a college- or career-ready skill. No reason to get a low reading score just because you can't read. I think you will agree with me that PARCC is a very deserving recipient of the Lindsay Lohan Award for Poor Judgment; now pass that bottle to me, I've got to drive home soon.