
Sunday, April 5, 2015

Response to Complaint about What Works Clearinghouse

I have recently encountered some severe criticism leveled at reviews and reviewers from the What Works Clearinghouse (see http://www.nifdi.org/research/reviews-of-di/what-works-clearinghouse). I am concerned about recommending this site to teachers as a resource for program evaluations. I'm wondering if you agree with the criticisms, and if so, where you would recommend teachers go for evidence-based program reviews. I know that the NELP and NRP reports are possibilities, but they are also static documents that do not get updated frequently with new findings, so some of the information really isn't current. Perhaps the Florida Center for Reading Research is an alternative? Do you have others that you would recommend?

I don’t agree with these criticisms and believe What Works Clearinghouse (WWC) has a valuable role to play in offering guidance to educators. I often recommend it to teachers and will continue to do so. It is the best source for this kind of information.

WWC is operated by the U.S. Department of Education. It reviews research claims about commercial programs and products in education. WWC serves as a kind of Good Housekeeping seal of approval. It is helpful because it takes conflict of interest out of the equation. WWC and its reviewers have no financial interest in whether a research claim is upheld or not.

I am an advisor to the WWC. Basically, that means I'm available, on a case-by-case basis, to help their review teams when questions come up about reading instruction or assessment. Such inquiries arise two or three times per year. I don't think my modest involvement in WWC taints my opinion, but the whole point of WWC is to reduce commercial influence on the interpretation of research findings, so it would be dishonorable for me not to be open about my involvement.

I wish the "studies" and "reports" you referred me to were as disinterested. The Direct Instruction (DI) organization has long been chagrined that the WWC reviews of DI products and programs haven't been more positive, and it should be noted that the authors of these reports have a rooting interest in the results.

Unlike the disinterested reviews of the Clearinghouse, which follow a consistent, rule-based set of review procedures developed openly by a team of outstanding scientists, these reports are biased, probably because they are aimed at poking a finger in the eye of the reviewers who were unwilling to endorse their programs. That's why there is so much non-parallel analysis, so many questionable assumptions, so much biased language, and so on.

For example, one of the reports indicates how many complaints have been sent to the WWC (62 over approximately 7 years of reviewing). This sounds like a lot, but what is the appropriate denominator: is it 62 complaints out of the total number of reviews, or 62 complaints out of the many individual decisions included in each of those reviews? Baseball umpires make mistakes, too, but we evaluate them not on the number of mistakes they make, but on the proportion of mistakes to decisions. (I recommend WWC reviews, in part, because they will re-review the studies and revise as necessary when there are complaints.)

Or, another example: these reports include a table citing the "reasons for requesting a quality review of WWC findings," which lists the number and percentage of times that complaints have focused on particular kinds of problems (e.g., misinterpretation of study findings, inclusion/exclusion of studies). But there is no comparable table showing the disposition of these complaints. I wonder why not? (Apparently, one learns in another portion of the report that there were 146 specific complaints, 37 of which led to some kind of revision, often just minor changes in a review for the sake of clarity. That's roughly one revision for every four complaints, which doesn't sound so terrible to me.)

The biggest complaint leveled here is that some studies should not have been included as evidence since they were studies of incomplete or poor implementations of a program.

The problem with that complaint is that issues of implementation quality only arise when a report doesn't support a program's effectiveness. There is no standard for determining how well or how completely a program is implemented, so for those with an axe to grind, any time their program works it must have been well implemented, and any time it doesn't, it must not have been.

Schoolchildren need to be protected from such scary and self-interested logic.



Wednesday, January 15, 2014

Is There Research on that Reading Intervention?


I am a reading specialist working in an urban school district with struggling readers in K-5.  Do you have any suggestions on intervention programs that you find the most beneficial to students?  Currently, we are using LLI (Fountas and Pinnell), Sonday, Read Naturally and Soar to Success, at the interventionist's discretion. Is there any research supporting or refuting these programs?  Is there another program that you find more effective?  We also use Fast Forward and Lexia as computer-based interventions.  What does the research say about these tools? 

Shanahan response:
The best place to get this kind of information is the What Works Clearinghouse (WWC). This is a kind of Consumer Reports for educators that will tell you whether commercial products have been studied and how they did. The benefits to you: all the information is in one place; it is provided by the U.S. Department of Education, so it won't be biased toward any particular company; and they vet the research studies to make sure the information is sound.


Some things to be aware of when you seek this information:

Don't read too much into the fact that there is no evidence on a program.
This happens a lot. Instructional programs aren’t like drugs; no one is required to prove that they work before they can be sold. While some companies do commission studies of their products, most do not. The key thing to remember is that a lack of research on a product does not mean that product doesn’t work. In such cases, I usually look to see if a product is as thorough or demanding as those that do have evidence.

Don’t overestimate programs that do have direct research support.
Programs do not have automatic effects. A positive result tells you that this program can work under some conditions and with some students. It means that in those circumstances this program did better than… whatever the control or comparison group did. It is good to know that someone was able to get a positive result with such a program (that should help teacher confidence), but often a program that works may not work in your circumstances or with your teachers or with your students. Just because something worked, that doesn’t mean that you could make it work.


Keep in mind that not all studies get reported.
A basic ethical obligation of a researcher is to report the results of their studies, even when the studies don't come out the way they wanted. Commercial companies don't have the same obligation. If a company commissions a study and it gets a positive result, they will allow it to be released; that isn't necessarily true when the results don't show that their product worked. As a result, the available research on a particular program or product may overestimate its impact. (That's one of the reasons I like that WWC is so strong on the evidence: they can't know about studies that got lost in a file drawer, but they can certainly make sure the available studies meet the highest evaluation standards.)

Pay attention to the control group.
In medicine, there are standards of care. Typically, a new treatment is compared with the standard of care, so that if it "worked," you know it did better than what you are already doing. In education, we have no shared standards of instruction, so you need to pay attention to what the intervention did better than. It might have done better than what you are already delivering, and that would certainly encourage you to change programs, but it might only have outperformed instruction that you, too, are already outperforming.