You mention a number of times that you don't believe Daniel Willingham has read the research or the papers he is writing about, which, if you were familiar with his work at all, you would realise is a ridiculously pejorative point of view, and I would suggest says more about you than him. Especially as your questioner specifically mentions and links to an article (I assume this one: http://goo.gl/h3ofUF), while you repeatedly refer to "his book" without mentioning which one.
Furthermore, as some of Willingham's writing has been amongst the most influential recent contributions to practice, referred to and used by practicing teachers, to admit to making judgements based on "short excerpts or off-shoots that have come to my attention" strikes me as rather extraordinary for someone writing under the byline "Distinguished Professor Emeritus of urban education".
I wrote that Professor Willingham’s claims reveal that he has read the research summaries/meta-analyses rather than the studies themselves, since he was ignoring key features of the original studies that were important to the claims he was making. You indicate that this is "ridiculously pejorative." It is not. It is commonplace among scholars. They read the actual studies in their own fields of study, but they usually rely on summary pieces when it comes to areas outside of their expertise (Shanahan, Shanahan, & Misichia, 2010). In fact, on Thursday I received a note from Dan admitting that to be the case in this instance, and if you check his blog entry from yesterday you will see that he does not take my criticism as pejorative, but accepts my conclusion, since I have first-hand knowledge of the research he was talking about.
You are correct: I mentioned his book without specifying which of his four books I was speaking of. However, since it is his first book on reading education after 30 years in the field of psychology, I didn’t think readers would be likely to confuse it with his books on other topics.
You claim that Willingham’s book is one of "the most influential recent contributions to practice referred to and used by practicing teachers." That may be true, but I don’t know how you would measure such influence. His work isn’t cited much in the practitioner literature, which is not surprising given how recently it appeared. As I indicated in my blog entry, I haven’t read his book. However, that is largely because I have been in the field for more than 40 years and have both extensive instructional and research experience addressing and studying the problems that he is opining on.

Scholars don’t usually spend considerable time on what is considered "tertiary literature" (as opposed to primary literature, the original empirical research studies, or secondary literature, the meta-analyses and systematic syntheses of such studies). Tertiary literature is usually not that informative to people in the field because they have already read the studies that such authors might use to form their opinions. Researchers usually wouldn’t read many books by practitioners either. I do, because it is possible that a teacher would have as-yet-unstudied insights that could be useful, but Professor Willingham is not a teacher, nor has he spent much time in schools, so that wouldn’t draw me to his book either. I’m glad that you are finding his insights useful, but you have to be aware that he is writing outside of his field of practice and research, so some of his claims may unintentionally be at odds with the facts of the matter, as was the case here. Tertiary literature often misses the nuance of the information being shared.
His insight that we can overdo strategy instruction is a good one and well reasoned from the summary evidence. His claim that two weeks of strategy instruction is sufficient, however, is a claim without any research support (his reasoning, not stated in the article, was that since the outcome effect is so big, you wouldn’t need as great a dosage as was provided in the studies).
Think of a medical analogy: research finds that if you have strep throat and the doctor puts you on a 10-day schedule of antibiotics, then 90% of patients will be cured of the infection by the end of the schedule and the infection will not recur within 30 days. There are no studies of a 2- or 3-day medication schedule with this malady, but there is a physician, a podiatrist let’s say, in any event someone outside this particular field of study, who reasons that since the effect is so big, the dosage could be too high; he writes a book saying that patients should stop the medication after only 3 days' use. I promise you most researchers and physicians in the area of Ear, Nose, and Throat medicine would reject such a claim without any direct research evidence, and frankly, without consideration of many other issues addressed in the original studies (like the differences in outcomes among patients with certain conditions).
Sorry you didn’t like the article. Good luck.
Interesting! I greatly enjoy learning more about what the research has to say about best practices.
I have a clarification question. You said, "Strategies are not all equal... The weakest: teaching students to think about how to respond to different question types (effect sizes so small that I wouldn’t waste my time)." As a long-time test preparation coach, I was stunned to read this. If I may, can I make sure I correctly understood what you're saying? By teaching them (or not) to think about how to respond to different questions, do you mean that reading instruction shouldn't primarily focus on *how to respond* to questions, or did you mean that they shouldn't be taught primarily to focus on *thinking* about how to respond, or, possibly, that they shouldn't be taught how to respond to *different* questions? Or maybe all of the above?
I teach students to think about and articulate their answers to reading test questions *before* looking at the answer choices (actually, I should qualify that: I encourage them to think about what the passage tells them that may be used to answer the question, so, in other words, it isn't the student's own answer but a passage-informed answer). I find this very helpful for many students, since it often prevents them from spending an inordinate length of time considering the legitimacy of each answer choice (a pitfall for many, I have found). I realize that my field involves a specialized application of approaches based (or not) on reading comprehension research, so perhaps the research you are referring to is not purporting to study impacts on test scores but rather long-term reading abilities. Depending on your perspective (a different can of worms), reading test scores and reading abilities may or may not overlap.
Regardless, I would be very interested in clarification as to what you specifically meant here. Though I can't speak for all in my profession, I personally do my best to utilize research-based best practices in my work with students.
In addition, I would definitely love to read more about the research that has studied these questions if you could kindly point me to it. Many thanks!
Yes, I do mean that all of those activities are a waste of time if the goal is improving reading achievement. Various strategies encourage teaching kids question types and how to think about those aspects of text (one popular example is QAR). Consistently, the research on these instructional approaches has found a tiny effect size (averaging about .15, which is a substantially smaller payoff than one gets from teaching kids to summarize what they are reading). Furthermore, the research is consistent and clear that teaching kids to respond to particular kinds of questions (such as those that we find on reading comprehension tests) cannot possibly work, because those questions are so highly correlated with each other that they are not actually tapping different cognitive outcomes. There is no consistent difference in performance between literal recall questions and inferential or higher-order thinking questions. There are no differences between main idea, comparison, drawing conclusions, etc. questions. There are no measurable differences between right there, think and search, author and me, etc. questions. Spend your time on teaching kids to read better and you'll be way ahead of the game.
Can I be clear, then?
The teaching of reading strategies is useful = yes. Does it have more impact if the reading content is directly related to the curriculum students are studying?
E.g. school A removes 6 children for “comprehension” intervention and builds this around an already-published SEN resource, whose content includes football, how to make a cup of tea, where pizzas came from, etc.
School B removes 6 children for the same purpose but builds this around non-fiction articles on the Russian Revolution to support their reading of Animal Farm in English. Is there any guidance on which will have greater impact? Everything I’ve read over the last few years suggests the latter.
Professor Shanahan, thank you for your informed comments. Can you direct me to the studies showing benefits of CRS in primary years?
Copyright © 2023 Shanahan on Literacy. All rights reserved.