I'm a curriculum and instruction supervisor for a smaller district. We feel like we have a pretty firm grasp on assessing and diagnosing when it comes to phonemic awareness, decoding, fluency, and comprehension. However, we're struggling with vocabulary. Is there any assessment you would recommend that would give us a sense of whether a student is approaching standard or at standard in that area?
In recent years, I’ve become concerned about the amount of school testing.
My complaint isn’t with the annual accountability tests (though those are overdone, too). No, my grievance is with the many screening and monitoring tests that have reached epidemic levels in our schools.
It is sensible to be aware of student progress. But teachers can only use so much info and testing isn’t the only way to get it.
I oppose, for instance, weekly tests of oral reading fluency—they aren’t accurate enough, and fluency growth rates don’t justify such frequency. Testing that skill 2-4 times per year should be sufficient.
A better plan, perhaps, is to have teachers monitor performance during daily fluency lessons… a handful of observations within each half-hour lesson would provide dozens of looks per report card period. And there’d be little need for all that formal testing.
But I digress.
One of the big drawbacks of classroom assessment is that it overemphasizes foundational skills (e.g., phonemic awareness, phonics, fluency). Don’t get me wrong, those are perfectly reasonable things to dipstick, and their results really can help us to target instruction.
As sensible as that approach is, it tends to lead teachers to see the kids (and their needs) through the lens of these tests.
If we only monitor progress in foundational skills, then teachers can only see foundational skills. Reading instruction then devolves to that… kids either are “doing fine” with reading (even if they are struggling with language, comprehension, and composing), or they need extra help and that extra help will be aimed at, you guessed it, foundational skills—even if the kiddos need something else.
Teachers come to not even see those other aspects of literacy. After all, the principal isn’t monitoring kids’ success with vocabulary or comprehension. But if we’re talking about foundational skills—the ones regularly tested—then “oh doctor.”
Given all that, your desire to assess vocabulary is a laudable one.
I can find no reliable/valid vocabulary test that can easily be given multiple times per year, that is predictive of kids’ growth in reading, and that provides a good measure of student progress. There are lots of vocabulary measures, but none that fits that bill.
You can use something like the Peabody Picture Vocabulary Test (PPVT) which will provide a reasonably good estimate of overall vocabulary performance. But I’m not convinced that those scores would allow you to track student growth in any meaningful way.
That’s not a knock on the PPVT or its rivals. Vocabulary assessments just have not been designed to monitor vocabulary learning in any meaningful way (Pearson, Hiebert, & Kamil, 2007).
Vocabulary growth is just so diffuse. Kids watch a television show and, bang, they know a new word. The same goes for talking to a friend on the playground, eating dinner with mom, or taking part in a reading lesson. Vocabulary comes from pretty much everywhere (as a boy, I learned “serviette” from the Three Stooges).
Words can get pretty specialized or particular in their use, too. I now have an extensive French vocabulary. If I come across words in that lexicon, I can understand them; that is, I have a good receptive French vocabulary. But, s’il vous plaît, don’t ask me what the French word for something is, because my expressive vocabulary is embarrassing.
Keeping track of such dispersed and sundry learning is a formidable—and so far an impossible—task.
I think the best you can do is to adopt or create a formal vocabulary curriculum. Decide on a set of words and/or morphemes that the kids in your district will master by a particular grade level. Then teach the heck out of those words.
Vocabulary assessment, in this context, should aim to monitor student progress with the taught vocabulary. These results should correlate reasonably well with students’ overall growth in word learning. Kids who learn the most words from instruction will likely pick up a lot of words from their own reading, and so on.
As with most correlations, there will be kids whose learning isn’t typical.
For example, what about the kids who are smart but who don’t read much? They may learn most of the taught words without picking up many on their own. Or the kids who read a lot but aren’t particularly diligent students? They may look like they’re making no vocabulary progress while actually doing reasonably well on their own. Such exceptions will probably balance out.
This plan should work reasonably well—as long as the kids don’t already know most of the target words. Screening the kids on the year’s vocabulary list early on should give you a sense of that. Without that information, you could convince yourself that there was a lot of learning going on, when you were only seeing that the students already knew the target words. In such a case, your curriculum is undershooting the kids.
After the first year, you could even check out what growth on your vocabulary instrument would mean. There is no reason why you could not correlate student growth in vocabulary with your state test results.
I would discourage you from trying to do this with a multiple-choice instrument. That adds too many other variables to the assessment. Just print up a list of the words and have the kids write definitions. Don't try to get into fine-grained distinctions: the student either understood a word or not. You need to keep both the test design and the scoring simple enough that 3-4 administrations per year would be possible.
Thanks for keeping the attention on language and meaning.
Copyright © 2019 Shanahan on Literacy. All rights reserved.