“I don’t think [the Clay Observation Scale is] any more of a concern than using DIBELS,” he said, referring to the Dynamic Indicators of Basic Early Literacy Skills, a test that is widely perceived to be the Bush administration’s favored measure for gauging students’ reading progress under Reading First. That test was devised by Reading First consultants and is being used to tout the federal initiative’s success.
Notice how Edweek carries Allington's water here by writing the lurid innuendo so Allington doesn't have to. Also notice how Allington never denies that the Clay Survey is an accurate measure of reading. A sort of non-denial denial.
Of course, DIBELS has more valid research supporting it as a testing measure than does the Clay Survey. In particular, the Oral Reading Fluency test being used as one measure in Reading First does correlate highly with real reading ability. To the extent it is biased, DIBELS favors readers who are good decoders. In contrast, the Clay Survey is biased in favor of readers who score well reading predictive text, in particular predictive text that the students have repeatedly reviewed in the Reading Recovery program, i.e., readers who can't necessarily read non-predictive text which hasn't been used in the Reading Recovery program.
Then Allington makes this nonsensical comment:
The question now is are we going to take all the interventions off the Reading First Web sites that don’t meet the What Works criteria. I don’t have a lot of confidence that anyone in Washington actually cares about the evidence.
First of all, Reading First only requires reading programs that are "based on" SBRR; it is not limited to validated programs. Second, Reading Recovery still has the sticky problem of not providing systematic and explicit instruction in phonics, which is an actual statutory requirement.
Then the Edweek article morphs into a poorly written Reading First hit piece which I won't waste time debunking. Then we get more water carrying:
Critics have also noted that most of the studies were conducted by researchers affiliated with Reading Recovery, which is not unusual among the studies the clearinghouse reviews.
Well, yeah, but the problem with this in Reading Recovery's case is that, with the exception of one independent study, all the positive research has been conducted by Reading Recovery researchers and, more importantly, that research contained serious methodological flaws. Namely, the Reading Recovery researchers systematically excluded students who failed to make progress in the program: about a third of the students, a significant portion. This isn't scientific research; it's junk science.
What is your source for exclusion data for the studies?
Reading Recovery only uses predictive text initially to establish early reading behaviors. The texts the children have to read in order to successfully finish the program are not predictive.