I've been perusing the various Background Papers for the Broader, Bolder Initiative looking for some valid research pertaining to an actual implementation of one of the Broader, Bolder ideas. What I didn't find was anything to back up claims like this one:
Nevertheless, there is solid evidence that policies aimed directly at education-related social and economic disadvantages can improve school performance and student achievement. The persistent failure of policy makers to act on that evidence—in tandem with a school-improvement agenda—is a major reason why the association between social and economic disadvantage and low student achievement remains so strong.
What I did find was a lot of observational studies reporting various correlations between traits associated with at-risk children and their families and the fact that these children tend to perform worse than their mainstream peers. The causal leap is then simply assumed.
For example, studies have shown that at-risk kids report in questionnaires that they experience more hunger than their middle-class peers, with hunger defined broadly enough to include everything from extreme malnutrition to missing a snack once a week. Since at-risk kids perform worse than their mainstream peers, Broader, Bolder reasons that hunger causes distraction and distraction causes lower performance. And, therefore, we should provide more nutrition to at-risk kids.
But since there's no such thing as the nutrition fairy, this broader, bolder plan has to be implemented somehow. For example, we might fund the public schools so that they can provide free and reduced-price lunches to qualifying at-risk students. Actually, we do that already. Then how about funding breakfast programs for qualifying at-risk students? We do that too. I'm confused.
You see, the question isn't whether we should be providing more nutrition to at-risk students. That low-hanging fruit has already been picked. The question now is whether expanding or supplementing these existing programs would produce educationally significant gains in student achievement.
This is the question that should have been answered before Broader, Bolder issued its manifesto. But it wasn't. At best, we have some small-scale research, usually rife with methodological flaws, conducted so that the researcher could provide evidence that his belief was correct. You can always find evidence that some kids learned something when you changed some condition. Round up enough experimental subjects and conduct enough experiments and you are bound to find some statistically significant, though not necessarily educationally significant, increase in performance, whether by chance, good fortune, or hoax.
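To make that concrete, here is a minimal simulation sketch of my own (not from Broader, Bolder or any cited study), assuming a simple two-group comparison of standardized test scores: give a trivial true effect of 0.05 standard deviations a large enough sample, and the t-test comes back "statistically significant" anyway.

```python
# Hypothetical illustration: a tiny true effect (0.05 SD) plus a very large
# sample yields a "statistically significant" result that is educationally trivial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20_000                                   # pupils per group
control = rng.normal(0.00, 1.0, n)           # standardized scores, no intervention
treated = rng.normal(0.05, 1.0, n)           # true effect: a mere 0.05 SD

t_stat, p_value = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value   = {p_value:.4f}")   # almost certainly below 0.05 at this sample size
print(f"Cohen's d = {cohens_d:.3f}")  # around 0.05 SD: educationally insignificant
```

The significance test clears the conventional 0.05 threshold, but no one would mistake a twentieth of a standard deviation for a meaningful gain in achievement.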
What we want is research in which the researcher starts with the assumption that his idea is wrong and then collects evidence showing that belief to be either false or not false. This is called testing the null hypothesis. Kozloff gives us an example of this type of research:
I believe program X works, but I'm going to assume that it doesn't work and I'm going to collect data to try to show that it doesn't work. If the data do not show that X does not work, I will conclude that maybe it does work. Maybe.
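Here is one way to picture that stance in code, a minimal sketch of my own (the hypothetical "program X", the simple two-group study, and the conventional 0.05 threshold are all assumptions on my part), in which "X does not work" stays the conclusion unless the data make it untenable.

```python
# Sketch of Kozloff's stance: assume program X does NOT work (the null hypothesis)
# and only back away from that assumption if the data refuse to cooperate with it.
import numpy as np
from scipy import stats

def evaluate_program_x(treated_scores, control_scores, alpha=0.05):
    """Return a verdict on hypothetical program X, starting from 'it doesn't work'."""
    t_stat, p_value = stats.ttest_ind(treated_scores, control_scores)
    improved = np.mean(treated_scores) > np.mean(control_scores)
    if p_value < alpha and improved:
        return "The data failed to show that X does not work. Maybe it works. Maybe."
    return "No reason to reject the assumption that X does not work."

# Usage with made-up data (scores in standard-deviation units):
rng = np.random.default_rng(1)
print(evaluate_program_x(rng.normal(0.2, 1.0, 60), rng.normal(0.0, 1.0, 60)))
```

Note the asymmetry: the burden of proof sits on the program, not on the skeptic.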
We don't see this kind of research cited by Broader, Bolder. What we see is "research" that attempts to persuade us to join the researcher in accepting his beliefs, from a researcher not very interested in the possibility that he is wrong.
Broader, Bolder needs a healthy dose of humility, especially since so many of its bromides remain untested.
(Picture adapted from Telling the Difference Between Baloney and Serious Claims About What Works, Kozloff and Madigan, DI News, Summer 2007)
8 comments:
I respectfully request permission to use the term "educationally significant" in future policy discussions with everyone I meet. This is an excellent phrase and should be easily understandable vs. trying to explain the nuances of effect size and statistical significance.
Good post. On the specific solution discussed, adding money to school food programs: I agree that the jump to such a solution is illogical, but knowing that it will be done anyway, a secondary argument against school food is that it can do harm if students are consuming raw sugar (orange juice) or foods that Asians and Latinos (my school's main population) are prone to be allergic to (milk products).
Bill, feel free to use the phrase "statistically significant evidence of educationally insignificant outcomes," especially when discussing NSF-funded math curricula. I do--and have for some time.
Hey, guys, "statistically significant evidence of educationally insignificant outcomes" is a perfect description of the What Works Clearinghouse. It also applies to the great bulk of the ed research literature. "External validity vs. internal validity" of a design and "practical vs. statistical significance" make the same point, but Eric's phrasing is clearer.
I like Eric's formulation. Though I'm not so sure that the WWC is following the spirit of it. Dick, weren't you the one who wrote that the WWC basically just blesses the research ceremony? Which is, I think, exactly right. I mean, how do you let Reading Recovery get away with using the Clay Survey as the testing instrument and Everyday Math get away with suspicious post hoc research?
I'm flattered. Here's how it came about. I investigated some TERC claims and saw that the effect sizes were real small (though mostly positive) and statistically significant (Who is the audience for this publication?!!). The little voice in my head (maybe it's a conscience--I'm not sure it works with all education researchers or journal editors) said, "Huh, this is statistically significant evidence of educationally insignificant outcomes."
Anyway, thanks for the pointer to Telling the Difference Between Baloney and Serious Claims About What Works (Powerpoint is on the web). I'd like to mail hardcopy to my Governor from an anonymous out-of-state address. Here's why.
What would be even more useful is a snappy term for results that are statistically significant yet have small effect sizes, i.e., educationally insignificant results.
That's all I see in this socially oriented education research.
Here's a rough approximation of the kind of preposterous claims I came across.
... We found that by feeding kids a mid-morning snack, self-reported grades, though not achievement test results, rose by 0.06 standard deviations in science. Therefore, we consider this conclusive evidence that we can solve all our education problems by feeding kids more food during the school day.
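For perspective, here is a back-of-the-envelope sketch of my own (assuming normally distributed scores; the 0.06 figure is just the made-up number above) of what such a gain means in percentile terms:

```python
# What does a 0.06 SD gain buy you, assuming normally distributed scores?
# An average student moves from the 50th percentile to roughly the 52nd.
from scipy.stats import norm

effect_size = 0.06                          # hypothetical reported gain, in SD units
new_percentile = norm.cdf(effect_size) * 100
print(f"A 50th-percentile student moves to about the {new_percentile:.1f}th percentile")
```

A gain of about 2.4 percentile points, in other words.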
Sorry, Ken, "statistically significant evidence of educationally insignificant results" will just have to do for now. The snappy phrase generator is already working overtime.
What do you think of, "Lower Standards, Higher Prices--because special interests deserve special treatment?"
Won't the election-year action be with the NEA's Great Public Schools For Every Student by 2020 rather than Broader, Bolder? (Do we need a snake oil label generator and companion education buzzword bingo card generator?)