
Sorry for the light blogging, kids. I've been on vacation in sunny Nevis for the past week. Should be back on my game shortly.
The CSRQ Center reviewed 37 quantitative studies for effects of Accelerated Schools on student achievement. Three studies met CSRQ Center standards for rigor of research design. The Center considers the findings of these three studies conclusive, which means that the Center has confidence in the results of these studies. About one third of the results reported in these studies demonstrated a positive impact of Accelerated Schools on student achievement, and the average effect size for these significant results is +0.76.

The AS studies can be found in Appendix A of the report.
The CSRQ Center reviewed 115 quantitative studies for effects of SFA on student achievement. Thirty-three of these studies met CSRQ Center standards for rigor of research design. Upon review, the Center considers the findings of 31 of these studies conclusive, meaning the Center has confidence in the results reported. The findings of the other 2 studies are considered suggestive, which means the Center has limited confidence in them. Overall, the 33 studies report a mix of results showing positive effects and no effect of SFA; of the 91 separate achievement test findings across the 33 studies, just over half (52%) demonstrate a statistically significant positive impact. The average effect size of these positive effects is +0.63.

The SFA studies can be found in Appendix U of the report.
The CSRQ Center reviewed 56 quantitative studies for effects of DI on student achievement. Twelve studies met CSRQ Center standards for rigor of research design. The Center considers the findings of 10 of these studies conclusive, which means that the Center has confidence in the results reported. The findings of two studies are considered suggestive, which means the Center has limited confidence in the results. The findings in the conclusive and suggestive studies showed mixed results: some studies demonstrated a positive impact of DI on student achievement and other studies showed no significant effects. About 58% of the findings reported in the studies that met standards demonstrated positive effects; the average effect size of those significant findings was +0.69.

The DI studies can be found in Appendix K of the report.
On pages 48 and 51, the meta-analysis shows that 17 studies lasted less than a year and 17 lasted over a year. The effect size can be calculated per comparison and per study, but all of the results show large effect sizes: .95 for studies lasting less than a year and .78 for studies lasting more than a year... On page 44, the age of the publications was analyzed (1972–1980: 6 studies, 1981–1990: 22 studies, 1991–1996: 6 studies) and all of the effect sizes were large (.73, .87, 1.00, respectively).
Fifteen of the studies were conducted by researchers with some connection to Direct Instruction. In contrast, the majority of the studies (18 studies) were conducted by non-DI-connected researchers. The effect size for studies by DI-connected researchers was .99, a large effect size. The effect size for studies by non-DI-connected researchers was .76, also a large effect size.
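A quick aside on what calculating the effect size "per comparison and per study" means: a single study often reports several treatment-control comparisons (different grades, subjects, or tests), so a meta-analyst can either average across every comparison or first average within each study so that each study counts once. Here's a minimal sketch of the two aggregations; the effect sizes below are invented for illustration, not taken from the DI meta-analysis:

```python
# Hypothetical standardized effect sizes; each inner list holds the
# comparisons reported by one study (all numbers are made up).
studies = [
    [0.9, 1.1],        # a study reporting two comparisons
    [0.4],             # a study reporting one comparison
    [0.5, 0.8, 0.8],   # a study reporting three comparisons
]

# Per comparison: pool every finding, so multi-comparison studies weigh more.
all_comparisons = [d for study in studies for d in study]
per_comparison = sum(all_comparisons) / len(all_comparisons)

# Per study: average within each study first, so each study counts once.
study_means = [sum(s) / len(s) for s in studies]
per_study = sum(study_means) / len(study_means)

print(f"per comparison: {per_comparison:.2f}")  # 0.75
print(f"per study: {per_study:.2f}")            # 0.70
```

The two averages differ whenever studies report different numbers of comparisons, which is why a careful meta-analysis, like the one quoted above, reports both.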
[I]f the difference between sample averages is no more than two-tenths of a standard deviation, the difference should be regarded as small; a difference of half a standard deviation should be regarded as moderate; and a difference of eight-tenths of a standard deviation or larger should be regarded as a large difference.

These are the generally accepted standards for evaluating social science research.
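To make the arithmetic concrete, here's a minimal sketch of how a standardized mean difference (Cohen's d, the usual effect-size statistic behind these benchmarks) is computed. The means, standard deviations, and sample sizes are made-up numbers for illustration, not figures from any study discussed above:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Difference in group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical test scores: the treatment group averages 75, the control
# group 67, both with a standard deviation of 10 and 50 students per group.
d = cohens_d(mean_t=75, mean_c=67, sd_t=10, sd_c=10, n_t=50, n_c=50)
print(f"d = {d:.2f}")  # d = 0.80, a "large" effect by the standard quoted above
```

In other words, an effect size like the +0.76 reported for Accelerated Schools means the treatment group outscored the comparison group by about three-quarters of a standard deviation.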
The public risks whiplash keeping up with the latest twists and turns in the education research. One day, block scheduling is a fresh approach that all high schools ought to adopt. The next, it's derided as stale and moldy.

Wow. That's pretty good. A journalist actually understanding the quality of research. How often do you see a newspaper acknowledge that the research sometimes isn't all that good? Usually, they just accept the researcher's conclusion uncritically, especially if it coincides with their ideological agenda, and play up whatever data is handed to them.
...These conflicts reflect a seldom acknowledged truth in education: There's a lot of uncertainty about what works and what doesn't.
In the meantime, schools invest millions of dollars in innovations and reforms that sound impressive, but don't amount to much more than a new coat of paint on a rickety old house. And when those reforms don't deliver the promised results, schools cling to them anyway because they've spent too much money and time to walk away.
Now go back and re-read the part I emphasized; it's probably the most important thing you'll read this week. If you're a journalist or edu-blogger, understanding this little truism will help you make fewer silly statements, such as touting your favorite edu-theory that inevitably lacks any indicia of success.

The emphasis on student achievement in the federal No Child Left Behind Act and the requirement to use data to substantiate outcomes are prompting researchers to devise more reliable ways to capture effectiveness.
"The biggest revolution caused by No Child Left Behind is the revolution in education research," says Georgia State University's Gary Henry, a scholar in educational policy and evaluation. "We are getting better at figuring out what works. But what we are seeing is almost nothing that has a very large effect."
Even when the research shows a gain, it's a very small gain produced under the best of circumstances. That's because most reforms only tug at the edges and don't address central flaws in public education: A teacher's track record of improving student performance is hidden from public view, and that performance is not used as a factor in teacher salaries.