Certainly, any approach to learning theory that suggests that an experiment can be conducted in (say) a double-blind model in order to test hypotheses in terms of (say) achievement of learning outcomes in my view demonstrates a fundamental misunderstanding of the nature of the enquiry.

Stephen stopped by in the comments and defended his position
My arguments are not without foundation and evidence. What I am criticizing (as one who knows the field understands) is a particular approach to testing and evidence that has been subject to widespread criticism both inside and outside the sciences.
and suggested that we drop by his site and see what he had to say. So, I did. I found this.
That education is a complex phenomenon, and therefore resistant to static-variable experimental studies, does not mean that it is beyond the realm of scientific study. It does mean, however, that the desire for simple empirically supported conclusions (such as, say, "experiments show phonics is more effective") is misplaced. No such conclusions are forthcoming, or more accurately, any such conclusion is the result of experimental design, and not descriptive of the state of nature.
I'm not convinced. Education research is execrable most of the time. But legitimate education research does exist that permits general conclusions to be drawn. Often, in education research, we are content if we learn whether an intervention increases student performance; we don't necessarily need to know why the intervention works.
For example, let's put together a hypothetical education experiment for a reading intervention for grades K-2. Let's call it the RITE program. The study will consist of 4000 students in the intervention group and 4000 students in the control group, spread over many classrooms with many teachers. We will make sure demographic and other student factors are taken into account when splitting up the groups. The control group will be given a "research based, phonics reading program." The measurement tool we'll use is the SAT-9, which appears to be a good measure of reading ability. All students will receive pre-tests and post-tests to measure achievement. For good measure, we'll have an external evaluator conduct the study to reduce bias. Here are the results of our study:
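The balanced split described above is essentially stratified random assignment. Here is a minimal sketch of that idea; the student records, stratum labels, and `stratified_split` helper are all hypothetical illustrations, not anything from the actual study.

```python
import random
from collections import defaultdict

# Hypothetical student records: (student_id, demographic_stratum).
# The stratum label bundles the factors we want balanced across groups
# (e.g. grade level and socioeconomic status).
students = [(i, random.choice(["K-lowSES", "K-highSES",
                               "1-lowSES", "1-highSES",
                               "2-lowSES", "2-highSES"]))
            for i in range(8000)]

def stratified_split(students, seed=0):
    """Randomly assign students to control/intervention within each
    stratum, so demographic factors are balanced between the groups."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for sid, stratum in students:
        strata[stratum].append(sid)
    control, intervention = [], []
    for ids in strata.values():
        rng.shuffle(ids)          # randomize within the stratum
        half = len(ids) // 2
        control.extend(ids[:half])
        intervention.extend(ids[half:])
    return control, intervention

control, intervention = stratified_split(students)
```

Because the shuffle happens inside each stratum, both groups end up with roughly the same demographic mix, which is the whole point of controlling for those factors at assignment time rather than after the fact.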
The first bar is for the control group: 36% performed above the 50th percentile, while 33% performed below the 25th percentile. The next bar is for the intervention group who've gotten one year of the intervention. This group performs about the same as the control group. The next bar is for the intervention group in the program for two years; performance is starting to improve, but the students are not quite up to national norms yet. The last bar shows the intervention group who've been in the program for all three years. This group performs above the national norm -- 61% of students are performing above the 50th percentile, while only 14% are performing below the 25th percentile.
The effect size of the three-year intervention group is over one standard deviation, which is a large effect for an educational intervention and practically unheard of in education. Due to our large sample size, the results are statistically significant at a high level, and we can confidently expect to reproduce them by faithfully replicating the intervention. We don't need to know whether the intervention group used a whole language program or a phonics-based program, whether the students were exposed to rich literature, or any other messy detail. Such details, though important, are irrelevant to us. The same goes for other external factors: whatever affected the intervention group also affected the control group.
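To make "over a standard deviation" concrete, effect size here means a standardized mean difference such as Cohen's d. The sketch below uses invented score means and standard deviations purely for illustration; they are not the study's actual numbers.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical post-test scale scores (NOT the RITE study's data):
# intervention mean 650, control mean 600, both SDs 45, n = 4000 each.
d = cohens_d(650, 45, 4000, 600, 45, 4000)
print(round(d, 2))  # prints 1.11
# Assuming roughly normal scores, a d of about 1 means the average
# treated student outscores roughly 84% of the control group.
```

With n = 4000 per group, even a d a tenth this size would be statistically significant, which is why the interesting claim is the magnitude of the effect, not the p-value.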
And, while it is possible that other interventions might work as well as or better than this one, we know with a high degree of certainty that this intervention will significantly boost student achievement.
By the way, the study is real. RITE stands for the Rodeo Institute for Teacher Excellence (Houston). The evaluator was the Texas Institute for Measurement, Evaluation, and Statistics. The report was published in 2002. See more about it here (and here).
If we had more classroom research like this, and if schools adopted successful research-based interventions with fidelity instead of extracting only the parts they think are responsible for the success (invariably, they're wrong), student achievement in this country could improve markedly.