Consider, for example, the meta-analysis of 2,575 studies of learning outcomes conducted by Fraser et al. (1987). These researchers studied the effect sizes associated with 27 instructional variables on learning outcomes. The four variables that yielded the largest effect sizes were reinforcement (Skinnerian rewards), acceleration (advanced activities for students with outstanding scores on college-level selection tests), reading training (coaching programs in reading techniques), and cues/feedback (mastery learning principles). These variables yielded effect sizes of 1.17, 1.00, .97, and .97 standard deviations, respectively (Fraser et al., 1987). These instructional variables are far removed from the direct teaching of higher order thinking skills or cognitive functions ... In fact, the variables in Fraser et al.'s study that most approximated thinking skills training--higher order questions and advanced organizers--yielded effect sizes of .34 and .23, respectively (Fraser et al., 1987).
The Fraser et al. study is "Syntheses of Educational Productivity Research," International Journal of Educational Research, 11, pp. 145-252.
How is it that a profession finds itself in a position of being diametrically opposed to its own research?