But do they?
There is quite a bit of evidence that they do not. It shows that teachers do not know how to improve a deficient curriculum and that teacher performance is limited by the instructional program they use. The highest level of student performance is dictated by the curriculum, not by the ability of the teacher.
In 1981, Zig Engelmann and Don Steely performed a study that directly addressed the issue of how much teachers rely on instructional material, and how effectively they interpret, expand, and ask questions to teach skills and concepts that are presented in the students’ textbook. It is important to note that the teachers in this study worked with average students, not at-risk students.
The researchers first analyzed the more popular reading programs in grades 4–6. According to Engelmann, all the programs had the same format:
They presented a series of topics over the course of the year—main idea, fact versus opinion, context clues for word meaning, cause and effect, key words, and a few others. Each lesson would take a few days. The lessons were in cycles so that a lesson on main idea would occur now and would be followed by lessons on all the other topics. After this round of lessons, the cycle would then be repeated to further develop the topic. On the average, the lessons for any given topic occurred about every 50-60 school days.
The researchers interviewed 17 teachers, asking questions about critical beliefs and facts. The teachers believed that 86% of children should master any skill that is taught but that only 58% could master main idea.
After the interview, each teacher was taped teaching two topics from the program they used: main idea and a second topic of the teacher's choosing. The teachers were instructed to teach the way they normally would.
The researchers analyzed the teaching and compared it to the directions and specifications provided by the program. They also designed tests that evaluated how much of what was taught was learned by students. The testing occurred immediately after the teaching of each topic. Each test was custom-designed so that it used the same language specified in the book and items very similar to those the teacher had presented.
Before the experiment, the researchers had analyzed the programs the teachers were to use:
The analysis of the programs (which occurred before the teaching) implied that the teaching would be unsuccessful... The problem was created by the programs and by the examples they presented. For the first example, the main idea was the first sentence. For the next two examples, the main idea was either the first sentence or the last sentence. The final example contained no sentence that expressed the main idea. This sequence is analytically crazy. You don’t give naïve students three examples of a sentence expressing the main idea followed by an example that breaks this rule, without warning or preparation.
The study noted many inconsistencies between what the teachers said they did and what they actually did. For example, the tapes of each teacher teaching showed that the teachers followed the specifications of the program very closely. However, their verbal reports indicated that they deviated from the directions more than they actually did. They asked a fair number of additional questions and added some explanations, but they presented all the examples the material indicated and followed the wording the teacher’s material suggested.
Here are Engelmann's conclusions from the study:
The comparison of their teaching with the directions and specifications of the programs they used showed that no teacher taught better than the program. In other words, they tried to improve on what the program presented, but they didn’t know how. The fact that no teacher taught better than the program specifications casts doubt on the assertions that teachers are able to adapt and respond to the students in a way that is superior to a canned program. The program may be viewed as a limiter. Except for an occasional exception, a careful analysis of the program reveals how the best teachers will perform. Where the program has logical problems, the observed teaching has predictable performance problems.
The data showed that only 10 percent of the students mastered main idea and only a little more than half of them got a score above 50 percent correct. The topic that had the lowest scores was context clues. Only 15 percent of the students got half or more of the items correct. No student achieved mastery. Overall performance across all topics showed that 12 percent of the students achieved mastery at the level of 90 percent correct. Interestingly, the topics that educators suggested were most important—higher-order skills like identifying main idea and using context clues—were the areas of poorest student performance.
The questionnaire we designed revealed that the teachers were aware of the mastery problems. Teachers indicated that over half of their students needed one more week of instruction on just main idea. We sent the same questionnaire we used with the teachers we taped to 3,000 other teachers. About 500 responded. Their responses were quite consistent with those of our teachers.

I wrote an article on this research for the Association for Direct Instruction News (Spring 1982). I ended with this observation:

This study made me feel very sad, not so much because the results surprised me, but because the tapes of the teachers revealed both concern and a lot of raw talent. Most of the teachers who volunteered for this study were clearly intelligent people who were trying very hard to do an important job. Their verbal responses and the questionnaire responses suggest that these teachers are quite aware of the … learning problems that their students experience. They know, for example, that students tend to confuse the title or the first sentence with the main idea. Teachers simply don’t know how to avoid this problem, how to teach in a way that will help solve it, and how to provide explanations and examples that correct the problem. As it is, their talent, their potential to be super teachers, is unfulfilled, in the same way the potential of their students is.
(This passage is adapted from pp. 24–27 of The Outrage of Project Follow Through, chapter 6.)
Corroborating this evidence is the flat trend in student achievement on the NAEP over the past 30 years. Lots of fads have come and gone during those 30 years, yet student performance has remained essentially flat. All the curricular tinkering was for naught. In fact, we put this theory to the test on a very large scale in the second half of Project Follow Through. The results were predictable:
There were 15 self-sponsored districts, and many implemented in more than one school. If Follow Through had been limited to the 15 self-sponsored communities, it still would have been the largest educational experiment ever conducted.
The performance of these sites would be important to those in the educational community who believed that teacher autonomy and collaboration would transform failed schools into effective ones. Nearly all the self-sponsored sites promoted teacher autonomy and school autonomy. They permitted enormous latitude in how teachers developed material and practices to meet the needs of children, and nearly all had provisions for teachers working collaboratively to meet the children's needs. If judged on the appeal of goals and practices, these sites should have been 10s. In fact, all failed. Incredibly, the self-sponsored sites performed below the average of the sponsored sites, which was considerably below the level of children in Title 1 schools.
Instructional innovations are not coming from our teacher corps, from our schools, or from our education schools. And there's no reason to believe that any of these institutions is capable of anything more than a continuation of the status quo. Teachers do not seem to know best; they seem only to know the status quo. Improvements in education won't be coming from inside the system, at least not voluntarily.