But do they?
There is quite a bit of evidence that they do not. It shows that teachers do not know how to improve a deficient curriculum and that teacher performance is limited by the instructional program they use. The ceiling on student performance is set by the curriculum, not by the ability of the teacher.
In 1981, Zig Engelmann and Don Steely performed a study that directly addressed the issue of how much teachers rely on instructional material, and how effectively they interpret, expand, and ask questions to teach skills and concepts that are presented in the students’ textbook. It is important to note that the teachers in this study worked with average students, not at-risk students.
The researchers first analyzed the most popular reading programs in grades 4–6. According to Engelmann, all the programs had the same format:
They presented a series of topics over the course of the year—main idea, fact versus opinion, context clues for word meaning, cause and effect, key words, and a few others. Each lesson would take a few days. The lessons were in cycles, so that a lesson on main idea would occur now and would be followed by lessons on all the other topics. After this round of lessons, the cycle would be repeated to further develop each topic. On average, the lessons for any given topic occurred about every 50–60 school days.
The researchers interviewed 17 teachers with questions about critical beliefs and facts. The teachers believed that 86% of children should master any skill that is taught, but that only 58% could master main idea.
After the interview, the teachers were taped teaching two topics from the program they used. The teachers were taped teaching main idea and another topic selected by each teacher. The teachers were instructed to teach the way they normally would.
The researchers analyzed the teaching and compared it to the directions and specifications provided by the program. They also designed tests that evaluated how much of what was taught was learned by students. The testing occurred immediately after the teaching of each topic. Each test was custom designed, so that it used the same language specified in the book and items very similar to those the teacher had presented.
The researchers had analyzed the programs the teachers were to use before the experiment:
The analysis of the programs (which occurred before the teaching) implied that the teaching would be unsuccessful... The problem was created by the programs and by the examples they presented. For the first example, the main idea was the first sentence. For the next two examples, the main idea was either the first sentence or the last sentence. The final example contained no sentence that expressed the main idea. This sequence is analytically crazy. You don’t give naïve students three examples of a sentence expressing the main idea followed by an example that breaks this rule, without warning or preparation.
The study noted many inconsistencies between what the teachers said they did and what they actually did. For example, the tapes showed that the teachers followed the specifications of the program very closely, yet their verbal reports indicated that they deviated from the directions far more than they actually did. They asked a fair number of additional questions and added some explanations, but they presented all the examples the material indicated and followed the wording the teacher's material suggested.
Here are Engelmann's conclusions from the study:
The comparison of their teaching with the directions and specifications of the programs they used showed that no teacher taught better than the program. In other words, they tried to improve on what the program presented, but they didn’t know how. The fact that no teacher taught better than the program specifications casts doubt on the assertion that teachers are able to adapt and respond to the students in a way that is superior to a canned program. The program may be viewed as a limiter. With only occasional exceptions, a careful analysis of the program reveals how the best teachers will perform. Where the program has logical problems, the observed teaching has predictable performance problems.
The data showed that only ten percent of the students mastered main idea and only a little more than half of them got a score above 50 percent correct. The topic that had the lowest scores was context clues. Only 15 percent of the students got half or more of the items correct. No student achieved mastery. Overall performance across all topics showed that 12 percent of the students achieved mastery at the level of 90 percent correct. Interestingly, the topics that educators suggested were most important—higher-order skills like identifying main idea and using context clues—were the areas of poorest student performance.
The questionnaire we designed revealed that the teachers were aware of the mastery problems. Teachers indicated that over half of their students needed one more week of instruction on just main idea. We sent the same questionnaire we used with the teachers we taped to 3,000 other teachers. About 500 responded. Their responses were quite consistent with those of our teachers.

I wrote an article on this research for the Association for Direct Instruction News (Spring 1982). I ended with this observation:

This study made me feel very sad, not so much because the results surprised me, but because the tapes of the teachers revealed both concern and a lot of raw talent. Most of the teachers who volunteered for this study were clearly intelligent people who were trying very hard to do an important job. Their verbal responses and the questionnaire responses suggest that these teachers are quite aware of the … learning problems that their students experience. They know, for example, that students tend to confuse the title or the first sentence with the main idea. Teachers simply don’t know how to avoid this problem, how to teach in a way that will help solve it, and how to provide explanations and examples that correct the problem. As it is, their talent, their potential to be super teachers, is unfulfilled, in the same way the potential of their students is.
(This passage is adapted from pp. 24-27 of The Outrage of Project Follow Through, chapter 6.)
Corroborating this evidence is the flat trajectory of student achievement on the NAEP over the past 30 years. Lots of fads have come and gone during those 30 years, and all the while student performance has remained essentially flat. All the curricular tinkering was for naught. In fact, we put this theory to the test on a very large scale in the second half of Project Follow Through. The results were predictable:
There were 15 self-sponsored districts, and many implemented in more than one school. If Follow Through had been limited to the 15 self-sponsored communities, it still would have been the largest educational experiment ever conducted.
The performance of these sites would be important to those in the educational community who believed that teacher autonomy and collaboration would transform failed schools into effective ones. Nearly all the self-sponsored sites promoted teacher autonomy and school autonomy. They permitted enormous latitude in how teachers developed material and practices to meet the needs of children, and nearly all had provisions for teachers working collaboratively to meet the children's needs. If judged on the appeal of goals and practices, these sites should have been 10s. In fact, all failed. Incredibly, the self-sponsored sites performed below the average of the sponsored sites, which was considerably below the level of children in Title I schools.
Instructional innovations are not coming from our teacher corps, from schools, or from our education schools. And there's no reason to believe that any of these institutions is capable of anything more than the continuation of the status quo. Teachers do not seem to know best; they seem only to know the status quo. Improvements in education won't be coming from inside the system, at least not voluntarily.
So how many ed schools use Zig's book? Surely educators ought to be able to assess arguments regarding efficacy. And why would a university institutional review board sign off on experiments that don't leverage what is already known? Moreover, "Professional educators shall maintain high levels of competence throughout their careers" (Pennsylvania's Code of Professional Practice and Conduct for Educators).
The "thorough and efficient" education clause of Pennsylvania's constitution has been held to confer a fundamental right on the state's children. Where is their advocate?
Where are teachers really empowered to make meaningful changes? Some of us would like to, but the educrats stand in the way.
I'm skeptical that traditional methods will be effective with the lower half of the curve. Their track record with this segment is not good. So tell me: what do you, as an educator, think the solution is?
Your suggestion from another thread about [flexible] ability grouping is a good start.
If you are skeptical that traditional methods will work with the lower portion of students (lower 25%, at risk), what will? I'm a high school math teacher, and the suggestions I always get are "hands-on activities", "cooperative learning", and "relate it to real life". My experience (21 years) tells me that there is nothing magical about any of these, and often they simply waste time and do not enhance learning. (They might be fun, but the goal is to learn.)
I have yet to see lessons that teach exponents, polynomials and factoring with any depth that are also hands-on or related to real life situations that a student in the "lower portion" would accept as valid.
Is there a source for this? The things I've seen barely touch the surface of such topics.
I'm almost ready to throw in the towel. Sometimes I even wonder why students at this level are studying anything more than basic algebra (signed numbers, equations, graphs).
I respect your opinions. Thank you for your responses.
No, none of these things is going to work. What I think is needed is explicit teaching plus more. That "plus more" is quality-control measures.
Knowledge acquisition: Teaching needs to be clear so that students learn what is intended without inducing misrules. Teaching must be done in small increments, and students must be questioned frequently for feedback so that corrections can be administered immediately. New material must build on old information.
Knowledge retention: Once material has been learned, sufficient distributed practice must be given so that the material is retained. Unit tests need to be administered often, and students must demonstrate mastery of the material before being allowed to proceed.
A Direct Instruction algebra program will be released this spring. They've been working on it and field testing it for four years. This would be the easy way out.
Surely educators ought to be able to assess arguments regarding efficacy.
But why should they? What's their motivation to pursue effective techniques? There's certainly no obvious benefit since ineffective techniques continue to attract adherents, and funding, long after their hollowness is exposed.
Since educators don't seem to be able to assess the efficacy of ideas and techniques, it would be worthwhile to ask the question as a question and not as a rhetorical device.
I think if teachers were familiar with the results of Project Follow Through that many would love to try DI.
But rarely do teachers make that kind of decision.
rarely do teachers make that kind of decision
I have no doubt that is the case, but I'm unclear about the specific mechanism. I'd expect teachers to go through a process of "unwrapping the standards" to produce a course of studies. Where actual curriculum choices arose, I'd expect them to weigh the available efficacy evidence.
Somehow it seems the "powers that be" hold their noses and say "ooooh icky" and dismiss DI, or some other cost-effective program. Why would a teachers' union allow teachers to be railroaded down a path of certain failure?
I can't tell you how many times I've sat around a table for "team planning" and wanted to pull my hair out. Lots of cutesy activities are thrown out, but no real methods of teaching or assessing are offered. Your previous comment regarding a procedure for teaching math will work in any subject area. The research clearly shows that small increments along with frequent assessment (not necessarily a pencil-and-paper test) provide the feedback teachers and students need.
I feel that expert content knowledge would also be a help but often I don't see it at the upper elementary level. It's hard to plan three-tiered questions on the higher-order thinking skills when you can't manipulate the content yourself.
I teach direct instruction and it works. I am not knocking the program at all, but if I stuck just to my DI program (Open Court Reading), my students would exit elementary school never having been guided through a chapter book. All of our stories are excerpts designed to teach a skill. What is lost is a love of reading. Field trips, music, school plays, PE, most of SS, and science are also lost as we fill our days with an extensive program that takes hours and hours of time to teach. It works, but there is a cost to children's school experience. It isn't the way I was taught at all—as a student, or as a teacher candidate.
Jane, your comment is full of many misconceptions.
Open Court is not a DI program. In the real DI programs, most of the stories read are chapter books, starting in the 2nd level.
Love of reading is lost when children don't learn to read well. In this respect, Open Court does a better job than many reading programs out there.
The full DI program, which includes reading, writing, math, and spelling takes 1/2 a day to teach. The other half of the day is free.
"The full DI program, which includes reading, writing, math, and spelling takes 1/2 a day to teach. "
I have been able to obtain a version of the reading program in Teach Your Child to Read in 100 Easy Lessons.
What is the best way to obtain the math, writing, and spelling components for someone in a homeschool context?
I agree and I disagree. I have met some talented teachers that do fabulous and effective things that are not in the curriculum. I also know lots of teachers who stick with the textbook... Now that I think of it, the teachers who are the most impressive tend to replace chunks of the book with things that work better rather than trying to add stuff to the curriculum. I've been reading Zig's book with interest--and one lesson that seems to come through in these passages is that most teachers aren't good curriculum designers. This was kind of surprising to me (I'm almost incapable of teaching a course more than once without redesigning it), and it's changing how I think about my students who are going to be high school teachers.
Since appropriate motivation on the part of teachers is assumed, and not to be discussed in polite company, maybe a list is in order. So, in no particular order and with no particular intent, a list of popular assumptions:
1) Parents aren't involved enough until they're a pain in the ass.
2) Education is far too ethereal to be measured by mere numbers.
3) Administrators are stupid, evil or crazy. Possibly, probably, a combination.
4) Of course public education has to be mandatory.
5) Of course education has to be tax-supported.
6) Of course education is under-funded. Always.
7) Teaching can be a creative endeavor if there's no need to demonstrate competence.
8) Teachers are simultaneously under-appreciated and saddled with unrealistic expectations.
9) Public education is for all kids but you can't teach them:
-if their English isn't good enough,
-if their parents didn't read to them,
-if they watch too much TV, play video games or listen to rap music,
-if they're hungry, nuts, angry or crippled,
-if they're not properly medicated,
-if they aren't white or Asian or
-if their parents don't make enough money.
There are others I'm sure but these are some of my favorites.