May 18, 2010

Why It's So Difficult to Improve Educational Outcomes

To understand the problems of K-12 education (that is, getting students to learn what we think is important for them to learn) and the great historical difficulty we've had improving educational outcomes, you need to look at ground zero of the learning process.

I like Stephen Downes' metaphorical induction model of learning (with my clarifications).  If you haven't already, go read it and don't come back until you understand it. I'll reproduce the diagram below.

[Diagram: Downes' induction model, with the teacher/resource coil on top and the learner/student coil on the bottom]

There are three important factors in this model. 

  1. The top inductor represents teacher (or resource) effects.  Following the electronics metaphor, a teacher (or resource, such as a book) with more windings is more capable of transferring data to the student than a teacher with fewer windings.  The inductor windings represent teacher effectiveness.
  2. The bottom inductor represents student effects.  A student with more windings will be more capable of inducing a transfer of data from the teacher/resource than a student with fewer windings.  The student windings represent cognitive ability and other student factors.
  3. The distance between the two inductors represents the presentation of the data from the teacher/resource to the learner/student.  I've termed this distance the "inductive gap."  The teacher presentation represents what actually takes place in the classroom that affects learning: the curriculum, the classroom management, and the pedagogy.  A poor presentation makes learning more difficult by increasing the distance between the teacher and student inductors.  A good presentation reduces the distance, making learning more likely.  (A toy numeric sketch of this coupling follows the list.)
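To make the coupling concrete, here is a minimal sketch of the metaphor as a toy calculation.  This is my own illustration, not Downes' formulation: the function name and the exponential decay of the coupling are assumptions.  The physics behind it is real enough, though: a coil's inductance grows with its winding count (roughly L ~ N^2), and the mutual inductance between two coils is M = k * sqrt(L1 * L2), where the coupling coefficient k falls off as the coils are moved apart.

    import math

    def transfer_strength(teacher_windings, student_windings, inductive_gap):
        """Toy coupling model: more windings on either coil help;
        a wider inductive gap hurts."""
        l_teacher = teacher_windings ** 2        # a coil's inductance grows ~N^2
        l_student = student_windings ** 2
        k = math.exp(-inductive_gap)             # assumed decay; any decreasing function works
        return k * math.sqrt(l_teacher * l_student)   # M = k * sqrt(L1 * L2)

    # Same teacher and student; only the quality of the presentation changes:
    print(transfer_strength(10, 5, 0.5))   # good presentation: ~30.3
    print(transfer_strength(10, 5, 3.0))   # poor presentation: ~2.5

Notice that the gap term dominates: past a certain distance, no amount of teacher windings compensates for a bad presentation.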
The benefit of the above model is that not only does it capture most of the important variables, but it also illustrates that what the teacher intends to teach (the data) is not necessarily what the student has actually learned (knowledge).  This is because it is difficult for the teacher to ascertain what exactly the student has learned.  Such feedback can only be gathered indirectly with imperfect testing instruments, and it must also be properly interpreted by the teacher. (The drawback of the model is that it relies on the reader's knowledge of the operation of a rather esoteric electronics circuit.)

Complicating matters even further is that the model is different for each piece of data that the teacher wants transferred to (i.e., learned by) the student.  Thousands of these transferences must take place during the K-12 cycle.

Moreover, the model ignores student retention effects.  Just because the student has actually induced the intended knowledge from the teacher presentation does not mean that the knowledge will be retained by the student.  The ravages of forgetfulness are brutal and must be contended with by the teacher.  Often retention is ignored or, worse yet, derided by educators ("drill and kill").
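The standard way to quantify those ravages is an Ebbinghaus-style forgetting curve, R = exp(-t/S): retention R decays exponentially with elapsed time t, and practice is what increases the stability S of the memory.  A minimal sketch, with made-up stability numbers:

    import math

    def retention(days_elapsed, stability):
        """Ebbinghaus-style forgetting curve: R = exp(-t/S)."""
        return math.exp(-days_elapsed / stability)

    # Assumed stabilities: a single exposure vs. repeated distributed practice
    print(retention(30, stability=5))    # ~0.002 -- one lesson, recalled a month later
    print(retention(30, stability=40))   # ~0.47  -- same material after repeated practice

Whatever the exact numbers, the shape of the curve is the quantitative case for the practice that "drill and kill" critics deride.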

As Stephen Downes puts it, what teachers are really doing is training a neural net in each student's brain.  That training does not happen instantaneously.  It happens over time and is imperfect.  See Willingham's article on inflexible knowledge. This makes the teacher's interpretation of the learner's feedback even more difficult as the neural network is being trained.
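Here is a minimal sketch of what gradually "training a neural net" looks like: a single perceptron-style unit learning the AND function by error correction.  (A toy, obviously, not a model of a brain; the learning rate and epoch count are arbitrary.)  The error count bounces around before it settles, which is exactly why the learner's feedback is hard to interpret mid-training.

    # Perceptron learning AND: gradual and uneven, not instantaneous
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for epoch in range(20):
        errors = 0
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out              # feedback from this one trial
            errors += abs(err)
            w[0] += lr * err * x1           # small weight adjustments...
            w[1] += lr * err * x2
            b += lr * err                   # ...accumulate over many trials
        print(f"epoch {epoch}: {errors} wrong")

Mid-training, the unit is neither ignorant nor competent, and a single probe can't tell you which connections are still wrong.  That is Willingham's inflexible knowledge in miniature.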

To help show how the model plays out in the real world, its implications for instructional design, and how the "inductive gaps" affect the learning process, I'm going to provide a lengthy quote from Zig Engelmann's upcoming book.  In the passage Zig calls the inductive gaps "inferential gaps."


[We] have learned from extensive applications that nothing may be assumed to be taught to at-risk children unless it appears on at least three consecutive lessons...When we apply this formula to the first draft of the material, we presume that when the program is field-tested, our estimates will be confirmed. However, we remain perfectly aware that in some cases the practice estimates are wrong. They may vary in either direction—providing too much practice, or providing too little. More frequently the error is in the direction of too little practice.

An important issue that we must address in creating a sequence of activities is how large the inferential gaps are between one exercise type and the next type in the sequence…

… If we were to unintentionally design a program with enormous gaps for teaching reading, we might first teach letter names, teach the short sounds for the vowels, and then require learners to sound out regularly spelled words like run and hat. Obviously, the gap between the exercise types is large because students haven’t been taught the sounds for the consonants, or how to blend the sounds together to identify words.

Although some bright students may be able to formulate workable inferences about how to derive the sounds from the names of some consonants, most students will fail the instruction because of the large gap between what they know and what they are expected to do. Discovery learning assumes that students are able to fill large inferential gaps between what they know and what they are expected to learn. Proponents of structured instruction believe that only small sequence-related inferences are appropriate.

Note that there will always be inferential gaps between exercise types. The only issue is how large they are. This is an empirical issue. If we believe that students should be successful, we would design instruction so the inferential gaps are small enough for students to succeed. If students do not succeed, their failure suggests that the inferential gaps are too large, which means that the sequence should be redesigned to make the gaps smaller. Direct observation of how students respond to a sequence is necessary because that is often the only way these gaps are identified. Typically students are progressing through a sequence well and then encounter an exercise type that is too difficult for them. If the exercise seems clear and apparently provides adequate practice, the problem is not with this exercise type, but with the sequence of activities. In other words, student performance implies that there is a gap in the sequence that is too large for the students.

As suggested above, the size of reasonable gaps is not the same for all students. The children we have worked with have ranged from those who could not take even the smallest imaginable steps without considerable practice, to children who drew correct inferences that were far in advance of what they had been taught.

At the extreme low end was a pair of twins who had spent the first four years of their lives with virtually no human contact and who could identify some real objects, like a shoe, a ball, and a cup, but could not identify any two-dimensional representations. Even when the teacher prompted the relationship by holding a red ball next to a picture of a red ball, the children could not identify the object in the picture. After many trials, they could identify pictures of balls, shoes, and cups without the corresponding three-dimensional object next to it; however, these children had to practice identifying more than 10 illustrated objects before they could generalize and identify an illustrated object that had not been taught.

At the other extreme are the highly talented students who make a mockery out of the three-lesson rule. They learn names of new things in only a couple of trials and are able to take great leaps from what they know to remotely related inferences they are scheduled to learn much later.

If the designer assumes that every minor variation in what is to be taught requires explicit instruction, the instructional sequence may be many times more laborious than it needs to be for the average learner who goes through the program. On the other hand, if the designer makes elitist assumptions that characterize analyses of Dewey and Bruner, the inferential leaps required by the program are so large that they may be made by fewer than one fourth of the students. For example, a math program that presents a single example of each problem type assumes that students will formulate an algorithm for solving the problem presented that will generalize to the full set of related problems that are not taught. In fact, possibly only one fourth of the average students will solve the problems or benefit from the experience of struggling with them. The percentage of low performers making this leap is virtually zero percent.

The only way to determine whether the program is highly effective with the intended student population is to provide an empirical test of the sequence. This test will not only identify the missing inferences but will reveal both their character and size. In other words, they provide the designer with precise information about how to address the missing inference.

I'm going to highlight one observation from Zig's passage:

The children we have worked with have ranged from those who could not take even the smallest imaginable steps without considerable practice, to children who drew correct inferences that were far in advance of what they had been taught.

This is the observation that does quite a bit of educational mischief.  Educators draw all sorts of bad conclusions from the students who can draw "correct inferences that [are] far in advance of what they had been taught" and apply those conclusions to the students who cannot "take even the smallest imaginable steps without considerable practice."

But I don't want to get too far ahead of myself just yet.

In my next post I'll discuss why the various ed reforms are doomed to fail: their real-world effects often fail to address, or have only a limited effect on, the real educational variables illustrated in the model above.

5 comments:

Kathy said...

I experience the same thing Zig writes about in this post in my school's reading tutoring program.

Now I know that most folks don't want to hear about programs that one is using, but I simply cannot sit back and not write about the reading program I use at my school. It does exactly what Zig describes as needed, and what you describe.

The program, developed in the late '60s by the Southwest Regional Laboratory, a think tank paid for by Federal dollars to develop a reading method for beginning reading, is the type of instruction described by Zig. Dick Schutz, who posts here, was the director of this laboratory.

The kdg program, which I currently use in my school's tutoring program, consists of 52 code-sequenced books. Code is slowly introduced and practiced in each book. Dick will have to post the exact number of repetitions that his research found, but I think new code is repeated 5 times in the book it is introduced in and then repeated another 5 times in the next 5 books. I might not have these numbers exactly correct, however.

Children are never asked to make huge leaps to new material. The code is highly controlled, and a child is never asked to read words whose code he has not previously been taught. The large amount of reading (52 books for the kdg level) is where all the practice takes place.

However, what sets these books apart from anything else I have ever used is that all the learning is transparent to the instructor, so there is really no need for testing. It is not a mystery what a child is learning. You simply have to listen to him read the books. If he gets stuck on a word like /sit/ and cannot say the sound for the short /i/, then you know the child has not learned the short /i/. If a child is sounding out every single word as he reads, you know the child needs to reread the book to work on fluency. The child's progress guides the instructor. The instructor does not guide the child.

The students I have do exactly as Zig describes. Some learn new sounds in one exposure and the new word goes to fluency with one read through. Others need multiple exposures and will sound out a new word many times. However the books are designed to handle this. They repeat and repeat new sounds with new words.

What is unique, however, is that kids who are quick to make all the needed connections can just read the book once and move to the next one. The child sets the pace. The risk of providing too much practice is not a problem. The child simply moves to the next book, where a new sound will be introduced.

Kids who need more practice will read the book again. If they still need more, repeated readings can take place.

We have also found that the top kids who move quickly through the books don't need to complete the whole program. They finish reading instruction after reading through 4 sets of the books (there are 8 sets, which take a child from beginning reading to a 3.1 reading level). Other kids need to read all 8 sets. Some kids need to read some sets more than once.

The books are the instruction. They control all the introduction of the code. They provide all the practice. And all of the learning is completely transparent. Even the parents who use the books at home can tell exactly what their child has learned or has not learned.

I have been using these books for seven years. This is the first year we moved all our tutoring to the kdg level, and what a difference it has made. We have found that kdg kids learn code and blending, and gain fluency, much more easily than when we started with first graders. Also, kdg kids have had less damage from balanced literacy instruction, and that has made a huge difference.

My students are bombarded all day long in class with traditional balanced literacy mal-instruction. I can't imagine how much easier things might go without that handicap.

What Zig describes is not hard to design, at least in beginning reading instruction. Kids thrive in this type of instruction.

Kathy Nell

Kathy said...

Since there seems to be a word limit on posting, I had to continue my post in a second comment.

What Zig has said is 100% accurate, as I see it play out every day in front of me. It is amazing to watch. Children are not frustrated in this system of learning and thrive in it. In the beginning of the year we had a very hostile kdg boy, whom I will call Eddie. That is not his real name. Eddie was angry when we began with him in Sept. He wanted to read so badly. He could not figure out balanced literacy and the leveled texts. His confusion translated to disruptive behavior in class. When we first began with him he would cry, put his head on the desk, and refuse to read.

To see this boy today is simply amazing. He is our best student. He is reading like a first grade child, has excellent fluency, is happy and runs to the tutoring session. His tutoring is complete. I think he will do just fine in first grade back in the traditional balanced literacy, as he now completely understands how reading works.

Based on all my experience using instruction like Zig describes, it makes for happy kids, kids who learn, and teachers who know what kids have learned.

Balanced literacy is exactly the opposite of this. All instruction is teacher controlled and nothing is explicit. Kids are expected to make huge leaps and teachers do judge the rest of the class by the top kids that Zig describes as easily making the needed connections from one piece of material to the next.

Kdg kids are expected to read leveled texts as early as the first few weeks of school. Those books are full of code that will never be taught in a kdg room. The frustration this creates for many kids is actually painful to watch.

Teaching kdg kids to read is actually quite easy, and most kids do not struggle with the needed skills: blending and learning code. Yes, a few kids do need more practice, but this could be handled if folks only knew it was the instruction used in schools (BL) that was making it all worse.

Kathy

Kathy said...

My last comment is that balanced literacy is not transparent. The testing used, the DRA, measures nothing. Teachers are left in the dark about what a child knows or does not know. Parents are also clueless. And most importantly the kids do not know what they know. They see other kids reading and wonder what the heck they are doing wrong. They then conclude they are stupid. And this is where all the trouble begins. Things go steadily downhill for the child. He never really figures out how to read. He manages to memorize some words, learn some code, bits and pieces of this and that and forever stays a year or two below grade level in reading. He never likes to read and generally starts to hate school.

Ask any teacher to tell you why a child is struggling in reading. They really cannot answer this question other than with such statements as: he does not know his sight words, he makes mistakes, he guesses, his comprehension is weak, or he makes reversals (a favorite, because to most teachers this means the child must be LD, and that takes them off the hook).

Balanced literacy bases all of its instruction on what the top kids in the class can do and this is its biggest failure. It assumes all kids can make these huge leaps as Zig describes. Teach short /a/ today and all kids will learn it with one worksheet and one book with some short /a/ words. If they don't then the teacher concludes the kid must be learning disabled. I have had many first grade teachers tell me that they taught the short /a/ and that Johnny still cannot read it.

I have no idea why what Zig has said in this post is so hard to understand or why it creates so much controversy. It seems so straightforward to me.

When you are involved in instruction that is designed like he describes, it makes teaching so much fun.

Kids are learning and they know they are learning. That is what is so much fun to watch.

kathy

Parry Graham said...

I'm liking this one, Ken, and can't wait to see where you go with it.

Kathy, I very much appreciate your real-life examples and how they demonstrate the concepts Ken is outlining.

One of the challenges I have found is that, because there are good, effective examples of how this instructional framework can play out positively with young children learning how to read, people sometimes extrapolate out and suggest that we should be able to create similar effective instructional frameworks all the way up the K-12 ladder.

As someone who has worked in elementary, middle, and high schools, I have found that designing this type of instructional framework becomes increasingly complex as the instructional content becomes more varied and complex, and as the gaps between learners increase. Adapting the type of approach that Kathy describes to middle or high school history, science, or literature content strikes me as prohibitively complex on a large scale, which is (I believe) one reason why the educational model still largely rests on static content (primarily textbooks), static and tiered courses, and individual instructors tasked with independently determining how the process plays out.

Parry

Dick Schutz said...

I have found that designing this type of instructional framework becomes increasingly complex as the instructional content becomes more varied and complex, and as the gaps between learners increase.

The thing is, any content is "varied and complex" if you don't have the background prerequisites to handle it. Whether the "gaps between learners increase" depends upon your view. With prevailing instruction, all the "gaps" are very predictable very early in K. But all the mis-instruction that inadvertently goes on introduces increasingly large psychological obstacles that have to be overcome.

Downes' induction metaphor doesn't do anything for me. But your clean-up makes sense. And so do the quotes from Zig. Zig writes from his experience, and it works for him. What Zig views as "inferential leaps" can also be viewed as "information gaps."

The instructional design challenge is how to deal with these "gaps." Zig uses "lessons" and "exercise types" that are scripted. "Scripted" isn't quite fair or accurate, because the school personnel involved have to be "trained" to "think and act like Zig." If they don't do this, they "lack fidelity," they won't get the results Zig would get, and they'll drop the program at the first chance.

Zig seeks very tight control of both teacher and student. But there is latitude between this and the "no clear feedback" that characterizes prevailing instruction.

Zig's blind spot was/is that he accepted "standardized achievement tests" and "comparative experiment" methodology as arbiters of the effectiveness of his programs. With no direct information about the reliability of delivering defined instructional outcomes, DI gets needlessly bumped around.

I don't want to interrupt your story. It's off to a good start.