I like Stephen Downes' metaphorical induction model of learning (with my clarifications). If you haven't already, go read it and don't come back until you understand it. I'll reproduce the diagram below.
There are three important factors in this model.
- The top inductor represents teacher (or resource) effects. Following the electronics metaphor, a teacher (or resource, such as a book) with more windings is more capable of transferring data to the student than one with fewer windings. The inductor windings represent teacher effectiveness.
- The bottom inductor represents student effects. A student with more windings will be more capable of inducing a transfer of data from the teacher/resource than a student with fewer windings. The student windings represent cognitive ability and other student factors.
- The distance between the two inductors represents the presentation of the data from the teacher/resource to the learner/student. I've termed this distance the "inductive gap." The teacher presentation represents what actually takes place in the classroom that affects learning: the curriculum, the classroom management, and the pedagogy. A poor presentation makes learning more difficult by increasing the distance between the teacher and student inductors; a good presentation reduces the distance, making learning more likely.
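The three factors can be put into a toy formula borrowed from the electronics metaphor. Real mutual inductance is M = k·√(L₁L₂) with inductance growing as the square of the windings; the coupling function `1/(1 + gap)` below is purely a made-up stand-in for "a wider inductive gap weakens the transfer," not anything from Downes' post:

```python
import math

def transfer_strength(teacher_windings, student_windings, gap):
    """Toy 'mutual inductance' for the metaphor: more windings on either
    coil helps, and a wider inductive gap hurts.
    M = k * sqrt(L1 * L2), with L ~ N^2 and coupling k = 1/(1 + gap)."""
    k = 1.0 / (1.0 + gap)            # coupling falls off as the gap widens
    l_teacher = teacher_windings ** 2
    l_student = student_windings ** 2
    return k * math.sqrt(l_teacher * l_student)

# In this sketch, a better presentation (smaller gap) outweighs
# a more effective teacher saddled with a poor presentation:
strong_teacher_poor_lesson = transfer_strength(10, 5, gap=4.0)
weaker_teacher_good_lesson = transfer_strength(6, 5, gap=0.5)
```

The point of the sketch is only that the three variables interact multiplicatively: a weak presentation can squander a strong teacher, and vice versa.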
Complicating matters even further, the model is different for each piece of data that the teacher wants transferred to (i.e., learned by) the student. Thousands of such transferences must take place during the K-12 cycle.
Moreover, the model ignores student retention effects. Just because the student has induced the intended knowledge from the teacher's presentation does not mean that the knowledge will be retained. The ravages of forgetting are brutal and must be contended with by the teacher. Too often, retention is ignored or, worse yet, derided by educators ("drill and kill").
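The ravages of forgetting can be sketched with the classic Ebbinghaus-style exponential decay, R = e^(-t/S). The "stability doubles with each spaced review" rule here is a simplification I'm assuming for illustration, not a claim from the post:

```python
import math

def retention(days_since_practice, stability):
    """Ebbinghaus-style exponential forgetting: R = e^(-t/S).
    Larger stability S means the memory decays more slowly."""
    return math.exp(-days_since_practice / stability)

# Without review, a fact with stability S = 2 days fades fast:
week_later_unreviewed = retention(7, stability=2)        # ~0.03

# Toy rule: each spaced review doubles stability.
stability = 2.0
for review in range(3):
    stability *= 2                                       # 2 -> 4 -> 8 -> 16
week_later_reviewed = retention(7, stability=stability)  # ~0.65
```

Whatever the exact decay curve, the implication is the same: practice distributed over time is not optional busywork, it is what keeps the induced knowledge from evaporating.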
As Stephen Downes puts it, what teachers are really doing is training a neural net in each student's brain. That training does not happen instantaneously. It happens over time and is imperfect. See Willingham's article on inflexible knowledge. This makes the teacher's interpretation of the learner's feedback even more difficult as the neural network is being trained.
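Downes' "training a neural net" point can be made concrete with the smallest possible example: a single perceptron nudged toward the AND function. Nothing here is from Downes or Willingham; it is just a standard perceptron-rule sketch showing that the net is not "told" the rule but approaches it gradually and imperfectly over many passes:

```python
# Toy illustration: a single-neuron net is not handed the AND rule;
# it is nudged toward it over many imperfect passes through the data.
def train_and_gate(epochs, lr=0.1):
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # classic perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    # return how many of the four cases the trained neuron gets right
    return sum(1 for (x1, x2), t in data
               if (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == t)

early = train_and_gate(epochs=1)     # partially trained, still wrong on some cases
late = train_and_gate(epochs=20)     # converged: all four cases correct
```

The in-between states matter: a partially trained net produces plausible-looking but wrong answers, which is exactly why interpreting learner feedback mid-training is hard.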
To help show how the model plays out in the real world, what it implies for instructional design, and how the "inductive gaps" affect the learning process, I'm going to provide a lengthy quote from Zig Engelmann's upcoming book. In the passage, Zig calls the inductive gaps "inferential gaps."
> [We] have learned from extensive applications that nothing may be assumed to be taught to at-risk children unless it appears on at least three consecutive lessons...When we apply this formula to the first draft of the material, we presume that when the program is field-tested, our estimates will be confirmed. However, we remain perfectly aware that in some cases the practice estimates are wrong. They may vary in either direction—providing too much practice, or providing too little. More frequently the error is in the direction of too little practice.
>
> An important issue that we must address in creating a sequence of activities is how large the inferential gaps are between one exercise type and the next type in the sequence…
>
> … If we were to unintentionally design a program with enormous gaps for teaching reading, we might first teach letter names, teach the short sounds for the vowels, and then require learners to sound out regularly spelled words like run and hat. Obviously, the gap between the exercise types is large because students haven’t been taught the sounds for the consonants, or how to blend the sounds together to identify words.
>
> Although some bright students may be able to formulate workable inferences about how to derive the sounds from the names of some consonants, most students will fail the instruction because of the large gap between what they know and what they are expected to do. Discovery learning assumes that students are able to fill large inferential gaps between what they know and what they are expected to learn. Proponents of structured instruction believe that only small sequence-related inferences are appropriate.
>
> Note that there will always be inferential gaps between exercise types. The only issue is how large they are. This is an empirical issue. If we believe that students should be successful, we would design instruction so the inferential gaps are small enough for students to succeed. If students do not succeed, their failure suggests that the inferential gaps are too large, which means that the sequence should be redesigned to make the gaps smaller. Direct observation of how students respond to a sequence is necessary because that is often the only way these gaps are identified. Typically students are progressing through a sequence well and then encounter an exercise type that is too difficult for them. If the exercise seems clear and apparently provides adequate practice, the problem is not with this exercise type, but with the sequence of activities. In other words, student performance implies that there is a gap in the sequence that is too large for the students.
>
> As suggested above, the size of reasonable gaps is not the same for all students. The children we have worked with have ranged from those who could not take even the smallest imaginable steps without considerable practice, to children who drew correct inferences that were far in advance of what they had been taught.
>
> At the extreme low end was a pair of twins who had spent the first four years of their lives with virtually no human contact and who could identify some real objects, like a shoe, a ball, and a cup, but could not identify any two-dimensional representations. Even when the teacher prompted the relationship by holding a red ball next to a picture of a red ball, the children could not identify the object in the picture. After many trials, they could identify pictures of balls, shoes, and cups without the corresponding three-dimensional object next to it; however, these children had to practice identifying more than 10 illustrated objects before they could generalize and identify an illustrated object that had not been taught.
>
> At the other extreme are the highly talented students who make a mockery out of the three-lesson rule. They learn names of new things in only a couple of trials and are able to take great leaps from what they know to remotely related inferences they are scheduled to learn much later.
>
> If the designer assumes that every minor variation in what is to be taught requires explicit instruction, the instructional sequence may be many times more laborious than it needs to be for the average learner who goes through the program. On the other hand, if the designer makes elitist assumptions that characterize analyses of Dewey and Bruner, the inferential leaps required by the program are so large that they may be made by fewer than one fourth of the students. For example, a math program that presents a single example of each problem type assumes that students will formulate an algorithm for solving the problem presented that will generalize to the full set of related problems that are not taught. In fact, possibly only one fourth of the average students will solve the problems or benefit from the experience of struggling with them. The percentage of low performers making this leap is virtually zero percent.
>
> The only way to determine whether the program is highly effective with the intended student population is to provide an empirical test of the sequence. This test will not only identify the missing inferences but will reveal both their character and size. In other words, they provide the designer with precise information about how to address the missing inference.
I'm going to highlight one observation from Zig's passage:
> The children we have worked with have ranged from those who could not take even the smallest imaginable steps without considerable practice, to children who drew correct inferences that were far in advance of what they had been taught.
This is the one that does quite a bit of educational mischief. Educators draw all sorts of bad conclusions from the students who can draw "correct inferences that [are] far in advance of what they had been taught" and apply those conclusions to the students who cannot "take even the smallest imaginable steps without considerable practice."
But I don't want to get too ahead of myself just yet.
In my next post I'll discuss why the various ed reforms are doomed to failure: their real-world effects often fail to address, or have only a limited effect on, the real educational variables illustrated in the model above.