I know I'm going to regret this.
I think Stephen Downes is mostly right in his proposed
induction model of knowledge and learning.
What we have here is a model where the input data induces the creation of knowledge. There is no direct flow from input to output; rather, the input acts on the pre-existing system, and it is the pre-existing system that produces the output.
In electricity, this is known as induction, and it is a common phenomenon. We use induction to build step-up and step-down transformers, to power electric motors, and more. Basically, induction works in two steps: first, an electric current produces a magnetic field, and second, a changing magnetic field induces an electric current in a nearby circuit.
Why is this significant? Because the inductive model (not the greatest name in the world, but a good alternative to the transmission model) depends on the existing structure of the receiving circuit, the knowledge output may vary from system to system (person to person) depending on the pre-existing configuration of that circuit.
Stephen is proposing a mutual induction circuit theory of knowledge transference. As Stephen says, the magnetic flux due to the current of the top circuit is inducing a current in the bottom circuit.
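For reference, and this is standard circuit physics rather than anything in Stephen's post: the EMF induced in the second circuit depends on the rate of change of current in the first, while the current that actually flows depends on the second circuit's own impedance.

```latex
% EMF induced in circuit 2 by a changing current in circuit 1,
% and the current that results in circuit 2:
\mathcal{E}_2 = -M\,\frac{dI_1}{dt}
\qquad\qquad
I_2 = \frac{\mathcal{E}_2}{Z_2}
```

Here M is the mutual inductance (the geometry of the coupling) and Z_2 is the impedance of the receiving circuit. The same driving signal dI_1/dt yields a different I_2 for every different Z_2, which is the circuit-level version of the claim that the output depends on the receiver's pre-existing configuration.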
Knowledge is not directly transferred into a learner; rather, it is acquired indirectly through an inductive process. Specifically, knowledge is typically acquired through an "inductive reasoning" process.
That is, the learner (1) observes stimuli (examples and non-examples); (2) performs a series of logical operations on what it observes; and (3) arrives at (induces, figures out, discovers, "gets") a general idea revealed by the examples and non-examples.
For example, let's say a teacher is trying to teach a student the concept of "over" in the sense that an object is "over" another object when it is directly, vertically above that object. The teacher may tell the student, or the student may look up, the definition of the word "over" and arrive at the verbal definition "directly vertically above." But this verbal definition isn't necessarily stored in the student's memory verbatim, if at all. And even if it is, it doesn't necessarily follow that the learner retrieves this verbal definition when deciding whether something is "over" something else,
i.e., when determining the metes and bounds of the concept "over." Which is not to say that knowing the verbal definition is not typically useful in learning a new concept. It often is.
Concepts, such as "over," are typically learned by the learner observing various examples and non-examples of the concept and then inductively reasoning to the general concept revealed by the examples and non-examples.
Take a look at the following teacher presentation:
The student is presented with two examples (1 and 2) followed by two non-examples (3 and 4), from which the student must inductively reason to the general concept revealed thereby. Then the student is tested to determine whether he or she has learned the concept.
This is about as clear and unambiguous a presentation as it gets. Some students might have induced the general concept with only these four examples/non-examples. Some might have induced it with fewer. Some will require many more. Moreover, the concept itself is fairly simple. And the result is that the "inductive gap" the student has to traverse in order to induce the general concept is likely small.
In contrast, for more difficult concepts, such as "democracy" or "mammal," or if the presentation of examples/non-examples is ambiguous or otherwise more difficult to traverse (say, if the student's exposure to the concept "over" comes solely through reading isolated uses of the word "over" in connected text, spread across many readings over a period of time), the "inductive gap" is larger.
As a result, inducing the general concept would be more difficult. Fewer students would be capable of traversing this gap without assistance.
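To make the inductive step concrete, here is a toy sketch. This is my illustration, not anything from Stephen's post or the teacher presentation above; the coordinates, the candidate rules, and the induce function are all made up for the example. The learner is modeled as searching a small hypothesis space for a rule consistent with every example and no non-example.

```python
# Toy model of induction from examples and non-examples. Stimuli are the
# (dx, dy) offsets of object A relative to object B.

examples = [(0, 3), (0, 1)]        # examples 1 and 2: A directly above B
non_examples = [(4, 2), (0, -3)]   # non-examples 3 and 4: off to the side; below

# A small hypothesis space the learner can search.
candidate_rules = {
    "A is somewhere above B": lambda dx, dy: dy > 0,
    "A is directly above B":  lambda dx, dy: dx == 0 and dy > 0,
    "A is touching B":        lambda dx, dy: abs(dx) + abs(dy) <= 1,
}

def induce(examples, non_examples):
    """Return the first rule consistent with every example and no non-example."""
    for name, rule in candidate_rules.items():
        fits = all(rule(dx, dy) for dx, dy in examples)
        rejects = not any(rule(dx, dy) for dx, dy in non_examples)
        if fits and rejects:
            return name
    return None  # the inductive gap was too large for this hypothesis space

print(induce(examples, non_examples))   # -> A is directly above B
print(induce(examples, []))             # -> A is somewhere above B (too broad)
```

With the non-examples dropped, the learner settles on the broader rule "A is somewhere above B" rather than the one the teacher intended; that, in miniature, is what a wider inductive gap looks like.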
Let's look at another example. Some students learn how to read with little, no, or poor/ambiguous instruction and are able to traverse the hundreds, perhaps thousands, of large inductive gaps needed to learn how to make sense out of connected text. Other students won't be able to induce the ability to read unless the inductive gaps are made smaller. One way to make these gaps smaller is to explicitly teach the various letter-sound correspondences, often referred to as "phonics." In fact, the gaps can be (unintentionally) made wider by misteaching. For example, many "whole language" pedagogical methods actually increase the difficulty many students have learning to read by inadvertently teaching unproductive strategies.
One goal of formal education is to present the material to the student in such a way that the student is able to manage and traverse the inductive gaps inherent in learning the general concepts underlying the material.
However, what does not necessarily follow is Stephen's next assertion.
What it means is that you can't just apply some sort of standard recipe and get the same output. Whatever combination of filtering, validation, synthesis and all the rest you use, the resulting knowledge will be different for each person. Or alternatively, if you want the same knowledge to be output for each person (were that even possible), you would have to use a different combination of filtering, validation, synthesis for each person.
Certainly some "recipes" are more effective than others for getting the same output,
i.e., learning the material. Some "recipes" may be successful in getting, say, 95% of the students to the desired output. Others might be successful in getting only 50% there. It simply doesn't follow that a different recipe is needed for each student.
Nobody really knows what's going on in a student's brain while they are inductively reasoning (or, as Stephen puts it, "[selecting the right] combination of filtering, validation, synthesis") their way to learning a particular concept. All an educator can really do is determine whether the student is able to accurately communicate the general concept by verbalizing it, demonstrating it, or otherwise applying it correctly to new examples, in ways that correlate highly with the responses or applications of people/experts who understand the general concept.
That's why educators test.
But when Stephen concludes:
Even when you are explicitly teaching content, and when what appears to be learned is content, since the content itself never persists from the initial presentation of that content to the ultimate reproduction of that content, what you are teaching is not the content. Rather, what you are trying to induce is a neural state such that, when presented with similar phenomena in the future, it will present similar output. Understanding that you are training a neural net, rather than storing content for reproduction, is key.
He is essentially right.
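To see what "training a neural net, rather than storing content" might look like, here is a rough NumPy sketch. Again, this is my illustration; Stephen gives no code, and the stimuli, labels, and architecture are made up for the example.

```python
# A tiny network trained on the same "over" stimuli instead of storing the
# verbal definition. Nowhere in the learned weights is any sentence stored.
import numpy as np

rng = np.random.default_rng(0)

# Stimuli: (dx, dy) offsets; label 1 = "over", 0 = not "over".
X = np.array([[0., 3.], [0., 1.], [4., 2.], [0., -3.], [1., 4.], [-3., 1.]])
y = np.array([1., 1., 0., 0., 0., 0.])

# One hidden layer, trained by plain gradient descent.
W1, b1 = 0.5 * rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.standard_normal((8, 1)), np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # predicted P("over")
    g_logit = (p.ravel() - y)[:, None]            # cross-entropy gradient at the output
    g_h = (g_logit @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    W2 -= lr * h.T @ g_logit
    b2 -= lr * g_logit.sum(axis=0)
    W1 -= lr * X.T @ g_h
    b1 -= lr * g_h.sum(axis=0)

# Presented with similar phenomena, the trained weights produce similar output,
# even though no verbal definition was ever stored for reproduction.
test = np.array([[0., 5.], [2., 5.]])
h = np.tanh(test @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
print(np.round(p.ravel(), 2))  # typically close to [1. 0.], depending on the initial weights
```

Nothing in the trained weights resembles the sentence "directly vertically above," yet the network now responds to new stimuli much as a learner who "has" the concept would; and its response depends partly on its initial weights, that is, on the network's own pre-existing configuration.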
We apply or generalize knowledge through deductive reasoning.
The learner (1) has/knows/can state a general idea (concept, rule/proposition, routine); (2) can use the general idea (the definition of a concept, the statement of a rule, or the features of the things handled by the routine; e.g., math problems, words) to examine a possible new example; (3) can "decide" whether the new thing FITS (is an example of) the definition, rule, or routine ("Can you solve this with FOIL?"); and (4) is able to "treat" the example accordingly -- name it (concept), explain it (with the rule), solve it (with the routine).
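In the same toy terms, the deductive side is just the application of the general idea the learner already holds (again a hypothetical sketch; the is_over predicate simply encodes the verbal definition for illustration):

```python
# Deduction with the general idea in hand: examine a new case, decide whether
# it fits, and treat it accordingly.

def is_over(dx, dy):
    """The general idea the learner now holds: A is directly, vertically above B."""
    return dx == 0 and dy > 0

def treat(dx, dy):
    # Steps (2)-(4): examine the new thing, decide whether it FITS, treat it accordingly.
    verdict = "an example of 'over'" if is_over(dx, dy) else "not an example of 'over'"
    return f"offset ({dx}, {dy}) is {verdict}"

for dx, dy in [(0, 7), (5, 7), (0, -2)]:
    print(treat(dx, dy))
```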
Whether this can be characterized as "teaching content" is irrelevant. Having the proper "neural state" means that the "content" has been learned, regardless of how directly or indirectly it was taught.
[Update: cleaned up numerous typos]