May 4, 2010

Hell freezes over

I know I'm going to regret this.

I think Stephen Downes is mostly right in his proposed induction model of knowledge and learning.



What we have here is a model where the input data induces the creation of knowledge. There is no direct flow from input to output; rather, the input acts on the pre-existing system, and it is the pre-existing system that produces the output.

In electricity, this is known as induction, and is a common phenomenon. We use induction to build step-up or step-down transformers, to power electric motors, and more. Basically, the way induction works is that, first, an electric current produces a magnetic field, and second, a changing magnetic field induces an electric current.
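For reference (my addition, not part of Stephen's post), the textbook form of mutual induction: the EMF induced in the second circuit is proportional to the rate of change of current in the first, with the proportionality constant, the mutual inductance $M$, fixed by the geometry of both circuits:

```latex
\mathcal{E}_2 = -M \, \frac{dI_1}{dt}
```

The part of the physics that carries over to the metaphor is that $M$ depends on the receiving circuit just as much as on the source circuit.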

Why is this significant? Because the inductive model (not the greatest name in the world, but a good alternative to the transmission model) depends on the existing structure of the receiving circuit, the knowledge output may vary from system to system (person to person) depending on the pre-existing configuration of that circuit.

Stephen is proposing a mutual induction circuit theory of knowledge transference. As Stephen says, the magnetic flux due to the current of the top circuit is inducing a current in the bottom circuit.

Knowledge is not directly transferred into a learner; rather, knowledge is acquired indirectly through an inductive process. Specifically, knowledge is typically acquired through an "inductive reasoning" process.

That is, the learner (1) observes stimuli (examples and non-examples); (2) performs a series of logical operations on what it observes; and (3) arrives at (induces, figures out, discovers, “gets”) a general idea revealed by the examples and non-examples.

For example, let's say a teacher is trying to teach a student the concept of "over" in the sense that when an object is "over" another object it is directly vertically above that object. The teacher may state, or the student may look up, the definition of the word "over" and arrive at the verbal definition "directly vertically above." But this verbal definition isn't necessarily stored in the student's memory verbatim, if at all. And even if it is, it doesn't necessarily follow that the learner retrieves this verbal definition when deciding if something is "over" something else, i.e., when determining the metes and bounds of the concept "over." Which is not to say that knowing the verbal definition is not typically useful in learning a new concept. It often is.

Concepts, such as "over," are typically learned by the learner observing various examples and non-examples of the concept and then inductively reasoning to the general concept revealed by the examples and non-examples.

Take a look at the following teacher presentation:


The student is presented with two examples (1 and 2) followed by two non-examples (3 and 4), from which the student must inductively reason to the general concept revealed thereby.  Then the student is tested to determine whether he or she has learned the concept.

This is about as clear and unambiguous a presentation as it gets.  Some students might have induced the general concept with only these four examples/non-examples.  Some might have induced it with fewer. Some will require many more.  Moreover, the concept itself is fairly simple.  As a result, the "inductive gap" that the student has to traverse in order to induce the general concept is likely small.
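One way to picture the inductive step is as a toy "hypothesis elimination" learner. This is my sketch, not anything from Stephen's post: the candidate meanings and the coordinates of the stimuli are invented for illustration, with each stimulus given as the (horizontal, vertical) offset of object A relative to object B.

```python
# Candidate meanings of "over" the learner might entertain (invented names):
candidates = {
    "directly above": lambda dx, dy: abs(dx) < 1 and dy > 0,
    "anywhere above": lambda dx, dy: dy > 0,
    "touching":       lambda dx, dy: abs(dx) < 1 and abs(dy) < 1,
    "to the side":    lambda dx, dy: abs(dx) >= 1,
}

# The teacher's presentation: (dx, dy) of A relative to B, and whether
# the teacher labels it an example (True) or a non-example (False).
presentation = [
    ((0.0, 2.0), True),    # example 1: directly above
    ((0.5, 1.0), True),    # example 2: nearly directly above
    ((3.0, 0.0), False),   # non-example 3: off to the side
    ((2.0, 2.0), False),   # non-example 4: above, but not directly
]

# Discard any candidate that disagrees with even one stimulus.
surviving = {
    name: h for name, h in candidates.items()
    if all(h(dx, dy) == label for (dx, dy), label in presentation)
}
print(sorted(surviving))   # only "directly above" survives these four stimuli
```

Note that non-example 4 does real work here: without it, "anywhere above" would also survive, and the inductive gap would remain open. That is exactly why a clear presentation mixes examples with well-chosen non-examples.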


 

In contrast, the "inductive gap" is larger for more difficult concepts, such as "democracy" or "mammal," or when the presentation of examples/non-examples is ambiguous or otherwise more difficult to traverse, as when the student's exposure to the concept "over" comes solely through reading isolated uses of the word in connected text spread over a period of time and readings.



As a result, inducing the general concept would be more difficult.  Fewer students would be capable of traversing this gap without assistance.

Let's look at another example.  Some students learn how to read with little, no, or poor/ambiguous instruction and are able to traverse the hundreds, perhaps thousands, of large inductive gaps needed to learn how to make sense out of connected text.  Other students won't be able to induce the ability to read unless the inductive gaps are made smaller.  One way to make these gaps smaller is to explicitly teach the various letter-sound correspondences, often referred to as "phonics."  In fact, the gaps can be (unintentionally) made wider by misteaching.  For example, many "whole language" pedagogical methods actually increase the difficulty many students have learning to read by inadvertently teaching unproductive strategies.

One goal of formal education is to present the material to the student in such a way that the student is able to manage and traverse the inductive gaps inherent in learning the general concepts underlying the material.

However, what does not necessarily follow is Stephen's next assertion.

What it means is that you can't just apply some sort of standard recipe and get the same output. Whatever combination of filtering, validation, synthesis and all the rest you use, the resulting knowledge will be different for each person. Or alternatively, if you want the same knowledge to be output for each person (were that even possible), you would have to use a different combination of filtering, validation, synthesis for each person.

Certainly some "recipes" are more effective than others for getting the same output, i.e., learning the material.  Some "recipes" may be successful in getting, say, 95% of the students to learn the desired output.  Others might only be successful in getting 50% to the same output.  It simply doesn't follow that a different recipe is needed for each student.

Nobody really knows what's going on in a student's brain while they are inductively reasoning (or, as Stephen puts it, "[selecting the right] combination of filtering, validation, synthesis") their way to learning a particular concept.  All an educator can really do is determine whether the student is able to accurately communicate the general concept by verbalizing, demonstrating, or otherwise applying the general concept correctly to new examples, with responses or applications that correlate highly with those of the people/experts who understand the general concept.

That's why educators test.

But when Stephen concludes:

Even when you are explicitly teaching content, and when what appears to be learned is content, since the content itself never persists from the initial presentation of that content to the ultimate reproduction of that content, what you are teaching is not the content. Rather, what you are trying to induce is a neural state such that, when presented with similar phenomena in the future, it will present similar output. Understanding that you are training a neural net, rather than storing content for reproduction, is key.

He is essentially right.
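Stephen's "training a neural net" point can be made concrete with a toy sketch (my illustration, with invented stimuli; it is not his model). A one-neuron perceptron is trained on examples and non-examples of "over." Nothing resembling the verbal definition is ever stored, only three weights, yet the trained state produces the right output for the stimuli.

```python
# Each stimulus is the (horizontal, vertical) offset of object A relative
# to object B; label 1 means "A is over B", 0 means it is not.
examples = [
    ((0.0, 2.0), 1),   # directly above        -> over
    ((0.5, 1.0), 1),   # nearly directly above -> over
    ((3.0, 0.0), 0),   # off to the side       -> not over
    ((0.0, -2.0), 0),  # directly below        -> not over
]

def features(dx, dy):
    # Numerically, "over" looks like: |dx| small, dy positive.
    return (abs(dx), dy)

w1, w2, b = 0.0, 0.0, 0.0          # the learner's "neural state"
for _ in range(100):               # perceptron training epochs
    for (dx, dy), label in examples:
        f1, f2 = features(dx, dy)
        out = 1 if w1 * f1 + w2 * f2 + b > 0 else 0
        err = label - out          # adjust weights only on a mistake
        w1 += err * f1
        w2 += err * f2
        b  += err

def is_over(dx, dy):
    f1, f2 = features(dx, dy)
    return w1 * f1 + w2 * f2 + b > 0

# After training, all four stimuli are classified correctly, even though
# no "definition of over" is stored anywhere -- only w1, w2, and b.
```

The data here is linearly separable in the chosen feature space, so the perceptron is guaranteed to converge; the "content" that persists is just a weight configuration that reproduces the behaviour.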

We apply or generalize knowledge through deductive reasoning.

The learner (1) has/knows/can state a general idea (concept, rule/proposition, routine); (2) can use the general idea (definition of a concept, statement of a rule, or features of the things handled by the routine; e.g., math problems, words) to examine a possible new example; (3) can “decide” whether the new thing FITS (is an example of) the definition, rule, or routine (“Can you solve this with FOIL?”); and (4) is able to “treat” the example accordingly -- name it (concept), explain it (with the rule), or solve it (with the routine).
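The deductive direction runs the other way from the perceptron: the general idea is already in hand and gets applied to new cases. A minimal sketch, again with the toy "over" concept and invented coordinates:

```python
def fits_over(dx, dy):
    # Step 2's general idea: A is "over" B when it is directly
    # vertically above (|dx| small, dy positive).
    return abs(dx) < 1 and dy > 0

def treat(dx, dy):
    # Steps 3 and 4: decide whether the new thing fits, then
    # treat it accordingly -- here, by naming it.
    return "over" if fits_over(dx, dy) else "not over"

print(treat(0.2, 3.0))   # a new example that fits the definition
print(treat(4.0, 0.0))   # a new example that does not
```

The induced concept is applied deductively to cases the learner has never seen, which is what generalization amounts to on this picture.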

Whether this can be characterized as "teaching content" is irrelevant.  Having the proper "neural state" means that the "content" has been learned, regardless of how directly or indirectly it was taught.

[Update:  cleaned up numerous typos]

2 comments:

Stephen Downes said...

No reason to regret this, I'm a reasonable person.

I will take the 'inductive gap' as metaphorical rather than literal, as the entire model is metaphorical.

This is important because there is no sense in which the content jumps over or traverses the gap.

This is because the inductive gap is in fact a semantic gap. Whatever signification any signal had in the source is lost; the signification a signal has in the destination is based on a completely different semantic map.

That's why, though the teacher has an intended meaning of the word 'over', the resulting meaning may be different when interpreted by the learner.

There is no principle-based means of ensuring that semantic alignment will occur. This has been studied extensively. It's a matter of logic; the evidence underdetermines the conclusion. See, e.g., Quine, 'On the Indeterminacy of Translation'.

You can say "Certainly some 'recipes' are more effective than others for getting the same output, i.e., learning the material." But if you can't say why (as you seem to agree, when you say "Nobody really knows what's going on in a student's brain while they are inductively reasoning"), then you can't say whether you have obtained the actual result desired, or some stand-in for the result.

The simpler the assessment, the more likely you have obtained a stand-in. That's why it is an error to teach, and test for, simple stand-alone concepts such as 'over'. While it's easy to produce the behaviour, you can only be confident that the understanding is semantically correct if the learner engages in relatively complex learning and assessment tasks.

As the complexity increases, your confidence in the assessment increases (that's why we make people whose understanding really matters - like airline pilots and surgeons - go through extensive practicums or internships).

However, with this comes increased costs and difficulty in managing the assessment.

Also, paradoxically, with this comes lower scores, because the much more complex assessment reduces the number of false positives obtained from guessing, reading teacher expressions, pattern detection, and other misleading responses that yield correct results in simple tests.

That's why you can't just look at test results to determine whether a teaching method is effective. You also want some way of describing your confidence in the results - not confidence in the sense of statistical significance, but confidence in the sense of semantic reliability.

Anyhow, I appreciate your willingness to engage with the model.

KDeRosa said...

I agree with most of your comment, Stephen.

... There is no principle-based means of ensuring that semantic alignment will occur.

... then you can't say whether you have obtained the actual result desired, or some stand-in for the result.


No, but in practice we can structure the "tests" to achieve a reasonable degree of certainty that the student's correct responses aren't false positives and the incorrect responses aren't false negatives. That includes the immediate simple tests, the delayed tests for retention, and the more complex tests for application. The perfect need not be the enemy of the good.

That's why it is an error to teach, and test for, simple stand-alone concepts such as 'over'.

With this I disagree. If the student is not able to distinguish new but similar examples from non-examples of the concept just taught, then it is reasonable to conclude that the student hasn't induced the correct generalized concept, in the case where the teacher knows from past experience that others who have learned the concept, such as through more complex testing, are able to make the discrimination. Again, this doesn't guarantee the absence of a false positive, but I think the teacher can be sufficiently certain that the student has correctly learned the concept to move on.

That's why you can't just look at test results to determine whether a teaching method is effective. You also want some way of describing your confidence in the results

This may well be true for complex areas of advanced learning. However, I don't think it's an issue at the K-12 level.