Comments on D-Ed Reckoning: Hell freezes over

KDeRosa (2010-05-05 20:00):

I agree with most of your comment, Stephen.

<i>... There is no principle-based means of ensuring that semantic alignment will occur.

... then you can't say whether you have obtained the actual result desired, or some stand-in for the result.</i>

No, but in practice we can structure the "tests" to achieve a reasonable degree of certainty that a student's correct responses aren't false positives and the incorrect responses aren't false negatives. That includes the immediate simple tests, the delayed tests for retention, and the more complex tests for application. The perfect need not be the enemy of the good.

<i>That's why it is an error to teach, and test for, simple stand-alone concepts such as 'over'.</i>

With this I disagree. If the student is not able to distinguish new but similar examples from non-examples of the concept just taught, then it is reasonable to conclude that the student hasn't induced the correct generalized concept, at least where the teacher knows from past experience, such as through more complex testing, that others who have learned the concept are able to make the discrimination. Again, this doesn't guarantee the absence of a false positive, but I think the teacher can be sufficiently certain that the student has learned correctly to move on.

<i>That's why you can't just look at test results to determine whether a teaching method is effective. You also want some way of describing your confidence in the results</i>

This may well be true for complex areas of advanced learning.
However, I don't think it's an issue at the K-12 level.

Stephen Downes (2010-05-05 18:08):

No reason to regret this; I'm a reasonable person.

I will take the 'inductive gap' as metaphorical rather than literal, as the entire model is metaphorical.

This is important because there is no sense in which the content jumps over or traverses the gap.

This is because the inductive gap is in fact a semantic gap. Whatever signification a signal had at the source is lost; the signification a signal has at the destination is based on a completely different semantic map.

That's why, though the teacher has an intended meaning for the word 'over', the resulting meaning may be different when interpreted by the learner.

There is no principle-based means of ensuring that semantic alignment will occur. This has been studied extensively. It's a matter of logic: the evidence underdetermines the conclusion. See, e.g., Quine, 'On the Indeterminacy of Translation'.

You can say, "Certainly some 'recipes' are more effective than others for getting the same output, i.e., learning the material." But if you can't say why (as you seem to agree when you say, "Nobody really knows what's going on in a student's brain while they are inductively reasoning"), then you can't say whether you have obtained the actual result desired, or some stand-in for the result.

The simpler the assessment, the more likely you have obtained a stand-in. That's why it is an error to teach, and test for, simple stand-alone concepts such as 'over'. While it's easy to produce the behaviour, you can only be confident that the understanding is semantically correct if the learner engages in relatively complex learning and assessment tasks.

As the complexity increases, your confidence in the assessment increases (that's why we make people whose understanding really matters, like airline pilots and surgeons, go through extensive practicums or internships).

However, with this comes increased cost and difficulty in managing the assessment.

Also, paradoxically, with this comes lower scores, because the much more complex assessment reduces the number of false positives obtained from guessing, reading teacher expressions, pattern detection, and other misleading responses that yield correct results on simple tests.

That's why you can't just look at test results to determine whether a teaching method is effective. You also want some way of describing your confidence in the results: not confidence in the sense of statistical significance, but confidence in the sense of semantic reliability.

Anyhow, I appreciate your willingness to engage with the model.