May 25, 2010

The Other Problem with Standards

"We took our licks, we got outvoted," [Republican Texas Board of Education member David Bradley] said of the last time the [Texas] standards were debated and approved in 1999. ... "Now it's 10-5 in the other direction. ... We're an elected body, this is a political process. Outside that, go find yourself a benevolent dictator."

I love the benevolent dictator part.

Standards, especially national ones, are a political beast. They will be politicized. And, about half the time your political party will be out of power. Politics are a zero-sum game. Deal with it.

May 24, 2010

Induction is not Constructivism

(Update:  cleaned up a bunch of typos and reworded the post to clarify.)

Brian Rude asks a good question,

The induction model certainly makes some sense. But isn't it another name for constructivism?

I see how Brian got confused due to my oversimplified model, which conflates inductive reasoning with the inductive-like process of the learning metaphor I proposed.  And, since constructivism relies heavily on inductive reasoning, Brian's conclusion is a fair take-away.

So, let me clarify.  Take a look at this less over-simplified schematic.

The diagram shows how a learner converts an observation of some stimulus into a thought/memory, i.e., how the learner learns. I've re-labeled the induction process I was explaining in the previous post as a sub-induction process and added three primary super-processes of learning: direct memory, deductive reasoning, and inductive reasoning.  (Bear in mind that this is merely a hypothetical conceptual model; what is actually happening in the brain is still largely unknown.)

The point I'm trying to get across in the model is that the observed stimulus ALWAYS gets transformed as it becomes knowledge.  I could have used an alternate model and placed the sub-induction directly under the "direct memory" process and indicated that the "sub-inductive" process was subsumed in the "deductive" and "inductive" reasoning processes to get the same point across.  Something like this.

I think the first diagram is conceptually clearer.  The main take-away for either model is that there is no direct access into the brain.

The learning flow would go something like this:  1. the learner observes a stimulus, 2. the learner processes the stimulus via one of the super learning processes, 3. the learner then applies the sub-learning process, and 4. the extracted "knowledge" goes into the learner's memory.

How the observation is made or how the stimulus is presented to the learner determines what super-process the learner will use.  Let's look at some examples (now would be a good time to review these three posts on the nature of knowledge).

The teacher tells the learner the following fact (i.e., a verbal association): "The U.S. Constitution was written in Philadelphia."  The learner learns this fact via direct memory (I couldn't think of a better name, sorry).  However, the verbal statement does not merely get imprinted in the learner's memory verbatim (not that anybody seriously believes this in any event).  The fact gets abstracted by the sub-induction process into the learner's existing knowledge, something like this:

The knowledge is the connection.

This knowledge could have been learned other ways as well.  Let's say the learner has been exposed to the following two facts: that "the U.S. Constitution was written at the constitutional convention" and that "the constitutional convention was held in Philadelphia."  From these two facts, the learner can use deductive reasoning to deduce that  "The U.S. Constitution was written in Philadelphia."
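That deductive chain is mechanical enough to sketch in code. Here is a toy illustration; the fact representation and names are invented for this example, not any real knowledge-representation scheme:

```python
# Toy sketch of deductive chaining: from "X was written at Y" and
# "Y was held in Z", derive "X was written in Z".
facts = {
    ("US_Constitution", "written_at"): "constitutional_convention",
    ("constitutional_convention", "held_in"): "Philadelphia",
}

def deduce_location(facts, thing):
    """If thing was written_at some venue and that venue was held_in
    some place, conclude that thing was written in that place."""
    venue = facts[(thing, "written_at")]
    return facts[(venue, "held_in")]

print(deduce_location(facts, "US_Constitution"))  # → Philadelphia
```

The learner never observed the conclusion directly; it is derived from the two stored facts, which is the point of the deductive pathway.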

How about another one: learning a rule relationship.  Here's the rule: "the steeper the inclined plane, the less time it takes the ball to roll down the inclined plane." This rule can be learned deductively or inductively.

In the deductive method, the teacher might start off with: “The question is, Is there a connection between how steep an inclined plane is and how long it takes a ball to roll down it?”

The teacher then tells the student the rule-relationship (the steeper the inclined plane, the less time it takes the ball to roll down the inclined plane) and then shows examples using inclined planes of different angles. These examples would confirm the rule. The knowledge of the rule is processed by the learner through the deductive reasoning process and then stored via the sub-induction process. (Sorry, no fancy connection map this time.)

In the inductive method, the teacher has the learner do an experiment by rolling balls down inclined planes of different angles, measuring how long it takes each ball to roll down, and then has the learner draw a conclusion.

This way requires more skills. (In the deductive method, the learner merely compares examples with the rule. “Yup, the ball takes less time when the angle is steeper.”) For example, the learner has to change the angles, measure the times, write the measurements, compare and contrast the instances, and figure out the connection. This means the teacher would have to teach these pre-skills before learners do the experiment.

The knowledge of the rule is processed by the learner through the inductive reasoning process and then stored via the sub-induction process. However, the learner takeaway is the same, that is, the connection mapping in the learner's memory is the same.
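As an aside, the rule itself falls out of elementary mechanics. A minimal sketch, assuming a solid ball rolling without slipping down a ramp of illustrative length 1 m (for a uniform sphere, a = g·sin(θ) / (1 + 2/5)):

```python
import math

G = 9.81           # gravitational acceleration, m/s^2
RAMP_LENGTH = 1.0  # ramp length in metres (illustrative value)

def roll_time(angle_deg):
    """Time for a solid ball to roll (without slipping) down the ramp.
    Acceleration a = g*sin(theta) / (1 + I/(m*r^2)), with I = (2/5)*m*r^2
    for a uniform sphere; then t = sqrt(2*L/a)."""
    a = G * math.sin(math.radians(angle_deg)) / (1 + 2 / 5)
    return math.sqrt(2 * RAMP_LENGTH / a)

# Steeper angle -> shorter roll time, which is the rule being taught.
for angle in (10, 20, 30, 40):
    print(f"{angle:2d} deg -> {roll_time(angle):.2f} s")
```

Whether the learner reaches the rule deductively or inductively, this is the regularity the examples exhibit.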

Which finally brings us 'round to constructivism.

Stripping away all the pedagogical blather, constructivism is merely a pedagogy that favors inductive reasoning as the preferred learning pathway.  Constructivists favor learning by experience or by doing. This means that the learner will be observing stimuli (examples and non-examples of something) and generating the general ideas revealed by the examples and non-examples.  Hey, that sounds suspiciously like inductive reasoning.

Here's what I wrote about the inductive reasoning process that learners go through when they observe a stimulus during the learning process.

Knowledge is not directly transferred into a learner, but rather knowledge is acquired indirectly through an inductive process. Specifically, knowledge is typically acquired through an "inductive reasoning" process.

That is, the learner (1) observes stimuli (examples and non-examples); (2) performs a series of logical operations on what it observes; and (3) arrives at (induces, figures out, discovers, “gets”) a general idea revealed by the examples and non-examples.
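Those three steps can be caricatured in code. A toy sketch of feature induction; the shapes and colors are invented placeholders:

```python
# Toy induction: infer the feature shared by every positive example
# and absent from every non-example -- the "general idea."
examples = [{"red", "circle"}, {"red", "square"}, {"red", "triangle"}]
non_examples = [{"blue", "circle"}, {"green", "square"}]

def induce(pos, neg):
    common = set.intersection(*pos)  # features shared by every example
    excluded = set.union(*neg)       # features seen in any non-example
    return common - excluded         # candidate general idea

print(induce(examples, non_examples))  # → {'red'}
```

Note that the learner only ever sees instances; the general idea ("red") is never presented directly, which is exactly the indirectness the model is meant to capture.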

In contrast, direct instruction relies more heavily on the deductive reasoning pathway for teaching certain forms of knowledge, such as rule relationships. Using deductive reasoning, the learner goes from general (rule) to specific (examples). In the deductive method, the teacher teaches the rule statement first. Then examples and non-examples are presented. Finally, the teacher tests all examples and non-examples to see if the learner has learned the rule.

Constructivism can be faulted for many things, but its reliance on the inductive reasoning pathway of learning is not one of them. That is a perfectly valid pathway which proponents of direct instruction use (such as for teaching sensory/basic concepts). Constructivism's faults lie elsewhere -- over-reliance on the induction method of teaching and generally a failure to attend to the important minutiae of teaching for determining whether the learner has learned the intended knowledge and is retaining it. The latter is a self-imposed error based on ideology because constructivism makes it more difficult to get the minutiae right.

May 19, 2010

Common Core Standards Released

Here they go.  Ta Da.

I remain unimpressed.  Frankly, I don't see how these standards are going to be the impetus to improve any aspect of education.

As a lawyer I occasionally draft contracts and, let me tell you, you have to use much more precise language than this to actually get someone to reliably do what you want (and are paying) them to do.

I'd bet that all fifty states could adopt these standards, not change a single thing they are doing, and claim compliance.

Some of these standards are incredibly silly.

Take, for example, this one under print concepts for kindergarten (p. 16):

Demonstrate understanding of the organization and basic features of print. ... Follow words from left to right, top to bottom, and page-by-page.

This looked familiar to me and, sure enough, Minnesota had (has) a similar standard.

Follow print (words and text) from left to right and top to bottom.

Bob Dixon lampooned Minnesota's standard as being ridiculous and I'm thinking the same criticism applies to the Core standard.

Students should certainly do this when learning to read. My question is this: Before Minnesota developed this standard, was anyone there not teaching kids to read English from left to right and top to bottom? Apparently. The people who sat on that committee and collectively decided to write this out as a standard for the children of Minnesota—did they feel literate and scholarly and innovative when the final vote was tallied? I hope this standard makes a significant contribution toward correcting the problem with the way people used to teach reading in Minnesota.

This sentiment applies to pretty much all of the Core standards, even the less ridiculous ones.

In fact Dixon's criticism of the Standards movement is one of the best I've seen and applies to Core's standards.

These standards are harmful because they are, for the most part, meaningless verbal detritus on the one hand, but textbook publishers live and die off them, on the other. Even with respect to clearly incomprehensible standards, publishers have to come up with something to stick in a textbook that helps create the illusion that the textbook is aligned with some set of standards. I am empathetic with the publishers … to a point. The standards are a major incentive for the publishers to produce crap. Over the years, I’ve worked with several major publishers, and none of them has aspired to produce crap. They do it, though, because the market demands that they do it.

And standards and state tests, taken together, are very harmful. First, because the standards are so bad, it is nearly impossible to assess them. In short, the standards and the state tests don’t align, except in the most meaningless and specious ways. But here is the biggest problem of them all, and the reason the tests and standards are so damaging. IF the standards were really “good” according to some criteria that would make sense to the average educated person on the street, and if they were precise enough to be aligned with assessment tools that were actually technically sound, widespread failure would continue, unabated. Figure 1 shows Doug Carnine’s illustration of the problem.

The black box in the middle is the magic by which teachers start out with goals for students and end up with students performing brilliantly on tests that are valid and reliable. The black box is the instruction, and the states and just about everyone else are so clueless about instruction that they give it very little attention. With the best standards and the best assessments, the system is doomed to failure if, at the center of it all, we don’t have the best instruction. As it stands now, the standards are, for the most part, ridiculous, and few if any of the state assessments have been certified as valid and reliable.

So, let's use the inductive learning model to judge the effectiveness of standards as an education reform.

Standards aren't going to affect teacher or student effectiveness.  The theory is that standards will improve curriculum effects.  But I don't see that happening.  Standards, like the Core standards, that are easily subvertible will be subverted, and educators will continue to do what they are presently doing -- because that's what they want to do.  And if NCLB 1.0 taught us anything, it's that standardized assessments, even the high-stakes variety, are simply not capable of improving student outcomes.  At best, improved standards and perfect assessments might have a very indirect effect on curriculum and instruction.  Just because you write the perfect standard does not mean that educators will be able to teach it to the kids they are currently unable to teach without the standards.

Induction/Inferential Model of Learning

I've revised Downes' Induction Model of Learning schematic to incorporate my clarifications.

Specifically, I added the labeling for Teacher Effects (the top coil), Student Effects (the bottom coil), and Curriculum/Presentation Effects (the distance between the coils).

In the real world electronic circuit, current flowing through the top coil creates a magnetic field.  The magnetic field affects the bottom coil (depending upon the distance between the coils and the strength of the coils) which induces a current in the bottom circuit.
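For readers who want the circuit half of the metaphor made concrete: the induced EMF scales with the mutual inductance, M = k·sqrt(L1·L2), and since each coil's inductance grows with the square of its windings, M scales as k·N1·N2. Here is a sketch that assumes, purely for illustration, that the coupling coefficient k decays exponentially with the gap between the coils (real coil geometry is more complicated):

```python
import math

def induced_emf(n_teacher, n_student, gap, dI_dt=1.0):
    """Magnitude of the EMF induced in the secondary (student) coil.
    M = k * sqrt(L1 * L2); with L proportional to N^2, M is proportional
    to k * N1 * N2.  The coupling coefficient k is modeled here as
    decaying exponentially with the gap -- an illustrative assumption."""
    k = math.exp(-gap)  # coupling falls off as the coils move apart
    return k * n_teacher * n_student * dI_dt

# More windings on either coil, or a smaller gap, raises the induced signal:
print(induced_emf(10, 10, gap=0.5))
print(induced_emf(10, 10, gap=2.0))
```

In the metaphor's terms: a stronger teacher or stronger student (more windings) or a better presentation (smaller gap) each increases how much "knowledge" gets induced.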

The model is not without its flaws; however, I think it is useful for conveying the simple notion that knowledge is not simply pumped directly into the student's head by the teacher.  Instead, the student observes the stimulus/data being presented by the teacher and induces "knowledge."  Hopefully, that "knowledge" is the  general idea revealed by the example(s) presented by the teacher.  Often it is not.

Anyway, the important take-away revealed by this model is that teacher effects, student effects, and curriculum/presentation effects are all interrelated and affect what the student learns, doesn't learn, or imperfectly learns.

A strong teacher and an average curriculum might only be able to induce the intended knowledge in a strong student.  Improving the curriculum will likely reach weaker students.  Improving the student, say by fixing his tooth ache which was causing a distraction, might also improve what the student learns.

Similarly, a strong teacher and a strong student might be hampered with a very weak curriculum.

Let me make three general statements regarding teacher effects, student effects, and curriculum/presentation effects based on my observations of our education system.

Teacher Effects:  We don't really know what makes a good teacher better than a bad teacher, and we certainly cannot train a random teacher to be a good teacher outside the parameters of a specific curriculum.  Teacher unions and tenure resist changes from the status quo.

Student Effects:  These have a hereditary/genetic and an environmental component, each contributing about 50%.  It is politically incorrect to even discuss the hereditary/genetic component, so everyone pretends that there is only an environmental component.  Then everyone is surprised when environmental interventions directed at the student fail to achieve the expected benefits.

Curriculum/Presentation Effects:  To a naive observer there appear to be very many different curricula out there.  In actuality, most curricula are only superficially different and have little to no effect on what a student learns.  The few commercially-available curricula that have large effects are generally disfavored by educators for a variety of reasons (usually unrelated to student learning).

What do you think?

Next post will discuss education reforms, how they are explainable with the above model, and why it is evident that most do not stand a chance of improving student outcomes.

May 18, 2010

Why It's So Difficult to Improve Educational Outcomes

To understand the problems of K-12 education (that is, getting students to learn what we think is important for them to learn) and the great historical difficulty we've had improving education outcomes, you need to look at ground zero of the learning process.

I like Stephen Downes' metaphorical induction model of learning (with my clarifications).  If you haven't already, go read it and don't come back until you understand it. I'll reproduce the diagram below.

There are three important factors in this model. 

  1. The top inductor, representing teacher (or resource) effects.  Following the electronics metaphor, a teacher (or a resource, such as a book) with more windings is more capable of transferring data to the student than a teacher with fewer windings.  The inductor windings represent teacher effectiveness.
  2. The bottom inductor, representing student effects.  A student with more windings will be more capable of inducing a transfer of data from the teacher/resource than a student with fewer windings.  The student windings represent cognitive ability and other student factors.
  3. The distance between the two inductors, representing the presentation of the data from the teacher/resource to the learner/student.  I've termed this distance the "inductive gap."  The teacher presentation represents what actually takes place in the classroom that affects learning.  It is the curriculum, the classroom management, and the pedagogy.  A poor presentation makes learning more difficult by increasing the distance between the teacher and student inductors.  A good presentation reduces the distance, making learning more likely.
The benefit of the above model is that not only does it capture most of the important variables, but it also illustrates that what the teacher intends to teach (the data) is not necessarily what the student has actually learned (knowledge).  This is because it is difficult for the teacher to ascertain what exactly the student has learned.  Such feedback can only be gathered indirectly with imperfect testing instruments, and it must also be properly interpreted by the teacher. (The drawback of the model is that it relies on the reader's knowledge of the operation of a rather esoteric electronics circuit.)

Complicating matters even further is that the model is different for each piece of data that the teacher wants transferred to (i.e., learned by) the student.  Thousands of these transferences must take place during the K-12 cycle.

Moreover, the model ignores student retention effects.  Just because the student has actually induced the intended knowledge from the teacher presentation does not mean that the knowledge will be retained by the student.  The ravages of forgetfulness are brutal and must be contended with by the teacher.  Often retention is ignored or, worse yet, derided by educators ("drill and kill").

As Stephen Downes puts it, what teachers are really doing is training a neural net in each student's brain.  That training does not happen instantaneously.  It happens over time and is imperfect.  See Willingham's article on inflexible knowledge. This makes the teacher's interpretation of the learner's feedback even more difficult as the neural network is being trained.

To help understand how the model plays out in the real world, its implications for instructional design, and how the "inductive gaps" affect the learning process, I'm going to provide a lengthy quote from Zig Engelmann's upcoming book.  In the passage Zig calls the inductive gaps "inferential gaps."

[We] have learned from extensive applications that nothing may be assumed to be taught to at-risk children unless it appears on at least three consecutive lessons...When we apply this formula to the first draft of the material, we presume that when the program is field-tested, our estimates will be confirmed. However, we remain perfectly aware that in some cases the practice estimates are wrong. They may vary in either direction—providing too much practice, or providing too little. More frequently the error is in the direction of too little practice.

An important issue that we must address in creating a sequence of activities is how large the inferential gaps are between one exercise type and the next type in the sequence…

… If we were to unintentionally design a program with enormous gaps for teaching reading, we might first teach letter names, teach the short sounds for the vowels, and then require learners to sound out regularly spelled words like run and hat. Obviously, the gap between the exercise types is large because students haven’t been taught the sounds for the consonants, or how to blend the sounds together to identify words.

Although some bright students may be able to formulate workable inferences about how to derive the sounds from the names of some consonants, most students will fail the instruction because of the large gap between what they know and what they are expected to do. Discovery learning assumes that students are able to fill large inferential gaps between what they know and what they are expected to learn. Proponents of structured instruction believe that only small sequence-related inferences are appropriate.

Note that there will always be inferential gaps between exercise types. The only issue is how large they are. This is an empirical issue. If we believe that students should be successful, we would design instruction so the inferential gaps are small enough for students to succeed. If students do not succeed, their failure suggests that the inferential gaps are too large, which means that the sequence should be redesigned to make the gaps smaller. Direct observation of how students respond to a sequence is necessary because that is often the only way these gaps are identified. Typically students are progressing through a sequence well and then encounter an exercise type that is too difficult for them. If the exercise seems clear and apparently provides adequate practice, the problem is not with this exercise type, but with the sequence of activities. In other words, student performance implies that there is a gap in the sequence that is too large for the students.

As suggested above, the size of reasonable gaps is not the same for all students. The children we have worked with have ranged from those who could not take even the smallest imaginable steps without considerable practice, to children who drew correct inferences that were far in advance of what they had been taught.

At the extreme low end was a pair of twins who had spent the first four years of their lives with virtually no human contact and who could identify some real objects, like a shoe, a ball, and a cup, but could not identify any two-dimensional representations. Even when the teacher prompted the relationship by holding a red ball next to a picture of a red ball, the children could not identify the object in the picture. After many trials, they could identify pictures of balls, shoes, and cups without the corresponding three-dimensional object next to it; however, these children had to practice identifying more than 10 illustrated objects before they could generalize and identify an illustrated object that had not been taught.

At the other extreme are the highly talented students who make a mockery out of the three-lesson rule. They learn names of new things in only a couple of trials and are able to take great leaps from what they know to remotely related inferences they are scheduled to learn much later.

If the designer assumes that every minor variation in what is to be taught requires explicit instruction, the instructional sequence may be many times more laborious than it needs to be for the average learner who goes through the program. On the other hand, if the designer makes elitist assumptions that characterize analyses of Dewey and Bruner, the inferential leaps required by the program are so large that they may be made by fewer than one fourth of the students. For example, a math program that presents a single example of each problem type assumes that students will formulate an algorithm for solving the problem presented that will generalize to the full set of related problems that are not taught. In fact, possibly only one fourth of the average students will solve the problems or benefit from the experience of struggling with them. The percentage of low performers making this leap is virtually zero percent.

The only way to determine whether the program is highly effective with the intended student population is to provide an empirical test of the sequence. This test will not only identify the missing inferences but will reveal both their character and size. In other words, they provide the designer with precise information about how to address the missing inference.

I'm going to highlight one observation from Zig's passage:

The children we have worked with have ranged from those who could not take even the smallest imaginable steps without considerable practice, to children who drew correct inferences that were far in advance of what they had been taught.

This is the one that does quite a bit of educational mischief.  Educators draw all sorts of bad conclusions from the students who can draw "correct inferences that [are] far in advance of what they had been taught" and apply those conclusions to the students who cannot "take even the smallest imaginable steps without considerable practice." 

But I don't want to get too ahead of myself just yet.

In my next post I'll discuss how the various ed reforms are doomed to failure since their real world effects often fail to address or have limited effect on the real educational variables as illustrated in the above model.

May 14, 2010

A Bridge Too Far

Over at Bridging Differences, Deb Meiers writes:

That was an amazing and surprising find re. Milwaukee charters. I thought that at the very least they'd get the advantage of being in a more diverse (integrated) setting with more middle-class kids and that being chosen (even by lottery) would produce a kind of halo effect. Why it didn't is what should baffle the media. But it doesn't.

I comment:

Or perhaps, your implicit assumption that diverse (integrated) settings with more middle class kids confers an educational advantage which leads to improved student performance is invalid.

The assumption rests on shaky empirical support in the first place. So, one would think that this additional piece of potentially-negative evidence might lead an un-biased thinker to question her underlying assumptions. Why it doesn't baffles me.

PS: This might be one of the best examples of irony I've ever seen in an education blog.

PPS: It's also a good example of why we never make any progress in education. Policy thinkers become so wedded to their pet assumptions and will bend over backwards to discount contrary evidence. Classic Confirmation Bias.

May 10, 2010

Same as it ever was -- continued

The Obama administration has posted "a series of documents outlining the research that supports the proposals in its blueprint for revising the Elementary and Secondary Education Act (ESEA)."

Here is the document for Career- and College-ready students analyzed in my last post.

Embarrassingly, the document doesn't appear to cite any actual research.  More embarrassingly, what the document sets out as research clearly isn't research.

I wonder if these documents were submitted to the What Works Clearinghouse for review?

From the Department of Huh?

Some cognitive dissonance from Secretary of Education, Arne Duncan.

The No Child law, passed in 2001 by bipartisan majorities, focused the nation’s attention on closing achievement gaps between minorities and whites, but it included many provisions that created what Education Secretary Arne Duncan on Friday called “perverse incentives.”

In an effort to meet the law’s requirements for passing grades, many states began dumbing down standards, and teachers began focusing on test preparation rather than on engaging class work.

“We’ve got to get accountability right this time,” Mr. Duncan told reporters Friday. “For the mass of schools, we want to get rid of prescriptive interventions. We’ll leave it up to them to figure out how to make progress.”

So, let me get this straight.  Under NCLB 1.0, states were permitted to set their own standards and assessments.  In other words, the Feds "[left] it up to [the states] to figure out how to make progress." And many states chose to create "dumbed down standards" due to "perverse incentives."

Fair enough.  This time the DoE wants to pressure the states into adopting national standards.  Clearly, the Feds, with their long history of educational success, know much better how to educate than the states.

But now, when it comes to meeting those tough new federal standards, Arne wants to again "leave it up to [the states] to figure out how to make progress."  Even though they weren't doing such a good job of making progress toward their own dumbed-down standards.

Reading First failed, not because it was too prescriptive, but because it wasn't prescriptive enough.  States got to figure out for themselves how to select/develop interventions as long as they made it appear to be based on evidence.

The game for states, in case anyone hasn't figured it out yet, is to appear as though they are doing something important, pretend to care an awful lot for the children, appear to follow the scientific evidence, let the chips fall where they may, wait for someone to come up with a new politically correct narrative to explain why some external factor caused them to fail, and agitate for a kinder, gentler NCLB 3.0.

NCLB 2.0:  Reading First but with even less oversight and compliance.

Now that's what I call smart regulation and failing to heed history's lessons.

The Blueprint for Failure

The Obama Administration's plan to fix NCLB and American education is long on lofty rhetoric and short on humility and specifics.

The plan lays out five giggle-inducing "key priorities."

Let's look at point one in today's post:  College and Career Ready Students.

Every student should graduate from high school ready for college and a career. Every student should have meaningful opportunities to choose from upon graduation from high school. ... Four of every 10 new college students, including half of those at 2-year institutions, take remedial courses, and many employers comment on the inadequate preparation of high school graduates. And while states have developed assessments aligned with their standards, in many cases these assessments do not adequately measure student growth or the knowledge and skills that students need, nor do they provide timely, useful information to teachers. We must follow the lead of the nation's governors and challenge students with state-developed, college- and career-ready standards, and more accurately measure what they are learning with better assessments. We must reward the success of schools that are making significant progress, ask for dramatic change in the lowest-performing schools, and address persistent gaps in student academic achievement and graduation rates.

What we have here is a change in nomenclature for pulling the old bait-and-switch.

NCLB 1.0 calls for four levels of performance: "advanced" (which it does nothing with), "proficient" (the student has learned what the state wants), "basic" (the student hasn't learned all of what the state wants), and "below basic" (the student is in deep trouble).  Most states have set the bar for "proficient" at a really low level and, despite this, have been unable to get many students to attain even that low level.

The Obama administration wants to either add a new level or rename one of the existing levels, creating a dummy or "career-ready" ghetto.

The NAEP "proficient" level is set at about the level that a college-ready student can attain (based on the comparable percentage of students that graduate college and are "proficient or above," about a third of all students).  Most states have set their "proficient" level well below the NAEP "proficient" level, so we are likely to see this standard raised, which is what the blueprint implies.

So what will be the level of the "career ready" track?  I'll ignore the lofty, yet empty rhetoric of the blueprint, and guess that it'll be at the NAEP "basic" level which is somewhat below where most states have set their "proficient" level.  That's the level that is used for the Urban studies and is gradually becoming the accepted norm.

Of course, at such levels we get politically unacceptable results.  Using the 2009 NAEP results for 8th grade reading, we see that about 43% of white and Asian students will be in the college-ready track ("proficient" plus "advanced") while only about 16% of black and Hispanic students will be.  Even worse, while 71% of white and Asian students in the career-ready track (the ratio of "basic" to "basic" plus "below basic") will meet the standard, only 51% of black and Hispanic students will.  That seems lose-lose to me.  84% of black and Hispanic students will be relegated to the career-ready track, which is political suicide, and only half of them will meet even that low standard.
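
The arithmetic behind those track percentages can be sketched in a few lines. The distribution numbers below are illustrative round figures chosen to be consistent with the percentages quoted above; they are not the official NAEP 2009 tables.

```python
# Illustrative round numbers consistent with the percentages quoted above --
# NOT the official NAEP 2009 achievement-level tables.
groups = {
    "white/Asian":    {"proficient_plus": 43, "basic": 41, "below_basic": 16},
    "black/Hispanic": {"proficient_plus": 16, "basic": 43, "below_basic": 41},
}

for name, g in groups.items():
    college_track = g["proficient_plus"]          # college-ready share (%)
    career_track = g["basic"] + g["below_basic"]  # everyone else (%)
    pass_rate = g["basic"] / career_track         # career-track students meeting "basic"
    print(f"{name}: college-ready {college_track}%, "
          f"career track {career_track}%, career-track pass rate {pass_rate:.0%}")
```

With these assumed inputs, the computed career-track shares (57% and 84%) and pass rates come out at or near the figures in the post.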

Clearly the Obama Administration hasn't thought this one through, or hasn't hired anyone who knows the first thing about basic statistics.  This new plan will basically legislate a "separate but not equal" education system, which is exactly the kind of system the Department of Education was created to eliminate.

At least the 100% proficiency goal made statistical sense if the aim is to eliminate achievement-level gaps, if only it were attainable.

And that was only the first paragraph of the first point of the blueprint.  It gets worse.

May 5, 2010

How Not to Save the Schools (Necessarily)

During my hiatus, the one issue I wanted to comment on was Diane Ravitch's new book which is sadly filled with much soft-headed thinking.  Apparently, Ravitch has been hanging around our education Siren too long.

Stuart Buck has already done the heavy lifting analyzing the flaws in Ravitch's book, so there's no need for me to pile on.

I would like to address, however, Ravitch's proposed solution to our education woes -- a national core curriculum, which many other education pundits also endorse.  I'm going to use Don Hirsch's review of Ravitch's book as my stepping stone.  Hirsch's review should be read in conjunction with Buck's analysis because they approach the flaws in Ravitch's book from different angles and are complementary.

... Ravitch argues that the recent nostrums of “choice” and “accountability” have not worked very well. What new ideas will?

She makes strong arguments in favor of a widely shared core curriculum. This reform, she asserts, would carry multiple benefits. It would assure the cumulative organization of knowledge by all students, and would help overcome the notorious achievement gaps between racial and ethnic groups. It would make the creation of an effective teaching force much more feasible, because it would become possible to educate American teachers in the well-defined, wide-ranging subjects they would be expected to teach—thus educating students and teachers simultaneously.

It would also foster the creation of much better teaching materials, with more substance; and it would solve the neglected problem of students (mostly low-income ones) who move from one school to another, often in the middle of the school year. It would, in short, offer American education the advantages enjoyed by high-performing school systems in the rest of the world, which far outshine us in the quality and fairness of their results.

There are a few flaws in this line of argument.

The first flaw is that there is no actual field-tested, commercially available "shared core" curriculum with a research base in a public school (free of selection-bias effects) showing that the benefits Ravitch and Hirsch expect have actually flowed, or necessarily will.  There is some cognitive science research suggesting that some of these benefits might accrue, but there is a large gap between that research and a real-world curriculum that achieves actual results.  And mind you, I'm as sympathetic as the next guy to the idea that a (voluntary) common core curriculum is better than the alternatives.

The second flaw is the failure to see the elephant in the room -- the current education system -- which will do its darnedest to thwart, subvert, and otherwise screw-up any reform that upsets the status quo (which they very much like) as they've done in the past with every other "reform."

Under the current system, educators are not responsible for educating anyone.  If the student fails to learn, it's the student's fault, not the school's.  Educators have a host of excuses (poverty, lack of parental support, etc.) and labels (learning disabled) they can use to excuse their failure to teach.  Under the current system, they largely get to teach how they want and, at the end of the year, point to the kids who learned something (the easily educable) and say "I taught them."  They do what they want to do, and the kids who have the cognitive ability to make the inductive leaps needed to learn the material are the ones who benefit.  The others, not so much.  And, since most of the "reforms" are directed at those other kids, the plan tends to be to do as little as possible to implement the reform, complain as loudly as possible, and wait until the next reform comes down the pike.

Good luck overcoming that.

Ravitch recognizes that consensus on a core curriculum would not be automatic and that “any national curriculum must be both nonfederal and voluntary, winning the support of districts and states because of its excellence.” She continues:

If it is impossible to reach consensus about a national curriculum, then every state should make sure that every child receives an education that includes history, geography, literature, the arts, the sciences, civics, foreign languages, health, and physical education. These subjects should not be discretionary or left to chance. Every state should have a curriculum that is rich in knowledge, issues, and ideas, while leaving teachers free to use their own methods, with enough time to introduce topics and activities of their own choosing.

Really?  Haven't educators been using "their own methods" all along, without much success?  Those methods simply don't work for a large demographic slice.  How can changing what is taught fare any better if how it is taught remains deficient?  The problem of education today is not only what is taught, but how it is taught.

Another improvement over existing state standards is the recognition by the authors of the “Common Core” of its own limits—they devote a section to “What is not covered by the Standards.” The omissions turn out to be major, among them both teaching methods and the curriculum itself. Such acknowledgment of limits is very important. The new multistate document is unique in conceding that it is neither a curriculum nor a curriculum guide, and insisting at the same time that proficiency in reading and writing can be achieved only through a highly specific curriculum—still to be developed—that is “coherently structured to develop rich content knowledge within and across grades.” If these admonitions are taken seriously by the states, Ravitch will have powerful allies in advocating a core curriculum.

Agreed as to the reading and writing.  Now throw in math, science, and all the rest of the "content" we want taught.  That is the main problem -- how to teach everything such that it is actually learned and retained by the students.  Something that has heretofore remained largely unaccomplished.
To teach that curriculum Ravitch evokes a vision of good neighborhood schools (often destined for closure by the new reformers) ...

I don't remember these good neighborhood schools being able to actually educate the demographic that we want to educate today.  Those kids used to drop out long before high school and often even middle school.  The demographic that gets educated today is the same demographic that used to get educated back in the "good old days."  That's not good enough any more.

Yet if Ravitch’s proposals for a coherent, cumulative national—or at least widely shared—curriculum are to carry the day, she needs to put forward a more effective critique of the intellectual and scientific inadequacies of the anticurricular, child-centered movement. Her vision can hardly be put into effect while an army of experts in schools of education and a much bigger army of teachers and administrators, indoctrinated over nearly a century, are fiercely resisting a set curriculum of any kind. Ravitch has roundly attacked the entrepreneurs’ invisible-hand business model as not corresponding with the reality or the fundamental purposes of education. She needs to expose in greater analytic detail the inadequacies of the invisible-hand theory of child-centered schooling.

See, Don gets it.

Except for the comment that "Ravitch has roundly attacked the entrepreneurs’ invisible-hand business model as not corresponding with the reality or the fundamental purposes of education."  To quote the great Adam Smith once again: "It is not from the benevolence of the butcher, the brewer or the baker, that we expect our dinner, but from their regard to their own self interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages."  The main problem of education is that the incentives of educators are not aligned with providing a quality education to everyone.  They get paid no matter how poorly the services are provided, with little risk of losing their tenured sinecures.  They don't have to provide a good service, and so they don't, because it is much easier not to.

May 4, 2010

Hell freezes over

I know I'm going to regret this.

I think Stephen Downes is mostly right in his proposed induction model of knowledge and learning.

What we have here is a model where the input data induces the creation of knowledge. There is no direct flow from input to output; rather, the input acts on the pre-existing system, and it is the pre-existing system that produces the output.

In electricity, this is known as induction, and is a common phenomenon. We use induction to build step-up or step-down transformers, to power electric motors, and more. Basically, the way induction works is that, first, an electric current produces a magnetic field, and second, a magnetic field creates an electric current.

Why is this significant? Because the inductive model (not the greatest name in the world, but a good alternative to the transmission model) depends on the existing structure of the receiving circuit, what it means is that the knowledge output may vary from system to system (person to person) depending on the pre-existing configuration of that circuit.

Stephen is proposing a mutual induction circuit theory of knowledge transference. As Stephen says, the magnetic flux due to the current of the top circuit is inducing a current in the bottom circuit.

Knowledge is not directly transferred into a learner; rather, knowledge is acquired indirectly through an inductive process. Specifically, knowledge is typically acquired through an "inductive reasoning" process.

That is, the learner (1) observes stimuli (examples and non-examples); (2) performs a series of logical operations on what it observes; and (3) arrives at (induces, figures out, discovers, “gets”) a general idea revealed by the examples and non-examples.

For example, let's say a teacher is trying to teach a student the concept of "over" in the sense that when an object is "over" another object it is directly vertically above that object. The teacher may tell the student, or the student may look up, the definition of the word "over" and arrive at the verbal definition "directly vertically above." But this verbal definition isn't necessarily stored in the student's memory verbatim, if at all. And, even if it is, it doesn't necessarily follow that the learner retrieves this verbal definition when deciding if something is "over" something else, i.e., when determining the metes and bounds of the concept "over." Which is not to say that knowing the verbal definition is not typically useful in learning a new concept. It often is.

Concepts, such as "over," are typically learned by the learner observing various examples and non-examples of the concept and then inductively reasoning to the general concept revealed by the examples and non-examples.

Take a look at the following teacher presentation:

The student is presented with two examples (1 and 2) followed by two non-examples (3 and 4), from which the student must inductively reason to the general concept revealed thereby.  Then the student is tested to determine whether he or she has learned the concept.

This is about as clear and unambiguous a presentation as it gets.  Some students might have induced the general concept with only these four examples/non-examples.  Some might have induced it with fewer.  Some will require many more.  Moreover, the concept itself is fairly simple.  As a result, the "inductive gap" that the student has to traverse in order to induce the general concept is likely small.
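
One way to picture what the student is doing is hypothesis elimination: the learner entertains several candidate generalizations and keeps the one consistent with every example and non-example. Here's a minimal toy sketch of that idea; the hypothesis space and the coordinate data are invented for illustration, not a claim about how brains actually work.

```python
# Toy sketch of induction from examples/non-examples of the concept "over"
# ("A is over B" = directly vertically above). The learner keeps whichever
# candidate rule is consistent with every labeled observation.

# Each observation: relative position (dx, dy) of object A with respect to B,
# labeled True if the teacher presents it as an example of "A is over B".
observations = [
    ((0, 3), True),    # example 1: directly above
    ((0, 1), True),    # example 2: directly above, closer
    ((4, 3), False),   # non-example 3: above, but off to the side
    ((0, -2), False),  # non-example 4: directly below
]

# Candidate generalizations the learner might entertain (invented here).
hypotheses = {
    "higher than":        lambda dx, dy: dy > 0,
    "directly above":     lambda dx, dy: dx == 0 and dy > 0,
    "vertically aligned": lambda dx, dy: dx == 0,
}

# Induction: keep the hypotheses consistent with every observation.
consistent = [name for name, h in hypotheses.items()
              if all(h(*pos) == label for pos, label in observations)]
print(consistent)  # -> ['directly above']
```

Note that without non-example 3, both "higher than" and "directly above" would survive, and without non-example 4, "vertically aligned" would too -- which is the "inductive gap" in miniature: fewer or more ambiguous examples leave more candidate generalizations standing.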


In contrast, the "inductive gap" is larger for more difficult concepts, such as "democracy" or "mammal," or when the presentation of examples/non-examples is ambiguous or otherwise harder to traverse -- for instance, if the student's exposure to the concept "over" comes solely through reading isolated uses of the word in connected text, spread over many readings and a long period of time.

As a result, inducing the general concept would be more difficult.  Fewer students would be capable of traversing this gap without assistance.

Let's look at another example.  Some students learn how to read with little, no, or poor/ambiguous instruction and are able to traverse the hundreds, perhaps thousands, of large inductive gaps needed to learn how to make sense out of connected text.  Other students won't be able to induce the ability to read unless the inductive gaps are made smaller.  One way to make these gaps smaller is to explicitly teach the various letter-sound correspondences, often referred to as "phonics."  In fact, the gaps can be (unintentionally) made wider by misteaching.  For example, many "whole language" pedagogical methods actually increase the difficulty many students have learning to read by inadvertently teaching unproductive strategies.

One goal of formal education is to present the material to the student in such a way that the student is able to manage and traverse the inductive gaps inherent in learning the general concepts underlying the material.

However, what does not necessarily follow is Stephen's next assertion.

What it means is that you can't just apply some sort of standard recipe and get the same output. Whatever combination of filtering, validation, synthesis and all the rest you use, the resulting knowledge will be different for each person. Or alternatively, if you want the same knowledge to be output for each person (were that even possible), you would have to use a different combination of filtering, validation, synthesis for each person.

Certainly some "recipes" are more effective than others for getting the same output, i.e., learning the material.  Some "recipes" may be successful in getting, say, 95% of the students to learn the desired output.  Others might only be successful in getting 50% to the same output.  It simply doesn't follow that a different recipe is needed for each student.

Nobody really knows what's going on in a student's brain while they are inductively reasoning (or, as Stephen puts it, "[selecting the right] combination of filtering, validation, synthesis") their way to learning a particular concept.  All an educator can really do is determine whether the student can accurately communicate the general concept -- by verbalizing, demonstrating, or otherwise applying it correctly to new examples -- in a way that correlates highly with the responses or applications of those people/experts who understand the general concept.

That's why educators test.

But when Stephen concludes:

Even when you are explicitly teaching content, and when what appears to be learned is content, since the content itself never persists from the initial presentation of that content to the ultimate reproduction of that content, what you are teaching is not the content. Rather, what you are trying to induce is a neural state such that, when presented with similar phenomena in the future, will present similar output. Understanding that you are train a neural net, rather than store content for reproduction, is key.

He is essentially right.

We apply or generalize knowledge through deductive reasoning.

The learner (1) has/knows/can say a general idea (concept, rule/proposition, routine); (2) can use the general idea (definition of a concept, statement of a rule, or features of the things handled by the routine; e.g., math problems, words) to examine a possible new example; (3) can “decide” whether the new thing FITS (is an example of) the definition, rule, or routine (“Can you solve this with FOIL?”); and (4) is able to “treat” the example accordingly -- name it (concept), explain it (with the rule), solve it (with the routine).
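
Continuing the toy "over" scenario from above, the deductive steps can be sketched as follows: the learner holds a general rule and applies it to decide whether new cases fit. The rule follows the post's definition of "over"; the specific cases are invented for illustration.

```python
# Toy sketch of deduction: the learner (1) holds a general rule, (2)-(3)
# uses it to decide whether a new case fits, and (4) treats the case
# accordingly (names it).

def is_over(dx, dy):
    """General rule the learner holds: A is over B iff directly vertically above."""
    return dx == 0 and dy > 0

# New cases the learner has never seen, as relative positions (dx, dy).
new_cases = [("lamp vs. table", (0, 5)), ("chair vs. table", (3, 0))]

for name, (dx, dy) in new_cases:
    verdict = "over" if is_over(dx, dy) else "not over"  # steps (2)-(3): decide
    print(f"{name}: {verdict}")                          # step (4): name it
```

The point of the sketch is the direction of reasoning: induction built the rule from particulars; deduction applies the rule to new particulars.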

Whether this can be characterized as "teaching content" is irrelevant.  Having the proper "neural state" means that the "content" has been learned, regardless of how directly or indirectly it was taught.

[Update:  cleaned up numerous typos]

The State of Education: Same as It Ever Was

I'm glad to discover that an education blogger can take a five month sanity break and come back to find out that things really haven't changed at all.  Good news for me for sure; not so good for education though.

I skimmed through about 5000 posts in my news reader and didn't feel particularly compelled to comment -- not because some weren't particularly good (or bad), but because I'd just be repeating the same observations I'd made previously.  That gets boring.

And that, in a nutshell, is the problem in education:  despite the sad state of our education system, there are no significant improvements on the horizon.  At least none that haven't been tried in the past, usually under a different name, and haven't already failed.

I'm still in favor of blowing up the entire system and starting over.  But that's not going to happen.  So, in the meantime, we are forced to make fun of the dopey, ineffective proposals being floated around and wait for some unexpected, groundbreaking change to surface somewhere else that has the effect of blowing up the existing education system.  Kind of like how the internet is in the process of blowing up print journalism, brick-and-mortar retail, cable TV, and the like.

We lack the political will to boot out the entrenched interests in education, so an external force needs to get the ball rolling for us.  We shall see.

In the meantime, we're stuck making the same observations on the same old repackaged reforms that don't go to the root of the problem:  the perverse incentives of the existing command-and-control system.

In other news, I have about 1000 spam comments in my moderation queue.  Apparently, the spammers have found me.  So, I've made some changes to the comment policy.  No more anonymous comments.  Comments are only open for 14 days.  And you have to deal with the captcha nonsense.  Hopefully, that will keep the spammers at bay while letting through the legitimate comments.

So, I think I'm back from hiatus for now.  As long as I stay motivated.  The problem is that doing real analysis is very much like work.  Certainly too much to do for free, day in and day out.  So we shall see how long it lasts.