April 14, 2009

To Struggle, or Not to Struggle

I want to follow up on my last post regarding the purported efficacy of designing instruction so that the novice K-12 student is required to "struggle" to learn the new material.

All the material that is typically learned at the K-12 level is amenable to being placed into meaningful relationships (or connections) with the student's pre-existing skills and knowledge. Learning will be more difficult when:

1. the student does not possess the prerequisite skills and knowledge assumed by the new material; and/or

2. the teacher has not displayed the meaningful relationships inherent in the new material or has given an explanation that is difficult for the student to follow.

In these situations learning will require higher levels of analytic ability. And the students most likely to learn in these situations are the high-IQ students, because they are better able to solve problems they haven't seen before (or problems that contain untaught vocabulary).

For the purposes of this discussion I want to focus on the teacher's purposeful failure (often for pedagogical reasons) to display the meaningful relationships inherent in the new material to novice K-12 students, under the belief that the ensuing "struggle" will help the student better learn the new material and the underlying meaningful relationship/connection. (I think it's generally recognized that giving confusing explanations and failing to ascertain whether a student possesses the prerequisite skills and knowledge are indications of poor teaching.)

Let's start with the kinds of information, skills, and knowledge that the K-12 student learns and what needs to be taught and learned. I've adapted this explanation from Martin Kozloff's Making Sense of What You Read and Hear, and Making Sense When You Teach.

Regardless of the subject (math, history, science), there are only six kinds of information, skills, or knowledge that can be communicated to and learned by the K-12 student.

Each kind of knowledge represents a connection. To understand the knowledge is to understand the connection. To use the knowledge (to apply it to possible examples of it) is to apply the connection.

1. Facts

Ex: “The U.S. Constitution was written in Philadelphia.”

For purposes of instruction a fact is a true and verifiable statement that connects one specific thing (Constitution) and another specific thing (Philadelphia).

Teach the connection.


2. Lists

Ex. 1: “The elements of sugar are carbon, hydrogen, and oxygen.”

Ex. 2: “Here is a list of facts about the U.S. Constitution. Written in Philadelphia between May and September, 1787; the draft was sent to the various states for ratification; the Constitution plus the Bill of Rights is a compromise between advocates of strong central government (Federalists) and advocates of strong state governments with a limited central government (anti-federalists); the Constitution was finally ratified in 1789.”

As with facts, these statements connect one specific thing (the elements of sugar, the Constitution) and a list (of other specific things).

Teach the connection(s).


3. Sensory concepts

Exs: blue, on.

The specific things (examples) of the concepts differ in many ways (size, shape), but they are connected by a common feature, such as color or position.

All of the defining features of the concept are in any example. Therefore, the concept can be shown by one example. However, a range of examples is needed for the learner to see what the common feature is and to cover the range of variations (e.g., from light to dark red).

Teach the range of examples needed for the learner to determine the common feature and the range of variations.


4. Higher-order concepts

Exs: Democracy, society, mammal.

The specific things (examples) of the concepts are connected by a common feature or features; e.g., making societal decisions through elected representatives (representative democracy).

The defining features of higher-order concepts, however, are spread out; they are not all visible in any single example. Therefore, you can’t simply show examples to teach a higher-order concept. You have to give a definition (that states the common, defining features) and then give examples and nonexamples to substantiate the definition.

Teach the definition of the common features and then substantiate the definition through suitable examples and non-examples.


5. Rules or propositions

These are statements that connect not specific things but whole groups of things (concepts or categories). A compact formalization of the two kinds described below appears after this list.

  • Categorical propositions. Some rules or propositions state (assert, propose) how one kind of thing (concept or category) is or is not part of another kind of thing (concept or category). These are called categorical propositions. For example, all dogs (one kind of thing) are canines (another kind of thing). Or, no birds (one kind of thing) are reptiles (another kind of thing). Or, some bugs are delicious.

    Teach the rule or proposition.

  • Causal or hypothetical propositions. Other rules or propositions state, assert, or propose how one kind of thing (concept or category) changes with another kind of thing (concept or category). These are called causal or hypothetical propositions. You can tell that a statement asserts a causal or hypothetical proposition because it states (or suggests) something like “If…,” “If and only if…,” “Whenever…,” “The more…,” or “The less…” one thing happens, then another thing (happens, comes into being, changes, increases, happens more often, or decreases).

    The “thing” (variable, condition, antecedent event) that is the alleged cause of something else can work (have an effect) in different ways. For example, the alleged cause might be considered a necessary condition for something else to happen or change. (“If X does not happen, then Y will not happen.” Or, “If and only if X happens will Y happen.”) Or, the alleged cause might be considered a sufficient condition for something else to happen. (“Whenever X happens, Y will happen.”)

    For instance, whenever temperature increases (one kind of thing), pressure increases (another kind of thing). [This proposition suggests that a rise in temperature is a sufficient condition (by itself) to cause an increase in pressure.] Or, if and only if there is sufficient oxygen, fuel, and heat (one category of thing) will there be ignition (another category of thing). [This proposition suggests that sufficient oxygen, fuel, and heat are a necessary condition for ignition.]

    Note: When you have identified all of the necessary conditions, you have a set of variables that, together, are a sufficient condition. Think of a causal model of fire, a cold, or a revolution.

    Teach the rule or proposition.
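
For readers who like notation, the two kinds of propositions can be written compactly in standard logic. This is my sketch, not Kozloff's; X and Y stand for whole kinds of things (concepts or categories), not specific things.

\[ \text{Categorical ("All dogs are canines"):} \quad \forall x\,(\mathrm{Dog}(x) \rightarrow \mathrm{Canine}(x)) \]
\[ \text{Sufficient condition ("Whenever X happens, Y happens"):} \quad X \rightarrow Y \]
\[ \text{Necessary condition ("If X does not happen, Y will not happen"):} \quad \lnot X \rightarrow \lnot Y \]
\[ \text{Necessary and sufficient ("If and only if X happens will Y happen"):} \quad X \leftrightarrow Y \]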


6. Routines

Routines are sequences of steps that usually must be done in a certain order. Examples include solving math problems, sounding out words, and stating a theory or making a logical argument (each proposition in the theory or argument is like a step that leads to a conclusion).

Teach the routine.

NB: A routine is a connection of a number of events, such as the steps in solving a problem or a listing of events leading up to a war. There are different arrangements of steps or events in routines. You want your students to see what these arrangements are.

  • Sequence in one direction. A leads to B leads to C leads to D. Ex.: sounding out words, solving math problems.

  • Sequence with feedback loops. A leads to B and the change in B produces a (reciprocal) change in A which produces more change in B until some limit is reached. Exs.: Outbreak of war, onset of illness, falling in love, divorce, getting porky and out of shape.

  • Stages or phases. A sequence of events or steps can be seen as a process divided into stages.

    Ex. 1: Load rifle: steps a-b-c-d; fire rifle: steps e-f-g; clear rifle: steps h-i; clean rifle: steps j-k; etc.

    Ex. 2: In history: If you examine enough (examples of) genocidal movements, you notice that one group has some features (e.g., property, social status) that produce envy in another group, or does something that threatens another group (e.g., resists power). This might be seen as the background (first) phase. Then (phase 2) the genocidal group demonizes the first group with racial slurs and propaganda. Then (phase 3) the genocidal group begins to mistreat the victim group; e.g., attacks, job loss, confiscation of weapons, special (degrading) clothing. If (phase 4, escalation) the victim group fights back, this provokes worse treatment. If the victim group submits, it furthers the genocidal group’s perception of the victim as degraded. The genocidal group then (phase 5) creates an organization for killing or transporting. Then the killing begins (phase 6).

  • Logical argument. A text might be arranged as a logical argument. There are two sorts of logical arguments:

    a. Inductive. Facts are presented. Then the facts are shown to lead to a general idea, such as a conclusion. For example, examine five examples of genocide and INDUCE (figure out) the common phases and the activities in each phase.

    b. Deductive. Alternatively, a text may be arranged so that it presents a deductive argument (sketched in symbols after this list). It begins with a general idea, such as a rule: the first premise.

    “If X happens, then Y must happen.”

    It then presents facts relevant to the first premise—evidence or second premise.

    “X happened.”

    It then draws a conclusion.

    “Therefore, Y must happen.”
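
In symbols (my sketch, not part of Kozloff's text), this deductive pattern is the classical modus ponens:

\[ \text{First premise:} \quad X \rightarrow Y \]
\[ \text{Second premise (evidence):} \quad X \]
\[ \text{Conclusion:} \quad Y \]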



In the next post we'll discuss how a student learns this knowledge.

24 comments:

Parry Graham said...

Come on, Ken, stop taking on all the small-fry topics like SES and comprehensive theories of learning and knowledge.

Give us something we can really sink our teeth into.

:<)

Parry

Dick Schutz said...

Yikes. That's a pretty arbitrary list, Ken. And it really doesn't bear on the topic "To struggle or not to struggle."

The thing is, learning is always morphing, and from the learner's perspective, coming from the bottom up, the list is useless.

The analysis isn't much more helpful in instructional development because instructional design is a matter of synthesis (system building).

You haven't dropped the other shoe, but you seem to be assuming that all the substance is going to be stored in the learner's "head." That's an obsolete view in the age of the Internet.

Without knowing where you're going in Installment II, we're left "struggling" at the moment.

KDeRosa said...

That's not where I'm going, Dick. Actually, I'm not sure exactly where I'm going, but I'll be disappointed if this is where I wind up.

Malcolm Kirkpatrick said...

You hear the faux macho promotion of "struggle" from the insecure "dictators in pocket edition" (to quote Karl Popper) who enter teaching for the opportunity to lord it over kids.

While I generally agree with Ken's last two posts, I will offer in defense of "struggle" that perhaps the intended lesson is not Math or Chemistry but perseverance and/or the pleasure of breaking through a barrier.

Against this, as an instructional method, teachers should consider that confident kids with loving, secure homes and supportive friends will have a much greater tolerance for failure than will battered kids with little emotional support. A strong rip current which provides an athlete with stimulating exercise will drown an elderly heart patient.

Dick Schutz said...

"the intended lesson is not Math or Chemistry but perseverance and/or the pleasure of breaking through a barrier."

Right, Malcolm. But. . .

If intentions ruled, all kids would be academic kings--or gods, or something.

The relationships involved have been verified psychological understanding for about a century. In which of your categories would this information be classified, Ken?

KDeRosa said...

Sounds like a causal proposition to me. If you (work hard, persevere), then you will (succeed). Students see examples of this proposition every day if they are taught effectively (work hard at spelling, then learn how to spell, etc.), so they will learn the general rule.

Dick Schutz said...

Well, I was referring to the psychological information re perseverance. It's when one tries to aggregate your categories into a body of understanding that they prove empty.

"they will learn the general rule."

The pro-struggle proponents intend more than "learning the rule." They believe that "struggle" will build behavioral traits.

KDeRosa said...

I disagree for a few reasons, Dick.

Behaviors are learned. And I contend that they are learned by the learner seeing a continuous stream of examples showing, in our case, that working hard results in academic success.

Students who are struggling won't necessarily learn this rule. These students struggle, get nowhere, and then the teacher has to tell them the answer. The rule the student learns is that he is a dummy and working hard doesn't lead to success. Thus, motivation is killed.

In contrast, the student who receives instruction that does not require him to struggle learns that he is capable of learning. That student is still working hard: he has to demonstrate that he has learned by successfully using the material, and he has to practice the material for retention. Thus, the student learns that working hard (even without fruitless struggling) will result in academic success. And motivation is built on this academic success.

Dick Schutz said...

I agree with everything you say about "struggling" Ken. And it's backed up by about 100 years of ed psych research.

"seeing a continuous stream of examples that, in our case, that working hard results in academic success."

Yep. That teaches the "rule." But reliably acting consistently with the rule goes beyond the rule. It seems to me that this holds for all instructional accomplishments worth noting. They go beyond Kozloff's categories of "what can be learned."

Malcolm Kirkpatrick said...

Discussions of the relative efficacy of different instructional strategies often get complicated by disputes over the choice of measuring instruments. One can find advocates for multiple-choice standardized tests, portfolio assessment, and "authentic assessment," among other options. Opponents of standardized tests occasionally assert that measures given by standardized tests cannot be valid. They say: if you want to assess whether a student can program a computer, give the student a computer and a programming problem. There is something to this, but not as much as advocates for authentic assessment suggest. Is my ability to digest lunch evidence that I "understand nutrition"? Does pregnancy prove that a girl "understands Biology"? If standardized test scores of, say, programming correlate strongly with other measures of programming ability, then why not use standardized tests?

I say this because I suspect the advocates of "struggle" would assert that whatever you could teach by spoon-feeding students and test with standardized tests is not what "we" "should" be teaching and testing. There's an arbitrary normative component to this, and an empirical claim: that whatever else they want to teach and test is NOT teachable by spoon-feeding and not testable by standardized tests.

In a voucher-subsidized market in education services or in an unsubsidized, uncoerced, competitive market in education services, no public policy choice would depend on these questions. "What works?" is an empirical question which only an experiment (a competitive market in education services) can answer.

Dick Schutz said...

Doing programming is a transparent indicator of programming expertise. The indicator can be structured to obtain further information about the nature of the expertise.

Digesting lunch is uncoupled from "understanding nutrition." "Understanding nutrition" is a very large domain that requires a good deal more elaboration.

Same logic applies to pregnancy and biology.

"Authentic" proponents don't provide any assessment. They compile "portfolios" which then have to be assessed.

"Computer Programming" is not an "ability" and whether a test so labeled correlates with other tests is all part of the house of cards of standardized test practice.

You happened to pick a bad example with "computer programming," Malcolm. Assessment in the IT sector is very well-structured to certify defined operational expertise that is closely coupled with the job structure.

None of this relates to the matter of "struggle."

Alex said...

Mr. DeRosa, I read again your postings on Alfie Kohn and I read your posting here, and I think he and you are actually on the same page. I mention Mr. Kohn again because this has to do with the issue of DI versus a more conceptual approach to teaching something such as the history fact you bring up, or a fact in chemistry, or a fact in some other subject. Neither Kohn nor I dispute the necessity of teaching facts. It's just that they shouldn't occupy the entire time a teacher is teaching. For example, if I were teaching a unit on the Civil War, I'd start off by asking my students what they knew about that event before they came to my class. I would then connect what they knew to something similar happening today (e.g., the civil war in Sudan or Somalia) and then begin discussing how and why the war started and so on (where the kids would pick up the facts in context). I might have them pick out a particular general or well-known political figure on either side, do some research on them, and then share it with the class.
As regards the research on Follow Through, I did some more on my own and I discovered that all of the "positive" results for DI came from individuals who were already biased towards DI in the first place (this includes the sources that you cite in your attack on Kohn). The "independent" researchers all came from the University of Oregon and had a vested interest in making sure that their DI program would trump the other models. I don't understand why you chided Kohn for not mentioning these studies even though you go on to admit that those studies are themselves biased. The fact that Abt and House et al. both reached similar conclusions and weren't biased (the pair of Oregon researchers that you cite themselves admit that House et al. didn't favor any model in their analysis) leads me to conclude that Kohn was right. After all, I certainly remember being trained to sound out words and read out of a basal, and I made frequent mistakes (my kindergarten teacher at the time didn't know what to make of me since she was using a direct script prepared by the district office). I didn't, however, make those mistakes at home, where I was reading whole books by the time I was five years old (the process of learning at home versus school is very different).
I have a hard time believing that DI (instructing kids from the front of the room and telling them exactly what to do on a task, step by step) was invented by an unknown guy in the 1960's. I think that schools had been doing that all over the world back into the previous century, when institutional schooling was invented. I think that homeschooling is a great thing, but that instruction in the public schools needs to be individualized and conceptualized in order for the schools to actually be effective like they're supposed to be.

KDeRosa said...

Alex,

1. How is DI less conceptual than other approaches?

2. I don't contend that instruction should only be about facts.

3. In PFT, DI's positive results come directly from the Abt report. Abt had no connection to DI.

4. You confuse potential bias with actual bias. There is no evidence that any researcher who analyzed the PFT data and found positive outcomes for DI performed a faulty analysis and that that faulty analysis was the result of bias. If you have evidence to the contrary I'd like to see it.

5. All the researchers (with perhaps the exception of Abt) were potentially biased, including House et al. which was funded by the Ford Foundation which had funded some of the models that were part of PFT.

6. The Oregon researchers did indicate that House didn't favor any model; however, what they also indicated was that House's analysis served to reduce the effect size of the interventions by introducing changes to the analysis without sound reasons for introducing those changes.

7. Your experience learning to read is anecdotal. It's not data.

8. Your understanding of DI is incorrect.

Tracy W said...

Malcolm Kirkpatrick: "Opponents of standardized tests occasionally assert that measures given by standardized tests cannot be valid. They say: if you want to assess whether a student can program a computer, give the student a computer and a programming problem."

And if you give every student you want to test the same computer and the same programming problem, and if you develop a marking method for the student's resulting program that can be standardised so that different markers would give roughly the same score to the same end program, then you have a standardised achievement test. (This is more difficult to do with non-multi-choice tests, but the various NZ qualifications authorities have been administering exams including open-ended answers like essays or mathematical proofs for decades and they've gotten reasonably good at inter-marker reliability.) It's a test because it seeks to determine the presence or otherwise of programming ability, it's an achievement test because it focuses on what the student can do, not on what the student could do, and it's standardised because you can compare different test results with some degree of confidence.

You probably want to validate the programming test to check that it measures what you intend it to measure. There is also the point that there is quite a lot of variation in programming skills; eg a programmer can be capable of writing a VBA macro but not understand pointers. A standardised achievement test does not necessarily tell you everything about the student.

"Is my ability to digest lunch evidence that I 'understand nutrition'?" No, because digestion is not conscious.

Alex said...

Mr. DeRosa, DI isn't as conceptual as a more progressive version of schooling can be because kids and people for that matter aren't passive and they don't just absorb information. The teacher shouldn't just be disgorging facts to kids in front of the room. She ought to be posing questions that tap kids' previous experience before they came into her classroom while she's explaining the topic that the class is going to explore in the subject. She would then set up a research activity like I mentioned or a similar activity and then have the kids work on it alone or in small groups, ending up with a project that can inform the rest of the class or the school in a fair of some sort. The teacher would certainly still be involved (and not standing out of the way like the persistent mischaracterizations of progressive learning say that she would be) but in a supportive, guiding role (i.e., asking the kids why they're doing something, telling them the answer when they need it, etc.). Your understanding of Progressive ideas (i.e., Open Classrooms, High/Scope, Montessori) is incorrect. DI isn't as ambitious or productive as what I've just mentioned because it requires very little, if anything at all, from the teacher and the students themselves. They're both passive and lazy in that setting.
When a fact is learned (i.e., the Constitution was written in Philadelphia in 1787), it isn't just a "brute fact" as some people like to say, because questions immediately present themselves (i.e., WHY was it written in Philadelphia? WHY in 1787? WHO influenced the writing of that document? WHAT connection do those delegates' ideas have to our country's government today? etc.). My point is that facts come in a context and for a purpose and are connected to other facts and when a model such as DI is used, kids have no opportunity to be INVOLVED in their own learning process (i.e., I wonder for whose purpose the Constitution was written? Could the Articles of Confederation have actually provided more freedom to the colonies? I think I'll bring that up when I talk with Mr. Jones and the rest of the class). Progressive education actually wants kids to THINK and be ACTIVE. That's what's so wonderful and exciting about it and leads kids to being successful in their learning experiences.
As regards PFT, I never said Abt was biased at all. The results that they got that were favorable for DI were for certain low-income cohorts, and even those results didn't show up at all of the sites. The Oregon researchers were indeed biased in my view because they were the ones who actually were involved with and funded the DI model themselves, so I don't understand why you continue to say that DI is the "best" model and rely on those researchers. The charts that they used to show that DI trumped all of the other models are suspiciously skewed towards DI because they showed "perfect" score percentages for DI and "low" percentages for "Open Classrooms". However, "Open Classrooms" like other progressive approaches that Mr. Kohn and others have mentioned doesn't exclude phonics and phonetic awareness in reading nor does it exclude the teaching of concepts in math. These things are simply taught in a different, more purposeful, more meaningful way (i.e., reading whole books, doing calendar games to learn addition, etc.). You and other pundits routinely (and perhaps deliberately) confuse "Whole Word" with "Whole Language". "Whole Word" is more similar to DI in that the teacher uses workbooks and stands in front of the room calling on the students to sound out the words in isolation or in lists. It appears that you've set up and then knocked down your own strawman.
If House et al. was "potentially biased" as you are trying to say they were, then why is it (again) that the Oregon researchers (who were definitely biased as previously mentioned) say that House et al. didn't favor any of the models? To bring up the fact that House et al. was funded by the Ford Foundation (which isn't partisan and is purely a research-based group) and try to say that House et al. is biased serves no purpose and actually pits you against the very uncredible researchers you yourself cite. The Oregon researchers say during the course of their piece that House et al. did the right thing by looking at sites rather than individuals. They then say that House et al. end up confirming most of what Abt found also. The Oregon researchers then go on to claim that if the teacher "has a clear understanding of the objectives" of DI and "teaches in such a way so as to reach those objectives" and a week or so later "the class and the teacher are still covering the same material", then "the teacher didn't teach in a way to ensure that the objectives were met". However, they just said that the teacher was doing exactly what she was supposed to do and still the class didn't learn the material. I think that those researchers let slip the obvious fact that DI doesn't work and that they were too proud of themselves to admit it. The MAT, by the way, is a norm-referenced test that doesn't tell how the child did in relation to the actual subject material on the test itself, only how they supposedly did in relation to a select cohort around the country who took that test during similar conditions when the test was first used. That's another nail in the coffin of the Oregon researchers you rely on. DI doesn't work, can't work, and won't work to help struggling kids.
As regards your chiding of Kohn for saying that it's a known fact among inner-city teachers that DI doesn't work, read Linda Darling-Hammond's The Right to Learn where, beginning on page 72, she talks about the Competency-Based Curriculum (CBC) in DC that was started in the 80's and continues today even though the district is known for its low test scores (she goes on to mention the experiences of some teachers with this skills-based routine, and they're all negative). DI can't even succeed on its own merits. It deserves to be dumped and our kids deserve better.

KDeRosa said...

Alex, there are so many fundamental problems and confusions in your last comment that I'm not sure where to start and if it would even be productive for me to attempt a meaningful rebuttal.

Alex said...

Mr. DeRosa, I'd love to hear a rebuttal (or at least your take on it). Regarding PFT, House et al. did find positive results for DI. The issue is whether they amount to serious and definitive support for that approach that could be generalized to all classroom practice in every school throughout the country. The answer provided by Abt and House et al. is no. When I mentioned that House et al. confirmed most of Abt's findings, what I meant was that they concurred with Abt in at least using the MAT for the test that was to be used for PFT as well as the DI program (since PFT was designed to see, of course, whether low-income kids could do well at a skills-based test with skills-based instruction). The fact that there were at least some positive results with regard to that program that came from DI wasn't surprising. House et al., however, found flaws in how the results were determined and reported, in that for a large study such as PFT, sites rather than individuals had to be used as the base units since individual results would always vary. House et al. divided each cohort by the standard deviation between PFT and non-PFT so as to get a reasonable percentage rate of success for that particular cohort and site using each of the models. This was done in order to make sure that the models were implemented equally and so that each model had a reasonable rate of success. Another objection was offered by the Oregon researchers you cite: Why divide up the sites? Well, House et al. had to divide up the sites because there were different cohorts at each of the sites and they each had different levels of performance. In order for the measuring of the success of the models with groups within the sites to be accomplished, House et al. had to look at these cohorts. These success rates could then be averaged out and divided by the deviation in order to get a reasonable overall site-based success rate for each of the models at the sites. This would give two levels of performance: overall site-based performance for the models and the group cohorts' performances within each of the sites. Individual performances weren't conducted by House et al., just as they said they wouldn't be (the objection by the Oregon researchers that breaking down the sites amounted to testing individuals again is incorrect because House et al. looked at the groups within the sites as previously mentioned). This would be useful if we looked at individual sites one by one and pretended that the sites themselves each were their own PFTs. House et al. did a redo of their own of PFT because a thorough analysis by an outside group that finds errors throughout the study must be able to show with reasonable conclusions what an alternate, correct study would look like in order to give a complete explanation of why the original PFT was seriously flawed. This House et al. did. The Oregon researchers only proved that they didn't like the fact that House et al. didn't give their particular DI program a boost, which is what they were hoping for. In fact, the Ford Foundation funded not only the progressive approaches but also the Kansas Behavioral Analysis model (similar to DI) that also showed some positive results according to the Oregon researchers' own interpretation. Just because PFT was a large study that cost a lot of money doesn't mean, unlike the Oregon researchers were asserting, that it will always prove something.
DI doesn't lead to good results when the instruction leads to something more than mere rehearsal of basic skills on a test that is designed to indicate those very same basic skills. Wesley Becker's interpretation even admitted that the supposed benefits for overall math and reading from this model were lost by the fifth grade (i.e., cognitive comprehension and problem-solving applications). He then asserts that it is the schools themselves, rather than the model, that lead to these declines. He provides no empirical support or evidence for these assertions. All he showed were some gains for kindergarten through the second grade in basic word attack skills and phonemic awareness for reading as well as basic addition/subtraction for math, etc. Even these weren't guaranteed to keep kids at the mean norm for long, assuming that the kids being tested were actually performing at or above the 50th percentile all the while. This isn't clear because norm-referenced tests don't actually tell the performance of a child in relation to the actual subject material (i.e., how many questions they got right out of how many there were on the test and what areas indicated by the test they needed to improve on). DI as developed by Zig Engelmann and Wesley Becker at Oregon showed gains in basic skills, and that wasn't surprising (since the whole PFT was designed to produce that very result; this, however, doesn't mean that the model was or still is appropriate for the cognitive and affective areas). The Open Education model, which is progressive, was drawn from a British preschool setting and therefore wasn't actually accurate in representing the true Piagetian/Progressive approach (and should have been combined with High/Scope at least). There were numerous other misclassifications and misanalyses that make DI untenable as an overall approach for young children. This is the problem with "research based" ideas in education. It's not about what a group of (biased) researchers say works. It's about what works for the individual kids in your care. That in itself is what progressive education is all about.

Tracy W said...

"DI isn't as conceptual as a more progressive version of schooling can be because kids and people for that matter aren't passive and they don't just absorb information."

Alex, may I suggest that you read some sample DI lessons? There's one available here:
http://www.specialconnections.ku.edu/~specconn/page/instruction/di/pdf/lang_sample_lesson_2.pdf

This lesson most definitely depends on active learning. For example, in the first exercise, on page 1 of the pdf, the teacher is teaching students how to question when they hear a word they don't understand. (Students have been assigned to lessons based on their prior knowledge, so most students in the group should not know these words; if they already knew those words they would be in a different lesson.)

The final question in exercise 1 asks students if they would like to go to a funny movie - an open-ended question.

The second exercise, also on page 1, calls on students' existing knowledge of the meanings of the word "WATCH".

Notice the amount of time that students are expected to make some sort of response. Students are learning actively in the DI lessons.
In exercise 3 on page 2 the students practice making inferences, eg they infer from a descriptive passage that the passage is taking place during summer. This is conceptual knowledge. And here the teacher is not merely providing context, she is teaching students to figure out context themselves from clues in the text.

"She would then set up a research activity like I mentioned or a similar activity and then have the kids work on it alone or in small groups, ending up with a project that can inform the rest of the class or the school in a fair of some sort."
From this I can identify at least 3 skills that are being practiced:
1. Research skills
2. Groupwork skills (assuming that the kids are working in groups)
3. Communication skills (informing the rest of the class or the school)

All three of these are very sophisticated skills if you want to do them properly. There are plenty of adults who suck at groupwork, or can't research something (you being an example: given your false statements about how DI works, apparently you have never researched it), or are good writers but terrible speakers, or vice versa, or just suck at communicating generally. How many teachers are qualified to teach all three? After all, teaching is even more demanding cognitively than merely performing: a teacher not only has to be able to do, they have to be able to work out why other people's efforts are failing and articulate this to the other people.
Do you have any evidence that the mass of teachers are able to teach all three of these skills, plus whatever subject matter the students are supposed to be learning? How about new, inexperienced teachers: how do you expect them to cope with this wide range of skills?

This is the problem with many conceptual approaches to education: they place an awful lot on the teacher's shoulders. I mean no disrespect to teachers; I suspect that any professional group would struggle with such a diversity of demands. For example, doctors are often criticised for a lack of communication skills, even though on a daily basis they get to practice that far more often than groupwork skills or research skills. There are presumably teachers and doctors and lawyers who can do all this, but it is doubtful that there are enough to stock every classroom in the country. (And of course in some ways teachers have the harder job in that they don't merely need to know how to do this, they need to know it well enough to teach it.) And this is in part why I favour Direct Instruction: it does not place demands on teachers that are so extraordinary.

However, "Open Classrooms" like other progressive approaches that Mr. Kohn and others have

mentioned doesn't exclude phonics and phonetic awareness in reading nor does it exlude the

teaching of concepts in math.
Merely not excluding something is rather a minimal involvement. What DI does that is different is that it

provides a field-tested way of teaching phonics and phonetic awareness, plus reading comprehension skills and critical thinking skills, and concepts in maths.

Concepts are not merely included, the wording has been tested to remove ambiguities. Students are not merely exposed to phonics and phonetic awareness, they are given numerous opportunities to practice those skills and make them their own and are provided with meaningful feedback.

"These things are simply taught in a different, more purposeful, more meaningful way (i.e., reading whole books, doing calendar games to learn addition, etc.)."

Do you have any evidence to support the assertions that these ways are more purposeful and more meaningful to students?

I also note that games are included in some DI lessons. For example, in this lesson at http://www.specialconnections.ku.edu/~specconn/page/instruction/di/pdf/math_sample_lesson_a.pdf, in exercise 7 on page 4, a game is played called the Fact Game.

Whole Word" is more similar to DI in that the teacher uses workbooks and stands in front of

the room calling on the students to sound out the words in isolation or in lists.
This is not only what DI does. See this sample reading lesson at

http://www.specialconnections.ku.edu/~specconn/page/instruction/di/pdf/reading_sample_lesson_a.p

df
Exercise 20, on page 4, they read a story and practice asking simple questions and understanding pictures.
Or take this reading lesson at http://www.specialconnections.ku.edu/~specconn/page/instruction/di/pdf/reading_sample_lesson_b.pdf
In exercise 6, page 2, they read a story and have to answer questions ranging from simple comprehension ("What was the name of the rancher?") to some deductive skills ("Why did the workers think that it was good to have Emma on their side?").

"My point is that facts come in a context and for a purpose and are connected to other facts and when a model such as DI is used, kids have no opportunity to be INVOLVED in their own learning process."

And this happens in DI. For example, in the maths lesson I referred you to earlier (http://www.specialconnections.ku.edu/~specconn/page/instruction/di/pdf/math_sample_lesson_a.pdf), exercise 2 on page 1 starts off by the teacher asking kids how much they weigh.

KDeRosa said...

Alex, your comments continue to be riddled with non sequiturs, and the lack of paragraphing doesn't help with the readability.

You also continue to get your facts wrong. In PFT the DI model was the best performing model in basic skills, higher order skills, and affective skills. Not just basic skills as you've indicated. Also, the DI model performed best for all of the subgroups tested, not just for the low-income group.

The primary problem with the House re-analysis was that the procedures House adopted all tended in the direction of maximizing random error, thus making differences appear small and insignificant.
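
To illustrate the general statistical point (a sketch, not the actual Abt or House computation): a standardized effect size divides the treatment-versus-comparison difference by a measure of variability,

\[ \Delta = \frac{\bar{x}_{\mathrm{FT}} - \bar{x}_{\mathrm{comparison}}}{s} , \]

so any analytic procedure that pumps random error into the results inflates s and shrinks the computed effect size (and the corresponding significance tests), whatever the true difference is.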

I don't see a scientifically sound reason for introducing random error into the results, unless there is an agenda to reduce the effect size of the interventions (say, perhaps, to make the interventions that fared poorly and were funded by the same foundation that funded the House reanalysis look better.)

There is always a good reason to look at site data instead of classroom data (to reduce teacher effects) in any study, not just the large ones. Of course, by this token, almost all of education research, which is mostly based on classroom-level data, would not rise to statistically significant levels since most experiments are small. So, to be consistent (as opposed to a biased hack with an agenda) you'd want to hold all education research to the House standard (and as I've indicated above, there's good reason not to, but if you were), and in so doing you would also invalidate almost all of the research cited by Kohn in his books. You can't have it both ways.

Any results (such as say performance in the fifth grade) after the research ended and the students were dumped back into their generally awful classrooms are subject to confounding factors.

Alex said...

Tracy W. - If what you say is in those studies is actually true, then I have no problem with those activities. I just think that teachers should spend less time going over the basic skills, and when they do them with the kids, they ought to turn them into helpful patterns that kids can remember and use rather than isolated facts or numbers (i.e., tree families such as 4+2=6, 6-4=2, 6-2=4; this, these, those; etc.). A book I think would be good for you to see is Best Practice by Steven Zemelman, Harvey Daniels, and Arthur Hyde (Heinemann, 1998). They do a good job of answering the questions you've posed about these issues.
Mr. DeRosa - I think that the fact that House et al. had to look at the site analysis rather than the classrooms in order to reach statistically significant conclusions about a study such as PFT is the only thing we'll agree on. Kohn, in fact, cites numerous other studies in his books that used schools rather than classrooms and used large populations in order to reach reliable conclusions about DI and other issues, just as House et al. did. They didn't maximize random error because they looked at precisely the right level (sites) to make the analysis that we and others (including the Oregon researchers themselves) agree was the best approach. The groups that were in the sites were existent and needed to be charted in order to be able to get at least a good idea of how the cohorts at each site fared (so as to give the administrators and teachers at each of the schools some inkling of what may or may not work). If that is adding "random error", then that nixes the Oregon researchers' own argument that groups needed to be the unit of analysis (both at the cohort level within the site for individual school research and the site level for comparing schools so as to get a general performance level). You keep insisting that House et al. was motivated by the Ford Foundation's financial concerns even though you've provided no objective evidence to prove that, and (as I'll repeat again) the very Oregon researchers you cite admit that House didn't favor any particular model, so you've made your own non sequiturs and they continue to collapse of their own weight.
The position staked out by Carl Bereiter, Wesley Becker, Zig Engelmann, and their colleagues at the University of Oregon isn't tenable. The fact that DI was the highest performing model isn't supported by Abt's reanalysis nor by House et al.'s. Also, your admission that the effects of DI probably wouldn't last beyond the fifth grade proves that DI is ineffective. After all, if there were confounding factors as you mention, shouldn't DI be able to overcome them? The fact that DI may even lose its luster around the third grade, and that Becker admitted that overall performance for the low-income children may or may not stay around the national 50th percentile norm average with DI as the years pass, raises good doubts about the long-term effectiveness of DI (to say nothing of its short-term narrow focus on only the most basic of skills). You say that it worked for the groups of kids other than low-income kids. That's only because they've already picked up those skills in other ways since they were already high in those areas to begin with.
PFT was specifically designed for Head Start kids, so that raises another question of why the primary experimenters even included those above the poverty line in the data sample in the first place. The fact that higher order skills and affective skills were also shown to have positive results depends on how those sets were defined, since those areas are themselves subject to what is being studied and the general orientation of the cohorts and the schools that they come from. There's good reason to doubt that higher order skills and affective skills as commonly understood by researchers were in fact tested, since the whole PFT was oriented to basic skills in the first place.

KDeRosa said...

Alex, based on your response to Tracy it's clear you are not at all familiar with DI except for the caricature version bandied about by people like Kohn.

Also, you appear to be getting your talking points from a third party and not the actual Abt report. The 4th Abt report did indeed find that DI, with its 297 and 354 educationally significant sites for basic skills and cognitive skills respectively, far surpassed all the other models. Also, DI was the only intervention with scores at or around the 50th percentile. DOE concluded that PFT failed as a whole, not that DI failed to achieve.

Bereiter's acceptance of House's analysis of sites does not imply agreement with House's actual analysis methodology, which he criticized. House could have accomplished the analysis without introducing error, but clearly he was intent on doing just that. Also, there is no evidence that Bereiter or any other "Oregon researcher" was actually biased and used a flawed analysis that favored DI.

Again, what happened after PFT was over is a confounding factor, and no, I wouldn't expect any gains to carry through after PFT if those students did not receive effective instruction.

PFT was not oriented only to basic skills; cognitive skills and affective skills were also tested.

Tracy W said...

Alex - sorry about stuffing up the links (and the line breaks). I should have previewed beforehand. My apologies. The links are:

Reading and thinking: Language for Thinking Lesson B; Maths: Maths Lesson A; Reading lesson: Reading lesson.

"I just think that teachers should spend less time going over the basic skills"

Why? What evidence do you have that teachers are spending too much time on the basic skills at the moment? (And all teachers are spending too much time on the basic skills? This seems very unlikely.) How can kids learn the more advanced skills if they don't have the basic ones down pat?

"when they do them with the kids, they ought to turn them into helpful patterns that kids can remember and use rather than isolated facts or numbers (i.e., tree families such as 4+2=6, 6-4=2, 6-2=4; this, these, those; etc.)"

Why is a pattern particularly helpful? I would have thought that a rule would have been more helpful than a pattern because you can understand a rule, while patterns are never reliable. (I work as a forecaster, trust me on this one. You think you've got a nice statistical regression working wonderfully and the next thing you know it falls apart. Patterns are dangerous.) And teaching only tree families seems very limiting to me - the wonderful thing about arithmetic is that it can be applied to *any* set of numbers. For someone who criticised DI for a lack of emphasis on conceptual teaching, you appear to be advocating a rather non-conceptual way of teaching yourself.

And how would anyone teach a number as an isolated number? I mean, the basic point about the natural numbers is that they are related to each other: 2 is 1 more than 1, 3 is 1 more than 2 and 2 more than 1, and so forth. What on earth would a maths lesson that taught isolated numbers look like? Has anyone on the face of the planet ever taught isolated numbers?

Or isolated facts. The traditional 19th century approach to teaching history I understand was to teach a series of dates and events. Boring perhaps, a waste of time, maybe, but dates at least provide a structure to history. I don't know of anyone who has advocated teaching isolated facts with no structure at all to them.

I think you are attacking strawmen here.

"A book I think would be good for you to see is Best Practice by Steven Zemelman, Harvey Daniels, and Arthur Hyde (Heinemann, 1998). They do a good job of answering the questions you've posed about these issues."

Does this book present any research-based evidence that the average teacher (as opposed to a super-teacher) can teach all those different skills effectively? I'm a bit suspicious about your ideas of research, given that you made a series of flat-out wrong statements about DI, so I don't want to waste time tracking down this book merely on your say-so. The positive reviews on Amazon.com don't sound to me like this book contains any material answering my questions as to how you expect all teachers to manage to teach all those very complicated skills, when not even many professionals in other fields manage to display all of them.

Alex said...

Mr. DeRosa - I acknowledge that DI can indeed work to produce some gains in basic math and verbal skills, but I'm skeptical about the ability of the model to produce long-term gains in the cognitive and affective areas (as well as the basic skills themselves) since I myself went through a curriculum in elementary school similar to DI. I didn't benefit from it at all. I remember my first through sixth grade teachers using predetermined, scripted lesson plans for elementary math, reading, and science, and I could sometimes remember the formulas to succeed on the tests, but I don't remember anything of value from those years (i.e., problem solving, applications, other ways of figuring out problems, etc.). What I can do in math (i.e., addition/subtraction, multiplication/division) I learned at home over the years. I've always struggled in mathematics at school but never at home. In reading, it was much the same. My teachers used basal readers and prepared lesson plans that left little to no opportunity for the kids, including me, to ask questions about the texts we were reading. I remember one time asking my fourth grade teacher in English class what structure I'd need to write a good essay on my pet cat at home (it was a class assignment to write about something that you have that you like a lot) and I was told, "Well, that's the silliest question I've ever heard!" I also remember diagramming sentences and memorizing the rules of grammar, but I don't remember them now and I think it was all a waste of time.
My own personal experience with DI and also what I've observed over the years as a student has convinced me that the model hurts kids rather than helps them. I'm not going to put my students or my own kids through what I went through because I want better for them than what I got.
Tracy W. - The kids should get the basic skills. No one in the debate is disputing that. What I've been saying is that teachers ought to be spending only a fraction of their class time (say about 20%) on allowing kids to get those skills, and even here the kids can pick them up in pattern activities like the examples I mentioned because that is actually more conceptual than memorizing each in isolation on a worksheet for, say, elementary school kids (i.e., 1+1=2, 47*2=94, 6+9=15, etc.). I like the ideas that Zemelman, Daniels, and Hyde talk about in their book because they put those ideas into practice at the Best Practice High School in Chicago and it's worked for the low-income kids and their families very well. They mention studies that show that there are some benefits from explicit phonics-based instruction, but that these are small and short-lived. They can't achieve what the studies that prove Whole Language and other progressive practices can achieve.
Kids at the Best Practice School are active and involved in their communities and the parents are involved in their own kids' education too. Everyone is treated with dignity and respect, something I never experienced when I was in school because I went for the majority of my school years to private, Catholic schools where I was laughed at and my family and I were routinely maligned because we weren't wealthy like they were. That's why I like Zemelman et al.'s ideas and why I plan on teaching in the public schools, because they need to be saved from the "reform" movement that's trying to bring them down. I suggest you also read Alfie Kohn, Linda Darling-Hammond, Deborah Meier, David Berliner and Bruce Biddle, and Howard Gardner. They are all actual teachers who have taught would-be teachers as well as teaching at the high school and college levels themselves. They know what they're talking about and are very supportive of the basic mission of and reform to improve the teaching profession. Reading Zemelman et al. and these other educators yourself rather than comments about the books on Amazon from others would be an intelligent idea.

Tracy W said...

Alex - I fail to see what is conceptual about learning patterns. It may be more conceptual than memorising each fact in isolation, but as you have not actually identified anyone who teaches mathematics facts in isolation, I don't see why you keep going on about it. You haven't said anything to convince me that teaching patterns is more conceptual than teaching the underlying rule; I still think it is less conceptual, as patterns are never reliable.

Can you please answer my question as to what evidence you have that teachers are spending too much time on the basic skills at the moment?

When we talk about Direct Instruction or DI here we are talking about a specific programme, not the general method of direct instruction. Kids struggle in some conceptual programmes; sadly, they have bad teachers like the English one you describe in all sorts of programmes. What matters is the specific programme.

I am afraid I will have to remain unconvinced by Zemelman, Daniels and Hyde because apparently their book that you refer to is not available in the UK as far as I can tell. Can you please do me a favour and give me the names of the studies that Zemelman, Daniels and Hyde refer to that prove what Whole Language and other progressive practices can achieve?

"They know what they're talking about and are very supportive of the basic mission of and reform to improve the teaching profession."

But have they shown that they can train the ordinary teacher to achieve with students what they achieve? That's what is remarkable about Direct Instruction - it has been implemented in a wide variety of elementary schools and has shown itself to be replicable.