November 13, 2008

Today's Quote

It's time to admit that public education operates like a planned economy, a bureaucratic system in which everybody's role is spelled out in advance and there are few incentives for innovation and productivity. It's no surprise that our school system doesn't improve: it more resembles the communist economy than our own market economy.

- Al Shanker, President, AFT

November 12, 2008

Efficiency and Spelling

It's no secret that I'm not a fan of constructivist and child-centered teaching practices.

One of the main reasons I don't like these practices is that they are even less efficient than traditional teaching practices. And traditional practices aren't very efficient either. In fact, they are downright primitive compared to what we know about how children learn.

Let's take the teaching of spelling as one of the worst offenders.

Spelling continues to be taught, when it is taught at all, as it has been for decades. Students are given a list of words (10-15) on Monday and then tested on Friday to see if the words were learned. Then a new list of words is given and the process repeats. What happens to the old list of words? They disappear forever.

More formally, a week of massed practice is followed up with zero distributed practice. Predictably, the students quickly forget what they've learned. All that effort is wasted. Retention is left to happenstance. Maybe the student will use the word in his writing before the spelling is forgotten. Maybe he won't. Maybe she'll read the word in her reading and think about the spelling. Maybe she won't.
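The alternative to the massed-then-nothing schedule described above is distributed practice: successfully recalled words come back at progressively longer intervals instead of vanishing after Friday's test. A minimal sketch of a Leitner-style review scheduler (the word, box count, and intervals are all illustrative, not taken from any actual spelling program):

```python
from collections import defaultdict

# Illustrative Leitner-style scheduler: a word the student spells correctly
# moves to a box with a longer review interval; a miss sends it back to
# daily review. The intervals (in days) are arbitrary for this sketch.
INTERVALS = [1, 3, 7, 21]  # box 0 is reviewed daily, box 3 every three weeks

def review(boxes, word, correct):
    """Move a word between boxes based on one review outcome.

    Returns the number of days until the word should be reviewed again.
    """
    box = boxes[word]
    boxes[word] = min(box + 1, len(INTERVALS) - 1) if correct else 0
    return INTERVALS[boxes[word]]

boxes = defaultdict(int)                     # every word starts in box 0
assert review(boxes, "receive", True) == 3   # first success: see it in 3 days
assert review(boxes, "receive", True) == 7   # second success: a week out
assert review(boxes, "receive", False) == 1  # a miss resets to daily review
```

The point of the sketch is only that retention stops being happenstance: every word stays in the rotation until it has survived several increasingly spaced reviews.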

This is not an efficient way to learn spelling. It is a waste of time, unless the student happens to be one of those smart kids who learns easily, reads voraciously, writes prolifically, and has exceptional retention. Inefficient teaching methods handicap those who aren't smart.

Further, it seems that the preferred way to teach spelling is through brute memorization. Often, the word lists do not capitalize on phonetic or morphographic efficiencies. Rote memorization appears to be the rule for learning spelling.

Then we have some of the inane exercises used to teach spelling. My favorite is "write a sentence for each spelling word." This requires that the student be familiar with the meaning of the word, familiar enough to use it coherently in a sentence. If the student doesn't know the word, it must be looked up in a dictionary. The hope is that the student understands the words the dictionary uses to define it. Often he does not. This leads to more looking up until a definition the child understands has been found. At this point the child can formulate an understandable definition of the original word, assuming all of this can be juggled in short-term memory. Now the child is ready to make up a sentence, which requires creativity and a knowledge of the rules of grammar, among other things. It's quite a lot for the student to attend to. We know that students remember what they think about, so you can bet that spelling plays only a minor role in this difficult exercise.

Who wants to defend the traditional way to teach spelling?

And who has a better way to teach spelling that addresses the problems I've discussed above?

November 10, 2008

Whose National Standards?

Diane Ravitch is touting National Standards again. So is KIPP's Michael Feinberg.

I don't understand the love for standards, especially the national variety.

Imagine your ideological enemies being the ones in power drafting the standards. Now imagine that they, as they are wont to do, draft standards that not only favor their ideological brethren, but also might preclude you from practicing your favored ideological method. You can be certain they won't disfavor or handicap themselves.

Spend five minutes thinking about what you think are the best education outcomes and methods. Now spend another five minutes devising ways to disfavor those outcomes and methods. It's alarmingly easy to do.

Now tell me that you're still for a national standard that will apply to each and every state. There'll be no escape, unless you move to Canada. Or Mexico.

We need a diet

I'm a big fan of efficiency. So instead of analyzing all the bad education plans out there, I'm going to point out the shortcomings of the best one -- Andy Rotherham's and Sara Mead's policy paper, Changing The Game: The Federal Role in Supporting 21st Century Educational Innovation.

Here's the short version for the lazy:

Bad federal governmental intervention is the cause of much of our education woes, so we propose more federal intervention, but the good kind, i.e., the kind we like.

Now, I like Andy and Sara. They are smart commentators on education policy. I am at least sympathetic to, and often agree with, many of their views on education policy. But this time around, Andy and Sara think they can foster educational innovation and free-market-like solutions by putting the federal government's thumb on the scale while ignoring the reason the free market works in the first place.

Andy and Sara think they'll do a better job guiding the thumb than their equally smart predecessors. What they don't realize is that the thumb is the problem in the first place. This is a mistake that smart people tend to make. They think that they are smarter than the accumulated wisdom of the market. History shows they are not.

People, even smart people, are bad at making accurate predictions with respect to which innovations will succeed and which will fail. The recently deceased Michael Crichton makes a similar point with respect to finding solutions to the pollution problems facing people a hundred years ago.

Let's think back to people in 1900 in, say, New York. If they worried about people in 2000, what would they worry about? Probably: Where would people get enough horses? And what would they do about all the horseshit? Horse pollution was bad in 1900, think how much worse it would be a century later, with so many more people riding horses?

But of course, within a few years, nobody rode horses except for sport. And in 2000, France was getting 80% of its power from an energy source that was unknown in 1900. Germany, Switzerland, Belgium and Japan were getting more than 30% from this source, unknown in 1900. Remember, people in 1900 didn't know what an atom was. They didn't know its structure. They also didn't know what a radio was, or an airport, or a movie, or a television, or a computer, or a cell phone, or a jet, an antibiotic, a rocket, a satellite, an MRI, ICU, IUD, IBM, IRA, ERA, EEG, EPA, IRS, DOD, PCP, HTML, internet, interferon, instant replay, remote sensing, remote control, speed dialing, gene therapy, gene splicing, genes, spot welding, heat-seeking, bipolar, prozac, leotards, lap dancing, email, tape recorder, CDs, airbags, plastic explosive, plastic, robots, cars, liposuction, transduction, superconduction, dish antennas, step aerobics, smoothies, twelve-step, ultrasound, nylon, rayon, teflon, fiber optics, carpal tunnel, laser surgery, laparoscopy, corneal transplant, kidney transplant, AIDS… None of this would have meant anything to a person in the year 1900. They wouldn't know what you are talking about.

Now. You tell me you can predict the world of 2100. Tell me it's even worth thinking about. Our models just carry the present into the future. They're bound to be wrong. Everybody who gives a moment's thought knows it.

So where does the free market come in? I'll let P. J. O'Rourke explain:

What will destroy our country and us is not the financial crisis but the fact that liberals think the free market is some kind of sect or cult, which conservatives have asked Americans to take on faith. That's not what the free market is. The free market is just a measurement, a device to tell us what people are willing to pay for any given thing at any given moment. The free market is a bathroom scale. You may hate what you see when you step on the scale. "Jeeze, 230 pounds!" But you can't pass a law making yourself weigh 185. Liberals think you can. And voters--all the voters, right up to the tippy-top corner office of Goldman Sachs--think so too.

With NCLB we finally bought the scale and made sure everyone weighed themselves. Many in education think that was a mistake and want us to throw out the scale. That's silly: how are we to know the diet works without a scale?

Others don't mind keeping the scale provided they can erase the objective markings and replace them with their own subjective ones. That's equally silly: you don't let the purveyors of the diet regime determine how to measure their own success.

And still others thought that merely weighing everyone and reporting their weights once a year would be sufficient to drop all those pounds. You still need a sensible diet in place for that to work. We didn't get many sensible diets. We got lots of excuses and test-prep diets: the kind of temporary diets boxers go on right before the weigh-in for a big fight.

What Andy and Sara want to do is legislate, i.e., fund, the "innovative" diets they think work best. That's only a small part of the problem. The bigger problem is getting the failed diets off the government teat, and, unfortunately, that will include many of the diets Andy and Sara like. Andy and Sara's pseudo-free-market approach doesn't provide such a mechanism. And that is its fatal flaw. A properly functioning free market works by ruthlessly eliminating the losers, which involves a lot of short-term pain, just like a real diet. That's the part that Andy and Sara leave out. Government won't defund, or starve, its losers voluntarily. That's not the nature of politics. And that's why political solutions, like Andy and Sara's, won't work.

November 7, 2008

Change

I've come back from my unannounced hiatus to discover that we have a brand new president.

A president that is for change. And, apparently, hope as well.

I "hope" that none of you wasted any time reading either candidate's platform. What politicians say they are going to do is very different from what they actually do once you've given them power. But you can rest assured that, once elected, their actions will be consistent with accruing power and ensuring that they retain power by getting re-elected. Keep that in mind, because what you've just been promised (by both candidates) is inconsistent with their desire for power. Suffice it to say that you will be disappointed, and you would have been disappointed regardless of who was elected. That is the nature of politics.

Here is my prediction for education:

There will be change. That change will be superficial with respect to improving academic performance. It is extremely difficult to improve academic performance. The odds of academic performance improving in the next eight years in an educationally significant way are virtually nil.

It is easier to reduce academic performance by unwittingly changing things for the worse. This is because educating children is a delicate orchestration of detail that is difficult to get right and easy to screw up. This remains true even though our current system remains horridly inefficient, with much of the orchestration badly out of tune.

Nonetheless the most likely scenario is that the change will produce no significant effect on outcomes. That is the history of education reform.

I wish my new president well but I don't have much hope that he is capable of improving education. He doesn't know how. And, as a result, he has no basis for selecting an education secretary that knows any better. Even an ideologically blind random selection is unlikely to produce better results because the field is replete with charlatans. Even if he were lucky enough to pick a winner, it is unlikely that that person could overcome the obstacles and vested interests in place that are anathema to improving academic performance.

We're going to get change. We always do. NCLB was change. But change doesn't guarantee improvement. Did you jump to that conclusion? I hope not. What you will get is something different, but that difference will likely not be an improvement.

There will be no shortage of wishful thinking and opinions of advisors. But since those opinions are almost certainly based on faulty science and informed by political correctness you should not necessarily expect beneficial results. Unless you're counting on luck. That's always a possibility. Even broken clocks are correct twice a day. Though, unfortunately, a clock that is five minutes slow is never correct.

That's what you're going to get -- an education secretary that is slow, broken, or both. Kind of like the current one.

So here's my prediction: the change you get in education will be different but not an improvement.

Let's hope that I am wrong. But don't count on it.

October 8, 2008

The WWC falls down on the job again

The What Works Clearinghouse (WWC) does a noble job of identifying much of the junk science research that plagues education research and masquerades as real research. The WWC, however, is not without its faults.
I have noted at least two instances in which the WWC has given its imprimatur to very questionable research.

In August, the WWC released a report on Reading Mastery -- one of the most researched reading programs in existence. Despite the fact that other reputable organizations have found that much of the Reading Mastery research base passes scientific muster, the WWC did not find a single study that met its standards. Clearly something was amiss.

The author of Reading Mastery, Zig Engelmann, has just weighed in on the WWC's latest shenanigans -- Machinations of What Works Clearinghouse. Basically, Zig says that the WWC failed to locate a large portion of the extant post-1985 Reading Mastery research base, improperly excluded the entirety (38 studies) of the pre-1985 research base, and used dubious criteria for excluding at least one study it did consider. I suggest you read the whole thing. I'll elaborate on two points that Zig raises.

Dubious Rationale for Excluding Pre-1985 Research

The WWC arbitrarily limits its research review to studies reported no earlier than 1985 (unless the WWC principal investigator deems a study important enough to report). This 1985 cut-off makes little sense. Beginning reading performance hasn't changed much since 1985. In fact, we have readily available evidence that it hasn't changed much since as early as 1971. That evidence is the NAEP Long-Term Trend in Reading test data (not to be confused with the plain ol' NAEP test, which changes frequently). Here's a graph of the performance of nine-year-olds (4th grade):




As you can see, the performance of nine-year-olds in reading stayed remarkably flat during the period 1971 - 2004, with little difference between pre-1985 scores and post-1985 scores. My back-of-the-envelope calculation is that the change between 1971 and 1999 is less than a quarter of a standard deviation, i.e., not educationally significant. In fact, scores in 1980 were higher across the board than they were in 1999. Only in the post-1999 period do scores rise above the 1980 high-water mark.
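The back-of-the-envelope calculation is simple enough to spell out. The numbers below are approximate, illustrative values for the age-9 reading scale (a 1971 score around 208, a 1999 score around 212, and a scale standard deviation of roughly 40); the point is the arithmetic, not the exact figures:

```python
# Back-of-the-envelope effect size for the NAEP Long-Term Trend claim.
# Scale scores and SD are illustrative approximations, not official values.
score_1971 = 208
score_1999 = 212
std_dev = 40  # rough SD of the age-9 reading scale

effect_size = (score_1999 - score_1971) / std_dev
assert effect_size < 0.25  # well under a quarter of a standard deviation
print(f"change of roughly {effect_size:.2f} standard deviations")
```

With anything like these values, the 1971-to-1999 change comes out around a tenth of a standard deviation, comfortably below the quarter-SD threshold for educational significance.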

Since we have reliable data going back to 1971 showing similar performance in early reading, there is no compelling reason to arbitrarily set the cut-off at 1985. The rationale the WWC offers is lame:

... the fact that preschool enrollment has increased, combined with the fact that more preschool and kindergarten programs run full-day, means that students in the early grades may be better prepared to receive reading instruction today than students 25 years ago. Moreover, it is possible that any changes in reading readiness over this period may not have been evenly distributed, since differences in reading ability by socioeconomic status and race are apparent at the kindergarten level . . . Any of these changes could have implications for the effectiveness of an intervention. If school readiness has increased, then an intervention that was effective 25 years ago may not be effective in more recent years. (p. 2, Appendix A)

Perhaps the WWC hasn't heard, but there isn't any evidence that preschool, full-day kindergarten, or Head Start provides any lasting effects that don't quickly fade out. In fact, all of the potential causes given by the WWC (none of which has been confirmed by research) must be superficial and superfluous to reading performance, since the NAEP data show that none of them has had a significant effect on reading performance.

This is a somewhat embarrassing admission coming from the WWC, what with its lofty evidentiary standards and all. I also suggest you read Zig's evisceration of this argument, which concludes:

The assertion that the children are better prepared now and therefore what was effective 25 years ago might not be effective now is logically impossible. Lower performers make all the mistakes that higher performers make. They make additional mistakes that higher performers don't make and their mistakes are more persistent, more difficult to correct. Therefore, if the program is easier for them now because of their higher degree of undefined "readiness," they will make fewer mistakes and progress through the program sequence faster.

...

[B]eginning reading for grades K–3 is stable because nothing of significance has changed in the last 40 years. The instructional goal is the same—to teach children strategies and information that would permit them to read material that could be easily covered with a vocabulary of 4,000 words. The frequency of these words has not changed. The syntax of the language has not changed significantly. For these reasons, the content of the first four levels of Reading Mastery has not changed over the years.

I am not aware of any properly conducted scientific research which has a shelf life of only 20 years. Research doesn't go bad. I'm not going to stop taking penicillin-based drugs while the research gets updated just because the basic research was conducted 80 years ago. And I see little reason for the WWC to exclude any properly conducted research on Reading Mastery, such as Project Follow Through, or for any other educational program for that matter.

Dubious Confounding Factors

It's bad enough that the WWC failed to even locate, much less consider, a majority of the extant Reading Mastery research. It's even worse that it set an arbitrary cut-off date that excluded at least 38 studies on Reading Mastery. However, improperly excluding a study (which otherwise meets all the selection criteria) simply because the new teachers were provided initial training is beyond the pale.

The RITE study (Carlson and Francis, 2002), which involved 9300 students and 277 teachers (Zig claims that it is "probably the second largest instructional study ever conducted (after Project Follow Through)"), met all of the WWC's exceedingly high selection criteria. However, the WWC excluded the study because "support [was] provided to teachers through the RITE program," which the WWC believes to be a confounding factor. Here's the confounding "support" the teachers received:

This support consisted of summer training, less than two hours of monitoring during the year, and help from a designated trainer. Nearly half of the teachers (137) were in their first year of teaching Reading Mastery. The training focused on how to provide positive reinforcement, how to correct specific errors, how to organize and manage the classroom so that one small group is in reading instruction while the other two groups are engaged in independent work and are not disrupting the instruction... The teachers were trained to teach Reading Mastery exactly the way the [Teacher's] Guide describes it, with all the technical details in place.

This is not only a ridiculous reason for excluding an otherwise acceptable study, but also against the WWC's own protocol, which permits the inclusion of "commercial programs and products that [have] an external developer who: Provides technical assistance (e.g., provides instructions/guidance on the implementation of the intervention)." (p. 6, Protocol)

The WWC excluded many other otherwise acceptable Reading Mastery studies based on "confounding factors." I wonder how many of those confounding factors related to initial training, as in the RITE study. I know that more than one study was excluded because the control group initially performed at least half a standard deviation above the Reading Mastery group; yet, despite starting at this disadvantage, the Reading Mastery group outperformed the control group by the end of the study. I'm thinking that the magnitude of the effect size more than compensates for the reliability issue caused by the initial discrepancy favoring the control group.

In any event, there you have it. The WWC has failed to do its job properly yet again. This is becoming a pattern.

August 22, 2008

Postal Service More Loved Than Public Schools

According to Lisa Snell:

An August 2008 poll conducted by Education Next and Harvard University finds that Americans think less of their schools than of their police departments and post offices. When asked to grade the post office, 70 percent of respondents gave an "A" or "B." In contrast, only 20 percent of Americans said public schools deserve an "A" or a "B." Twenty-six percent of the country actually gave their public schools a grade of "D" or "F." And African-Americans are even more down on public schools, 31 percent gave public schools a "D" or an "F."


I'm not surprised. The post office delivers my mail faithfully, albeit expensively and with a substandard tracking system, regardless of my social status, my ability to receive mail, or my mail receiving style.

August 21, 2008

Learning Styles Are Bunk

Dan Willingham has another video out on the non-existence of learning styles.



Willingham goes into much more detail on learning styles in his Summer 2005 American Educator article: Do Visual, Auditory, and Kinesthetic Learners Need Visual, Auditory, and Kinesthetic Instruction?

Vicki Snyder also makes a similar point in Myths and Misconceptions about Teaching: What Really Happens in the Classroom. Learning styles are presented as the fifth myth of teaching. (I reviewed the book here.):

Myth #5: the myth of learning styles refers to the popular idea that teaching methods should be matched to students' unique characteristics. Although individualization is desirable, learning style assumes that certain learner characteristics are intrinsic when they may in fact be the result of experiential factors that are amenable to instruction. As a result, teachers may inadvertently deny low-performing students opportunities to learn.


The myth of learning styles rests on three faulty premises: that learning styles are intrinsic, that they can be assessed, and that they can be matched to instructional styles. Snyder points out that all three premises are untrue.

In any event, as far as teaching goes, we only really care about the differences and similarities that influence learning and instruction. Of course, the vast majority of differences between children have little or nothing to do with how kids learn. Often these differences are expressed in terms of "learning styles and modalities," "multiple intelligences," and "differing interests." All of these so-called differences are similar in that none has any empirical support nor has any been shown to have an effect on learning or instruction.

This is because the content of instruction dictates about 90% of what has to be taught:

Content, and the nature of content, doesn't change according to the interests of children, nor according to any other characteristic of children. If we were trying to teach a gorilla to read, the nature of reading wouldn't change. Obviously, when it comes to the nature of content, differences among learners don't have much to do with anything.


Learning style differences are usually assessed informally through teacher observation. Teachers, however, often know little about inducing real learning. These learning styles are often expressed as superficial external traits like visual, auditory, tactile or kinesthetic which mask the underlying complex cognitive traits. For example, children cope with their inability to read in ways that might superficially seem like a learning style, but that actually reflect poor reading skills. It's easy to misinterpret certain behaviors.
Consider the following examples.

  • Sometimes elementary teachers say that poor readers are auditory learners because they can't track words with their fingers. It's more likely that they can't read the words. Usually these auditory learners can keep their eyes riveted to a television or video game screen for hours.
  • Sometimes elementary teachers say that poor readers are visual learners because they memorize and rely on picture clues rather than sounding out words. It's more likely that they revert to visual clues because they can't read the words. Without knowledge of the underlying sound structure of language, they have little choice but to rely on memorization and guessing.
  • Sometimes high school teachers say that poor readers are auditory learners because they need the text read aloud or explained to them.
  • Sometimes high school teachers say that poor readers are visual learners because they need pictures, graphics, and visual displays to explain the text to them.
  • When students are labeled tactile/kinesthetic learners, they often need hands-on experience, group work, and activities to learn, not because of their learning style but because they need structure, assistance, and feedback on difficult or unfamiliar tasks.

In all of these examples, the source of the observed behavior is poor reading skills. To ignore the basic problem in no way benefits the students.

The point is that all kids (and humans) share some characteristics that are useful for learning, and, therefore, instruction has to accommodate those samenesses among learners, rather than the many differences among them. Learning styles and "intelligences" and student interests and modalities couldn't possibly have much influence on learning, not when the nature of content doesn't vary among learners, and not when some of the things that make us all human are so central to learning.

Efficient instructional programs make every effort to communicate the essential nature of content to all learners (because it is the same for all learners), and they make every effort to take full advantage of the ways all humans generalize most accurately and efficiently. What is the same about children is their innate capacity for language and for learning to read and to think inductively and deductively. What is the same about all children is that they will learn if given appropriate instruction. They may learn at different rates and may need different amounts of structure and practice to master academic skills and concepts, but they can learn.

August 13, 2008

The IQ Conundrum for Broader, Bolder

Here are some charts from Gersten, R., Becker, W., Heiry, T., & White (1984). Entry IQ and yearly academic growth in children in Direct Instruction programs: A longitudinal study of low SES children. Educational Evaluation and Policy Analysis, 6(2), 109-121, that show the gains made by the low-SES DI students in Project Follow Through for a range of IQ blocks from under 71 (2 sd below the mean) to above 130 (2 sd above the mean).

There are six IQ blocks shown on the chart. From left to right:

Block One: IQ below 71
Block Two: IQ between 71 and 90
Block Three: IQ between 91 and 100
Block Four: IQ between 101 and 110
Block Five: IQ between 111 and 130
Block Six: IQ above 130

For each IQ block the mean standard score has been graphed at the end of grades 1, 2, and 3.

There are arrows (<, <<, <<<) along the Y axis (mean standard score) that show the national median for each grade. I've (helpfully) drawn a blue line at the third grade national mean. As you can see, only the kids in blocks with IQs above 100 are performing above the national median for math, and only those above 110 for reading. (The blue line only has meaning with respect to the third grade scores (the top point). You could draw horizontal lines from the double arrow (second grade) and compare them to the middle point, and from the single arrow and compare them to the bottom point.)

Click on each chart to enlarge.

This chart is for total reading for the Metropolitan Achievement Test.



This chart is for total math for the MAT.



Here is Becker's interpretation of the charts:

The data showed almost no contribution to "learning rate" (pretest to posttest gains) for IQ. If IQ were correlated with gains, lower-IQ children would make smaller gains and higher-IQ children would make larger gains. This does not happen for Reading on the Wide Range Achievement Test (decoding) [Ed: Not shown.]. For comprehension on the MAT, there is no IQ effect on gains from the end of grade one to the end of grade two (most of the gains are about equal), but there is an effect for the gain from the end of grade two to the end of grade three. I believe this effect is due to the fact that the end of third grade test for Reading Comprehension on the Metropolitan uses an uncontrolled, adult-level vocabulary (as found in fourth grade texts). Since vocabulary instruction in school does not progress gradually to the adult level (but jumps from a carefully controlled vocabulary to an adult vocabulary after third grade), the test at this level is now measuring something not taught in school. Thus, students who score higher on a test of verbal skills (IQ) do better on a test of verbal skills (Reading Comprehension) when the content was not systematically taught in school. (A caution: The data may have imposed a ceiling effect on the brighter students; the program stressed preventing failures and thus teachers may have given more effort to teaching lower performers. Even if this is the case, however, the data are noteworthy in showing what can be done "gainwise" for lower-IQ children.)


Here is my observation. I understand Becker's comparable gains argument, but look at the mean percentile ranks for each IQ block:

Math End of Third Grade

Block One (IQ below 71): 24th
Block Two (IQ between 71 and 90): 39th
Block Three (IQ between 91 and 100): 47th
Block Four (IQ between 101 and 110): 61st
Block Five (IQ between 111 and 130): 69th
Block Six (IQ above 130): 88th

Reading End of Third Grade

Block One (IQ below 71): 11th
Block Two (IQ between 71 and 90): 29th
Block Three (IQ between 91 and 100): 34th
Block Four (IQ between 101 and 110): 44th
Block Five (IQ between 111 and 130): 58th
Block Six (IQ above 130): 81st
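Another way to read these percentile ranks is in standard-deviation units from the national median. A quick conversion of the reading percentiles above into z-scores, assuming the national distribution is approximately normal (the block labels are just shorthand for the IQ ranges listed above):

```python
from statistics import NormalDist

# End-of-third-grade reading percentile ranks by IQ block, converted to
# z-scores (standard deviations from the national median), assuming an
# approximately normal national distribution.
reading_percentiles = {"<71": 11, "71-90": 29, "91-100": 34,
                       "101-110": 44, "111-130": 58, ">130": 81}

z = {block: NormalDist().inv_cdf(pct / 100)
     for block, pct in reading_percentiles.items()}

# Only the blocks above IQ 110 land above the national median in reading.
assert z["101-110"] < 0 < z["111-130"]
```

Under this assumption the lowest block (11th percentile) sits well over a full standard deviation below the national median, which puts some numbers behind the worry about how those students would fare in a regular classroom.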

Also notice the gradual slippage from first through third grade in Reading, even for the smartest kids. There is no slippage in math. Interesting.

I don't see how the lower IQ kids are going to be able to learn in a regular classroom given these percentiles. That would seem to foreclose a college education for these students and probably an academic high school education. Am I wrong?

And for the Broader, Bolder crowd: given that many low-SES students have lower IQs, and that SES interventions have not been able to show a significant effect on IQ past about third grade, how exactly are your proposed SES interventions going to get around this IQ conundrum? Look, the high-IQ, low-SES kids are performing well. The low-IQ ones aren't. I'd like to hear a rational argument that makes sense of this.

August 11, 2008

Day 17: Still Waiting


Two weeks after I first called for some evidence on the effectiveness of Broader, Bolder, I finally received a (sort-of) response from Big-Labor Fat-Cat Leo Casey.

Leo must have had a few of his underlings poring over the ERIC databases non-stop finding the requested evidence. Here is Leo's evidence. I am leaving in all the internal citations and footnotes.


Classroom teachers recognize immediately the educational value of providing a comprehensive array of services to students living in poverty. They have seen the effects of undiagnosed and untreated eye problems on a student’s ability to learn how to read, and of untreated ear infections on a student’s ability to hear what is being said in the classroom. They know that the lack of proper medical care heightens the severity of childhood illnesses and makes them last longer, leading to more absences from school for students who need every day of school they can get. They have seen asthma reach epidemic proportions among students living in poverty, and they know that the lack of preventive and prophylactic medical care leads to more frequent attacks of a more severe nature, and more absences from school. They understand that screening for lead poisoning happens least among children in poverty, even though their living conditions make them the most likely victims, with all of the negative effects on cognitive functions. They know that the stresses of life in poverty make mental health and social work services for students and their families all that more important, and yet they are least likely to receive them. They see how the transience that marks poverty disrupts the education of students again and again, as the families of students are constantly on the move. In short, teachers know that the students living in poverty lack the health and social services routinely available to middle class and upper class students, despite the fact that they need them even more. And they know that the absence of these services has a detrimental impact on the education, as well as the general well-being, of students living in poverty.


I emphasized Leo's evidentiary citations since they do not conform to the generally accepted norm. Leo's logic goes something like this: Leo knows best because Leo knows best. The circularity of this argument is surpassed only by its arrogance.

There is, of course, little actual research backing up Leo's claims. This is fortunate for Leo since in the few instances where there is research, it proves Leo wrong. Let's take a look at one of those claims.

They have seen asthma reach epidemic proportions among students living in poverty, and they know that the lack of preventive and prophylactic medical care leads to more frequent attacks of a more severe nature, and more absences from school.

As luck would have it, we actually have legitimate research on the efficacy of an asthma intervention. Here are the results.

  • An asthma self-management program incorporating health education and parental involvement increased academic grades for low-income minority children but not standardized test scores. (Evans et al.)

  • A subsequent study of the asthma self-management program, expanded to include health education for asthmatic children and their classmates, orientation for school principals and counselors, briefings for school custodians, school fairs including caretakers, and communication with clinicians, demonstrated higher grades for science but not math or reading, and fewer absences attributed to asthma as reported by parents but not fewer school-recorded absences. (Clark et al.)

Notice how the subjective measures (teachers' grades and parental reporting of absences) conflict with the objective measures (standardized test results and school-recorded absences).

Apparently, this isn't the sort of evidence that Leo is looking for. Leo isn't looking for any evidence:

Disingenuous calls for “evidence” that community schools work require a willful myopia on the effect of life in poverty on education — a blindness made possible by a complete unfamiliarity with the real world of the classroom.

If you ask Leo to provide support for his (expensive) opinions, you're being disingenuous. If you don't trust Leo that community schools work, you're being willfully myopic to poverty's effects on education. Of course, based on Leo's educational track record, if you're still foolish enough to be taking Leo at his word at this point, you'd have to be priapic.

I'll take disingenuous and myopic over priapic any day. I'm sufficiently hyperopic to know better than to take Leo at his word. Especially when that word calls for yet another bromide that gives more money and power to Leo.

August 8, 2008

Prediction Time

Following up on my last post on Charles Murray's new book, Real Education, it's time to see what Murray predicts will be the results from the grand experiment he proposes:

On measures involving interpersonal and intrapersonal ability, I expect statistically significant but substantively modest gains. On measures of actual knowledge, the experimental group will score dramatically higher than the members of the comparison group, perhaps 30-plus percentile points higher (technically more than a standard deviation). On measures of reading and math achievement, the differences will be no more than 15 to 20 percentile points (about half a standard deviation). Three years after the experiment ends, all of the differences will have shrunk. The differences in reading and math will be no more than 8 to 12 percentile points (no more than a third of a standard deviation) and may have disappeared altogether.

More formally, I predict that the magnitude of each academic effect will be a function of the g loading of the measure. Measures of retention of simple factual material have the lowest g loadings and will show the largest gains. For highly g-loaded measures such as reading comprehension and math, what has been accomplished by the last half-century of preschool and elementary school will be shown to be about as good as we can do, no matter how much money is spent.


This is a decent prediction. I think that Murray overestimates the ease with which facts can be taught to and retained by low-IQ students and underestimates their ability with respect to math and reading comprehension.

Facts are difficult to learn because facts must mostly be learned on a case-by-case basis, which is not readily amenable to acceleration. Math and reading (decoding and comprehension) are easier to teach because these skills can be accelerated (even though teaching language and vocabulary remains problematic). But I knew that from the Follow Through and the Baltimore Curriculum Project data. The data show that we can get roughly a three-quarters to a full standard deviation improvement on average by the end of elementary school, better if we discount the schools that are so incompetent that they are unable to implement well-tested programs with fidelity.

Murray's point with respect to fade-out is well taken, but I'll leave that for another post.

August 7, 2008

Real Education: A Call for an Educational Experiment

I'm reading Charles Murray's latest book, Real Education: Four Simple Truths for Bringing America's Schools Back to Reality.

I agree with some of the points Murray makes and I disagree with others.

In any event Murray proposes a very good idea in Chapter 5:

Hence my second proposal, for a study that would be the most expensive educational demonstration project in history and would take as much as fifteen or twenty years from beginning to end. I state it in the form of a challenge to everyone who is convinced that we can teach low-ability children far more than we are currently teaching them: Put up or shut up... Here is the proposal:

Select children who test low in academic ability but are not clinically retarded--say, children with measured IQs from 80 to 95, which demarcate the 10th to 37th percentiles. Make the number of children in the study large enough that the results cannot be explained away as an accident of small samples. Then provide these children with the best elementary education that anyone knows how to provide. Build new facilities or renovate existing ones. Hire the best teachers and create a model curriculum. Measure how well the children are doing at the end of elementary school, and compare their progress with that of other children matched for IQ, family background, and whatever other variables are considered important.

...

The people who conduct the experiment should be free to use any teaching techniques, any class sizes, any amount of one-on-one tutoring, and any type of technological aid. They shouldn't worry about making the program financially affordable for wider application, but instead bring to bear every resource that anyone can think of, at whatever cost, to maximize the education that these children acquire. Or to put it another way, their mission is to conduct the experiment in such a way that, if it fails to produce success, there will be no excuses. Only three ground rules are nonnegotiable:


  • The organization that selects the experimental and control samples and tests the children must be completely independent of and isolated from the organization that conducts the experiment.
  • The design must protect against teaching to the test and test-practice effects.
  • The design must include a test for fadeout, conducted three years after the experimental education ends.


Great idea. Sound familiar?

That's what I thought too. So I dashed off an email to Murray informing him that we'd already done something very similar thirty years ago: Project Follow Through.

Murray wrote back that he thought something was out there (even though people kept telling him there wasn't) and hoped that Real Education would surface it. Sure enough it had and I gave him a crash course on PFT.

In the next post I'll tell you what Murray predicted would be the results of this grand experiment and we'll see how well his predictions matched the results of PFT.

August 2, 2008

SES and Rotten Instruction

Deep in a comment thread over at Sherman Dorn's blog, Dick Schutz makes an excellent point:

The only thing that the [standardized] tests are sensitive to is SES and racial/ethnic characteristic. If those two variables were partialled out statistically, the results would show that schools are pretty feckless instructionally.

That's not "news." It's been around since the Coleman Report of the 1960s. But the popular conclusion is that we have to "change society." The "obvious conclusion" has been overlooked--change instruction. When you do, you find that the correlation between accomplishments and SES is near-zero. That's empirical reality, not a statistical manipulation.


At least at the elementary level, and provided we don't include comprehension tests with uncontrolled vocabulary and we keep teaching the higher-SES kids as we do currently. But his larger point is valid. SES matters quite a bit as long as instruction is rotten.

August 1, 2008

More Visual Aids

I have two more charts related to my earlier post on school expenditures and student performance.

For the first chart I calculated the differential between the total student pass rate and the total pass rate predicted by the regression of pass rates on the percentage of economically disadvantaged students. (Basically, this crudely controls for the amount of economic disadvantage in a school district.) Then I divided the pass rate differential by its standard deviation to arrive at a z-score so you can more clearly interpret student achievement. Then I plotted this z-score against the differential between a district's total expenditures and the median total expenditures ($10,711.50). (Does that make sense? Let me know.)
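For the curious, here is a sketch of the calculation in Python. The district numbers below are invented for illustration; the actual analysis used the PA data described above.

```python
import statistics

# Hypothetical district data: (pct_disadvantaged, pass_rate, expenditures)
districts = [
    (10.0, 85.0, 11200.0),
    (35.0, 68.0, 10100.0),
    (60.0, 55.0, 12500.0),
    (80.0, 40.0, 9800.0),
    (25.0, 75.0, 10900.0),
]

x = [d[0] for d in districts]          # % economically disadvantaged
y = [d[1] for d in districts]          # total pass rate

# Ordinary least-squares line: predict pass rate from % disadvantaged
mx, my = statistics.mean(x), statistics.mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

# Differential = actual pass rate minus the regression-predicted rate
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# Divide by the standard deviation of the differentials to get z-scores
sd = statistics.pstdev(residuals)
z_scores = [r / sd for r in residuals]

# Spending differential: expenditures minus the median expenditure
med = statistics.median(d[2] for d in districts)
spend_diff = [d[2] - med for d in districts]

# Each (spend_diff, z) pair is one point on the chart
points = list(zip(spend_diff, z_scores))
```

Note that with an ordinary least-squares fit the differentials average to zero by construction, so the z-scores are centered on zero; a district's z-score says how far above or below expectation it performs, given its level of economic disadvantage.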



For the next chart I did the same thing. This time, though, I used the pass rate for economically disadvantaged students only, since this is a better indicator of how well these districts do with at-risk students. The results are similar.




It should be clear from these charts that at these funding rates student achievement is not affected by school expenditures. There are plenty of schools that perform well with low expenditures and plenty that fail even with high expenditures.

Bronze: Where the Least Motivated Find the Will to Succeed

A blog reader, Carol Glenn, has come up with a proposed new after-school program that incorporates DI and Core Knowledge. Her idea is in contention over at ideablob, where she can win $10,000 in seed money if you vote for her. Check out her business plan and cast your vote.

Here's a good visual representation of Carol's plan:

July 31, 2008

Today's Chart

Update: more visual aids here.

Arnold Kling suggests another way to present education data to determine if funding matters in education:

On the X-axis, plot the percentage of students in a county who are above the FARMS line (that is the "free and reduced meals" indicator of poverty). On the Y-axis, plot the percentage of students that pass the math exam. For each county in Maryland, put a data point on the chart. Next to each data point, put the County's ranking in terms of expenditure per pupil.

Next, draw the line of best fit through the data points. Counties that fall above the line are adding relatively more value than counties below the line. If education spending matters, then Montgomery County and other high-spending schools should be above the line. It would be interesting to see whether this is in fact the case.


Here is the chart for 497 school districts in Pennsylvania for 2005. On the X-axis I have the percentage of students in the district that do not receive free and reduced meals. On the Y-axis I have the pass rate for PA's PSSA 11th grade exam (math and reading). There were too many school districts to add the funding data, but I did do the calculations.



For school districts falling below the regression line, the average total expenditure was $11,417.

For school districts falling above the regression line, the average total expenditure was $11,214.

The overperforming schools actually spent less on average. Go figure.
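The above/below-the-line comparison is easy to reproduce. Here is a sketch with invented numbers standing in for the 497 PA districts (the real figures came from the PSSA and expenditure data):

```python
import statistics

# Hypothetical districts: (pct_non_farm, pass_rate, total_expenditures)
districts = [
    (90.0, 88.0, 11500.0),
    (70.0, 80.0, 10400.0),
    (50.0, 52.0, 12200.0),
    (30.0, 45.0, 11800.0),
    (60.0, 70.0, 10600.0),
    (40.0, 58.0, 10900.0),
]

x = [d[0] for d in districts]
y = [d[1] for d in districts]

# Least-squares regression of pass rate on % non-FARM students
mx, my = statistics.mean(x), statistics.mean(y)
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx

# Split districts by whether they fall above or below the line,
# then compare average expenditures for the two groups
above = [d for d in districts if d[1] > intercept + slope * d[0]]
below = [d for d in districts if d[1] <= intercept + slope * d[0]]

avg_above = statistics.mean(d[2] for d in above)
avg_below = statistics.mean(d[2] for d in below)
```

In this made-up sample the overperforming group also happens to spend less on average, mirroring the PA result; with real data the comparison comes down to whichever way the numbers fall.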

Back in March I ran a few different regressions on expenditures and FARM performance, household incomes, teacher salaries, and parental education. The results are not always what you'd expect.

Update: Brett from the DeHavilland blog has the numbers from Tennessee. Brett writes, "After looking at the correlation between TCAP and poverty rates, we looked at correlation between free/reduced lunch rates and value-added performance of the schools. Virtually no correlation to be found: in other words, some schools with 100% free/reduced lunch rates are contributing tremendously to student learning, and some with almost no free/reduced lunch participants are dropping the ball." Notice the variance (R²) is virtually the same as what I calculated for PA. (Note: Brett's graph shows the percentage of FARM students, not non-FARM students.)



Update II: Unbroken Window runs the data for New York and finds the same relationship, although it appears that some schools are maxing out the test and distorting the data.

Inquiry Physics

I was looking for something in my storeroom earlier today and stumbled upon my old College Physics textbook. Seven hundred pages of pure applied brutality that every science and engineering student had to complete.

Physics I was the course that sent a large portion of my freshman college class off to greener pastures over at the business school. This happened despite the fact that almost every student had taken a high school physics course, so this was the second time through this material.

If my recollection serves me correctly, Physics instruction was supposed to go something like this. The student was supposed to read one or more sections of the textbook every week and attend a lecture given by the professor elucidating the sections we were to have read. A few dozen problems from those sections were assigned to us to work out. Then we attended three hours of recitation classes given by graduate students who worked through some of the problems we had been assigned to make sure we understood what was going on.

This brutal pace kept up for fourteen weeks and we covered nearly the entire textbook. During that time, we worked through hundreds and hundreds of problems. We were permitted to take into each exam one sheet of paper with whatever we could fit thereon. Otherwise, the exams were closed book. Nonetheless, the average grade for each exam was almost always less than 50%.

Here's my question.

How could one possibly teach an inquiry-based (problem-based learning) Physics course and hope to get through more than, say, a quarter of the syllabus of a lecture-based course? I don't even see how this might work for a high school level course.

Update: Based on Stephen Downes' comment, I sense some confusion. I consider this to be a direct instruction/lecture-based course, not an inquiry-based course. The pace is brutal for a lecture-based course. I can't imagine covering this amount of content in a true inquiry-based course.

Snake Oil is still Snake Oil even when it's Broader and Bolder



I've been perusing the various Background Papers for the Broader, Bolder Initiative looking for some valid research pertaining to an actual implementation of one of the Broader, Bolder ideas. What I didn't find was:

Nevertheless, there is solid evidence that policies aimed directly at education-related social and economic disadvantages can improve school performance and student achievement. The persistent failure of policy makers to act on that evidence—in tandem with a school-improvement agenda—is a major reason why the association between social and economic disadvantage and low student achievement remains so strong.


What I did find is a lot of observational studies reporting various correlations between traits associated with at-risk children and their families and the fact that these children tend to perform worse than their mainstream peers. The causal jump is then assumed.

For example, studies have shown that at-risk kids report in questionnaires that they experience more hunger (which might be broadly defined to include everything from extreme malnutrition to missing a snack once a week) than their middle-class peers. Since the performance of at-risk kids is lower than that of their mainstream peers, Broader, Bolder reasons that hunger causes distraction and distraction causes lower performance. And, therefore, we should provide more nutrition to at-risk kids.

But since there's no such thing as the nutrition fairy, this broader, bolder plan has to be implemented somehow. For example, we might fund the public schools so that they can provide free and reduced-price lunches to qualifying at-risk students. Actually, we do that already. Then how about funding breakfast programs for qualifying at-risk students? We do that too. I'm confused.

You see, the question isn't whether we should be providing more nutrition to at-risk students. That low-hanging fruit has already been picked. The question now is whether we should expand or supplement these existing programs and whether educationally significant gains in student achievement will be forthcoming.

This is the question that should have been answered before Broader, Bolder issued their manifesto. But it wasn't. At best, we have some small-scale research, usually rife with methodological flaws, conducted so that the researcher could provide evidence that his belief was correct. You can always find evidence that some kids learned something when you changed some condition. Round up enough experimental subjects and conduct enough experiments and you are bound to find some statistically significant, though not necessarily educationally significant, increase in performance, whether by chance, good fortune, or hoax.

What we want is research in which the researcher starts with the assumption that his idea is wrong and then collects evidence that shows either that his belief is false or not false. This is called testing the null hypothesis. Kozloff gives us an example of this type of research:

I believe program X works, but I'm going to assume that it doesn't work and I'm going to collect data to try to show that it doesn't work. If the data do not show that X does not work, I will conclude that maybe it does work. Maybe.
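Kozloff's procedure is classic null-hypothesis testing. Here is a minimal sketch of one common way to formalize "assume the program doesn't work" (a permutation test), with entirely made-up scores:

```python
import random
import statistics

# Hypothetical posttest scores: program group vs. comparison group
program = [78, 85, 81, 90, 74, 88, 83, 79]
comparison = [70, 72, 68, 75, 71, 66, 74, 69]

observed_diff = statistics.mean(program) - statistics.mean(comparison)

# Null hypothesis: the program has no effect, so the group labels are
# arbitrary. Shuffle the labels many times and count how often chance
# alone produces a difference as large as the one observed.
random.seed(0)
pooled = program + comparison
n = len(program)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / trials
# A small p-value means the data failed to support the null hypothesis;
# only then do we tentatively conclude the program may work.
```

Only when chance alone rarely produces a difference this large does the researcher get to conclude that maybe the program works. Maybe.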


We don't see this kind of research cited by Broader, Bolder. What we see is "research" that attempts to persuade us to join the researcher in accepting his beliefs--research from someone not very interested in the possibility that he is wrong.

Broader, Bolder needs a healthy dosage of humility, especially since so many of its bromides remain untested.

(Picture adapted from Telling the Difference Between Baloney and Serious Claims About What Works, Kozloff and Madigan, DI News, Summer 2007)

July 29, 2008

Today's Best Education Paragraph

From Jay Greene:

Besides neither being unfunded nor a mandate, the argument that NCLB is an unfunded mandate is especially odd because it makes one wonder what all of the funding that schools received before NCLB was for. It’s as if the unfunded mandate crowd is saying: “The $10,000 per pupil we already get just pays for warehousing. If you actually want us to educate kids, that’ll cost ya extra.” Remember that NCLB just asks states to establish and meet their own goals. Didn’t they have goals before NCLB?



Oh, we were supposed to educate them as well with that money?

Bogus Bowl V

Go take Teach Effectively's latest Bogus Bowl poll.

Which of the following do you consider to be the most bogus reason for failing to teach prospective teachers how to employ teaching procedures that have been documented to be effective?

  • Professors want future teachers to find their own teaching styles.
  • Professors don't want to stifle future teachers' creativity.
  • Professors say that using research-based practice is only one small part of what future teachers need to know.
  • Professors believe that there is not one best way to teach.

July 25, 2008

Hoffman and the Rule of Holes

Tom Hoffman is having a grand ol' time digging himself deeper into a rhetorical hole.

Let's start with the argument Hoffman apparently considers to be the knock-out blow.

DeRosa's critique of the assignment is based on how he imagines the Dred Scott decision ought to be taught in a US History class. Had he asked before writing his missive, or bothered to read the History and Social Studies section of the Curriculum Guide, he would have known that his entire frame for critiquing the assignment was incorrect, because this was not an assignment for a US History class (taken in 11th grade at SLA), but an African-American History class.


I'm not sure why Tom thinks I didn't know that the project was for an African-American history class, considering this statement from my initial post.

The project comes from page 10 of the Family Handbook and pertains to African-American History.


I've taken the liberty of highlighting the relevant portion for Tom's benefit. Apparently, my "entire frame for critiquing" wasn't incorrect after all.

In this context, what is important is the decision's impact on African-Americans and the abolitionist movement, not the balance of power in the great game between the North and the South in which the African-Americans are seen as mere pawns. Perhaps in 11th grade US History, the pre-war balance of power dynamic will be emphasized.


I agree. That's why I gave the following as an example of analysis showing deep understanding for a high school student.

[T]he Dred Scott decision is important because it upset the political compromise at the federal level (the Missouri Compromise and the Kansas-Nebraska Act) which served to limit the spread of slavery.


Again, I've emphasized the relevant portion for Tom's benefit. The balance of power issue remains an important issue for African-American history since it affected the growth of slavery. I'm thinking the growth of slavery might have "impacted" African-Americans.

I provided yet another reason for why the balance of power issue was important to the slavery issue:

As long as the Senate was gridlocked, the North would not be able to pass a constitutional amendment banning slavery in all the states.


Again, I've scaffolded the passage for Tom's benefit. I'm thinking the balance of power issue was a little more important than merely a "great game between the North and the South in which the African-Americans are seen as mere pawns."

Tom continues:

I would note that Chris Lehmann told me that he left a comment on Ken's blog explaining this oversight on Ken's part, but for whatever reason, that comment has not been published as of this date.


Unlike Tom, I don't moderate comments, and I only delete spam comments. If Chris' comment didn't post, it's either because Chris did something wrong or Blogger ate the comment. Most likely it was the latter.

I'm thinking at this point Chris is glad the comment didn't make it through.

The knock-out punch missed its mark. Let's see if Tom's remaining argument lands.

Beyond Ken's unhappiness of the framing of the decision and the assignment, his criticism of the student work itself is not based on any knowledge of the kind of work 14 year olds typically do. As a piece of writing, the letter in question would stand up admirably against the anchor papers used in any 9th grade writing assessment in the country, if not the world. DeRosa never questions the accuracy of the student's historical information.


Actually, I did question the efficacy of petitioning Southern Democrats for redress. That seemed to be on the wrong side of a few historical facts. However, the primary deficiency in the student project was that the student didn't give us much to work with, hence my characterization of the project as showing "superficial understanding." This contradicted the claims made by SLA in the family handbook, and I quote:

Teachers in each course ask the question – “What are the enduring understandings students should have when they leave this class?” Teachers then create projects that can only be completed by showing both the skills and knowledge that are deemed to be critical to master the subject and demonstrate that deep level of understanding.

...

At SLA, there may be multiple assessments – including quizzes and tests – along the way, but the primary assessment of student learning is through their projects.


This was supposed to be an exemplary project, yet it contradicted SLA's assertion that it showed deep understanding of the subject matter and mastery of skills. I argued that it showed superficial understanding because it "fail[ed] to cover any of the important issues presented by the decision, the historical context of the case, and why the case is historically important."

Tom's point with respect to what a 14-year-old should be expected to know is relevant. Tom cites an AP prep book for U.S. History that provides far more detail on the Dred Scott decision (p. 132) and the relevant history leading up to that decision than my analysis did. I also cited a middle school history text which gave about the same level of analysis that I provided. What is clear is that the student example project provided far less analysis than the middle school text, my analysis, or the AP prep book.

That's two misses.

My advice to Tom: stop digging.

Still Waiting on Broader, Bolder

I'm still waiting on someone from Broader, Bolder to offer some evidence supporting the effectiveness of their call to expand public education to cover a myriad of social services.

This week both Diane Ravitch and Randi Weingarten offer tepid defenses over at The Education Gadfly.

Let's take Ravitch's defense first:

I care as much about academic achievement as Checker or anyone else in the world, but I don't see any contradiction between caring about academic achievement and caring about children's health and well-being.


The issue isn't about who cares about children's health and well-being. The issue is whether public schools, which are by and large failing at their primary task of education, should take on the additional responsibilities of caring for children's health and well-being. You could care very much about the health and well-being of children and NOT think it's a good idea to hand these services over to our public schools.

The argument seems to be that since children attend school every day (cough, cough), social services could be easily provided at school. Then why not hand over these responsibilities to the post office? After all, they make house calls six days a week regardless of the rain, snow, heat, or gloom of night. They could give the kids a quick vision screen and drop off any drug prescriptions.

Will it help or harm children's academic achievement--most especially children who are living in poverty--if they have access to good pre-K programs?


The extant evidence suggests that pre-K programs will have no effect, or a negligible one, on academic performance. The yoke is on you to show not only that there will be an educationally significant effect on academic achievement but also that the benefit will persist when the public schools do the provisioning.

Will it help or harm children's academic achievement--most especially the neediest children--if they have access to good medical care, with dental treatment, vision screening, and the like? Will it help or harm children's academic achievement--the children whose lives are blighted by the burdens of poverty--to have access to high-quality after-school programs?


The evidence here is even more scant. I know these community service schools exist, so where is the evidence that they are raising academic achievement? Show me the money.

So, I explain my dissent briefly: One, what we are doing now--the standards & assessments & accountability strategy alone--bears little or no resemblance to genuine academic excellence.


But this doesn't mean that the Broader, Bolder way will be any improvement.

And two, children who come to school hungry and ill cannot learn no matter how often they are tested.


Last I checked, schools offer free and reduced-price lunches to practically the entire left side of the socio-economic curve. If these kids are still hungry, what makes you think that expanding these current programs will solve the problem, to the extent that there even is one? Again, where is the data?

And three, a good education must include attention not only to academics but to children's character, civic development, physical education, and physical health.


Schools are attempting to do most of this stuff already. Where is the evidence that it is working? Where is the evidence that providing more will bring about improvement?

All we seem to have is rhetoric. Show me the data.

Let's move on to Weingarten.

I'm stating the obvious when I say that No Child Left Behind's testing regime has left little time for these kinds of in-class activities.


By "the obvious," Weingarten appears to mean "with little evidentiary support."

What evidence we do have suggests that only about 16% of schools have reduced art and music time at all. And those that did reduce time in these areas reduced it by less than an hour a week. Perhaps these schools were neglecting math and reading pre-NCLB. Do you know? No, you don't.

But teachers alone can't get kids all the way to proficiency, when disadvantaged children typically enter school already three years and 30 million words behind.


I hate throwing this word around, but this statement is a lie. The big lie, so to speak. Schools can substantially reduce the achievement gap that exists between low-SES and middle-class students. We've known this for over thirty years now. The experiment has been replicated many times, most recently in Baltimore. Up to the fifth grade level as well.

[M]y message was twofold: first, let's put in place a federal education program that, unlike NCLB, provides space and opportunity for children to be taught a rich, well-rounded curriculum, with standards and accountability that support rather than undermine that curriculum; and second, let's--at the same time--try to address the outside factors like nutrition and health care that affect a child's ability to reach her full educational potential. And yes, I said that we also should try to help parents so they can better support their children's learning.


In Project Follow Through, comprehensive medical, dental, nutritional, and social services were provided to all of the thousands of students taking part in the experiment so that these factors would not confound the results. Despite all these services, most of the interventions failed to achieve any student gains at all. Many interventions performed below the control groups. Oops.

The causal link has not been established. Repeating the rhetoric ad nauseam is not a substitute for data.

Ravitch knows better. Weingarten probably does as well, but there is self-interest at play.

I'll ask one more time. Show us the data. We're waiting.

July 22, 2008

The Myth of Fun and Interesting

Most educators have bought the myth that academic learning does not require discipline--that the best learning is easy and fun. They do not realize that it is fluent performance that is fun. The process of learning, of changing performance, is most often stressful and painful.


--Lindsley, O. R. (1992). Why aren't effective teaching tools widely adopted? Journal of Applied Behavior Analysis, 22-26.

So begins chapter four, "The Myth of Fun and Interesting," of Vicki Snider's book Myths and Misconceptions About Teaching: What Really Happens in the Classroom. This chapter is relevant to the ongoing debate over the educational value of the SLA student's project on the Dred Scott decision and problem-based education in general.

Snider describes the dangers of this myth well.

The myth of fun and interesting is an extension of the myth of process. When the process [of education] is emphasized, the entertainment value of a lesson and the students' level of engagement become the measure of a successful lesson. Teachers derive their reward and sense of satisfaction from creating fun and interesting lessons rather than from attaining specified learning outcomes.

...

There is some truth to the myth that learning should be fun. I can understand why teachers feel compelled to make lessons entertaining. I, too, like to design activities that engage students, create excitement, stimulate discussion, and make students laugh. I also know that these reinforcing moments do not necessarily guarantee that students have mastered the content. The exhilaration of my great lesson is more than offset by the letdown when I assess retention and application of skills and concepts.


Snider goes on to describe the potential harms that are done by the myth:

There are four harmful effects that result from overreliance on fun and interesting activities. First, fun activities lead to a lot of wasted instructional time. Second, activity-based instruction can make it difficult for learners to focus on what it is they are supposed to learn. Knowing what to pay attention to is called selective attention in the psychological literature and it is often a problem for young or naive learners or those with learning disabilities. Third, rather than increase motivation to learn, activities with a high entertainment value but a low content value may actually decrease the probability that a child will become a lifelong learner. Fourth, without effort and practice, individuals cannot master any intellectual or creative endeavor.


Snider makes an often overlooked, yet important point about lifetime learners:

People seldom decide to pursue a new intellectual area out of the blue; they become interested because they find themselves in a situation that reactivates some general knowledge that a teacher thought important years ago. If they have enough general knowledge, they can find out more through experience, by going to the library or looking on the Internet, or taking a class. The more specific knowledge they acquire, the more they are able to learn. Sometimes this positive reciprocal learning cycle leads to a depth of knowledge that allows a person to think critically and analytically. In other words, interest is the reward of learning, not the motivation for learning.


Lastly, Snider on the importance of developing fluency.

Fluency goes beyond accuracy. It is accuracy plus speed. Our traditional reliance on accuracy only, in the form of percentages, does not distinguish between students who have learned a skill, but still perform it with hesitation, and those that are fluent. Failure to make these distinctions underlies many educational failures (Binder, Haughton, & Bateman, 2002). Students progress by building one non-fluent skill on top of another until the whole skill set becomes too difficult to be enjoyable, and students may respond to this stressful learning situation by becoming inattentive, misbehaving, or failing to complete homework and other assignments. All of these consequences are predictable. If the teacher responds by making learning more fun and interesting or by reducing accountability, the problem is solved in the short term, but the real issue remains unaddressed and the long-term result is low academic achievement.


Snider raises some important issues with PBL. Making school work fun and interesting is all well and good, but ultimately you have to look at what the student has actually learned to gauge the effectiveness of the teaching. The arguments I'm reading so far favoring PBL and SLA rely on some form of goal-post shifting with respect to what the student is expected to have learned. This isn't exactly a strong argument in favor of PBL and SLA, if you know what I mean.

July 21, 2008

Teaching Content -- the Dred Scott Decision

We all can agree that history instruction should not be a parade of facts; however, facts will need to be learned, at least temporarily, to give the student something to think about. Here's one way to make that process less painful and boring for the student.

To facilitate learning, most of the instruction should relate to big ideas. Big ideas are the ideas that summarize content, are applicable again and again, can generate predictions and hypotheses, can be used to help structure the details of content, and help in remembering and reconstructing content details through inference. Most of the instruction should revolve around the big ideas. (Crawford 2004)

Students should be able to fluently articulate the knowledge they learn so that the knowledge can be used in higher-order activities, such as essays, comparisons, application, synthesis, evaluation, and the like. (Crawford 2004)

Here's an example of a big idea from a middle school history textbook that precedes the teaching of the Dred Scott decision.

The sectional disputes about slavery that began during the Constitutional Convention ended with two compromises. One compromise was that the Constitution said that slaves were to be counted for both representation in Congress and as property, but at three-fifths value. The other compromise kept Congress from stopping the slave trade until 1808, but allowed Congress control over other aspects of trade. The northern and southern states had equal numbers of representatives in the Constitutional Convention and chose to reach a compromise. When each side is evenly matched, there is a balance of power. An example of balance of power is when two teams have the same number of players and the players have the same skill level. If one team gets a chance to put one extra-good player into the game, then the balance of power is upset and that team will probably win. That team will dominate.


University of Oregon (1995), Understanding U.S. History: Volume 1—Through the Civil War, Chapter 13: The Road to the Civil War, pp. 318-328.

The big idea of the unit is balance of power, which gives the student something to think about as the material is learned and, by doing so, helps him integrate the material being learned.

By using the big idea of balance of power, the congressional compromises of the period, the Missouri Compromise and the Compromise of 1850, can be learned with some meaning, rather than as a mere parade of facts. For example, here is how the Missouri Compromise might be taught:

In 1819, the balance of power in the Senate was equal with 11 free states and 11 slave states. Adding new slave or free states to the United States could be thought of as adding extra players to a game—if one side got too many new players, the balance of power would shift to that side. But adding new players was exactly what was about to happen to the U.S. because the country was growing. It looked as if the balance of power might change.

In 1819, the people of the Missouri Territory asked to join the United States as a state. A territory was a region that was a part of the United States but not yet a state. Most of the white people living in Missouri wanted slavery. Northern Senators were against adding Missouri as a state because then there would be more slave states than free states with representation in the Senate. In 1819, the northern senators would not agree to let the South have a majority in the Senate. They argued for months against adding the Missouri Territory as a slave state.

After several months of debate, the people of Maine, which had been part of the state of Massachusetts, asked to join the United States as a free state. This opened the way for a compromise to be worked out. Henry Clay, a senator from Kentucky, proposed the compromise plan, called the Missouri Compromise. Henry Clay was one of the War Hawks who boasted before the War of 1812 that the Kentucky militia could conquer Canada. He also worked out a compromise for the Nullification Crisis.

Clay’s idea was to admit both states into the United States. One state, Maine, was a free state, which did not allow slavery. The other state, Missouri, did allow slavery. That maintained the balance of power.

The Missouri Compromise was a plan that drew a line across the Louisiana Purchase territory, south of which slavery would be allowed. Slavery would be outlawed in the rest of the Louisiana Purchase lands north of the imaginary line. The Missouri Compromise Line was drawn at the southern boundary of Missouri. Areas south of the Missouri Compromise line could become slaveholding states, but the land north of the line was to remain free from slavery. In 1820, the United States had not acquired the land west of the Louisiana Purchase. At the time it was passed, the Missouri Compromise established rules for all the land in the United States.


(Understanding U.S. History)

Now, try to answer the following question to see if you understand balance of power and how it affects the admission of territories as states.

Q: Arkansas was the next state to ask to join the United States. Arkansas was below the line of the Missouri Compromise, so it would be added as a slave state. What factors might influence Arkansas' admission as a state?


The same big idea can get you through the admission of Florida and Iowa, the election of President Polk, the concept of Manifest Destiny, the admission of California, New Mexico, Texas and the Oregon Territory, the Compromise of 1850, the South's attempt at seceding, the concept of popular sovereignty, and the passage of the Fugitive Slave Laws which culminate in the end of the balance of power era:

Although the Compromise of 1850 solved the problem of the southern states seceding from the United States, the solution was only temporary. Enforcing the Fugitive Slave Law and allowing popular sovereignty to decide about slavery only made the issues about slavery more difficult for Congress to solve. The ability of the Congress, the political parties, and the country as a whole to make any more compromises on the issue of slavery was coming to an end.

...

Until 1850, a balance of power between slave states and free states had existed. That balance of power resulted in compromises being made between the North and the South for many years. In the 1850s, three factors ended the ability of Congress to make compromises about slavery: (1) the idea of popular sovereignty, which began with the Compromise of 1850; (2) the Kansas-Nebraska Act of 1854, which led to violence and 200 deaths in Kansas; and (3) the Supreme Court’s Dred Scott decision in 1857.


(Understanding U.S. History)

The Dred Scott decision wasn't important because of its specific holding with respect to the slave Dred Scott (though I'm sure it was important to him), but because it eliminated Congress' ability to maintain the delicate balance of power that had existed and threatened to allow the unhindered growth of slavery:

The Dred Scott decision. The third factor that completely ended the ability of Congress to compromise on the issue of slavery was an 1857 Supreme Court ruling on slavery. The Court ruled that slaves had no right to sue in federal court because African Americans could not become citizens of the United States. The Supreme Court ruled that someone who owned slaves could keep slaves in any federal territory. The decision basically made slavery legal in any territory. The Supreme Court also ruled that there was nothing in the Constitution that allowed the federal Congress to outlaw slavery in any territory, and that only states could decide for themselves about slavery. That decision was the end of Congress’ ability to make compromises about slavery because Congress couldn’t pass any laws that limited slavery.

The Chief Justice who wrote the court’s decision was a southern Democrat who had always supported slavery. He felt that the decision in the Dred Scott case would end the debate over the expansion of slavery. He thought that once the court had decided to protect the property rights of slaveholders, the country would accept this decision. He was mistaken.


(Understanding U.S. History)

The challenge for teachers is to teach all this material so that the students know it well enough to explain it without referring to the textbook. This is the first prerequisite for knowledge to be used flexibly in higher-order activities such as essays, comparisons, application, synthesis, evaluation, and the like.

There is no golden road to learning. Deep understanding in a domain (such as pre-Civil War history) is not going to happen in a fact vacuum. You need something to think about first, before you can think about it deeply, i.e., understand the abstract functional relationships inherent in the material. If the student's "inquiry" hasn't led the student to learn the facts, at least temporarily, deep understanding isn't going to follow. This is what happened to the SLA student.

July 19, 2008

Tom Hoffman Attempts a Defense

Tom Hoffman of Tuttle SVC tries to defend SLA's Dred Scott project that I recently critiqued. Hoffman attempts the old smoke-and-mirrors defense by portraying the student's response as "deep understanding" of other stuff. You can be the judge of whether he's succeeded. I think he unwittingly proves my point, as I indicate in the comments.

Hoffman's argument is another good window into the mind of the progressive educator. Deep understanding has been redefined to mean the amount of understanding you can achieve without knowing much content. To the rest of us, this is superficial understanding. History without historical facts or understanding of those facts. See the way that works?

I hope Hoffman will never try to "get my back" like he did here.

July 17, 2008

Chris Lehmann Responds

Chris Lehmann, Principal of Science Leadership Academy (SLA), responded to my criticism of one of SLA's projects in a recent post. Here is Chris's response with my comments.

One, let's start with the premise that you and I have fundamentally different views on educational philosophy, so I'd argue that you're predisposed to
disapprove of our school.

I'm not so sure about that. I assumed that our goals are similar: to maximize student learning. To the extent that I favor certain pedagogies over others, it is only because those pedagogies have evidence of superior results when we look at what the student has actually learned. But I am not beholden to any particular pedagogy for the sake of any particular educational philosophy.

Let's also state that, as you point out, we are completely transparent about our philosophy at SLA. Between my blog, the Family Night Book, the web site, etc, you can get a sense of what we believe pretty easily. Also, we encourage all
prospective students and parents to come spend a day at the school... not a special "everyone-visits" day, but any day. We want families to understand our educational philosophy because we want kids to make an informed decision about where they want to go to high school. So we're not trying to trick anyone into coming to SLA. Given that transparency, we've had a great interest in our school. We're also pleased with the academic results we've seen. So far, we are currently on track to have over a 90% four-year graduation rate from SLA, and by qualitative and quantitative metrics (attendance, course passing rates, PSAT scores, as well as early research by two PhD students), kids are doing very well.

I haven't seen any reported PSSA results for SLA yet. Will this year's results be reported?

So to the specific points you raise:

A) You show an interesting unwillingness to accept the scope of the piece or that the scope of the piece has merit. To do this work, the students had to examine primary source materials, learn about abolitionist societies, learn about the Dred Scott case through multiple lenses -- including the political one you favor. And then they had to create a piece of writing that a) showed historical understanding and b) made a decision to argue a point of view of one of the groups active at that time. All of that maps to the standards of the state for history.


Admittedly, the scope of the project is unclear based on what is provided in the handbook. I assumed, and I think fairly so, that the scope of a project for a high school history class would be a demonstration of historical understanding. Now, if the student expectations are diminished from this level, I'm curious to know in what way they are diminished and what the expected pathway to this higher level is, since the curricular design is claimed to be backwards.

The handbook states that the project "can only be completed by showing both the skills and knowledge that are deemed to be critical to master the subject and demonstrate that deep level of understanding." What specific skills and specific knowledge were the students expected to have mastered? What was the level of specific understanding that was expected?

I do think that a high school student can be expected to perform the kind of analysis I provided if he is taught how to do it and taught the underlying content. In fact, I think middle-school students can be taught to this level. I'll provide an example in a future post.

My concern, and I think it is a valid one, is that student learning is taking a backseat to pedagogical fidelity. My opinion is that the student's understanding is banal and typical of what you see when a student lacks domain knowledge. (My third-grade son comes home with stuff like this from school all the time, even though he is capable of better.) My understanding of the theory is that it is better for the student to generate this knowledge on his own rather than be told it. Fair enough. But the student's work product indicates that the required knowledge has not been generated as the theory predicts, with the result being a superficial analysis. I just don't see how the student's understanding can in any way be considered deep at this point.

Moreover, during the course of the unit, students were engaged in a variety of instructional techniques. The assumptions you make about the kind of teaching that happens at SLA are wrong.


My assumptions were based on what was provided in the handbook and by what learning the student demonstrated. If I'm wrong, I'd like to understand how.

Moreover, even in my presentation that you cite, I talk about how traditional forms of assessment -- quizzes and tests -- have their place in our classrooms, they just are now lower on our hierarchy of assessment. Tests and quizzes are great ways to see if kids have learned how to handle skills and content in a narrowly defined context. What the student projects do is see if they can transfer those skills and content to a larger context.


The handbook states that "the primary assessment of student learning is through their projects" so I think it's fair to use the project, by itself, as the criterion of whether the learning meets the educational goals. I assumed that some teaching took place regarding the historic era in question and some background knowledge was taught, but I don't see how very much of this transfer took place.

B) I'd argue that [your] analysis of what was missing suggests your own bias toward what [you feel] is important to learn about that material. That's fine, the single greatest limiting factor in school is time. If you want to cover a great deal of material -- and even in "progressive" schools, history courses have a lot to cover, you are never going to get to every lens. And, by the way, while this assessment may not have asked kids to deal with the larger political lens, other work did. And yes, we do often question our balance with depth and breadth -- that's the question all good progressive schools should ask themselves, just as all good traditional schools should ask themselves about their balance of skills and content, information transferal and knowledge acquisition.


Again, the handbook indicates the goal was deep understanding and mastery of skills. I don't see either having taken place here at a ninth-grade level. Which lenses were expected, if not the historical/political one, considering this was a history class? I see neither depth nor breadth here, but then again I am not sure of the expectation, which appears to be much less than what I consider to be high school level.

C) The level of analysis you suggest is warranted is collegiate -- or at least 11th or 12th grade -- in its complexity when this was a 9th grade piece of work. (We only had ninth graders when we published the book.) I think it holds up as a sophisticated, smart piece of work that shows an emergent sense of what it means to have a historical sense of the world.


Another way of saying this is that the student has not yet acquired a historical sense.

I'm going to post a middle-school-level text related to the same issue so we can judge the level of analysis we might expect. I hope to get to it by the weekend.

D) You and I have a fundamental disagreement over the value of skills v. content in student understanding. You believe that until the student has enough facts at their disposal, there is little benefit to asking them to "think like a historian." We believe that those skills develop with practice and that students' ability to develop the ability to apply an historical lens on the world requires frequent, guided, scaffolded practice. That's fine. We'll have to agree to disagree.


Again, the proof of the pudding is in the eating. I don't see how a series of superficial analyses is going to lead to expertise, nor have I ever seen a deep analysis of an issue by someone not in possession of a deep understanding of the underlying domain knowledge. Cognitive science tells us you don't acquire the former until you've acquired the latter. Nor is the student getting practice gaining expertise in this domain. The student isn't learning the underlying content. How is he going to learn how to analyze what he doesn't know?

Inquiry-driven learning doesn't mean you just set the kids off on Google. Guided inquiry means giving kids skills to access resources and make decisions on their own.


Has that been demonstrated here? I think just the opposite has been demonstrated. The resources weren't accessed and the right decisions were not made. Hallmarks of a novice.

I don't think you're going to read this comment and suddenly think, "Aha! I get it, SLA is wonderful!" But I also would hope that we could move beyond strawman arguments -- I'll promise not to argue that KIPP creates a bunch of automatons who merely can regurgitate what they have been told if you promise not to argue that SLA is some unstructured school where kids just are indoctrinated toward an ideological bias or kids merely learn some surface knowledge by surfing the web.


I do not assume that SLA is unstructured. I assume there is plenty of structure because no structure would be a disaster. I think that the problem is that the students' inquiry isn't leading to content knowledge and that the students' analysis suffers for it.

Regurgitation is not the goal, though inflexible knowledge is the typical starting point. However, not being able to even regurgitate is telling as well.

As always, I offer you the opportunity to come visit SLA. I don't think you'll like what you see there -- again, my goal is not to convince you that we're the one right school, but rather so you could see that there is more than one approach to schooling. I have no doubt that a thoughtful application of DI schoolwide can create an effective school where kids learn well. Can you entertain the same notion that an inquiry-driven approach that has been executed thoughtfully can do the same?


There is always the possibility, but I've yet to see the results with a population like SLA's. I'm always looking to be convinced. Perhaps there is a better example project for inclusion in next year's handbook.