January 17, 2009

McWhorter and Yglesias on DI

John McWhorter believes that DI is a "solution for the reading gap" between white students and black and/or poor students:

Starting in the late 1960s, Siegfried Engelmann led a government-sponsored investigation, Project Follow Through, that compared nine teaching methods and tracked their results in more than 75,000 children from kindergarten through third grade. It found that the Direct Instruction (DI) method of teaching reading was vastly more effective than any of the others for (drum roll, please) poor kids, including black ones.


This is true as far as it goes, but it's only part of the story: the results from PFT were a bit more conclusive than McWhorter lets on. Here's how Zig describes the results in chapter 5 of his last book.

The evaluation had three categories: basic skills, cognitive (higher-order thinking) skills, and affective responses.

The basic skills consisted of those things that could be taught by rote—spelling, word identification, math facts and computation, punctuation, capitalization, and word usage. DI was first of all sponsors in basic skills...Only two other sponsors had a positive average. The remaining models scored deep in the negative numbers, which means they were soundly outperformed by [the control group].

DI was not expected to outperform the other models on “cognitive” skills, which require higher-order thinking, or on measures of “responsibility.” Cognitive skills were assumed to be those that could not be presented as rote, but required some form of process or “scaffolding” of one skill on another to draw a conclusion or figure out the answer. In reading, children were tested on main ideas, word meaning based on context, and inferences. Math problem solving and math concepts evaluated children’s higher-order skills in math.

Not only was the DI model number one on these cognitive skills; it was the only model that had positive scores for all three higher-order categories: reading, math concepts and math problem solving. DI had a higher average score on the cognitive skills than it did for the basic skills...

Not only were we first in adjusted scores and first in percentile scores for basic skills, cognitive skills, and perceptions children had of themselves, we were first in spelling, first with sites that had a Headstart preschool, first in sites that started in K, and first in sites that started in grade one. Our third-graders who went through only three years (grades 1-3) were, on average, over a year ahead of children in other models who went through four years—grades K-3. We were first with Native Americans, first with non-English speakers, first in rural areas, first in urban areas, first with whites, first with blacks, first with the lowest disadvantaged children and first with high performers.


You see the problem? DI wasn't just effective with poor/black students; it was effective with all students. And if a method raises everyone's achievement, the gap between groups doesn't automatically close. In the long run this success militates against eliminating achievement gaps, though a relative long-term reduction might be possible.
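To make the arithmetic concrete, here's a toy calculation (the numbers are invented for illustration, not taken from PFT): if an effective method raises both groups, the gap narrows only by the difference in gains, and it vanishes only if the lower-scoring group's gain exceeds the other group's gain by the full initial gap.

```python
# Hypothetical percentile scores, purely illustrative.
group_a_before, group_b_before = 30, 50
gain_a, gain_b = 15, 12  # both groups improve under the method

gap_before = group_b_before - group_a_before                       # 20
gap_after = (group_b_before + gain_b) - (group_a_before + gain_a)  # 17
print(gap_before, gap_after)  # the gap narrows but is not eliminated
```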

McWhorter continues:

DI isn't exactly complicated: Students are taught to sound out words rather than told to get the hang of recognizing words whole, and they are taught according to scripted drills that emphasize repetition and frequent student participation.

On a superficial level, DI does not appear to be complicated. But DI requires a lot more than "sounding out words" and "scripted drills" to work effectively; these are its surface features. Someone with a deep understanding of DI would focus on other features, such as mastery learning.

Matt Yglesias basically agrees with McWhorter's somewhat superficial analysis but cautions:

A word of caution I would offer is that the rhetoric in the column seems, in my view, to oversell this fix. I think it’s important not to set people up to believe that some proposed change is a silver bullet when that just sets the stage for a potential future backlash. Based on what we know, it would be much better in general—and especially for poor kids—to do more direct instruction.


Yglesias is also confused. Like McWhorter, he identifies phonics as the primary component of DI's success. This isn't accurate. Moreover, McWhorter is talking about Direct Instruction while Yglesias is talking about direct instruction; the two are not the same. McWhorter is overselling DI a bit, but he's also underselling it, so Yglesias's concerns are only partially probative here. His concern about a "potential future backlash" appears to be based on the following:

Even the most egalitarian countries have statistically meaningful achievement gaps, and the United States is far from being the most egalitarian country.


Believing that egalitarian policies, or the lack thereof, are somehow the cause of achievement gaps is a good example of what you get when you use correlational studies without understanding the underlying issues. Just because poverty and educational achievement are correlated, and egalitarian policies and achievement are also correlated, does not mean that egalitarian policies cause increased student achievement. See La Griffe's latest analysis to see why this is not so.
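To see how this works, here is a minimal simulation (entirely hypothetical numbers; this is not La Griffe's analysis): a hidden third factor drives both "policy" and "achievement," producing a strong correlation between them even though neither causes the other.

```python
import random

random.seed(0)

# A confounder (call it "national wealth") independently drives both
# egalitarian policy and student achievement; there is no causal link
# between policy and achievement themselves.
n = 1000
wealth = [random.gauss(0, 1) for _ in range(n)]
policy = [w + random.gauss(0, 0.5) for w in wealth]
achievement = [w + random.gauss(0, 0.5) for w in wealth]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Prints roughly 0.8: a strong correlation with zero causation.
print(corr(policy, achievement))
```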

Yglesias does appear to stumble upon the right answer in the end though:

There’s no “solution” to the general existence of achievement gaps. There are, rather, policies that can be effective in narrowing them and this is one.


He gets there even though the rest of his comment is wrong. Funny how that works sometimes.

15 comments:

Anonymous said...

A good part of the “black/white reading gap” is a function of the gap between the ears of the people who are addressing it.

Zig uses three types of measures: “basic skills, cognitive (higher-order thinking) skills, and affective responses.” That’s both more and less than measuring reading expertise.

“Basic skills” aren’t defined apart from the tests being used.

We’ve been around the barn here on “higher order thinking skills” and needn’t repeat that exercise.

“Affective responses” are important, but they aren’t “reading.”

Kids, parents, and the citizenry have no difficulty determining if a kid can read. The “test” is simple. Put a text in front of the kid composed of words that are within the kid’s spoken vocabulary and say “Read this and tell me what it says.” Kids who can do this can read. Kids who can’t do it can’t. Kids who can read, can then “read to learn.” Kids who can’t read are at an inherent disadvantage.

Kids who can speak in whole sentences and participate in everyday conversation have what it takes to begin reading instruction. In the aggregate, kids, irrespective of racial/ethnic and socioeconomic characteristics, have this expertise when they enter Kindergarten. With a legitimate reading instructional architecture, kids with reading expertise can be reliably delivered by the end of Grade 2—by the end of Grade 3 at the latest.

Zig’s Direct Instruction architecture is one way to reliably get the job done. There are a few other ways to go about it, but not very many. And those ways do not include Balanced Literacy, where the “balance” is “Whole Language” with a smattering of inconsistent and incomplete “phonics” tacked on. “Balanced Literacy” is the prevailing reading instruction orientation.

Children WILL vary in the rate at which they acquire reading expertise. Some children acquire the expertise without any formal instruction. Some acquire it despite the mis-instruction they currently receive. The instructional concern is with the remaining kids. Zig’s data and other data indicate that race and SES are NOT key determinants in the acquisition. The variation is accounted for by schools and teachers who chase their personal whims under the banner of “meeting individual needs.”

Any problem can be defined in a way to make it unsolvable. And that’s exactly what has been done to date with the “reading problem.”

KDeRosa said...

These were the measures used in PFT, and Zig didn't have much of a choice but to go along with them.

I do like how critics conveniently ignore the fact that the DI students outperformed all the other interventions' students in the cognitive/higher-order skills.

Anonymous said...

Ken says, "These were the measures used in PFT and Zig didn't have a much of a choice but to go along with them."

True. The thing is, he's still going along with them.

Ken says,"I do like how critics conveniently ignore the fact that the DI students outperformed all the other intervention's students in the cognitive/higher order skills."

That's true. But I'm a friend, not a critic. I fully credit the fact that DI did this. The thing is, the tests are spurious. There ain't no such thing as "cognitive/higher order skills." It's a fiction of test construction.

"Higher order skills" is beside the point. Just as ar "affective responses."

The skill of concern is reading, not performing "better" at filling in bubbles on a multiple-choice test.

The concern is "filling the black/white reading gap." It's not possible to show that this can be done while chasing "higher order skills."

KDeRosa said...

Dick, my last comment wasn't directed at you; I was just piggybacking on your last comment.

Anonymous said...

Point taken. And it's an extremely important point.

Proponents of "higher order skills" who haven't the foggiest notion of how to instruct such skills can't rightly knock a program that beats other programs on the tests they view as measures of such skills.

This is one of the hair-tearing elements of Follow Through. The bulk of the programs in the game involved this kind of mush-talk. They delivered diddle.

Did that stop the mush-talk? Nah. If anything, it increased it.
Then as now, the Government-Academic-Publisher triangle swept the findings under the rug, and it was school as usual.

The rest is history. Events led to Whole Language, which masquerades today as Balanced Literacy. And the field of educational R&D is now in the weakest shape it has been in since the endeavor began to take shape in the early 1900's.

Prevailing testing practices have provided a fog for the travesty and continue to permit authorities and media to contend "we're making gains," and "we're closing the gap."

Tracy W said...

Kids, parents, and the citizenry have no difficulty determining if a kid can read.

Indeed. But if you want to determine what proportion of kids in your country can read, and your country has a population in the hundreds of thousands, or higher, you have some difficulty in determining if every kid can read. It isn't practical for me to go around to every school in NZ and ask every 7-year-old to read a text to me, and a big country like the USA makes it even more hopeless.

Now you could ask every teacher in the country how many kids in their class can read, and how many can't read, and sum the results. Problems:
- you don't know if teachers are supplying equivalent texts to each kid. Generally when people talk about teaching reading and writing, they don't just want the kids to be able to read within their vocabulary, but to read things like a newspaper, or a letter from the tax department.
- quality control is hard: what does it mean when you say "tell me what it says"? Does it mean being able to answer questions about obvious things like "What was the girl's name?" or does it mean being able to provide a summary of the main idea? How many mistakes can a kid make before you decide they can't read? Teachers will have different interpretations. How does teachers choosing different test passages affect the results?
- a small minority of teachers and principals are dishonest and will lie if they can get away with it. And if other teachers know that they are getting away with lying this has terrible effects on morale and consequently data quality.
- the psychology guys provide a lot of evidence that humans are self-deceiving under certain circumstances. If a teacher thinks they are doing a good job of teaching, there is a non-zero risk that that teacher will honestly report back that their kids can read, and honestly believe so, even if an objective outsider would conclude that they can't read.

This is why people use standardised achievement tests. Determining if a kid can read is a very different problem to determining how many kids across the country can read.
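A rough sketch of the statistics behind this point (all numbers invented for illustration): scoring a modest random sample against one common standard pins down the national proportion quite tightly, which is exactly what thousands of teachers' informal, inconsistently applied "read this to me" checks cannot do.

```python
import math
import random

random.seed(1)

# Hypothetical population: 500,000 seven-year-olds, 70% of whom can
# read by some fixed, common standard.
population = [1] * 350_000 + [0] * 150_000

# Test a random sample of 2,000 kids instead of all 500,000.
sample = random.sample(population, 2_000)
p_hat = sum(sample) / len(sample)

# Normal-approximation 95% confidence interval for the proportion.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"estimated share who can read: {p_hat:.3f} +/- {margin:.3f}")
```

The estimate only means anything, of course, if every child in the sample faces the same test and the same scoring rule; that comparability is what standardisation buys.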

Anonymous said...

"Determining if a kid can read is a very different problem to determining how many kids across the country can read."

Not really. It's just a matter of aggregation. At present, we're aggregating "proficiency" in terms of arbitrary cut scores on ungrounded tests. It doesn't tell us how many kids across the country "can read." That has to be established on an individual basis before there is anything worth aggregating.

Anonymous said...

I applaud this thread of a VERY IMPORTANT discussion which I call "what works" at K-3rd grade in reading, math and language. Ladies and gentlemen, the largest educational experiment in the history of the country found out what worked AND IGNORED IT.

Has Direct Instruction proven its capacity, properly implemented, to close achievement gaps between all students? Answer: YES!

Engelmann's work with Direct Instruction will endure as a major contribution to any discussion of what works at these grade levels. Regrettably, many research summaries fail to even acknowledge the accomplishments.

I make recent reference to these accomplishments in a post to my own blog entitled "Academic Learning Time."

John Chadwick

Tracy W said...

I spent about 300 words explaining the practical problems driving the adoption of standardised achievement tests for reading. Dick's response? Just ignore all those practical problems and assert "It's just a matter of aggregation."
Sigh.

Anonymous said...

Right, Tracy. Your 300 words were wasted.

It IS a matter of aggregation.

Apply Occam's razor to standardized tests. When "proficiency" is treated in terms of groundless numbers, we're at a medieval ideological level.

Tracy W said...

Apply Occam's razor to standardized tests. When "proficiency" is treated in terms of groundless numbers, we're at a medieval ideological level.

It is however entirely possible to validate a standardised achievement test so the proficiency levels are grounded. There may be a problem with current standardised achievement test practice, but that's no reason to dismiss all standardised achievement tests.

The point I was dealing with was your argument that it was easy to determine if a particular kid can read, so we don't need standardised achievement tests. My argument was that for all but very small countries it is not practical for any individual to go around testing if every kid of a certain age can read.

Ockham's Razor can be stated as "All other things being equal, the simplest solution is the best." I have provided a number of reasons why your recommended method of assessing reading doesn't scale up to a large country, and you have not provided any reason to believe that my practical worries are wrong, so Ockham's Razor does not provide us with any reason to dismiss standardised achievement tests.

Anonymous said...

"the largest educational experiment in the history of the country found out what worked AND IGNORED IT."

That's true. But this has happened repeatedly over the years.

It happened to even larger programmatic R&D conducted by the Regional Labs in the 1970's.

www.piperbooks.co.uk/documents/Making_Change_Happen.pdf

It also happened very recently with what was touted as the "biggest and best-designed randomized control experiment in the history of education." At $10 million it was the most costly single experiment, and it was also the biggest flop. What it showed was that the "best" remedial reading programs had no effect on either 3rd or 5th graders.

http://ies.ed.gov/pubsearch/pubsinfo.asp?pubid=NCEE20064001

It only gets worse. The $40 million study of the impact of the $40 billion Reading First program essentially found that the program had no effect.

http://ies.ed.gov/ncee/pdf/20094039.pdf

The summary of the Exec Summary (p 18 of the .pdf) is all you need to read.

Yet this is not the story the government releases and that the media swallow. We're told "We're making gains. We're closing the gap."

What are you going to believe, the spin or the research reports? To date, it's been the spin.

KDeRosa said...

The Reading First study provides an especially good policy example.

With RF we got about as close as we ever have to implementing sound science as educational policy.

Then we knowingly and completely screwed it up during the legislative process by compromising the scientific standard, i.e., "based on [research]" instead of "validated by [research]," a loophole that subsequently proved large enough to drive a truck through-- a truck driven by publishers and state-level educrats.

Anonymous said...

"With RF we got to about as close as we ever have in implementing sound science as educational policy."

Well, "close" maybe, but a long way from cigars. There was a decade or so of sound research sponsored by NICHD. But trying to go from research to practice without any intervening development is a fool's errand.

The "5 essentials" sprung out of Reid Lyon's head. His testimony to Congress PRIOR to the report of the National Reading Panel listed them. The Panel was nothing more than an elaborate way to legitimize Reid's over-simplificiation of the decade of research that he deserves full credit for managing.

The Panel also studied "Technology" and "Teacher Ed." But these two "essentials" were shed in the aftermath of the report.

It's not fair to blame Congress for the substance of NCLB. Reid Lyon and Bob Sweet have claimed credit for drafting the legislation. There's a lot of pork in the bill, but the logic of "standards and accountability using standardized achievement tests" was a beltway consensus.

The rest is history, and the future is yet to come.

KDeRosa said...

Dick, I'm not blaming Congress for all of NCLB. I'm only talking about RF. There were compromises made for the RF portion of the act, which both Lyon and Sweet have acknowledged.