June 7, 2006

Research that Isn't So Stinky

Is there any decent Ed research, asks John from the AFT? The simple answer is yes, but not much of it.

John also helpfully sets out a good definition for effect sizes for social science research:
[I]f the difference between sample averages is no more than two-tenths of a standard deviation, the difference should be regarded as small; a difference of half a standard deviation should be regarded as moderate; and a difference of eight-tenths of a standard deviation or larger should be regarded as a large difference.
These are the generally accepted standards for evaluating social science research.
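To make the classification concrete, here is a minimal sketch (mine, not John's) of how an effect size is typically computed as a standardized mean difference and then bucketed with the cutoffs quoted above; the function names and the example value are hypothetical.

```python
# A rough sketch of computing and classifying an effect size
# (standardized mean difference); not code from the post itself.
import statistics

def effect_size(treatment_scores, control_scores):
    """Difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment_scores), len(control_scores)
    s1, s2 = statistics.stdev(treatment_scores), statistics.stdev(control_scores)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment_scores) - statistics.mean(control_scores)) / pooled_sd

def classify(d):
    """Apply the small / moderate / large cutoffs quoted above."""
    d = abs(d)
    if d <= 0.2:
        return "small"
    if d < 0.8:
        return "moderate"
    return "large"

# Hypothetical example: a difference of 0.3 SD counts as no better than moderate.
print(classify(0.3))  # moderate
```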

Your typical Title I school is performing at the 20th percentile. Only 20% of its students perform above the 50th percentile. This low performance places the typical Title I school about 0.84 standard deviation below the median school (50th percentile). So in order to improve the performance of this school so that it performs like a mainstream school, it would need to raise its performance by 0.84 standard deviation. This is a large effect size.
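The percentile arithmetic in that paragraph can be checked directly, assuming test scores are roughly normally distributed with the same spread inside a Title I school as nationwide (a simplifying assumption on my part, not a claim made in the post):

```python
# Quick check of the 20th-percentile / 0.84 SD arithmetic under a
# normal-distribution assumption.
from scipy.stats import norm

# The 20th percentile sits about 0.84 SD below the median (50th percentile).
gap = norm.ppf(0.50) - norm.ppf(0.20)
print(round(gap, 2))                   # ~0.84

# With the school's mean 0.84 SD below the national median, the share of its
# students scoring above the 50th percentile is about 20%.
share_above_median = 1 - norm.cdf(0.84)
print(round(share_above_median, 2))    # ~0.20
```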

So now let's pretend that there were some easily implementable educational intervention that could raise the performance of all schools by 0.84 SD. Even with such a large improvement, about 20% of students would still perform below the present 50th percentile. In Title I schools, half the students would still not make the cut! In order to raise performance so that only about 5% of students perform below the 50th percentile, we'd need some combination of lowering standards and raising performance by another 0.80 SD. Most states have already dutifully complied by lowering standards quite a bit under NCLB.
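Continuing the same normal-distribution sketch (again an assumption of mine, not the post's own working), the what-if numbers fall out like this:

```python
# The what-if scenario: every school gains 0.84 SD, the cut score stays at the
# old national median (set at 0 here).
from scipy.stats import norm

# A formerly median school now sits 0.84 SD above the old cut,
# so ~20% of its students still fall below it.
print(round(norm.cdf(0 - 0.84), 2))      # ~0.20

# A Title I school that gains 0.84 SD only reaches the old median,
# so half its students still fall below the cut.
print(round(norm.cdf(-0.84 + 0.84), 2))  # 0.50

# To leave only ~5% below the cut, a school's mean must sit ~1.645 SD above it:
# the 0.84 already gained plus roughly another 0.80 SD of improvement
# (or an equivalent lowering of the cut score by the states).
print(round(norm.ppf(0.95), 2))          # ~1.64
```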

Realistically, schools are going to have to raise performance by at least a full standard deviation (and count on some help from the states) in order to comply with NCLB. And even then, the Title I schools will still be failing since they will have about 20% of students not making the cut. This is the most unfair aspect of NCLB, but until we're ready to admit that student IQ affects student performance, such policy discussions will be off limits.

We'll take a look at some of the existing Ed research in part II of this post to see if we know of any educational interventions that are even capable of increasing student performance in the neighborhood of what we need to comply with NCLB.

1 comment:

KDeRosa said...

Please distinguish between stinky research and stinky programs. They are not the same thing.

Since I've included the need for high effect sizes, it should hopefully be clear I'm talking about both.

Ultimately, what we care about is identifying effective programs using good research.

Bad research tells us nothing.

Good research with small or negative effect sizes may indicate a bad program, but sometimes it merely indicates a bad implementation of a good program. The trouble is we don't know which is which yet.

It's only when we have good research paired with larger effect sizes that we begin to get something we can rely upon. With enough similar studies, we approach confidence that the program is effective.