August 16, 2007

A Fool's Errand

Is what I'm on.

I'm trying to track down what these "multiple measures" thingies are that Congressman Miller talked about in his speech.


In my bill, we will ask employers and colleges to come together as stakeholders with the states to jointly develop more rigorous standards that meet the demands of both. Many states have already started this process. We seek to build on and complement the leadership of our nation’s governors and provide them incentives to continue.

This requires that assessments be fully aligned with these new state standards and include multiple measures of success.

These measures can no longer reflect just basic skills and memorization. Rather, they must reflect critical thinking skills and the ability to apply knowledge to new and challenging contexts. These are the skills that today’s students will need to meet the complex demands of the American economy and society in a globalized world.

Sounded like edu-cant to me. So I decided to dig a little deeper.

As luck would have it, Sherman Dorn has a post on this topic in which he uncharacteristically uses a whole lotta words to say a whole lot of nothing in an apparent defense of "multiple measures" and a weak criticism of Ed Trust's attack on same. Nonetheless, Sherman has led me to the culprits behind this new meme--the Orwellian-named Forum on Educational Accountability, which issued this 53-page "Expert Panel" report (PDF), Assessment and Accountability for Improving Schools and Learning: Principles and Recommendations for Federal Law and State and Local Systems.

Then I found this August 13th letter from a bunch of education "researchers" which draws from the above-mentioned report.

Are you still with me?

Good.

At 53 pages, there's too much to comment on in a single blog post, so let's just pull down some low-hanging fruit.

Here's a doozy from p. 40.

Gaps will only close if students who are behind progress more quickly than those who are not. There is a lack of evidence to demonstrate schools alone can ensure that their historically disadvantaged populations can progress more quickly than more advantaged populations. Expecting schools to accomplish this feat without markedly increased support is likely to continue the NCLB problem of causing harmful educational consequences resulting from educators' desperate attempts to meet NCLB mandates without the resources to do so.


Most of the "expert" panel are supposedly testing, assessment, and standards experts, yet they don't seem to know how simple statistics work.

It is not true that the achievement gap will only close "if students who are behind progress more quickly than those who are not." The gap can close if both groups progress at the same rate. In fact, the gap can also close if the group that is behind progresses at a marginally slower rate.

Perhaps an example is in order.




Let's suppose that we have two groups: the low performing group (the top distribution) and the high performing group (the bottom distribution). These groups have a mean difference in achievement of about one standard deviation. When we first test both groups, the passing cut score is set at level 1. At this passing level, 50% of the low performing group is failing but only 16% of the high performing group is failing. This is represented by the shaded area under each curve. The achievement gap is 34 points (50 - 16).

Now let's teach the students and retest them. Let's assume that both groups improve by the same amount and now perform at passing level 2. At this passing level, 16% of the low performing group is failing, but now only 2% of the high performing group is failing. The achievement gap is now 14 points (16 - 2). The achievement gap has miraculously shrunk by 20 points. Since the distributions are normal, the area under the curve below the cut score changes non-linearly as the groups improve.

Both groups have progressed at the same rate, and yet the achievement gap has shrunk by more than half. In fact, the low performing group could progress at a substantially slower rate under this example and the achievement gap would still shrink. Even if the lower performing group only progressed to a level in between passing levels 1 and 2--say, to the point where 35% of its students were failing--the achievement gap would still be shrinking (35 - 2 = 33 points), even though the higher performing group progressed at a much faster rate.
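(If you want to check the arithmetic yourself, here's a quick sketch--mine, not anything from the report--assuming both groups are normally distributed with a standard deviation of 1, means one standard deviation apart, and "improvement" modeled as each group's mean rising relative to a fixed cut score.)

```python
# Sketch of the arithmetic above (my assumptions, not the report's):
# both groups normal with SD = 1, means 1 SD apart, and "improvement"
# modeled as each group's mean rising relative to a fixed cut score.
from scipy.stats import norm

def pct_failing(group_mean, cut_score, sd=1.0):
    """Percent of a normal(group_mean, sd) population scoring below the cut."""
    return 100 * norm.cdf(cut_score, loc=group_mean, scale=sd)

cut = 0.0                        # the passing cut score, held fixed
low_mean, high_mean = 0.0, 1.0   # low group centered on the cut; high group 1 SD above

# First test: 50% vs. ~16% failing, a gap of ~34 points
gap_before = pct_failing(low_mean, cut) - pct_failing(high_mean, cut)

# Retest after both groups improve by the same 1 SD: ~16% vs. ~2% failing, a gap of ~14 points
gap_after = pct_failing(low_mean + 1.0, cut) - pct_failing(high_mean + 1.0, cut)

# The slower-progress case: the low group only improves until 35% fail (about 0.39 SD of growth)
gap_slower = 35.0 - pct_failing(high_mean + 1.0, cut)

print(round(gap_before), round(gap_after), round(gap_slower))  # 34 14 33
```

The non-linearity of the normal curve's tails does all the work here: the same one-standard-deviation improvement moves the high performing group from the 16% tail to the 2% tail, but moves the low performing group from 50% failing down to 16%.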

This is statistics 101 and the "expert panel" who wrote this report should be ashamed of themselves for such misleading hackery.

By analyzing the first sentence, we've established that the "expert panel" doesn't understand simple statistics. Now let's establish that they also don't know anything about real education research by examining the second sentence.

There is a lack of evidence to demonstrate schools alone can ensure that their historically disadvantaged populations can progress more quickly than more advantaged populations.


Could it be that the "expert panel" has never heard of the largest and most expensive educational experiment in US history, Project Follow Through, in which one of the interventions, the Direct Instruction Model, was able to get "historically disadvantaged populations" to progress more quickly than more advantaged populations? In fact, the Direct Instruction Model got the "historically disadvantaged populations" caught up, or nearly caught up, to the more advantaged populations by the end of the third grade.


No, it couldn't be. Smart guys like these couldn't have missed something like that.

Ironically, by the time we get to the third sentence:


Expecting schools to accomplish this feat without markedly increased support is likely to continue the NCLB problem of causing harmful educational consequences resulting from educators' desperate attempts to meet NCLB mandates without the resources to do so.

we finally get something for which no empirical support exists.

There is no evidence that schools currently lack the resources needed to adequately educate disadvantaged children. Furthermore, there is no evidence that increasing resources will result in increased student achievement.

You'd be hard-pressed to find a paragraph that is more wrong than this one, which is why I say I'm on a fool's errand analyzing this report.

4 comments:

TurbineGuy said...

People are idiots.

I just posted about how any day of the year you can read the same fill-in-the-blank article in some newspaper on the "achievement gap".

If you want logic, play sudoku, you won't find much of it in education policy.

Anonymous said...

Gaps will only close if students who are behind progress more quickly than those who are not.

While you demonstrated the absurdity of this statement, treatments having an effect size of 1.0 are few and far between.

What do you think of an ITBS (better: Stanford Achievement Test)-based value-added measure? Tennessee still seems to generate good information on school and teacher quality with TVAAS.

KDeRosa said...

Eric, you can pull off the same statistical chicanery, though not as dramatic, with smaller effect sizes.

I don't think there is anything wrong with using the ITBS or SAT-10 as your measurement tool. It saves lots of money and you don't have to worry about validity issues. The problem is that the cut scores are too high for states to make it appear like most of their students are proficient.
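(A quick sketch of my own to back that up, using the same assumptions as the example in the post, but with both groups improving by only half a standard deviation: the gap still shrinks, from about 34 points to about 24.)

```python
# Same assumptions as the sketch in the post: normal groups, SD = 1, means 1 SD apart,
# fixed cut score -- but now both groups improve by only 0.5 SD.
from scipy.stats import norm

pct_failing = lambda mean, cut=0.0: 100 * norm.cdf(cut, loc=mean)

gap_before = pct_failing(0.0) - pct_failing(1.0)   # ~34 points
gap_after = pct_failing(0.5) - pct_failing(1.5)    # ~24 points
print(round(gap_before), round(gap_after))          # 34 24
```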

Anonymous said...

Gaps will only close if students who are behind progress more quickly than those who are not.

From a value-added perspective, this statement (though demonstrably wrong) doesn't bother me. What's wrong if direct-instruction students "progress more quickly"?

Value added data is adequate to protect federal interests in education. The multiple measures advocates appear to want their research agendas written into law (along with FairTest's political agenda). We'll see if Dr. Dorn responds to questions about the effectiveness of multiple measures for ensuring educational equity.