September 27, 2006

EdWeek Spins Everyday Math Research

As I wrote earlier this month, the federal What Works Clearinghouse (WWC) recently evaluated the research base behind Everyday Math (EM). Here's what the WWC found.

Out of the 61 studies on EM:

none were found to fully meet the WWC's evidentiary standards;
57 did not meet the standards; and
4 were quasi-experimental studies that met the standards "with reservations."

Out of the 4 that met the standards with reservations, only 1 had statistically significant results; 2 did not, and 1 had indeterminate results. Two of the studies were conducted by researchers with ties to EM (Carroll and Riordan).

Here's my conclusion from my earlier post:

The WWC's generous conclusion:
"The WWC found Everyday Mathematics to have potentially positive effects on mathematics achievement"
The average effect size was 0.31, or +12 percentile points. This is a small effect size that is barely educationally significant.

That's certainly the equivalent of putting lipstick on a pig, since those "potentially positive effects" come from three studies with statistically insignificant and/or indeterminate results, and the one, and only, "study" with statistically significant results came from a biased researcher [and] contained serious methodological defects.

You can put lipstick on a pig, but a pig is still a pig.

EdWeek reported ($) on the WWC results too and had a slightly different take:
A popular K-6 math curriculum has shown promise for improving student achievement but needs more thorough study before it can be declared effective, a federal research center reported last week.

Everyday Mathematics, which is used by 3 million U.S. students in 175,000 classrooms, was deemed to raise students’ test scores by an average of 12 percentile points in a review of four studies reanalyzed by the What Works Clearinghouse at the U.S. Department of Education.

Those gains are “pretty strong,” said Phoebe Cottingham, the commissioner of the National Center for Education Evaluation and Regional Assistance, which oversees the clearinghouse. But she said the curriculum could not receive the clearinghouse’s top ranking because none of the research conducted on it was a large-scale study that compared achievement among students who were randomly assigned either to the program or to a control group.

If those results are pretty strong, I'd hate to see what pretty weak looks like. And I love this spin by EdWeek: "she said the curriculum could not receive the clearinghouse’s top ranking because none of the research conducted on it was a large-scale study." A more accurate way of saying this, in scientific terminology, is that the small sample sizes in those studies did not produce statistically significant results. That means we cannot rule out that the differences in test scores were the result of chance, and that you can't reliably draw any conclusions from these studies, except perhaps that it might be worthwhile to conduct a larger-scale experiment.
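The point about sample size can be made concrete with a back-of-the-envelope calculation. In the sketch below, the only number taken from the WWC report is the 0.31 effect size; the group sizes and the `two_sided_p` helper are hypothetical illustrations, not anything from the studies themselves. The same modest effect fails a conventional two-sided test with small groups but passes it with large ones:

```python
from math import erf, sqrt

def two_sided_p(effect_size, n_per_group):
    """Approximate two-sided p-value for a two-group comparison with a
    given standardized effect size and equal group sizes, using the
    normal approximation to the two-sample z-test."""
    z = effect_size / sqrt(2 / n_per_group)
    phi = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z
    return 2 * (1 - phi)

# Same 0.31 effect size, different (hypothetical) study sizes:
print(two_sided_p(0.31, 30) > 0.05)   # True: small study, not significant
print(two_sided_p(0.31, 300) < 0.05)  # True: larger study, significant
```

In other words, "not statistically significant" in a small study is exactly the "can't rule out chance" situation described above.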

Let's see how EdWeek described the results:

Of those four, three studies found “positive” effects, but just one detected improvements in students’ math achievement that were considered statistically significant. The fourth study found no effect on test scores.

Based on those results, the report said the curriculum has “potentially positive effects,” the second-highest category on its ranking scale.

The one study that produced statistically significant positive effects was conducted by a researcher with ties to EM. Ya think that was worth mentioning, especially since a) it's the only study ever on EM that showed statistically significant positive results, b) that researcher refuses to release his data for independent analysis, and c) the study has been harshly criticized by other math experts as having serious methodological flaws (see my previous post).

One mathematics professor attempted to get the research data. Here's how he described what happened:
I am quite familiar with the (also aging, 2001) Riordan & Noyce paper because I tried to get exactly the information you snarkily request. In fact, I talked to the authors personally and they flatly refused to divulge any such information. That would be "unprofessional", you know. Without it, for all we really know, they sorted on performance and then looked for demographically matching schools. QED. In spite of the sarcasm, not wrong, just meaningless; its validity is unknowable.
That's the extent of the statistically valid positive research on EM: one study over the course of nearly 20 years, conducted by a potentially biased researcher who won't release his data. And even then, the effect size was still small (0.31), barely above the 0.25 threshold for educational significance.
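For reference, the "+12 percentile points" and the 0.31 effect size are two views of the same number: under a normal model, an effect size d moves the average treated student from the 50th percentile of the control distribution up to roughly the Φ(d)·100th. A minimal Python sketch of that conversion (the function name is mine; 0.31 is the WWC's figure):

```python
from math import erf, sqrt

def percentile_gain(d):
    """Percentile-point gain of the average treated student over the
    control-group median, for standardized effect size d (normal model)."""
    phi = 0.5 * (1 + erf(d / sqrt(2)))  # standard normal CDF at d
    return round(phi * 100) - 50

print(percentile_gain(0.31))  # 12, matching the WWC's "+12 percentile points"
```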

“The ranking underscores the stellar results [Everyday Mathematics] has had in the marketplace for over 20 years,” said Mary Skafidas, a spokeswoman for the McGraw-Hill Cos.

Everyday Mathematics is used widely across the country, including in most of the elementary schools in the 1.1 million-student New York City public school system, the nation’s largest.

Kinda sad, isn't it?

In 1999, a federal panel of curriculum experts named Everyday Mathematics one of five math curricula with “promising” potential based on how well their materials aligned with national math standards. That list was later criticized by a group of mathematicians because they say programs it recognized failed to teach students basic math skills.

This underscores the charade that is the NCTM. What were they basing their conclusion that EM has "promising potential" on? Certainly not the research. And, certainly not the test scores of the millions of students who have passed through EM.

The current effort to evaluate programs’ effectiveness is hampered by a lack of high-quality studies published in academic journals and other places, some analysts say.

“It’s underwhelming the number of good studies done in math,” Ms. Cottingham said. “It’s a reflection on the past state of education research.”

And the current state too. It is a national disgrace.

4 comments:

Anonymous said...

I suspect whoever wrote the Edweek article used Everyday Math in school, since he doesn't seem to be able to read numbers. As for the math prof trying to get the data, how can an academic ethically defend refusing to share his data?

Anonymous said...

I unfortunately read your post AFTER I wrote one of my own that makes many of the same points. My post will be published tomorrow. Thanks for helping me to understand this issue. It is amazing to me how little research is done on curricular effectiveness. It makes me wonder if such research is really possible.

KDeRosa said...

Look forward to reading your post tomorrow, Mark.

Good research is done all the time by the few guys, e.g., Engelmann and Slavin, whose curricula actually work.

Anonymous said...

I am a teacher who uses Everyday Math in 5th grade and got great results.
Compare data on schools that do and do not use this program and then see how you feel. Don't just listen to this idiot's opinion; look at real research and form your own.