September 27, 2006

Here's a Reading First Story You Can Bet Won't Be Reported

From Edweek ($):

The federal Reading First initiative has led to improved reading instruction, assessment, and student achievement in schools taking part in the $1 billion-a-year grant program, as well as in some of the nonparticipating schools in districts that have widely adopted its principles, a study released last week concludes.

"Participating schools and districts have made many changes in reading curriculum, instruction, assessment, and scheduling," the report by the Washington-based Center on Education Policy says. "Many districts have expanded Reading First instructional programs and assessment systems to non-Reading First schools."

Of course it's all a bunch of nonsense, but journalists are stupid. How do I know?
Titled "Keeping Watch on Reading First," the Sept. 20 report by the research and advocacy group is based on a 2005 survey of all 50 states and a nationally representative sample of some 300 school districts in the federal Title I program, as well as case studies of 38 of those districts and selected schools.
That's how. Surveys of educators are generally unreliable. So, even though I like the results, I'm not about to start touting survey "research" just because I happen to agree with it. Hard data won't be available until next year.

Although apparently some preliminary data do exist:

"When I review the data from our state, we see huge [growth] in achievement" in Reading First schools, Diane Barone, a professor of literacy at the University of Nevada, Reno, wrote in an e-mail.

Ms. Barone, a board member of the International Reading Association, said that 21 of the 27 schools in the Reading First program in Nevada made adequate yearly progress under the No Child Left Behind law in English language arts for the 2005-06 school year, compared with just six the previous school year.

Then we have the obligatory concluding remark by this jackass:

But others question whether Reading First is pushing too narrow an approach to reading instruction. With more attention on basic skills, for example, reading comprehension may be suffering, said Maryann Manning, an IRA board member who is a professor of education at the University of Alabama at Birmingham.

"We have every terrible commercial reading program published in use in our Reading First schools," she wrote in an e-mail. "Narrowing the achievement gap on letter identification and the number of sighted words read in isolation is of no value on reading comprehension."

Actually, it has tremendous value, because if you can't identify letters and read words in isolation, you'll be precluded from reading connected text later on, much less comprehending it. Idiot.

3 comments:

Edspresso said...

"Surveys of educators are generally unreliable. So, even though I like the results, I'm not about to start touting survey "research" just because I happen to agree with it. Hard data won't be available until next year."

Unreliable in what context? As to what works, or as to what teachers think? If the survey is a scientific poll of a random sample, I don't see why it would be unreliable. However, if you're saying it's unreliable for telling whether a program is working (i.e., we shouldn't bank on a specific program just because lots of teachers like it), then I'm with you all the way.

KDeRosa said...

Unreliable in that they measure qualitatively what should be measured quantitatively. A survey is a reliable measure of educators' opinions, but an unreliable measure of actual student achievement, especially once you factor in the confirmation bias problem.

rightwingprof said...

Surveys are generally unreliable, at least as statistically reliable tests go, whether of educators or anyone else.