September 28, 2006

So Which Reading First Scandal Is It?

I remain confused by the Reading First scandal. Probe an inch below the surface and you quickly see that the OIG's findings contradict reality. (See my handy-dandy Reading First glossary for a description of the acronyms involved.)

The OIG audit was initiated, in part, by Bob Slavin, co-creator of Success for All (SfA), who complained that his research-based reading program was being excluded from Reading First funds. Here's how he put it in a press release from earlier this year:

Programs Excluded by Reading First Top List of Research-Based Programs

Two programs largely excluded by the federal Reading First program received the highest rating for research on reading outcomes given in a recent review. The review, issued by the Comprehensive School Reform Quality Center (CSRQ) at the respected American Institutes for Research, identified Success for All and the Direct Instruction Full Immersion program as having the strongest evidence of effectiveness for reading among 22 comprehensive school reform programs.

Despite the strong emphasis on evidence of effectiveness in the Reading First legislation, intended to help low-achieving children in grades 1-3 learn to read, the $1 billion a year Reading First program has instead emphasized traditional basal textbooks lacking evidence of effectiveness. Only 3% of Reading First grants have gone to schools using either of these programs.

Robert Slavin, a Johns Hopkins University researcher and Chairman of the non-profit Success for All Foundation said, "The CSRQ report used rigorous standards. It found 31 studies of Success for All and 10 of Direct Instruction that met its standards. If the same standards had been applied to the basal textbooks favored by Reading First, not one would have had more than a single qualifying study. Most have no research base at all."

The Success for All Foundation and other reading reform organizations have submitted complaints to the U.S. Department of Education's Inspector General, and the General Accountability Office is investigating at the request of Congress. The investigations are looking into allegations that the Department of Education failed to implement Reading First as intended and that there are serious conflicts of interest, as key Reading First contractors are authors of the commercial programs favored by Reading First. (emphasis mine)

The critical fact here is that only 3% of the Reading First grants have gone to SfA and DI, the only reading programs that, in Slavin's opinion, have sufficient scientifically based reading research (SBRR) support. As Slavin makes clear in this speech given to the Education Writers Association, he believes that there were Reading First panelists who had ties to the commercial basal textbooks, and that it was these commercial basal programs that were impermissibly allowed into the Reading First program. Both SfA and DI would have been adversely affected by allowing these non-SBRR programs in, and, in fact, they were. Given the option between selecting a real SBRR reading program, like SfA or DI, and the more warm and fuzzy commercial basal programs, states overwhelmingly chose the latter.

One real scandal is why the state departments of education excluded from their RF applications the only two reading programs, SfA and DI, that have been validated as effective in teaching at-risk readers to read. Owen Engelmann, who is associated with DI, echoed Slavin's concerns recently when he wrote this on an educators' mailing list (no link):
The intent of the Reading First legislation was to provide funding to at-risk schools for implementing Reading Programs with strong scientific evidence supporting their effectiveness. There are only 3 reading programs with strong scientific evidence supporting their effectiveness -- Reading Mastery, Open Court and Success for All. Reading Mastery has more scientific data supporting its effectiveness with at-risk students than the other 2 programs combined...

Since Reading Mastery is the most effective program, if it didn't make a state's Core list, then the state couldn't have been operating within the intent of the law. Why aren't those states being investigated for incompetence, negligence or misconduct for not selecting the most effective program? Instead, the Attorney General's Office is persecuting the Federal Reading First Office for enforcing the law.
As the OIG audit makes clear, DoE and the Reading First panelists had to play hardball with the states in order to enforce the RF law. The states were not only excluding the only two legitimate reading programs; they were also trying to get funding for their existing failed whole language/balanced literacy reading programs.

This hardball playing would come back to haunt DoE.

The OIG audit was not directed to Slavin's complaints (which he confirmed in this recent press release) that both SfA and DI were excluded from RF funding, that commercial basal textbook publishers were impermissibly permitted funding, or that states were passing over SfA and DI in favor of reading programs that were not research based. The facts support these contentions. Whether there is actually a scandal to be had is unknown -- it depends on how broadly or narrowly the provision defining SBRR can be interpreted. Certainly, however, at-risk children are being adversely affected. Yet, no one seems to care about this angle. No, what OIG did was mangle the facts to fit a political agenda to attack the Bush administration and the Republicans two months before the mid-term elections.

What OIG did was capitalize on DoE's need to play hardball with the states and take advantage of the reading-war battle raging behind the scenes. E-mails were selected showing DoE fighting with the states to exclude their non-SBRR whole language programs (which arguably is not only permissible but the very intent of the Reading First statute), and these were bootstrapped to imply that SfA was also impermissibly excluded from RF funding by DoE, when it appears that it was the states that were, arguably impermissibly, doing the excluding in their applications (and, as Slavin points out, they were also excluding DI).

Then OIG capitalized on the fact that almost every RF panelist had connections to one or more commercial programs and concocted a conflict-of-interest test, "significant professional associations," that would have ensnared almost every RF panelist. But OIG didn't use this contrived conflict to go after the publishers of the commercial basal textbooks that Slavin originally complained of. No, they had bigger fish to fry.

There was one reading program publisher, McGraw-Hill, publishing a reading program under consideration, DI, that had political connections to the Bush administration. In addition, some of the reading researchers on the RF panels had professional associations with DI. So OIG used the DI connection to tie McGraw-Hill to the Bush administration. And thus a scandal was born. Never mind that Slavin himself believed that DI rightfully should have been included in the Reading First program, or that DI received less than 3% of the RF grants, which means it didn't benefit from the "stacked panels" alleged by OIG.

It is ironic that, given the choice between going after the scandal that affects school children and the scandal that hurts the Bush administration, OIG chose the latter.

September 27, 2006

Here's a Reading First Story You Can Bet Won't Be Reported

From Edweek ($):

The federal Reading First initiative has led to improved reading instruction, assessment, and student achievement in schools taking part in the $1 billion-a-year grant program, as well as in some of the nonparticipating schools in districts that have widely adopted its principles, a study released last week concludes.

"Participating schools and districts have made many changes in reading curriculum, instruction, assessment, and scheduling," the report by the Washington-based Center on Education Policy says. "Many districts have expanded Reading First instructional programs and assessment systems to non-Reading First schools."

Of course it's all a bunch of nonsense, but journalists are stupid. How do I know?
Titled "Keeping Watch on Reading First," the Sept. 20 report by the research and advocacy group is based on a 2005 survey of all 50 states and a nationally representative sample of some 300 school districts in the federal Title I program, as well as case studies of 38 of those districts and selected schools.
That's how. Surveys of educators are generally unreliable. So, even though I like the results, I'm not about to start touting survey "research" because I happen to agree with it. Hard data won't be available until next year.

Although apparently some preliminary data does exist:

"When I review the data from our state, we see huge [growth] in achievement" in Reading First schools, Diane Barone, a professor of literacy at the University of Nevada, Reno, wrote in an e-mail.

Ms. Barone, a board member of the International Reading Association, said that 21 of the 27 schools in the Reading First program in Nevada made adequate yearly progress under the No Child Left Behind law in English language arts for the 2005-06 school year, compared with just six the previous school year.

Then we have the obligatory concluding remark by this jackass:

But others question whether Reading First is pushing too narrow an approach to reading instruction. With more attention on basic skills, for example, reading comprehension may be suffering, said Maryann Manning, an IRA board member who is a professor of education at the University of Alabama at Birmingham.

"We have every terrible commercial reading program published in use in our Reading First schools," she wrote in an e-mail. "Narrowing the achievement gap on letter identification and the number of sighted words read in isolation is of no value on reading comprehension."

Actually, it has tremendous value, because if you can't do it you'll be precluded from reading connected text later on, much less comprehending it. Idiot.

The Reading First Glossary

I've gotten a few requests to construct a glossary of all the acronyms involved in the Reading First scandal. Here it is:

DoE: The U.S. Department of Education. Currently run by a Republican administration, though staffed with lots of career bureaucrats who tend to be Democrats. It currently administers the No Child Left Behind Act of 2001 (NCLB).

OIG: The Office of the Inspector General. Is the Department of Education's internal watchdog that looks out for waste, fraud, and abuse in the Department. The bureaucrats running OIG are likely to be career government employees, i.e., Democrats.

RF: Reading First. Is part of NCLB and is intended to provide grant money to states that adopt, for at-risk schools, K-3 reading programs backed by Scientifically Based Reading Research (SBRR) supporting their effectiveness. To accomplish this, states submitted applications which were reviewed by panels of reading research experts to determine whether the curricula and practices were aligned with SBRR. There is a high correlation between researchers with SBRR expertise and people who are involved in writing SBRR reading programs and materials. Most panelists had professional connections with one or more reading programs under review.

SBRR: Scientifically Based Reading Research. It is defined in section 1208(6) of NCLB. It is a bunch of legalese gobbledygook (a technical term). Here's what you need to know. Only three reading programs have actual SBRR -- Success for All (SfA), Direct Instruction (DI), and Open Court (OC). SfA has 20 years worth of SBRR. DI has 40 years worth. OC has limited SBRR. All the remaining commercially available reading curricula lack SBRR; however, some are more closely aligned with the reading research than others, though all programs claim to have SBRR even though they don't. It was the panelists' job to exclude the bad reading programs that were not consistent with SBRR. What eventually happened was that the panelists permitted many of the phonics-based commercial basal reading programs and excluded the worst of the whole language/balanced literacy reading programs, such as Reading Recovery.

SfA: Success For All. Is a successful reading program with 20 years worth of SBRR behind it. It received the highest ranking (one of only two) from the American Institutes for Research based on a metastudy of comprehensive school reform research. Apparently, SfA missed being included in the first round of state applications since the states were only notifying commercial reading program publishers. There is also some question whether SfA might have been required to conform to a three-tiered structure (core program, supplemental materials, and remediation materials) and failed to conform. In any event, SfA did make it onto state lists and got approval in about 28 states. Bob Slavin, the creator of SfA, didn't like being excluded in so many states and lodged the initial complaint that started the OIG investigation of RF.

DI: Direct Instruction. Is an instructional system developed by Siegfried Engelmann that includes an effective reading program with 40 years of SBRR behind it. It also received the highest ranking (one of only two) from the American Institutes for Research based on a metastudy of comprehensive school reform research. Unlike SfA, Direct Instruction has a commercially available reading program, Reading Mastery, published by SRA McGraw-Hill, that was eligible to be included in the RF program. Direct Instruction also has consulting companies, e.g., NIFDI, that manage comprehensive school implementations.

OC: Open Court. Is a commercially available reading program that is backed by limited SBRR. It is also published by SRA McGraw-Hill.

McGraw-Hill is the publisher of the only two reading programs, DI and OC, that were both backed by validated SBRR and were eligible for RF funding. The chairman emeritus of McGraw-Hill is a political supporter of the Republican Bush administration.

Update: Updated the SfA description based on new information. 3:43 pm on 9/28.

EdWeek Spins Everyday Math Research

As I wrote earlier this month, the federal What Works Clearinghouse (WWC) recently evaluated the research base behind Everyday Math (EM). Here's what the WWC found.

Out of the 61 studies on EM:

none were found to fully meet the WWC's evidentiary standards;
57 did not meet the standards; and
4 were quasi-experimental studies that met the standards "with reservations."

Out of the 4 that met the standards with reservations, 3 did not have statistically significant results. 1 had indeterminate results. Two of the studies were conducted by researchers with ties to EM (Carroll & Riordan).

Here's my conclusion from my earlier post:

The WWC's generous conclusion:
"The WWC found Everyday Mathematics to have potentially positive effects on mathematics achievement"
The average effect size was 0.31, or +12 percentile points. This represents a small effect size that is barely educationally significant.

That's certainly the equivalent of putting lipstick on a pig, since those "potentially positive effects" come from three studies with statistically insignificant and/or indeterminate results, and the one, and only, "study" with statistically significant results came from a biased researcher [and] contained serious methodological defects.

You can put lipstick on a pig, but a pig is still a pig.
Edweek reported ($) on the WWC results too and had a slightly different take:
A popular K-6 math curriculum has shown promise for improving student achievement but needs more thorough study before it can be declared effective, a federal research center reported last week.

Everyday Mathematics, which is used by 3 million U.S. students in 175,000 classrooms, was deemed to raise students’ test scores by an average of 12 percentile points in a review of four studies reanalyzed by the What Works Clearinghouse at the U.S. Department of Education.

Those gains are “pretty strong,” said Phoebe Cottingham, the commissioner of the National Center for Education Evaluation and Regional Assistance, which oversees the clearinghouse. But she said the curriculum could not receive the clearinghouse’s top ranking because none of the research conducted on it was a large-scale study that compared achievement among students who were randomly assigned either to the program or to a control group.

If those results are pretty strong, I'd hate to see what pretty weak is. And I love this spin by Edweek: "she said the curriculum could not receive the clearinghouse’s top ranking because none of the research conducted on it was a large-scale study." A more accurate way of saying this is that the small sample sizes in those studies did not produce statistically significant results. That means we cannot discount the possibility that the differences in test scores were the result of chance. This means that you can't reliably draw any conclusions from these studies, except maybe that it might be worthwhile to conduct a larger-scale experiment.
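The effect size of 0.31 and the "+12 percentile points" are two views of the same statistic, and the significance problem is largely a function of sample size. Here's a minimal back-of-the-envelope sketch of both points, assuming normally distributed scores, equal-sized groups, and a simple two-sample z-test; this is my simplification, not the WWC's actual reanalysis:

```python
# Back-of-the-envelope check of the numbers above, standard library only.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

EFFECT_SIZE = 0.31  # WWC's reported average effect size for Everyday Math

# 1) Effect size -> percentile gain: the average program student lands at
#    the normal_cdf(d) percentile of the control distribution.
percentile_gain = (normal_cdf(EFFECT_SIZE) - 0.5) * 100
print(f"+{percentile_gain:.0f} percentile points")  # roughly +12

# 2) The same effect size can fail significance with a small sample:
#    two-sided p-value from a two-sample z-test, where SE = sqrt(2/n).
def two_sample_p(d: float, n_per_group: int) -> float:
    z = d / sqrt(2.0 / n_per_group)
    return 2.0 * (1.0 - normal_cdf(z))

print(f"n=30 per group:  p = {two_sample_p(EFFECT_SIZE, 30):.2f}")
print(f"n=200 per group: p = {two_sample_p(EFFECT_SIZE, 200):.3f}")
```

Run as written, the sketch gives roughly +12 percentile points, p ≈ 0.23 with 30 students per group, and p ≈ 0.002 with 200 per group, which is exactly why a 0.31 effect can be reported as "positive" yet remain statistically insignificant in small studies.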

Let's see how Edweek described the results:

Of those four, three studies found “positive” effects, but just one detected improvements in students’ math achievement that were considered statistically significant. The fourth study found no effect on test scores.

Based on those results, the report said the curriculum has “potentially positive effects,” the second-highest category on its ranking scale.

The one study that produced the positive effects was conducted by a researcher with ties to EM. Ya think that was worth mentioning, especially since a) it's the only study ever on EM that showed statistically significant positive results, b) that researcher refuses to release his data for independent analysis, and c) the study has been harshly criticized by other math experts as having serious methodological flaws (see my previous post)?

One mathematics professor attempted to get the research data. Here's how he described what happened:
I am quite familiar with the (also aging, 2001) Riordan & Noyce paper because I tried to get exactly the information you snarkily request. In fact, I talked to the authors personally and they flatly refused to divulge any such information. That would be "unprofessional", you know. Without it, for all we really know, they sorted on performance and then looked for demographically matching schools. QED. In spite of the sarcasm, not wrong, just meaningless; its validity is unknowable.
That's the extent of the statistically valid positive research on EM: one study over the course of nearly 20 years, conducted by a potentially biased researcher who won't release his data. And even then, the effect size was still small (0.31) and barely educationally significant (the usual threshold being 0.25).

“The ranking underscores the stellar results [Everyday Mathematics] has had in the marketplace for over 20 years,” said Mary Skafidas, a spokeswoman for the McGraw-Hill Cos.

Everyday Mathematics is used widely across the country, including in most of the elementary schools in the 1.1 million-student New York City public school system, the nation’s largest.

Kinda sad isn't it?

In 1999, a federal panel of curriculum experts named Everyday Mathematics one of five math curricula with “promising” potential based on how well their materials aligned with national math standards. That list was later criticized by a group of mathematicians because they say programs it recognized failed to teach students basic math skills.

This underscores the charade that is the NCTM. What were they basing their conclusion that EM has "promising potential" on? Certainly not the research. And, certainly not the test scores of the millions of students who have passed through EM.

The current effort to evaluate programs’ effectiveness is hampered by a lack of high-quality studies published in academic journals and other places, some analysts say.

“It’s underwhelming the number of good studies done in math,” Ms. Cottingham said. “It’s a reflection on the past state of education research.”

And the current state too. It is a national disgrace.

September 26, 2006

The Political Theater of Congressman Miller

What has happened to the once proud Democratic party?

Seeking to take advantage of the OIG's "scathing" audit of the DoE's handling of the Reading First initiative, Democratic Congressman George Miller, the senior Democrat on the House Education and the Workforce Committee, jumped out on an early offensive:
“Corrupt cronies at the Department of Education wasted taxpayer dollars on an inferior reading curriculum for kids that was developed by a company headed by a Bush friend and campaign contributor,” said Miller. “Instead of putting children first, they chose to put their cronies first. Enough is enough. President Bush and Secretary Spellings must take responsibility and do a wholesale housecleaning at the Education Department.”
That "inferior reading curriculum for kids" is Direct Instruction (DI), which is published by McGraw-Hill, whose chairman emeritus has made political contributions to President Bush and the Republicans.

Here's what happened. The DoE hired reading experts to serve as panelists to determine whether the reading curricula selected by each state applying for Reading First (RF) funds were backed by Scientifically Based Reading Research (SBRR), as required by the statute. Every panelist necessarily had professional contacts and associations (though supposedly not financial ties) with one or more reading programs. They were reading experts, after all. Some of the panelists, including the head of the RF program, had professional contacts with DI. As part of the review process, the panelists rejected the applications of many states that included reading programs lacking SBRR. The DoE and the panelists were aggressive in excluding these unscientifically based reading programs, as required by RF. This resulted in the hurt feelings of the owners of those SBRR-free reading programs and of the states that wanted to continue to use them. Naturally, complaints followed, and an investigation by DoE's in-house watchdog, OIG, ensued.

Since excluding the unscientifically based reading programs was a feature, not a bug, of RF, it is unlikely that the OIG's final audit would have been very exciting. It's difficult to feel compassion for the purveyors and users of a failed system for teaching children to read, a system that's produced millions of non-readers and poor readers while feeding at the government teat.

There was, however, one white hat complainant among the black hats: Bob Slavin, the creator of Success for All (SfA), a legitimate, highly successful reading program with twenty years worth of SBRR behind it. Slavin complained that the DoE had been too lenient and had allowed many popular commercial phonics-based basal programs that lacked SBRR to be selected by the states and receive RF funding. These basal reading programs were more palatable to the states than Slavin's more intense (and effective) SfA. The result was that many states elected not to put SfA on their RF-approved lists. This meant that any RF school in those states would not be able to use SfA. The problem was that some schools destined to become RF schools were already using SfA and would have to discontinue its use. Understandably, Slavin was upset.

Fast forward to last Friday, when OIG issued its final report. OIG used the DI connection to reach the Bush administration: six panelists and the head of the RF program were involved in the RF selection process and connected to DI; DI's publisher, McGraw-Hill, benefited from the selection process; and McGraw-Hill's chairman emeritus has contributed political funds to President Bush and the Republicans.

It didn't take long for Congressman Miller to pounce:

"The inspector-general's report raises serious questions about whether Education Department officials violated criminal law, and those questions must be pursued by the Justice Department," said Representative George Miller of California, the top-ranking Democrat on the House Education Committee.

Miller, in his statement, called the audit part of a pattern in which the Education Department under President George W. Bush "has repeatedly run afoul of ethical standards."

Miller's office also cited an independent analysis published last year by the Washington-based American Institutes for Research that found the program favored by Reading First directors, a product of the McGraw-Hill Companies Inc., was one of only two programs to receive AIR's highest rating.

The report issued last November by the American Institutes for Research gave only DI and the Baltimore-based Success for All program its top rating, "moderately strong evidence of positive effects," out of 22 popular comprehensive elementary school reform models. Miller's office cited the report to highlight Success for All as a quality alternative.

Huh? Let me see if I get this right. Miller claims that DoE systematically excluded Slavin's SfA and relies on the independent AIR report to show that SfA is an SBRR reading program. According to Miller, DoE instead forced states to adopt DI -- "an inferior reading curriculum for kids that was developed by a company headed by a Bush friend and campaign contributor." Yet the very AIR report that gave SfA its top rating also gave DI its top rating; DI is the only other reading program to receive AIR's top rating. What an idiot.

I suppose Miller thought no one was going to actually read the AIR report. Nor do I suppose Miller believed anyone would actually analyze the OIG report. None of the OIG's findings suggest that DoE was trying to exclude SfA; rather, DoE was fighting the good fight trying to keep out the non-SBRR whole language programs. Moreover, the OIG report failed to show any instances of DI actually being forced on states or favored by DoE. Even if DI were being pushed by DoE or the six DI panelists, by Congressman Miller's own admission DI was more than qualified under RF. If anything, DI and SfA (and arguably another McGraw-Hill program, Open Court) were the only reading programs truly qualified to receive RF funds.

Not surprisingly, the MSM has jumped all over the scandal, and Miller and the Democrats will undoubtedly gain some political capital. One possible fallout from all this is that some of those failed non-SBRR reading programs might now get approved for RF funds, to the detriment of millions more kids. But I suppose it's worth it for Congressman Miller to score his political points, though it does make this statement a bit ironic:
"Everyone at the Department of Education who was involved in perpetrating this fraud on school districts should be fired – not suspended, not reassigned, not admonished, but fired. This was not an accident. This was a concerted effort to corrupt the process on behalf of partisan supporters, and taxpayers and schoolchildren are the ones who got harmed by it."
Yes, indeed, schoolchildren will be the ones getting harmed again.

Shameless Self Promotion

Edspresso is running an article on Philly's High Tech High that I coauthored with RightWingProf of Right Wing Nation. The article takes a hard, some might say brutal, look at HTH, where everything is high tech except the low-tech education. Unlike the MSM journalists who covered the opening of HTH, we were less than impressed with its dubious technology and pedagogy. Two words best describe my disdain for HTH: expensive failure.

See my previous posts on HTH here, here and here.

New blogger Rory of Parentalcation also has some good articles.

RightWingProf's posts here.

Beware What You Wish For

Bob Slavin, creator of SfA, must be horrified by the findings of the OIG audit. The New York Times reported that the OIG investigation "was opened last year after the inspector general received accusations of mismanagement and other abuses at the department" from certain unnamed publishers and from Slavin, who is named and is quoted as saying:
The department has said at least 10,000 times that they had no favored reading programs, and this report provides clear evidence that they were very aggressively pressing districts to use certain programs and not use others
Since the OIG report only singles out panelists who were associated with DI, the natural inference is that DoE was aggressively pushing DI. The reader is left with the impression that Slavin is upset with the DI people. I believe this impression is intentional on the part of the NYT and the other MSMers who've reported the scandal.

But look at what Slavin had to say at the Education Writers Association annual meeting in New Orleans on June 2, 2006.
I work at Johns Hopkins University, and am Chairman of the non-profit Success for All Foundation. Our Success for All reading program is one of two reading programs that have been extensively evaluated and found to be effective. The other is Direct Instruction. Both have been successfully evaluated in studies with random assignment to conditions, and the majority of the dozens of studies of both programs were done by third-party researchers. (You have summaries of research on Success for All in your packet.) Hardly anyone denies that Success for All and Direct Instruction have exceptionally strong evidence of effectiveness.

Naturally, advocates for scientifically proven practice assumed that a significant portion of the $1 billion a year in Reading First would go to help schools adopt programs with proven benefits for children. We were idiots. Nothing of the kind took place.

What did happen is that at least 95% of federal money was spent to implement the same commercial basal textbooks that most schools would have used had Reading First never existed. Worse, Reading First has actively pushed out programs with strong evidence of effectiveness, in favor of commercial programs and professional development programs lacking even a shred of evidence.

Reading First has turned out to be a giant step backward in reading reform, not just because it has nearly destroyed research-proven programs, but because it has made a mockery of the idea that scientific evidence should guide educational decisions for vulnerable children.

... When the legislation passed, they realized that they faced a dilemma. Only two programs, Success for All and Direct Instruction, had extensive evidence of effectiveness. One of the five top-selling commercial basals, Open Court, also had a limited research base. The others had nothing.
Slavin was complaining that Reading First was being overinclusive by approving too many programs that did not have scientifically based reading research support. Zig Engelmann, the creator of DI, made the exact same criticism in "How Scientific is Reading First?" back in January.

Slavin goes on to say that DoE made the decision to soften the requirements of RF to permit the inclusion of other reading programs that were consistent with the research base on reading.
What Reid Lyon was essentially saying is that once he and other Reading First leaders realized that insisting on proven programs would limit schools to just a few programs, they had to change the rules to emphasize a much lower standard, programs based on good principles of practice. This is a key distinction.
It appears that it was DoE's intent to interpret SBRR to include other commercial basals: programs with no scientific research of their own, but which superficially looked like SfA and DI because they taught phonics, phonemic awareness, fluency, vocabulary, and reading comprehension in a systematic and explicit manner. Slavin's opinion is consistent with mine: under this standard, the only programs not eligible for RF funding were "the most extreme whole language basals." But that isn't going to stop the publishers of those programs from also trying to claim that they were just as consistent with SBRR as the commercial phonics basals were.

Then Slavin makes this excellent point:

Instead, Reading First leaders did their best to make certain that very few schools would use research-proven programs. From the earliest Reading First academies, for instance, top-selling basal texts were given by name as exemplars of what Reading First was all about. Programs with strong evidence of effectiveness were never mentioned.

Why would they do this? To understand why, put yourself in the position of a major textbook publisher. To them, the very idea that schools should choose programs with strong evidence is anathema. It took Direct Instruction 40 years to build up its impressive research base. It took Success for All 19 years. The commercial basals would be starting from zero. Frankly, their programs would probably not succeed in well-designed studies. Remember, standard commercial basals were the control groups in all studies of Success for All and Direct Instruction.
Slavin goes on to document that many of the panelists in fact had conflicts not with DI but with the commercial basal programs, such as Scott-Foresman and Voyager.
The large publishers could not possibly allow research to have a serious role in textbook selection. So they put enormous pressure on federal officials, congressional staffers, and key leaders in academia. They gave major publishing and consulting contracts to many of the individuals they knew would be influential. I'll come back to the issue of conflicts of interest in a moment.

I can't tell you all the ways the big publishers ensured that evidence of effectiveness would play no role in Reading First, but I can share with you unequivocal evidence that this is indeed what happened.

Back in 2003, when the first states were qualifying for Reading First funding, we were shocked to learn that the earliest-funded states, such as Michigan, restricted Reading First funding to the five top-selling basals: Houghton-Mifflin, Harcourt, Scott-Foresman, Macmillan, and Open Court. The two programs with strong evidence of effectiveness, Success for All and Direct Instruction, were not allowed.
So just in case it's not painfully obvious by now, Slavin was complaining that DoE was subverting Reading First by permitting the use of commercial phonics basals that did not have a research base. Slavin, like Engelmann, believed that RF funds should be limited to the programs with an actual research base, namely DI, SfA, and Open Court. Ultimately, Slavin lodged his complaint and the OIG investigated.

Now here's the irony. The OIG report actually comes to the exact opposite conclusion -- Reading First may not have been inclusive enough and should have possibly included the whole language basals also. In the process, OIG manages to take a backhanded swipe at the panelists who had a connection to DI, a program that Slavin believed to be one of the few that was truly eligible under RF along with SfA, tarring DI in the process.

Let's look at the winners and losers after the OIG audit.
Winners: all the commercial basals that were funded under RF even though they had no research base; if anything, they tended to be the losers in the reading research.

Potential Winners: the whole language programs, even bigger losers in the research, which may also get RF funding upon DoE's reinspection of the state applications.

Losers: Success for All and Direct Instruction, the only programs with real research support. SfA made the approved list in only about 28 states. And Owen Engelmann of DI has also said that Reading First "hasn't helped us out much at all." Both he and Slavin say implementation problems threaten to turn a worthy program into ineffective instruction for poor kids at taxpayer expense.

September 25, 2006

How to Tell if the OIG Report is a Legitimate Scandal

1. Was any legitimate SBRR reading program precluded by DoE in any state's RF application? If, in fact, any research-validated reading program, such as Open Court or Success for All, was precluded, we have the makings of a real scandal. If an unvalidated phonics program that sufficiently mimicked a SBRR reading program was excluded, there is a possibility of a scandal, depending on the actual research behind the program and how well-founded its claims to be research-based were.

What is not a legitimate scandal is if a balanced literacy/whole language program was precluded from funding. In the absence of legitimate research, this is the wolf-in-sheep's-clothing scenario. These programs do an excellent job of claiming to be phonics-based and consistent with the research, but, in fact, they are not. It is good government to aggressively preclude these programs. They have recourse in the courts if they think they can exploit a loophole in the law.

2. Was a SBRR reading program unduly forced on a state that didn't want to include it in its RF approved programs? It doesn't count if DoE insisted that the state restrict its selection to one of the few SBRR reading programs out there and refused to permit non-SBRR programs.

That's all I can think of. Everything else is just typical sloppy government and lurid innuendo. We need a lot more facts to determine if either of these two scenarios took place because there is nothing in the OIG audit that suggests that this is what in fact happened.

What most likely happened is that many states tried to get RF funding for non-SBRR reading programs and didn't like it when DoE balked. Hello? This is why we have a reading crisis. So far no curriculum publisher or state has come forward with such claims, and I suspect they won't. Slavin has come forward with unsubstantiated allegations, and the fact that SfA has been approved in 28 states seems to belie them. Getting approved under RF is not the same thing as having a school district adopt your reading program.

Scathing, Scorching, and Blistering -- Oh My!

Update: See this post first for a primer on the reading wars. Corrected some typos and edited for readability (1:26 pm)

It's Monday morning and I still haven't seen any original reporting on the Reading First Scandal that actually shows wrongdoing on the part of DoE. Potential wrongdoing, perhaps, but potential and innuendo do not a legitimate scandal make. There's still no blue dress, no matter how many adjectives the MSM dreams up to characterize how bad they think the OIG audit will be for the administration.

Let's go through the findings point by point.

Finding 1A: The Department Did Not Select the Expert Review Panel in Compliance With the Requirements of NCLB

This one has a basis in fact. DoE was supposed to convene an Advisory and Oversight Panel to oversee the work of the subpanels that examined each state's RF application. This was never done, and DoE admits that it was a mistake. Nonetheless, OIG failed to show any actual harm that resulted from not forming the panel. DoE points out that OIG failed to present evidence, and DoE knows of none, that convening a panel that "would have met the requirements of the statute" would have prevented any "inappropriate effects or disadvantage to any State." DoE indicates that it will review all the applications by December 31 to determine if there was any wrongdoing. So we'll have to wait until then to find out if this finding has any real merit.

FINDING 1B -- While Not Required to Screen for Conflicts of Interest, the Screening Process the Department Created Was Not Effective

This one is my favorite. DoE was under no obligation under NCLB to perform any conflicts check, since RF does not directly issue any grants. Nonetheless, DoE did perform a conflicts check, which included a screen for "financial interests in commercial products." No violations of this conflicts check were found or alleged by OIG. OIG's beef is that the conflict screen wasn't "effective" because DoE didn't include a catch-all question recommended by OIG: "Are you aware of any other circumstances that might cause someone to question your impartiality in serving as a reviewer for this competition?" OIG reasons that such a question would have revealed that six panelists had "significant professional connections to a teaching methodology that requires the use of a specific reading program."

This reasoning is specious and contrived at best. Using OIG's catch-all question would have revealed that every panelist had a significant professional connection to some program. That's why the panelists were qualified to sit on the panel in the first place; they were experts in reading instruction. Whether the panelists had ties to a single program or to several programs is not a satisfactory distinguishing characteristic for a conflicts check. In both cases, there remains a "conflict" under OIG's standard and all panelists would have been conflicted. This criterion would have effectively either removed all the reading experts from the panel (not exactly a desirable result) or biased the panel toward members who had significant professional contacts with multiple programs (also not a desirable result). I see no advantage or heightened sense of impartiality in following OIG's contrived "professional conflicts in a single program" standard. It seems like an ex post facto justification for achieving OIG's desired result.

In any event, OIG failed to show that even this flimsy conflict resulted in any of the six conflicted panelists reviewing or favoring the program with which they had significant professional contacts. Again, no real harm proven or alleged.

FINDING 2A -- The Department Did Not Follow Its Own Guidance For the Peer Review Process

OIG's objection here is that DoE departed from its initial plan to provide the panelists' comments to the states and to have the panelists participate in communications with the states, in order to facilitate each state's amending of its application to comply with RF. DoE admittedly deviated from this initial plan, instead summarizing the comments made by the panelists and communicating with the states directly, supposedly in the interest of efficiency. OIG does not dispute that DoE was permitted to change this procedure, since the procedure was not specified in the statute. OIG's only objection is that the initial guidance documents were not updated after DoE amended the procedure, as it was entitled to do.

In a discussion list, one of the panelists clarified what actually transpired:
[R]egarding the assertion that the RF administrators did not share subpanel documents with the states, but instead wrote summaries---this is hardly a failing. In fact, it makes good sense. The evaluations sent by Sandi Jacobs and Chris Doherty to the states were summaries of main points in the subpanel reports, and were merely the BEGINNING of continuing conversations between these persons and the states. Ms. Jacobs many times told our subpanel that she worked from our evaluation reports when helping the states to improve their proposals. Can you imagine the demoralization and confusion if states were sent 80 page evaluations from the subpanel after each re-submission? In most cases, no more than four pages were needed to help states to improve their proposals.

[T]he summaries were the beginning of a dialogue with Jacobs and Doherty.

In most cases, a summary plus dialogue was all a state needed. The shortcomings were almost always the same and could be rectified if states read and used the readily available resources that were often cited for them.
Moreover, as I pointed out here, most states were hostile to RF's requirement that they use reading programs based on scientific research. Many states wanted to continue using the same unscientific reading programs they were already using and were actively conspiring among themselves to subvert the RF requirements, looking for creative interpretations of the statute and loopholes to get their unscientific programs approved -- the same programs that were currently causing millions of children to be unable to read by third grade, the very result RF was seeking to ameliorate.

OIG again fails to point to any specific instance where DoE substituted their own judgment for that of the panel by endorsing, approving, or mandating any specific reading program.

FINDING 2B -- The Department Awarded Grants to States Without Documentation That the Subpanels Approved All Criteria

This one is particularly lame. OIG alleges that "some applicants were funded without documentation that they met all of the criteria for approval raising a question of whether these States should have been funded." That's it. Only a question is raised. OIG again fails to show any actual wrongdoing, just the existence of potential wrongdoing. DoE points out that the statute did not require it to provide any documentation. So, in the absence of a finding of any wrongdoing on the part of DoE, the lack of documentation is not an infraction. All OIG has here is that some issues in the undocumented applications appeared to be unresolved, and OIG characterized those issues as not minor, a judgment call on its part. At best, this was sloppy work by DoE, until OIG can identify a real infraction.

FINDING 3 -- The Department Included Requirements in the Criteria Used by the Expert Review Panels That Were Not Specifically Addressed in NCLB

OIG complains that DoE required states to comply with three conditions that were not explicitly required by the statute. Here is what the conditions required: "Coherent instructional design that includes explicit instructional strategies, coordinated instructional sequences, ample practice opportunities, and aligned student materials," "Protected, dedicated block of time for reading instruction," and "Small group instruction as appropriate to meet student needs, with placement and movement based on ongoing assessment." DoE claims that all these conditions are consistent with the scientific research (ed: they are) and OIG does not dispute this. OIG's beef is merely that DoE went above and beyond the minimum requirements of the statute. This seems like a good thing to me, especially since OIG failed to show that these "extra" conditions prejudiced any program or state application. In any event, these "additional considerations" can easily be found to come within the province of many explicit terms in the statute. It's not as if those terms have a defined meaning in the art. DoE is free to interpret them as it sees fit, provided the interpretation has a rational basis. Until a court decides that DoE's interpretation was not rational, DoE is entitled to it.

FINDING 4 -- In Implementing the Reading First Program, Department Officials Obscured the Statutory Requirements of the ESEA; Acted in Contravention of the GAO Standards for Internal Control in the Federal Government; and Took Actions That Call Into Question Whether They Violated the Prohibitions Included in the DEOA

This finding is the catch-all that depends on the validity of the previous three findings to be accurate. I've already pointed out that the "validity" of those points is less than compelling. Moreover, once you are aware of the ideological battle being fought in the reading wars, it becomes evident that most of what is going on in this finding is DoE's attempt to prevent some states from subverting RF through their aggressive lobbying to include non-SBRR programs in RF. DoE is entitled to its interpretation of the RF requirements to exclude whole language/balanced literacy programs that, in fact, are not SBRR programs. We'll have to wait until a court decides if DoE's interpretation is inaccurate, assuming some state or publisher complains and wins. Most importantly, OIG fails to show an instance where a SBRR program was denied access to RF funds or where one SBRR program was favored or endorsed by DoE. Finding four is merely a series of bootstrap arguments that only sound bad when taken out of the context of the reading wars and what actually went on during the RF approval process -- context OIG conveniently leaves out of its vaguely written audit.

This is weak tea at best, which is not to say that it won't be milked for all the political capital it's worth.

Midday Update: The International Reading Association (i.e., the whole language baddies) chimes in. The wolf trying to act innocent as it takes advantage of the henhouse scandal.

My trusty Jim "schoolsmatter" Horn anti-compass remains true. Overheated rhetoric plus cut-and-paste skills do not a rational argument make.

September 24, 2006

Reading First Scandal Update

Update: See my point-by-point updated analysis of the OIG's "findings" here. But read this post first if you are not up to speed on the state of the reading wars.

I finished reading the OIG audit, and with the exception of some salacious quotes pulled from e-mails, it was oddly free of sufficient facts for anyone to make an informed judgment as to the alleged wrongdoing on the part of DoE. In addition, I couldn't find one instance of "balanced literacy" or its ugly stepsister "whole language." And, in case anyone has been living under a rock for the past decade, there is a "reading war" raging in this country between the advocates of "whole language" reading instruction and the advocates of "phonics" based reading instruction. It's difficult to understand what's going on in the Reading First program unless you know what's going on in the Reading War.

Certainly the MSM reporting, the AP article and the New York Times article, on this OIG report has been superficial. Their intent, so far, is to report on the potential Bush administration scandal and play it up. Their investigative reporting has consisted of accepting the bad findings in the OIG report at face value and parroting them back to their readers.

My take is that the OIG report is a strong offensive waged by proxy by the advocates of whole language. It is uncertain whether OIG is an unwitting dupe, carrying water for the whole language cultists, or merely taking advantage of a political opportunity (sloppy DoE administration) to take a swipe at the administration. So let's place the scandal in its proper context so we can get a clearer picture of what is actually going on here, and what is not being reported by the OIG and MSM.

Whole language remains the predominant mode of reading instruction in use today. It's not called whole language anymore; it's called "Balanced Literacy" because they added a veneer of phonics on top of a system that continues to teach children how to read by "focusing on meaning" rather than by decoding words using phonics. Phonics is an afterthought in Balanced Literacy. The most important aspect of Balanced Literacy is that it continues to fail to teach many children to read and still lacks a research base. Nonetheless, it is used in nearly every classroom in this country and its proponents remain ensconced in the departments of education in many states.

Reading First is the initiative designed to combat the mass reading failure caused by the continued use of Balanced Literacy reading programs. RF was designed to encourage states to adopt reading programs based on Scientifically Based Reading Research (SBRR). When RF began, three reading programs had sufficient SBRR to be funded under RF: Open Court (OC), Success for All (SfA), and Direct Instruction (DI) -- some being more successful than others. It is important to note that these three programs were not only SBRR programs but also scientifically validated reading programs; each has a significant research base validating its effectiveness. However, RF only requires SBRR, and, as a result, additional phonics-based reading programs were later found to qualify.

In order to get RF funding, it's up to each state to submit an application. This is where the fun begins.

All a state had to do to claim RF funds was submit a proposal indicating that it would use one or more SBRR programs, such as OC, SfA, or DI. This did not go over well with the many states whose departments of education were infected with whole language cultists, or with the publishers of reading programs, such as Reading Recovery, that were excluded because they were not SBRR programs. Nor did it prevent many states from submitting RF applications that included only reading programs that were SBRR-free. Reading Recovery, a failed remedial reading program, was a popular choice, and its publishers mounted an intense campaign to be included in the RF program.

In addition, at this point some states began communicating among themselves in an effort to find a loophole in the imprecisely worded RF statute to get their favored Balanced Literacy programs included in RF. These states used the application process and their ability to amend their applications as often as needed to subvert the RF prohibitions against the Balanced Literacy programs they wanted so dearly to continue to use and get funded.

Here's how the game typically gets played. All a state needs to do is get one Balanced Literacy program approved; it can then include any number of additional SBRR programs in its application, knowing full well that most school districts will never use anything but the sole approved Balanced Literacy program. Such is the preferred way to subvert a statute aiming at any reform of education.

Bob Slavin appears not to understand how this game gets played and is seemingly upset that SfA has not been selected by more states even though it is eligible for RF funding. The New York Times used Slavin as their "useful idiot" to run cover for the politically powerful and largely Democrat-supporting whole-language-loving educators. (Word of advice to Slavin: states aren't selecting DI for use either; being named as eligible under RF is not the same thing as actually being used.)

This is the context in which the harsh DoE emails in the OIG report are best understood. Professor Martin Kozloff, a Reading First panelist, had this to say about the states' shenanigans in a discussion forum on Saturday night:
[T]he assertive comments found in several of Mr. Doherty's emails are not an example of rejecting certain programs and favoring others. They were emotional responses to the continual harassment and pressure put on Reading First by companies (and by proponents) whose programs were (in my estimation) very badly designed but who wanted their programs accepted, anyway. It must be remembered that Reading First did not occur in a neutral context. It was not merely that certain materials were NOW considered badly designed, untested, and ineffective; it was the case that a whole approach (unsystematic instruction; teaching students to predict what words say) was called to account. Proponents of that approach (whole language and Reading Recovery) were often the very persons writing state RF proposals, and (according to Chris and Sandi) communicated amongst themselves in emails, telling each other how to by-pass the Reading First regulations. That is the context in which Mr. Doherty's remarks must be understood.

Far from side-stepping the regulations, he was trying to prevent their being subverted, and was frustrated by the chicanery of some states.
Right or wrong, Mr. Doherty is certainly politically naive if he thought the diehard whole-language cultists were going to simply roll over and accept the beating RF was intended to inflict on them. That's why he's being made the fall guy in this scandal.

Professor Kozloff has "professional contacts" with DI, along with at least five other RF panelists. The OIG made much of these contacts in its report. This was all it had to go on, because even though DoE was under no obligation to conduct a conflicts check, it performed one anyway, screening for financial conflicts. Nonetheless, OIG drummed up the "professional contacts" conflict, conveniently ignoring that every panelist had a professional contact with some reading program; otherwise they wouldn't have been qualified to serve as panelists. Kozloff explains:
As to the finding that six panelists had some kind of professional connections to DI programs, this does not reveal a bias towards DI. Everyone on the panel must have had a professional connection to SOME program (Open Court, Success for All, Orton Gillingham); that is, they must have used a program; trained teachers to use it; or owned it. Can you imagine an expert in math who has no professional connection to a math text---is not partial to and has never used any? At the time the panel was created there were only three sbrr programs---Open Court, Reading Mastery, and Success for All. If there were six persons with connections to DI in the whole panel, this would be well below chance.
It's hard to believe that the OIG failed to see that everybody in education has professional contacts with some pedagogy and/or program. The difference is that some of those pedagogies are backed by scientific research and are effective and others have no such backing and, to boot, are ineffective. Unfortunately, the ineffective programs are the ones in widespread use. Furthermore, OIG seems not to be able to grasp the concept that insisting that states only adopt reading programs with SBRR support is not the same as endorsing or pushing the reading programs having SBRR.

Not surprisingly, the MSM has so far failed to get to the bottom of the OIG report. They sense that the report can be played up as a Bush administration failure and are playing it for all it's worth. This is what politics is all about, after all, and it doesn't help that DoE appears to have been very sloppy with its discoverable communications and its practices.

But let's not lose track of the fact that if the Reading First initiative is permitted to be subverted by the states, it will not bode well for the millions of kids who fail to learn to read on a timely basis every year, or for NCLB, which depends on our schools being able to get kids reading in a timely manner in order to get them to proficiency by 2014.

Update (1:06 pm): Corrected some typos and readability.

See my point-by-point updated analysis of the OIG's "findings" here.

September 22, 2006

Letter from Skippy

NCTM President Skip Fennell sent this comical email to the troops after the media reported that the NCTM's new Curriculum Focal Points was a "return to basics." Apparently, the NCTM's talking out of both sides of its mouth has finally caught up with it.

Dear NCTM Members:

I am pleased to announce that Curriculum Focal Points for Prekindergarten through Grade 8 Mathematics: A Quest for Coherence was released on September 12. The Curriculum Focal Points are the next step in the implementation of the Standards. The focal points fully support the Council's Principles and Standards for School Mathematics. The appendix in Curriculum Focal Points directly links the focal points to virtually all the expectations in Principles and Standards.

Curriculum Focal Points presents the most important mathematical topics for each grade level. A focal point specifies the mathematical content that a student needs to understand thoroughly for future mathematics learning. The focal points are compatible with the original Standards and represent the next step in realizing the vision set forth in Principles and Standards for School Mathematics in 2000.
Except that the PSSM wasn't even compatible with the original standards. And, try as I might, I couldn't find "quick recall" of math facts in anything before the focal points.

The focal points are intended for use by mathematics leaders as they examine their state and local mathematics expectations and seriously consider what is important at each grade level. This discussion, dialogue, or perhaps debate is designed to influence the next generation of curriculum frameworks, textbooks, and assessments.
Here's where it starts getting good.
Unfortunately, some of the media coverage has raised questions and caused concern among our members.
Despite several conversations with a reporter from the Wall Street Journal explaining what the Curriculum Focal Points are and are not, a September 12 Wall Street Journal article did not represent the substance or intent of the focal points. The focal points are not about the basics; they are about important foundational topics.
Gee, Skipper, do you think that "foundational topics, not basics" doublespeak had anything to do with their confusion?

foundation: "a basis (as a tenet, principle, or axiom) upon which something stands or is supported"

Sounds like the basics to me. Otherwise the basics wouldn't be basic.
The Council has always supported learning the basics.
Except that they thought that the basics meant "conceptual understanding." It turns out that it doesn't.
Students should learn and be able to recall basic facts and become computationally fluent, but such knowledge and skills should be acquired with understanding.
What happened to "quick," as in "quick recall" of basic facts? I'm sure many hours were spent arguing over that term. Odd that you would forget to include it in your email to the troops. Idiot.
Unfortunately, some of the other news media have followed the Wall Street Journal's lead and have similarly misrepresented the Curriculum Focal Points.

The Council's goal is to support teachers in guiding students to learn mathematics with understanding. Organizing a curriculum around a set of focal points can provide students with a connected, coherent, ever expanding body of mathematical knowledge. The focal points describe what should be the focus of what students should know and understand thoroughly.

I encourage you to explore the complete Curriculum Focal Points and related resources. You can view a video overview and introduction to Curriculum Focal Points, and you can see answers to some questions or submit your own questions about the focal points. The news release on the focal points and a video of the news conference at the National Press Club announcing the release, as well as an article on the focal points from Education Week, are also on the Curriculum Focal Points section of the NCTM Web site.


Francis (Skip) Fennell
He speaks with such authority. Yet the NCTM has never designed, implemented, or identified a math curriculum that has successfully taught kids the "conceptual understanding" that enables them to solve math problems. In fact, every curriculum the NCTM has ever identified has been a failure.

The NCTM has no idea what it is talking about. I still don't understand why anyone listens to them.

Reading First Report

Update 9/28: I have updated the Reading First "Scandal" nine times since this first post. Go to the top and start scrolling.

The Office of the Inspector General issued a Final Inspection Report today regarding some potential shenanigans in the Reading First grant process. I see some of the prominent DI guys being named in the investigation. Looks like it could turn out to be a nice little scandal. Should be an interesting read.

Here's an AP story on the report. Looks like DoE stacked the panel with DI and other non-balanced reading program people in an effort to keep off the panel the balanced/whole language people, who would undoubtedly have permitted non-scientifically based programs, like Reading Recovery, to get Reading First grants. Such is life when you are in an industry with an ideological agenda. It's like battling the communists. Nonetheless, rules should have been followed. It should be interesting to see how this plays out. I suspect a large political element is involved with this report. Let's see what shakes out. The DI guys know how to defend themselves in smear campaigns, so if they fail to do so in the next few days, you can probably bet they did something wrong.

Eduwonk appears to be the only one on the case.

Zealous Idiocy

Leave it to a lawyer to pack an article with so many bad arguments that it's about to burst. Surely the author, Danielle Holley-Walker, has set some kind of record with this article on the supposed alarming developments in the NCLB act.

At first I thought this article was written by a law student and I was going to introduce it by telling you to pity me because this is the kind of recent law school grad I have to train on a regular basis. Seven years of college and they still can't argue their way out of a paper bag. But, as it turns out, Ms. Holley-Walker is actually an Assistant Professor of Law. Oh, heavens.

In any event, Professor Holley-Walker doesn't like NCLB. More importantly, she has scoured the internet and managed to find every single criticism of NCLB, no matter how loopy, and has thrown it all against the wall in the hope that some will stick. I don't remember learning that in law school.

It's going to get messy, but let's take our rhetorical power-washers out and get to it.

Probably the most alarming development under NCLB is the number of schools that are being designated as in need of "restructuring," which could lead to massive school closures.


Because this is the fifth year after NCLB's passage, we are beginning to see schools receive the restructuring designation. As of 2006, some reports estimate that there are already 2,000 schools designated as failing. In some cities, such as Chicago, one-third of all public schools are being designated as in need of restructuring. Some experts predict that in the next five years the number of restructuring schools may swell to as many as 10,000. In short, NCLB has the potential to put our public schools into a state of chaos and crisis.
Someone needs to explain to me again how forcing kids to go to a failing school is in any way a good thing. I'm certain that if the good professor had a life-threatening illness and the government forced her to go to the closest doctor's office, one that was only able to cure 20% of its patients, she'd be singing a different tune.

And, it's not as if there wasn't a way for these failing schools to avoid the chopping block by actually--wait for it-- improving.

These statistics raise a number of serious questions. First, what will states do to respond to the increasing numbers of schools being labeled as in need of restructuring? Some educational policy experts worry that one response will be to actually lower academic grade-level standards, making it easier for students to meet the standards. Prior to NCLB's enactment, almost every state already had its own set of academic standards for each grade level, but the harsh accountability requirements of NCLB may cause states to turn away from the highest standards in order to avoid the consequences of failing schools.

This seems more like an argument for toughening up NCLB instead of getting rid of it.

The good professor seems to forget that the predecessor to NCLB required compliance too, but the states were ignoring the compliance portions. The states figured that the Feds would just shovel money to them forever and all they had to do was spend it without caring about student achievement. NCLB was the newspaper swat on the nose, and it clearly still stings.

Next, where will all the students go whose schools are restructured?

Er, someplace better, hopefully?

The act allows students to transfer to other public schools. For some high achieving public schools that may mean the absorption of students from failing schools. But will those schools have the staff and the capability to manage the influx? Or will the migrations start those schools down the path to possible restructuring status as well?

I thought these were going to be serious questions. NCLB requires schools to disaggregate their data and doesn't permit schools to shield their minority and low-SES kids from NCLB's accountability provisions. Eventually, NCLB will catch up with every school. (The dirty little secret is that even high-performing schools do a miserable job with low-SES and minority kids.) The only way to avoid NCLB is to improve and teach every kid. That's a good thing.

Also, when a student is attending a failing neighborhood school, the student's only choice for transfer may be outside of their neighborhood. This possible move away from local neighborhood schools would defeat current attempts by parents and school officials to increase parental involvement in their children's education by promoting neighborhood schools.

Oh, the horror. I'm thinking if parental involvement, to the extent it even exists, hasn't compensated for the failing school so far, it's foolish to think it'll work in the future. And why can't the parents still be involved in a different school? Is there some law against it?

Another alternative for students in restructuring schools is to transfer to charter schools. If they do that, they leave the public school system for the privately run schools.

I knew Marx would rear his ugly head sooner or later. Also, if you insert "failing" between "the" and "public" it starts to sound like a good alternative.

However, the alternative of charter schools might actually be worse for those students, as recent studies by the federal government show that fourth grade students in charter schools are not performing as well as public school students. Essentially, charter schools may provide an ineffective alternative for students fleeing from a failing school.

This reminds me of this exchange from the movie Dumb and Dumber:

Lloyd: What are the chances of a guy like you and a girl like me... ending up together?
Mary: Well, that's pretty difficult to say.
Lloyd: Hit me with it! I've come a long way to see you, Mary. The least you can do is level with me. What are my chances?
Mary: Not good.
Lloyd: You mean, not good like one out of a hundred?
Mary: I'd say more like one out of a million.
Lloyd: So you're telling me there's a chance.
The point being that even a slim chance is better than no chance.

Oh, and that study she's referring to doesn't exactly say what she thinks it says.

Beyond the possible dire consequences for schools and students in the aftermath of NCLB's passage, we are also seeing the questionable effects of the statute's methods of measuring achievement on school curriculums. NCLB depends on yearly testing to evaluate students, and because of the high stakes of that testing, more schools are emphasizing reading, math and science -- to the detriment of other subjects, such as history, art, music and physical education. American public school students already lag behind students in other countries concerning knowledge of geography and world history. There are also increasing cries by public health experts that less physical education may be exacerbating the problem of childhood obesity.

This is the second time the professor has kicked the ball in for the other side. This is an argument for making NCLB more inclusive, not less. In any event, math and reading are being emphasized because they are more important than the other subjects. Moreover, a school's inability to teach math, reading, and science efficiently enough that other subjects aren't adversely affected is merely further evidence that the school is failing at its primary role of education.

The statute has a laudable goal of improving academic achievement for all students regardless of socioeconomic status, race or ethnicity. And there has been some average national increase in math and reading scores since the act took effect. Despite NCLB's good intent, the statute's actual effects on public schools seem to be detrimental, with the potential to become devastating.

That's a feature, not a bug.

The NCLB comes up for renewal in 2007, so it is not too early for Congress to evaluate the law's results so far. Congress should reconsider the "testocracy" that the statute has set up to measure student achievement, which is directly impacting what is taught in classes. Testing is only one method of measuring a student's progress. Schools that can include other factors, such as a student's grades [grade inflation and subjective] and individual teacher evaluations [though subjective and prone to abuse], into an assessment of a child's proficiency may feel less of a need to value NCLB test results unduly, above other important measures of progress.

Also, Congress should re-evaluate the NCLB accountability requirements that focus on transferring students and reassigning personnel. Perhaps the law should be modified to encourage states to reform failing schools from within.

That's exactly what it's doing already. Reform by 2014 or else. That seems like a powerful incentive to me. One that is sufficiently powerful to cause a public-school apologist law professor to write a lengthy opinion piece.

More aggressive steps can and should be taken to preserve the already existing school.

The only ones in danger of not being preserved are the failing ones. I'm feeling less worked up already.

An increase in funding for more teacher training and after-school educational support services is one such option.

More money.

The federal government should also allocate funds for studying the actual effect of increased testing in schools, in order to make a better-informed decision about whether the statute's emphasis on standards and testing actually improves achievement levels.

More money.

Finally, funding should be preserved for states to implement more early pre-kindergarten education programs, such as Head Start.

Even more money. That's always worked wonders in the past. I actually had to go back and look at the picture they ran of the professor to see if I could spot an NEA thug holding a gun to her head after reading that last paragraph.

Unfortunately, Congress's proposed budget for this year actually includes cuts to federal support for public education.

An oldie but a goodie. Listen, Professor, cutting the rate of projected growth is not the same thing as actually cutting funding. It's easy to keep straight: next year more will be spent on education than last year. That's all you have to know.

With NCLB the federal government took on the daunting task of increasing student achievement. While the law has wrought change, the ongoing question is whether this or other federal government initiatives are effective in assisting schools in the day-to-day struggle to improve a child's reading level, math skills and scientific knowledge. Thus far, NCLB has provided more questions than answers, and it is up to Congress to take the next step.

No, it's up to schools to take the next step.

Chicken Sexing

Note to googling perverts: you are looking for a different kind of discovery learning than this post will provide.

One of the most pervasive and damaging education fallacies held by our nutter educators is that learning should be done in authentic situations or contexts. Authentic learning is supposed to be good because it is natural. Everything natural is good. (Except those deadly natural poisons, bacteria, and viruses; they're not so good.)

As a result, these same nutter educators insist on having students learn in authentic or real world situations. For example, the nutter principal at Philly's soon-to-be expensive failure School of the Future was recently quoted thusly:
Students' class schedules look different, too. They don't take calculus, English or biology. Instead, they attend inquiry sessions, during which interdisciplinary instruction tackles real-life questions such as "Should Philadelphians be worried about avian flu?"

Students learn the science behind the disease and study the environmental concerns. They discover how to research the topic, then they learn how to communicate their findings.

"It's more like life and less like school. I can't think of anything I do that is 'This is math, this is social studies,' " said Shirley Grover, who is called the "chief learner."
I've taken the liberty of emphasizing the toxic buzzwords. These practices are the medical equivalent of being bled and leeched. Doctors don't do this anymore, primarily because the life insurance companies and medical malpractice attorneys would open a can of whoop-ass on them. Such countervailing forces do not exist in education.

D-Ed reckoning tip to parents who hear these buzzwords from their child's school: run away as fast as you can and don't look back.

Myths and Misconceptions about Teaching lays out three fundamental problems with natural or discovery learning:
First, [discovery learning] is apt to lead to incorrect learning or no learning at all.

A second problem with discovery learning is that it favors those with more background knowledge... the more one knows about a subject, the easier it is to learn more about it.

A third problem with discovery is that it is inefficient at best, and ineffective at worst. It takes much longer to discover a new concept unassisted than with step-by-step instructions.
Moreover, no one has yet proven that discovery learning confers any benefit on the learner. Certainly, increased student achievement has been elusive. No benefits, coupled with significant drawbacks, especially for lower performers, would lead the rational educator to abandon such a failed pedagogy. Our educators are not so rational; the allure of "authentic learning" persists.

Cognitive scientists have long known that "authentic learning" is a load of bunk:
What is authentic is typically ill-defined but there seems to be a strong emphasis on having problems be like the problems students might encounter in everyday life. A focus on underlying cognitive process would suggest that this is a superficial requirement. Rather, we would argue as have others (e.g., Hiebert, Hearner, Carpenter, Fennema, Fuson, 1994) that the real goal should be to get students motivated and engaged in cognitive processes that will transfer. What is important is what cognitive processes a problem evokes and not what real-world trappings it might have.
So what cognitive processes transfer the best? Those that are directly taught and practiced to mastery.

Many experiments confirm this. One of the more ingenious was the chick-sexing experiment. E.D. Hirsch describes it well:

There’s a dramatic experiment in the literature. At issue was the problem of how to teach people to discern the sex of day-old chicks. The protosexual characteristics are extremely subtle and variable, and even after weeks of guidance from a mentor, trainees rarely attain a correctness rate of more than 80 per cent. Learning this skill has important financial implications for egg-producing farmers, and chick-sexing schools have been set up in Canada and California. The school training, which involves implicit learning from real-world live chicks, lasts from six to 12 weeks.

It occurred to two cognitive scientists familiar with the literature on implicit vs. explicit learning that these chick-sexing schools might present an experimental opportunity. They wondered if they could construct a more efficient learning program based on their knowledge of the literature. They decided to capitalize on the experience of a Mr. Carlson, who had spent 50 years sexing over 55 million chicks. From a set of 18 chick photographs representing the different types, Mr. Carlson was able to identify the range of critical features distinctive to each sex, and on the basis of his trait-analysis, a single-page instruction leaflet was created. Training was to consist in looking at this analytical leaflet for one minute.

To conduct the experiment, people without any chick-sexing experience were randomly divided into two groups, one of which looked at the leaflet. Thereafter, both groups were tested. Those who did not study the leaflet scored about 50 percent, that is, at the level of pure chance. Those who looked at the leaflet scored 84 percent, which was even better than the scores achieved by professional chick-sexers. Alan Baddeley, the distinguished psychologist from whose book this example was taken, interprets the experiment as “an extremely effective demonstration that . . . one minute of explicit learning can be more effective than a month of implicit learning.”

(You didn't think I was going to be able to tie the title of this post in to the content, did you? Shame on you. What, did you think this was just a cheap trick to attract traffic? We'll never know.)

Just in case you're not yet convinced, here's another:
Numerous experiments show combining abstract instruction with specific concrete examples (e.g., Cheng, Holyoak, Nisbett, & Oliver, 1986; Fong, Krantz, & Nisbett, 1986; Reed & Actor, 1991) is better than either one alone. One of the most famous studies demonstrating this was performed by Scholckow & Judd (described in Judd, 1908; a conceptual replication by Hendrickson & Schroeder, 1941). They had children practice throwing darts at a target underwater. One group of subjects received an explanation of refraction of light which causes the apparent location of the target to be deceptive. The other group only practiced, receiving no abstract instruction. Both groups did equally well on the practice task which involved a target 12 inches under water, but the group with abstract instruction did much better when asked to transfer to a situation where the target was now under only 4 inches of water.
Presumably the students were able to generalize from 12 to 4 inches of water because they had understood the principle of refraction.
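The refraction principle the instructed group learned is easy to state. A minimal sketch, using the standard small-angle approximation and an assumed index of refraction for water (n ≈ 1.33; the original study doesn't specify any numbers):

```latex
% Apparent depth of a submerged object viewed from above (small-angle case):
%   d_apparent ≈ d_real / n_water,  with n_water ≈ 1.33 (assumed value)
d_{\text{apparent}} \approx \frac{d_{\text{real}}}{n_{\text{water}}}
% So a target 12 inches deep appears at roughly 12/1.33 ≈ 9 inches,
% and one 4 inches deep at roughly 3 inches. The ratio is the same at
% every depth, which is plausibly why students who grasped the principle
% could transfer from the 12-inch task to the 4-inch task.
```

The point of the sketch is only that the correction is proportional, so understanding it at one depth covers all depths; practice alone gives no such leverage.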

Notice how in both experiments that a small amount of explicit instruction designed to focus the student's attention on the critical features of the task to be learned dramatically improved the student performance in a short period of time. The object of effective instruction is to identify those critical features and find an effective way to present that information clearly to the student.

This is not what goes on in your typical "discovery learning" or "authentic learning" classroom. These classrooms revolve around the presentation of a series of "authentic" problems that students are given to work on over the course of days or weeks. Students typically begin by floundering around in the dark for a while, and the teacher gradually offers suggestions and teaches the material during the course of solving the problem. Once the lessons have been completed, the students are whisked off to the next problem, whereupon the process is repeated.

If this sounds inefficient, it's because it is. Notice how the practice comes before or during the abstract-instruction part of the lesson. While the kids are busy discovering the intended point, they are missing out on valuable practice time. To the extent they discover the right concept, and I would hope most good teachers ensure this, the lack of practice guarantees that the knowledge gained will be quickly lost to the ravages of forgetfulness.

Daniel Willingham has stated this point well:
It is difficult to overstate the value of practice. For a new skill to become automatic or for new knowledge to become long-lasting, sustained practice, beyond the point of mastery, is necessary...

That students would benefit from practice might be deemed unsurprising. After all, doesn’t practice make perfect? The unexpected finding from cognitive science is that practice does not make perfect. Practice until you are perfect and you will be perfect only briefly. What’s necessary is sustained practice. By sustained practice I mean regular, ongoing review or use of the target material (e.g., regularly using new calculating skills to solve increasingly more complex math problems, reflecting on recently-learned historical material as one studies a subsequent history unit, taking regular quizzes or tests that draw on material learned earlier in the year). This kind of practice past the point of mastery is necessary to meet any of these three important goals of instruction: acquiring facts and knowledge, learning skills, or becoming an expert.

The takeaway from all this is that even if an "authentic learning" program is successful in transmitting conceptual knowledge to a student, the design and inherent inefficiencies of such a program virtually guarantee that students will not get sufficient practice to ensure mastery of the material. Students will also develop bad learning habits in such a spiraling problem-solving curriculum:
The accountability of the teacher is therefore more “comfortable” because the teacher is not expected to get through the material in a specified period of time or bring students to mastery. The spiral curriculum is more comfortable for students because they are not required to learn, use, or apply the skills from one unit to the next unit. They quickly learn that even though they do not understand the details of a particular unit, the unit will soon disappear and be replaced by another that does not require application of skills and knowledge from the previous unit. The design clearly reinforces students for not learning or for learning often vague and inappropriate associations of vocabulary with a particular topic.

If the systematic program is like a stairway, the spiral curriculum is like a series of random platforms suspended on different levels. Students are mysteriously transported from one platform to another, where they remain for a few days as they are exposed to information that is not greatly prioritized. Mastery is impractical with a spiral curriculum design because many students lack the background knowledge they need to stand on a particular “platform.” The poor design relieves the program designer of assuring that earlier-taught skills and knowledge are mastered and used. The poor design also relieves students of the responsibility of learning to mastery and it relieves the teacher of teaching to mastery. It therefore promotes poor teaching and poor learning.
It's all bad as far as I'm concerned.