August 14, 2007

It's Being Done

A few posts back, I lambasted journalist-turned-author Linda Perlstein for the hackneyed observations she made during a year-long visit to one cherry-picked elementary school.

Perlstein is one of the anointed. She doesn't seem to understand that when you observe a correlation between A and B, this might mean that:

  1. A causes B

  2. B causes A

  3. Both A and B are the results of C or some other combination of factors

  4. It is a coincidence.

The anointed stop the analysis at the first or second pattern of causation and don't worry much about the last two, even though in education the evidence is rarely sufficient to conclude that the result is anything more than a coincidence.

But enough about Perlstein.

In the comments, another journalist-turned-author, Karin Chenoweth, left the following comment:

[F]or those readers interested in a book that demonstrates how successful high-poverty and high-minority schools operate, I would suggest they read my book, It's Being Done: Academic Success in Unexpected Schools.

The criteria I used to select schools to visit are fairly clear--the schools had to have substantial populations of students of color or students of poverty or both, and high achievement scores or rapid improvement trajectories. (The criteria are laid out in more detail in chapter one.)

For the most part, these schools did not spend endless amounts of time having students writing paragraphs about poems nor writing endless numbers of poems, but rather taught them some poetry, along with lots of other literature, science, history, and other stuff. In other words, they honest-to-goodness taught their students.

The book was published by Harvard Education Press. Learn more about it at

I realize this is a shameless plug, but it seems appropriate to the discussion.

An interview with Chenoweth can be found here. And, the criteria she used to select the schools she observed are:

  1. Significant population of children living in poverty and/or a significant population of children of color

  2. Proficiency rates above 80 percent, or a very rapid improvement trajectory

  3. Smaller achievement gaps than the state

  4. Two years’ worth of comparable data

  5. High graduation rates and high proportion of freshmen who are seniors four years later (Promoting Power Index)

  6. Adequate Yearly Progress met

  7. Open enrollment for neighborhood children (no magnet, charter, or exam schools)

You cheapskates can find many of the articles that form the basis of the book here.

Chenoweth's interview is much better than Perlstein's. She appears to understand what is important for improving student achievement and what is not:

I think some of the structural reforms that people focus on are not all that important. For example, the grade configurations of schools. Whether a school is K–8 or broken up into elementary and middle school is not as important as making sure that teachers know what needs to be taught at each grade level. Similarly, whether a school has a block schedule or a six- or seven-period day is less important than the quality of instruction. At a district level, whether a school board is appointed or elected is not as important as whether the district has a coherent curriculum and a [teacher] development plan that supports the curriculum.

It is about improved instruction and making sure that the instruction is working. To do that, you need strong leadership and data. You get data by testing, and strong leadership makes sure that the right course is taken in response to the data.

And, that's what came through when I read the Achieve Alliance articles Chenoweth penned.

However, what also came through was that many of these "successful" schools were still making it up as they went along. Which is to say that their decisions were not fully grounded in the current state of education research. These schools have made many right decisions, but they've also made many ineffective ones--decisions their leaders should have known better than to make.

And, the articles, and I suspect the book as well, do not provide the level of detail necessary to inform other school leaders how to change their own schools to achieve similar results. Moreover, what we get is a bunch of "one-off" success stories, whereas replicated success stories would have let us discount the possibility that the results are due to coincidence.

Then there is the problem that what the schools think is the cause of their success and what Chenoweth thinks is the cause may not in fact be the real cause. I don't see a track record of success from anyone involved that would lead me to trust their opinion. Nonetheless, there is value in dispelling the myth that schools are responding to NCLB by turning themselves into test-prep factories and by narrowing the curriculum.

Take a look at one school that Chenoweth visited--Stanton Elementary School in Philadelphia. Here's the report (pdf). I noticed this potential problem.

In the 2003-04 school year the scores skyrocketed when 71 percent of Stanton’s students met state reading standards and 47 percent met state math standards. The growth was so dramatic, in fact, that the district retested the students to make sure there had been no mistake or chicanery. The retest confirmed that most students at Stanton were meeting state standards in reading, and many exceeded those standards. And when the 2005 test scores were released, showing that 73 percent of the students met state reading standards and 84 percent met state math standards, it was clear that 2004 had not been a one-year fluke but rather a reflection of new practices – practices that include a careful reorganization of instruction, comprehensive professional development of teachers, close examination of student data, a curriculum tightly aligned to state standards, and shrewd use of federal Title I dollars.

Based on the data from SchoolMatters, and despite the retest, I am suspicious of these fifth-grade scores. Stanton did indeed perform well, above the state average, in fifth grade in 2006. But Stanton also performed below average in every other grade tested, i.e., third, fourth, and sixth, in 2006. I am especially suspicious of the anomaly in the reading scores. The cohort that had 73% passing in 2005 (fifth grade) sank to 16% passing in 2006 (sixth grade). Such a dramatic decline should be all but impossible if the achievement had been the result of real educational improvement rather than artificial test prep. The decline in math was nearly as dramatic.
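The arithmetic behind that suspicion can be sketched in a few lines. The pass rates are the ones quoted above; the 20-point flag threshold is my own illustrative choice, not anything from the report or from SchoolMatters:

```python
# Track one cohort's reading pass rate as it moves up a grade each year.
# Rates come from the Stanton figures cited above; the threshold is illustrative.
cohort_reading = {2005: 73, 2006: 16}  # 5th grade in 2005, 6th grade in 2006

years = sorted(cohort_reading)
for prev, curr in zip(years, years[1:]):
    drop = cohort_reading[prev] - cohort_reading[curr]
    if drop > 20:  # a genuine instructional gain should largely persist
        print(f"{prev}->{curr}: pass rate fell {drop} points; "
              "suggests test prep rather than durable learning")
```

The same cohort falling 57 points in one year is the kind of anomaly that real, durable instructional improvement should not produce.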

The fifth grade test has been around for a long time. The fourth and sixth grade tests are new. I have a feeling there was more test prep going on than Chenoweth observed on the two days she visited.


Karin Chenoweth said...

I know you can find some of the school stories from It's Being Done on the web and thus avoid actually paying for the book, but you then miss the updates which include, for example, a discussion of what happened to Stanton's sixth grade last year. And you also miss the concluding chapter which tries to distill the lessons learned from all the schools profiled in the book.

KDeRosa said...

Now that's the way to sell a book.