June 27, 2008

Question 4: What to do after 100 Easy Lessons

Ari-free asks:

1) My sister is teaching her 4 year old daughter with "Teach Your Child to Read in 100 Easy Lessons" What should be the next step for her?

For most students, the next step is Reading Mastery III, which you can find on eBay. Another option is Horizons 3/4 (the other Direct Instruction curriculum), which squeezes two years of instruction into one. It is also available on eBay but more difficult to find.

2) What does Singapore Math (the new CA standards version) look like from the perspective of Direct Instruction?

I'd say that SM is a good coherent curriculum that requires a fair amount of teacher skill to teach. Lacking such skill, a good alternative is to use the DI math curriculum Connecting Math Concepts as the primary curriculum and supplement it with SM (and Saxon for the die-hard).

Question 3: What material should be overlearned

EHT asks:

[S]hould we expect each and every student to have total recall 9 months later? OR Should we expect students to be able to read a question regarding older content and be able to touch upon the right answer by reviewing the answer choices carefully and logically? AND shouldn't we expect students to forget certain things, but be able to have the skills to jog their memory such as how to check a reference source, etc.?

Willingham addresses this issue at length. So, let's go right to the source.

[T]he following types of material are worthy of practice:

1. The core skills and knowledge that will be used again and again. In this case, we give practice in order to ensure automaticity. The student who struggles to remember the rules of punctuation and usage (or must stop to look them up in a reference book) cannot devote sufficient working memory resources to building a compelling argument in his or her writing. The student who does not have simple math facts at his or her disposal will struggle with higher math.

2. The type of knowledge that students need to know well in the short term to enable long-term retention of key concepts. In this case, short-term overlearning is merited. For example, as noted earlier, a science teacher may want students to know a set of facts about certain species so that she can introduce an important abstract concept concerning evolution that depends on these facts. Or, a high school history teacher may want students to master the facts of several Supreme Court cases in order to build long-term understanding of a particular constitutional principle.

3. The type of knowledge we believe is important enough that students should remember it later in life. In this case, one might consider certain material so vital to an education that it is worthy of sustained practice over many years to assure that students remember it all of their life. A science teacher might spend the better part of a year emphasizing basic principles of evolution in the belief that the material is essential to consider oneself conversant in biology. Further, the curriculum might address and require practice in evolution in multiple years to assure that such knowledge will last a lifetime. Do we expect that a 40-year-old will have retained everything learned through the 12th grade? No, but do we expect that she will retain anything? Should she be able to grasp the basics of evolution or describe the different responsibilities of the three branches of the federal government or calculate the area of a circle? Exactly what sorts of knowledge merit the focus required to create long-lasting memory will be controversial, but that practice is required to create such memories is not.

How should practice be structured--should a teacher strive for overlearning in the short term or repeated learning over the long term? The answer will depend on whether the goal is automaticity in skills, short-term knowledge, or long-term knowledge--and what the teacher knows about the future curriculum students will encounter. For example, an English teacher might deem it very important that students understand the use of metaphor in poetry, but extensive, focused practice may not be practical or necessary. This knowledge will likely be developed over a number of years, and there will be opportunities for practice in the future. In other cases there will be future opportunities for practice, but the timeliness of the learning is important. For example, one teacher might provide just a cursory introduction to first-graders on how to tell time, figuring that the students will have ample opportunities for practice in the future. But another teacher might also reason that first-graders need to know how to tell time (so that, for example, they can monitor their activities during the day and be more self-directed) and so focus practice on this skill. Similarly, a French teacher may realize that students will have plenty of practice conjugating the verb ĂȘtre (to be) over the long term, but may justly believe that students must know this material early in their training or their ability to read, write, and understand French will be badly hampered.

Exactly when to engage students in practice, through what method, and for what duration are educational decisions that teachers will need to make on a regular basis. But, that students will only remember what they have extensively practiced--and that they will only remember for the long term that which they have practiced in a sustained way over many years--are realities that can’t be bypassed.

One of the problems with modern education is that little effort is made to distribute practice so that students have a fighting chance of retaining the knowledge they have learned. Google is a great tool, but it is not a substitute for a deep well of structured long-term knowledge.
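To make the idea of distributed practice concrete, here is a toy sketch of one common scheme: expanding intervals, where each review is scheduled further out than the last. The doubling factor and review count are arbitrary assumptions for illustration, not research-backed parameters or anything from a specific DI program.

```python
from datetime import date, timedelta

def review_dates(start: date, first_gap_days: int = 1, reviews: int = 5):
    """Toy expanding-interval schedule: each gap doubles the previous one."""
    gap = first_gap_days
    current = start
    dates = []
    for _ in range(reviews):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= 2  # assumption: doubling; real curricula tune this empirically
    return dates

# Reviews land 1, 3, 7, 15, and 31 days after initial learning.
print(review_dates(date(2008, 6, 1)))
```

The point of the expanding gaps is that each successful recall after a longer delay strengthens retention more than another same-day repetition would.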

June 26, 2008

Question 2: Why don't students retain what they've learned

Roger Sweeny asks:

During the year, students had "learned" various things. How do I know they learned them? Because they correctly answered questions on tests or used the appropriate formulas and ideas to solve problems (I teach physics and physical science).

But when they take the final, they show that they don't know lots of the things that they seemed to know earlier in the year.

So maybe it's incorrect to say that they actually "learned" those things. Maybe it's more accurate to say that they "memorized" various things and then forgot them.

Two questions: One, does anyone doing education research care about this? My impression is that it's a gigantic elephant in the room that no one wants to talk about. And yet teachers know it's there. Every teacher knows that if she gave the same test a month later, her students would do considerably worse.

The short answer is that learning something, even to mastery, is not sufficient. Merely mastering something does not guard it against the ravages of forgetfulness. In order to retain what you've learned you need to overlearn it past the point of mastery. Schools don't do this.

For the cognitive science behind all this read these three articles from Dan Willingham:

1. Students Remember...What They Think About

2. Why Students Think They Understand—When They Don’t

3. Practice Makes Perfect--But Only If You Practice Beyond the Point of Perfection

Roger continues:

So question two: Does anyone doing educational research use long-term learning as a measure of success? When researchers try to determine if various things "work," how many test for knowledge 24 hours later? a week later? a month later? more?

The DI people do, though the standardized testing instruments don't capture the long-term effects of learning.

Read Engelmann's Student-Program Alignment and Teaching to Mastery for one way to truly teach to mastery with retention. Another way to do it is through Precision Teaching, as another commenter indicated.

The easiest way to learn how to teach to mastery is to take a look at one of the DI programs. I'd suggest getting a mid-level reading or math program from eBay (you must get the presentation book and teacher guide along with the student materials). Here's a series of posts I did analyzing the practice provided in a lesson from Reading Mastery Rainbow Edition. Here's another series on how to teach a particular skill with distributed practice.

If anyone is interested, I have the paper from a presentation Don Crawford gives at the yearly DI conference on how to apply DI principles to content-area texts. Here are a few of the points Don makes:

It’s not about you—it’s about the kids, and what they can articulate fluently.

Keep teaching and keep students practicing until knowledge is learned to the “walk around” level—known well enough that the kids can think about it and explain it when they are walking around without a book.

Why? Because the first prerequisite for knowledge to be used flexibly in higher order activities such as essays, comparisons, application, synthesis, evaluation, etc. is that it be learned to the “walk around” level.

Therefore you must test by production items (not multiple choice recognition level) to know if content knowledge is learned well enough. Also to capture relationships.

If you want a copy of the presentation, shoot me an email.

Question 1: How to find DI schools

4Trojan asks:

Why is it next to impossible to identify schools in a given area that use DI
either for certain subjects, or as a whole-school model?

In all likelihood there is no "master list" of DI schools. A few months ago ADI was soliciting the members of the DI Listserv to send in the schools where they were working, in order to create such a list. You might want to contact ADI to see if that effort was successful.

To find out about whole-school implementations you might want to contact the companies that do them: NIFDI and J/P Associates (there may be one more).

There are also a few private schools using DI I've seen mentioned on the DI Listserv.

Another place I've seen DI schools mentioned is DI News. If you join ADI for $40 a year, you get both DI News and The Journal of Direct Instruction.

Not sure how much this helps, but it's all I have.

June 24, 2008

Ask D-Ed Reckoning

Let's take advantage of the slow and uninteresting education news cycle.

If you have a burning k-12 education question you've been itching to ask (ignore the mixed metaphor), now is your opportunity to ask it.

Leave your question in the comments and I'll try to answer it or, at least, give my opinion. And, since the commenters are often more knowledgeable and smarter than I am, and not shy about giving their opinions, you'll get the benefit of their expertise as well.

June 18, 2008

Corey Responds

Corey Responds here. Here is my response.

As a preliminary matter, I want to make sure my position is clear.

It is not my position that environmental factors, like the home environment, don't matter. That's silly. Clearly, low-SES students are different from their middle-class peers. They are different in that they are less able to access the education that is available in your typical school (the exact causation is irrelevant). We know this already.

The issue we are debating is whether it is appropriate to assign causation for educational failure in the absence of appropriate instruction. My position is that it's inappropriate and a logical fallacy to do so. Corey seems to think it's inappropriate as well, but nonetheless assigns causation anyway.

In summary of what went wrong at my school, let me be clear: myself and the other adults in the school failed the kids in many, many ways. The school was poorly run. I lacked adequate training. We could have done any number of things better (especially around discipline) and it would have helped the situation. But I stand by my assertion that the largest cause of problems at the school was the home life of the children.

Let's examine the reasons Corey gives:

1. If we took the low-SES kids in Corey's school and put them in a well-funded affluent suburban school, and took the high-SES kids and put them in Corey's school, then Corey's school would become a high-performing school and the affluent school would become a failing school. I agree. The differences between the schools are educationally superficial. Both schools are designed to educate middle-class kids, not low-SES kids.

Here's a better experiment. Take low-SES kids from their low-performing school and place them in a school with improved instruction and classroom management, and see if student achievement rises to the mean level of student performance. We already know the answer from Project Follow Through: student achievement will improve to mean levels. Providing appropriate instruction matters quite a bit--more than anything else that's been researched. The only reason a dispute exists today is that the research is ignored, perhaps conveniently so. Maybe Corey can supply an answer.

2. Some low-SES kids can be successfully taught in a selective private middle-class school; these students appeared to come from stable families. Again, I agree with Corey; there is a selection bias problem with this example. Some low-SES kids can access middle-class instruction in a middle-class school setting. Academic success breeds motivation and reduces behavioral problems, as does the presence of a supportive family. These kids are receiving appropriate instruction. Our concern is the kids who are not.

3. It's not possible to overcome home effects given current funding levels. I disagree. The correlation between school spending and student achievement is very low, scatterplot low. As Corey concedes, there are quite a few schools where low-SES students show high academic achievement at today's funding levels. These counterexamples don't prove the point, but they do disprove Corey's. What does prove the point is, once again, Project Follow Through, in which dozens of schools raised the achievement of low-SES students to mean levels with funding far below today's spending levels in a controlled experiment.

4. The Coleman Report showed that "homes influence academic achievement more than schools." Again I agree. Given that most schools are designed to educate middle-class kids, this result is not surprising. Again, Project Follow Through showed that school effects can compensate for the disadvantages resulting from home effects.

My syllogism for improving student achievement

To avoid any further confusion going forward, here is my hastily written syllogism with respect to improving the academic achievement of low-SES students:

Many low-SES kids are different from their middle-class peers--different in ways that make them more difficult to educate.

Most schools are designed to educate middle-class kids; as a result, many low-SES kids are unable to access the education provided in these middle-class schools.

Many low-SES kids require compensatory education, which includes improved instruction that compensates for their skill deficits and the use of improved classroom management practices.

Compensatory education can eliminate or substantially reduce many of the academic achievement gaps that exist between many, but not all, low-SES students and their middle-class peers.

Compensatory education can be provided at today's funding levels.

With proper training, many teachers would be capable of providing a compensatory education to low-SES students; but most currently lack these skills.

Most schools do not provide compensatory education to their low-SES students, and as a result the low-SES students who could have benefited from compensatory education fail academically.

Academic failure, coupled with present mainstreaming policies, exacerbates in-school behavioral problems, further decreasing the school's ability to educate the low-SES students who have access to the standard education provided.

Though many correlations can be found between factors thought to be associated with low SES and student achievement, there is little reliable evidence that improving one or more of these factors leads to increased student achievement. The weight of the evidence actually suggests the opposite -- that improving SES and the factors associated therewith will have either no effect on student achievement or a small educationally insignificant effect.

Nor is there reliable evidence that improving these SES factors will improve the academic benefits conferred by providing compensatory education.

There may exist other benefits to be gained by improving SES of low-SES people, but those benefits do not affect academic achievement and so should not be included under the mantle of education.

June 17, 2008

The Middle School Teacher Fallacy

One of the problems with the classroom observations of middle and high school teachers is that they are observing instructionally damaged students, and often the teachers don't seem to realize this.

Take for example this post by Corey Bunje Bower over at his Thoughts on Education Policy blog in which Corey explains why he supports the Broader Bolder Approach.

Here's the reality of the situation: schools can help, but they can't change everything. I taught for two years in an atrociously run middle school in the Bronx. Every day was filled with much chaos and little learning. And here's what I learned during my two years: the biggest reason for the failure of the school was what was happening at home. Now, that's not to say that we couldn't have done better. I could name a couple incompetent and mean-spirited administrators that should've been replaced. Us teachers certainly could have done better. The school could've been better organized. Any number of things would have helped the situation.

I don't think the conclusions Corey draws from his observations of middle school students are valid.

I don't see how Corey is able to separate the out-of-school effects from the in-school effects and conclude that the out-of-school effects are the primary cause of the chaotic and disruptive behavior of his middle school students.

One thing we do know is that, for whatever reason, the instruction at Corey's school failed many students. Corey concedes that the instruction was a problem. Because the instruction failed, these students experienced at least six years of academic failure before they even reached Corey's classroom. Academic failure is a leading cause of disruptive behavior.

The question remains: why did the instruction fail? There are two primary causes:

1. There was mostly something wrong with the school and its instructional delivery mechanism (curriculum and teachers), or

2. There was mostly something wrong with the students and their home life.

Corey, after taking some blame for the poor school environment, cites reason 2 as the primary cause for the educational failure he observed. Here are the reasons he gives:

But that doesn't hide the fact that most of the problems existed before students ever set foot in the building. I had some horribly disruptive students my second year. The only thing that seemed to help one's behavior was calling home. But when I did that he would come back with large welts across his arms. When I would call the home of another one, his mother would tell me that she didn't know what to do. Kids would stroll into the building late with an empty stomach and an angry demeanor. When an argument erupted, it was quite difficult to defuse because kids would tell me that they were under instructions from their parents not to "stay hit." Kids would come in and fall asleep b/c they had been kept up till the wee hours of the morning by any number of activities. A couple kids disappeared for a month to go visit family in the Dominican Republic. And on, and on, and on.

It appears that Corey is blaming the students' bad behavior as the primary culprit. But it sounds to me like Corey lacked the classroom management skills needed to establish a classroom atmosphere conducive to learning. I don't mean to blame Corey, because he most likely did not receive any practical training in how to manage the classroom full of unruly academic failures that he inherited.

Contrast Corey's observations with Engelmann's observations:

Our observations of many failed schools would ... disclose that most teachers either completely fail to manage children or rule through intimidation (yelling at children, issuing demeaning comments, but rarely praising children). The instruction that we see is technically unsound according to all the evidence on how to communicate effectively, how to achieve mastery, and how to reinforce and manage children effectively.


Teachers lecture for long periods of time. What “tasks” the teacher presents occur at a very low rate. There are no systematic correction procedures, no attempts to repeat parts that are difficult for the children, and no serious concern with whether children master the material. The pacing of the presentation is laborious. The material the teacher uses is far too difficult for the skill level of the children. Most of the students’ time is often spent on pointless “worksheet” activities. The students don’t like reading, math, or any other academic activity.

What Corey sees as a student problem, Engelmann sees as an instructional problem.

Many of the justifications I see for Broader Bolder actually have instructional roots, like the one Corey gives. The problems Corey sees may start at home, but there is no reason to believe that they cannot be solved and compensated for by schools.

Teacher Effectiveness is not very effective

The edusphere is talking about the new study on teacher effectiveness, The Persistence of Teacher-Induced Learning Gains. The important part is on page 4.

These studies consistently find substantial variation in teacher effectiveness. For example, the findings of Rockoff (2004) and Rivkin, Hanushek and Kain (2005) both suggest a one standard deviation increase in teacher quality improves student math scores at least 0.1- 0.15 standard deviations. Aaronson, Barrow and Sander (2007) find similar results using high school data.


Indeed, the inherent optimism of this literature is captured by an oft-cited statistic that matching a student with a stream of good teachers (one standard deviation above the average teacher) for 5 years in a row would be enough to complete eliminate the achievement gap between poor and non-poor students (Rivkin, Hanushek and Kain 2005).

I don't understand why people get all giddy over 0.1-0.15 standard deviation gains, i.e., educationally insignificant ones. Gains like this just don't show up in the real world. This isn't even a real scientific experiment with a control group taught by random teachers and an experimental group taught by super teachers. The study is just statistical game-playing, drawing conclusions from tiny correlations.
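To see why a 0.1-0.15 standard deviation gain is modest, consider what it does to percentile rank. Assuming a normal distribution of scores (a simplification), a student at the 50th percentile who gains 0.1-0.15 SD moves only to roughly the 54th-56th percentile:

```python
from statistics import NormalDist

# How far does a 0.1-0.15 SD gain move the median student on a
# standard normal curve of test scores?
nd = NormalDist()  # standard normal: mean 0, SD 1
for gain in (0.10, 0.15):
    pct = nd.cdf(gain) * 100  # percentile rank after the gain
    print(f"{gain:.2f} SD gain -> about the {pct:.0f}th percentile (from the 50th)")
```

A four-to-six percentile bump is real but a long way from closing a gap that is typically measured in whole standard deviations.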

I suppose they'll have to go to Lake Wobegon to find all these above-average teachers -- not that we know what to look for anyway.

In any event, this study seems to disprove the notion that the gains from years of successive great teachers are cumulative, since only about 20% of the gains are maintained.

The entire notion of cherry-picking great teachers was silly from the get-go.

June 16, 2008

Critical Thinking and How to Teach It

Buried in a lengthy, otherwise irrelevant post, gadfly Stephen Downes writes:

Our methods of teaching focus on the memorization of facts, rather than the cultivation of disciplines - such as, say, logic and critical thinking - that allow them to think for themselves.

Under the commonly understood meanings of these words, this statement is, not to put too fine a point on it, preposterous.

Current methods of teaching focus on the memorization of facts? Huh?

Not just the learning of facts, mind you, but the memorization of facts. Quick, somebody call Don Hirsch and tell him he can retire.

Then we get the highly misleading implication that critical thinking is somehow a discipline to be cultivated in and of itself, as opposed to a domain-specific skill.

If you want to learn about critical thinking, how it should be taught, and why Stephen is wrong, I advise you to read Dan Willingham's article, Critical Thinking: Why Is It So Hard to Teach?.

If you want to see Stephen make a fool of himself defending his statement and lose his cool to boot, see this comment thread.

June 15, 2008

The Evidence against Broader Bolder

Those behind the Broader, Bolder Approach manifesto conveniently fail to mention all the evidence against their broader, bolder approach, not to mention that no one has ever been able to use such an approach to "close [the achievement] gaps in a substantial, consistent, and sustainable manner."

In the interest of a more complete and more accurate presentation of the issues, here are a few posts debunking the broader, bolder rhetoric.

1. Achievement gaps can be eliminated by schools alone; providing comprehensive medical and social services was ineffective.

2. The folly of class size reduction

3. Improving teacher effectiveness isn't very effective

4. The inner city school counter-example

5. Teachers don't know best

6. The problem is vocabulary acquisition

7. Trying to improve student achievement by improving SES is a fool's errand

8. The lessons from Kansas City

9. Poverty, such as it is, is not the problem

10. The root cause of educational failure -- bad teaching

June 13, 2008

I Won

Chadd Lykins thinks my last post on the Bigger, Bolder manifesto represents the "awful state of education blogging."

The problem with these kinds of posts is that they are the poor man's substitute for rigorous criticism. Clearly, Chadd disagrees with my assessment of the Bigger, Bolder manifesto, but instead of doing the hard work of a point-by-point criticism he does a fly-by that addresses only a fraction of the arguments I made.

Let's examine Chadd's criticisms by comparing them point by point with what I wrote.

Point One: Appeal to Authority with No Authority

Chadd doesn't like my denigration of the manifesto signatories. Fair enough, Chadd is entitled to his opinion.

But my larger point is that the sixty signatories know no more about raising the achievement of low-SES students than your average Joe. I say this because none of them has actually raised the achievement of low-SES kids. They all have their theories and opinions, but those theories and opinions, by and large, aren't backed up by any hard science or education research.

Chadd, however, thinks that I shouldn't be criticizing the signatories because he assumes that I "ha[ve] not ... read the work of all 60 persons, but also [have not] met each of them in person."

In fact, I am familiar with much of the work of those signatories who have actually done "research" on education topics. It's much less than Chadd thinks.

I'm not sure why not having met each of them in person is relevant, maybe Chadd can explain.

Point Two: The manifesto is merely a repackaging of old, failed welfare programs.

Chadd fails to address this point.

Point Three: The manifesto advocates adopting all sorts of trendy interventions regardless of expense or demonstrated efficacy.

Chadd fails to address this point.

Point Four: We're already doing much of what is being advocated in the manifesto; merely doing more of it will produce small marginal improvements at best, and at great marginal cost.

Chadd fails to address this point.

Point Five: The amount of variance attributable to SES (under the most generous of interpretations) is, at best, low; therefore, the improvement we'd expect from programs directed at improving SES will be small and educationally insignificant.

Chadd's criticism here is that my explanation is confusing. Confusing doesn't mean wrong. And Chadd doesn't argue that I'm wrong. What Chadd does argue is that I "fail[ed] to acknowledge that [the low variance attributable to SES] is a much larger chunk than many school inputs predict."

Many, but not all. There are some instructional interventions that are far more effective than the SES interventions advocated in the manifesto. And by far more, I mean effects four to five times as large as the best SES intervention.

The existence of ineffective instructional interventions, which are legion, is irrelevant. Nor is it convincing to compare your pet intervention to these failures. The relevant comparison is between your pet intervention and the most effective interventions which are all instructional interventions.

Chadd concludes:

I realized that most of us blog quickly and episodically, thus meaning that our thoughts may be ill-formed or hastily expressed. Thus, we bloggers are wise to strike a tone of modesty and generosity in our posts.

I think Chadd needs to start listening to his own advice. Chadd is making an extraordinary criticism (i.e., characterizing my post as representing the awful state of education blogging), which carries a high burden of proof (if one wants to be modest), a burden that Chadd has failed to meet.

Update: Chad responds.

Chad is backpedaling.

Chad doesn't like "posts which resort to name-calling." I guess Chad doesn't think characterizing other bloggers' posts as representative of the "awful state of education blogging" is a form of name-calling. Pot. Kettle. Black. Of course, what Chad and I are both doing is engaging in the rhetorical device of ridicule:

Words intended to belittle a person or idea and arouse contemptuous laughter. The goal is to condemn or criticize by making the thing, idea, or person seem laughable and ridiculous. It is one of the most powerful methods of criticism, partly because it cannot be satisfactorily answered ("Who can refute a sneer?") and partly because many people who fear nothing else--not the law, not society, not even God--fear being laughed at.

Chad also might want to look up the definition of irony.

Chad also believes that my post "fail[ed] to consider the merits of other positions." I didn't fail to consider anything. I considered and refuted the merits of the Bigger, Bolder position both in the post and in other posts (which I indicated at the top of the post).

Lastly, Chad believes my post "carr[ies] the pretense of certainty and arrogance when doubt and modesty are more appropriate." Another ironic criterion, since Chad, a new blogger, is doling out awards ridiculing other bloggers on what I have hopefully established are dubious grounds.

June 12, 2008

A Dopier Approach to Education

I'm a little late to this game, and most of the other edubloggers have already commented on the latest educational snake oil, EPI's Bigger, Bolder Education Approach. Conspicuously absent from that list of adjectives is "smarter."

Half of the posts on this blog are devoted to debunking the nonsense contained in this initiative, but I want to say a few words anyway.

First, what we have here is an Appeal to Authority without any authority. It's not like any of the sixty jackasses, er, quasi-celebrities who lent their well-known names to promote this initiative know any more about raising the achievement of low-SES children than you do.

Second, there's nothing bigger or bolder about this thing. It's the same old meme Rothstein and his ilk have been peddling for decades. The only thing that has changed is the packaging. The irony of that fact should not be lost on anyone who knows anything about education.

Third, they've taken a kitchen sink approach to the problem. It appears that they're advocating the adoption of every single "intervention" that's ever been researched (and I'm using their definition of "research" which clearly includes a lot of stuff which fails to meet even the low standards of social science research). Most of the crap they've included, no doubt to gain the signature of the researcher who performed the crappy research, has low effect sizes, and by low I mean not educationally significant.

Fourth, all the low-hanging fruit has already been picked. We already do most of the crap being advocated; the initiative is bigger and bolder only because it asks us to do more of what we've already been doing. Apparently, they won't be satisfied until we do enough of it to qualify as bigger and bolder. Another way to look at it is that they want us to throw a bigger and bolder amount of money at the existing programs--programs that aren't working so well now. We've already implemented all the easy things that offer the biggest bang for the buck (and quite frankly the bang has proven to be more like a whimper). Now we're stuck spending lots more money trying to eke out marginal improvements. (It's like regulating pollution: we've already stopped polluters from dumping waste directly into the air and water; now we're arguing about reducing some pollutant by so many parts per million or billion, the effect of which will be de minimis and of questionable value.)

Fifth, by their own rhetoric we should not expect any of these initiatives to have much of an effect. The correlation (R) between socio-economic status and student achievement is a somewhat low 0.4. This means that, at best, the amount of variance (R²) in student achievement attributable to SES is only about 16%. The vast majority (84%) of the achievement gap is therefore not attributable to SES. So, even if all the snake oil directed at improving SES is implemented as proposed, the results are going to be extremely disappointing (which I suspect will serve as the basis for a renewed, even bigger and bolder rallying call). But we know this already. The research tells us that student achievement won't be raised in a sustainable way through any of these initiatives, either alone or in combination. So why is this initiative being peddled as an education intervention? It's not one.
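The arithmetic behind that claim is just squaring the correlation. A quick sketch, using only the 0.4 figure cited above:

```python
# Variance in achievement explained by SES, given the correlation cited above.
r = 0.4                      # correlation between SES and student achievement
explained = r ** 2           # R^2: share of variance attributable to SES
unexplained = 1 - explained  # share NOT attributable to SES

print(f"explained: {explained:.0%}, unexplained: {unexplained:.0%}")
# explained: 16%, unexplained: 84%
```

Squaring the correlation is what turns a "somewhat low" 0.4 into a decidedly low 16% of variance explained.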

It's a shame that so much attention has been paid to such dubious policy.

June 9, 2008

Today's Video

Today's video is a remake of Dan Willingham's brain-based education video from a few weeks back. This time with 87% better production values.

Dan should do a lot more of these.

June 4, 2008

The problem with education research

Eduwonk asks his readers what they would do with $5 billion to improve American education.

Many responded that they'd funnel the money into various education research projects -- some sensible, some not so much.

Show of hands. Who here thinks that education research will improve education?

If you answered yes, do you know about Reading First?

If you know about Reading First, how could you have answered yes?

Our problem isn't necessarily the lack of quality research. Our problem is that schools often aren't following the research (for various reasons).

For example, high-quality reading research has existed for quite some time now and points to very specific teaching practices. Educators, however, have not voluntarily adopted these research-based practices. Why should they? They like what they are doing. There is no incentive in the system to improve or get better results. Results don't matter. This remains true for most schools under NCLB.

Reading First was a top-down legislative attempt to bribe educators into adopting these research-based practices. That attempt did not survive the legislative process intact. In the end, the legislative language was so watered down (intentionally, mind you) that the vast majority of funding went to reading programs with no research base. Some legislators wanted funding to be distributed broadly rather than concentrated on the few reading programs with a genuine research base. Other legislators wanted funding to go to the reading programs of their political supporters, regardless of the program's research base. (Let's not even get into the completely ineffective research that was commissioned with Reading First funds, which will likely yield no usable results.) In the end, it proved politically impossible to enact the law as intended, even though it had broad bipartisan support.

Right now the best indicator that education research will continue to have no effect on education policy is what the publishers of all those bogus reading programs that received funding under Reading First are doing. Or rather, what they're not doing -- they are not conducting research on their own products, despite reaping a huge windfall from Reading First. They're not doing it because research doesn't matter. It didn't matter before Reading First. It didn't matter under Reading First. And there are no signs it will matter after Reading First. In short, why should publishers waste profits on research that ultimately won't affect their sales?

Do you still think education research matters? Tell me why.

June 2, 2008

Does money matter in education

Over at the Quick and The Ed, Kevin Carey writes:

I'm not one of those people who believe that "money doesn't matter" in education. That's absurd; money matters a great deal, and there are plenty of schools that don't get their fair share.

While I tend to agree with Kevin Carey on education policy matters (at least the non-financial ones), I tend to disagree with all of his non-education policy posts over at The Quick and the Ed. And since I try to focus this blog on education policy, let's leave it at that. (I also don't mean to single Kevin out since this is a common meme in the edusphere.)

Nonetheless, the "money doesn't matter" meme is not as absurd as Kevin is suggesting, provided that it's qualified with "at today's funding levels."

At today's funding levels, money doesn't matter. The correlation between school expenditures and student performance is very low to non-existent.

The fact that some schools "don't get their fair share" is irrelevant. The important question is whether schools are getting sufficient funding to educate their students. No one knows the answer to that question. What I do know is that many schools with low funding outperform many schools with much higher funding.

Let me cherry-pick some data points like Kevin did in his School Funding’s Tragic Flaw study.

From the 2005 Pennsylvania dataset from SchoolDataDirect, I found ten high-performing/low-funded school districts and ten low-performing/high-funded school districts based on 11th grade test results and total expenditures. The districts had similar poverty rates and minority attendance. Here's a graph of those 20 schools.

None of the high-performance/low-expenditure schools got their fair share. On average, they had $6,880 less in funding (44% less) than the low-performance/high-expenditure schools. That's not fair, now is it? Of course, it didn't stop them from performing over a standard deviation better than the high-expenditure schools. Perhaps if the high-expenditure schools received less funding they would have performed better?
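The post reports the funding gap ($6,880) and the percentage (44%) but not the group averages they imply. A back-of-the-envelope check, where the high-expenditure average is an assumption chosen to match the stated figures:

```python
# Reconstructing the funding gap cited above. The high-expenditure group
# average is NOT given in the post; ~$15,640 is assumed here because it is
# consistent with a $6,880 gap being 44% of the larger figure.
high_exp = 15_640            # assumed avg funding, low-performing districts
gap = 6_880                  # dollar gap reported in the post
low_exp = high_exp - gap     # implied avg funding, high-performing districts

print(f"gap as share of high-expenditure funding: {gap / high_exp:.0%}")
# gap as share of high-expenditure funding: 44%
```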

Not content to merely cherry pick schools to "prove" my point, I decided to take a closer look at the dataset and focus on the performance of the economically-disadvantaged students.

First I looked at the performance of economically-disadvantaged students in the best (top decile) schools. The average proficiency was 66.7%, well above the state-wide mean of all students (59.1%). These schools had expenditures averaging $11,640.

Then I looked at the performance of economically-disadvantaged students in the worst (bottom decile) schools. The average proficiency in the worst schools was 20.1%, well below the state-wide mean of all students. These schools had expenditures averaging $11,926.

Similar expenditures, vastly different outcomes.

Then I looked at it the other way.

I looked at the performance of economically-disadvantaged students in the best funded school districts (top-decile). The average proficiency in these rich schools was 41.1%, well below the state-wide mean of all students. The average expenditures for these schools was a whopping $16,260.

Then I looked at the performance of economically-disadvantaged students in the worst-funded (bottom-decile) schools. The average proficiency in these poor schools was 45.1%, well below the state-wide mean of all students. The average expenditures for these schools was only $8,604.

You read that right. The economically-disadvantaged performed about a quarter of a standard-deviation better in the poorly funded schools. To put that in perspective, that's about the differential you'd expect from reducing class-sizes or hiring more effective teachers. (See here.) But, in this case, it worked the opposite way.

You would think that these highly funded schools pissed most of that funding away on facilities and other window dressings, but surely some of it went to reducing class-sizes and hiring more experienced teachers -- two of the more favored educational panaceas. But, if it did, it still didn't work as the literature suggests it might. Which shouldn't be surprising if you are familiar with the way monopolies work, especially state-run monopolies.

You'd be hard-pressed to torture the data in a way that permits you to conclude that "money matters" at these funding levels.