April 1, 2007

Effective Mathematics Instruction: The Importance of Curriculum

I found a nice little study comparing a fourth grade Direct Instruction math program with a well-regarded fourth grade constructivist program. The results were surprising, to say the least.

The study, Effective Mathematics Instruction: The Importance of Curriculum (Crawford & Snider, 2000, Education & Treatment of Children), compared the fourth grade Direct Instruction math curriculum Connecting Math Concepts (CMC, Level D) to the constructivist fourth grade math curriculum Invitation to Mathematics (SF), published by Scott Foresman.

Invitation to Mathematics (SF)

SF has a spiral design (but of course) and relies on discovery learning and problem-solving strategies to "teach" concepts. The SF text included chapters on addition and subtraction facts, numbers and place value, addition and subtraction, measurement, multiplication facts, multiplication, geometry, division facts, division, decimals, fractions, and graphing. Each chapter in the SF text interspersed a few activities on using problem-solving strategies. Teacher B taught the 4th grade control class. He was an experienced 4th grade math teacher and had taught using the SF text for 11 years.

Teacher B's math period was divided into three 15-minute parts. First, students checked their homework as B gave the answers. Then students told B their scores, which he recorded. Second, B lectured or demonstrated a concept, and some students volunteered to answer questions from time to time. The teacher presentation was extemporaneous and included explanations, demonstrations, and references to text objectives. Third, students were assigned textbook problems and given time for independent work.

The SF group completed 10 out of 12 chapters during the experiment.

Connecting Math Concepts (CMC)

CMC is a typical Direct Instruction program with a stranded design in which multiple skills/concepts are taught in each lesson; each skill/concept is taught for about 5-10 minutes per lesson and revisited day after day until it has been mastered. Explicit instruction is used to teach each skill/concept. CMC included strands on multiplication and division facts, calculator skills, whole number operations, mental arithmetic, column multiplication, column subtraction, division, equations and relationships, place value, fractions, ratios and proportions, number families, word problems, geometry, functions, and probability. Teacher A had 14 years of experience teaching math. She had no previous experience with CMC or any other Direct Instruction program. She received 4 hours of training at a workshop in August and about three hours of additional training from the experimenters.

Teacher A used the scripted presentation in the CMC teacher presentation book for her 45-minute class. She frequently asked questions to which the whole class responded, but she did not use a signal to elicit unison responding. If she got a weak response, she would ask the question again to part of the class (e.g., to one row or to all the girls) or ask individuals to raise their hands if they knew the answer. There were high levels of teacher-pupil interaction, but not every student was academically engaged. Generally, one lesson was covered per day, and the first 10 minutes were set aside to correct the previous day's homework. Then a structured, teacher-guided presentation followed, during which the students responded orally or by writing answers to the teacher's questions. Student answers received immediate feedback and errors were corrected immediately. If there was time, students began their homework during the remaining minutes.

The CMC group completed 90 out of 120 lessons during the experiment.

The Experiment

Despite the differences in content and organization, both programs covered math concepts generally considered to be important in 4th grade--addition and subtraction of multi-digit numbers, multiplication and division facts and procedures, fractions, and problem solving with whole numbers.

Students were randomly assigned to each 4th grade classroom. The classes were heterogeneous and included the full range of abilities, including learning disabled and gifted students. There were no significant pretest differences between students in the two curriculum groups on the computation, concepts, and problem-solving subtests of the National Achievement Test (NAT) or on the total test scores. Nor did any significant pretest differences show up on any of the curriculum-based measures.

The Results

Students did not use calculators on any of the tests.

The CMC Curriculum Test

For the CMC measure the experimenters designed a test that consisted of 55 production items for which students computed answers to problems, including both computational and word problems. The CMC test was comprehensive as well as cumulative; problems were examples of the entire range of problems found in the last quarter of the CMC program. Problems were chosen from the last quarter of the program because the various preskills taught in the early part of the program are integrated in problem types seen in the last quarter of the program.

The results here were not surprising, although the magnitude of the difference between the two groups may be.

The SF class averaged 15 out of 55 (27%) correct answers on the posttest, up from 7 out of 55 correct on the pretest. The CMC class averaged 41 out of 55 (75%) correct on the posttest, up from 6 out of 55 correct on the pretest. I calculated the effect size to be 3.25 standard deviations, which is enormous, though this measure was biased in favor of the CMC students.
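(For anyone who wants to check the arithmetic: I'm assuming the usual Cohen's-d-style calculation here, i.e., the difference between the group means divided by a pooled standard deviation. The study's standard deviations aren't reproduced in this post, so the SD of 8 in the little sketch below is an invented placeholder that happens to reproduce the 3.25 figure, not a number from the paper. The same formula is behind the other effect sizes quoted further down.)

# Sketch of a Cohen's-d-style effect size calculation (Python).
# The posttest means (41 and 15 out of 55) are the ones reported above;
# the standard deviation of 8 is a hypothetical placeholder, NOT the study's value.
import math

def effect_size(mean_treatment, mean_control, sd_treatment, sd_control):
    """Difference in group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt((sd_treatment ** 2 + sd_control ** 2) / 2)
    return (mean_treatment - mean_control) / pooled_sd

print(effect_size(41, 15, 8, 8))  # (41 - 15) / 8 = 3.25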


The SF Curriculum Test

The SF test was published by Scott, Foresman to go along with the Invitation to Mathematics text and was the complete Cumulative Test for Chapters 1-12. It was intended to be comprehensive as well as cumulative. The SF test consisted of 22 multiple-choice items (four choices) which assessed the range of concepts presented in the 4th grade SF textbook.

The SF class averaged 16 out of 22 (72%) correct answers on the posttest, up from 4 out of 22 correct on the pretest. However, surprisingly, the CMC class averaged 19 out of 22 (86%) correct on the posttest, up from 3 out of 22 correct on the pretest. I calculated the effect size to be 0.75 standard deviations, which is large, even though the test was biased in favor of the SF students.

The Math Facts Test and the NAT Exam

The CMC group also scored significantly higher on rapid recall of multiplication facts. Of 72 items, the mean number correctly answered in 3 minutes on the multiplication facts posttest was 66 for the CMC group compared to 48 for the SF group. I calculated the effect size to be 1.5 standard deviations.

Posttest comparisons on the computation subtest of the NAT indicated a significant difference in favor of the CMC group. Effect size = 0.86. On the other hand, neither the scores for the concepts and problem-solving portion of the NAT nor the total NAT showed any significant group differences. The total NAT scores put the CMC group at the 51st percentile and the SF group at the 46th percentile, but this difference was not statistically significant.

Discussion


The CMC implementation was less than optimal, yet it still achieved significantly better performance gains compared to the constructivist curriculum. The experimenters noted:

We believe this implementation of CMC was less than optimal because (a) students began the program in fourth grade rather than in first grade and (b) students could not be placed in homogeneous instructional groups. A unique feature of the CMC program is that it's designed around integrated strands rather than in a spiraling fashion. Each concept is introduced, developed, extended, and systematically reviewed beginning in Level A and culminating in Level F (6th grade). This design sequence means that students who enter the program at the later levels may lack the necessary preskills developed in previous levels of CMC. This study with fourth graders indicated that even when students enter Level D, without the benefit of instruction at previous levels, they could reach higher levels of achievement in certain domains. However, more students could have reached mastery if instruction were begun in the primary grades.

Another drawback in this implementation had to do with heterogeneous ability levels of the groups. Heterogeneity was an issue for both curricula. However, the emphasis on mastery in CMC created a special challenge for teachers using CMC. To monitor progress CMC tests are given every ten lessons and mastery criteria for each skill tested are provided. Because of the integrated nature of the strands, students who do not master an early skill will have trouble later on. Unlike traditional basals, concepts do not "go away," forcing teachers to continue to reteach until all students master the skills. This emphasis on mastery created a challenge for teachers that was exacerbated in this case by the fact that students had not gone through the previous three levels of CMC.

Why didn't the CMC gains show up on the NAT problem solving subtest and total math measure? The experimenters opine:

Our guess is that a more optimal implementation of CMC would have increased achievement in the CMC group, which may have shown up on the NAT. In general, the tighter focus of curriculum-based measures such as those used in this study makes them more sensitive to the effects of instruction than any published, norm-referenced test. Standardized tests have limited usefulness for program evaluation when the sample is small, as it was in this study (Carver, 1974; Marston, Fuchs, & Deno, 1985). Nevertheless, we included the NAT as a dependent measure because it is curriculum-neutral. The differences all favored the CMC program.

That no significant differences occurred either between teachers or across years on the NAT should be interpreted in the light of several other factors. One, the results do not indicate that the SF curriculum outperformed CMC, only that the NAT did not detect a difference between the groups, despite the differences found in the curriculum-based measures. Two, performance on published norm-referenced tests such as the NAT are more highly correlated to reading comprehension scores than with computation scores (Carver, 1974; Tindal & Marston, 1990). Three, the NAT concepts and problem solving items were not well-aligned with either curriculum. The types of problems on the NAT were complex, unique, non-algorithmic problems for which neither program could provide instruction. Performance on such problems has less to do with instruction than with raw ability. Four, significant differences on the calculation subtest of the NAT favored the CMC program during year 1 (see Snider and Crawford, 1996 for a detailed discussion of those results). Because less instructional time is devoted to computation skills after 4th grade, the strong calculation skills displayed by the CMC group would seem to be a worthy outcome. Five, although the NAT showed no differences in problem solving skills between curriculum groups or between program years, another source of data suggests otherwise. During year 1, on the eight word problems on the curriculum-based test, the CMC group outscored the SF group with an overall mean of 56% correct compared to 32%. An analysis of variance found this difference to be significant...

And, here's the kicker. The high-performing kids liked the highly-structured Direct Instruction program better than the loosey goosey constructivist curriculum:

Both teachers reported anecdotally that the high-performing students seemed to respond most positively to the CMC curricula. One of Teacher A's highest performing students, when asked about the program, wrote, "I wish we'd have math books like this every year.... it's easier to learn in this book because they have that part of a page that explains and that's easier than just having to pick up on whatever."

It may be somewhat counter-intuitive that an explicit, structured program would be well received by more able students. We often assume that more capable students benefit most from a less structured approach that gives them the freedom to discover and explore, whereas more didactic approaches ought to be reserved for low-performing students. It could be that high-performing students do well and respond well to highly-structured approaches when they are sufficiently challenging. These reports are interesting enough to bear further investigation after collection of objective data.

17 comments:

Anonymous said...

Are these changes in the department of education something to be worried about?
http://www.leadertalk.org/2007/04/sweeping_change.html

CrypticLife said...

Sadly, my son is in a first grade TERC (i.e., constructivist math) classroom. As such, his ability to multiply, divide, square numbers, work competently with negative numbers (including multiplication/division), and do multiple-digit addition/subtraction is largely ignored. However, should he make a counting error when counting up 30 or 40 blocks, it's a mark against him.

American primary educational philosophy seems to labor under a strange view that kids can't learn math. Teachers seem to think kids in other countries, particularly Asian countries, are simply innately good at math. This is simply untrue (in fact, Richard Nisbett in his book The Geography of Thought cogently suggests that Westerners may think more reductionistically, i.e., in a more math-friendly fashion).

TurbineGuy said...

crypticlife,

Somehow even with a back to basics curriculum, I doubt that your son would learn to "multiply, divide, square numbers, work competently with negative numbers (including multiplication/division), and do multiple-digit addition/subtraction" in 1st grade.

My biggest issue with my 1st grader's classroom is that they aren't required to master the addition (and subtraction) facts. It is so annoying to see kids counting on their fingers while adding 7 + 3.

TurbineGuy said...

crypticlife,

Apologies, I misread your post...

It sounds like your 1st grader has the same problems that my 3rd grader has. I am willing to bet that he could survive in Algebra, but is stuck doing stuff he mastered last year.

Why oh why is Ability Grouping such a dirty phrase?

Anonymous said...

Ooh! I know! Pick me!!!

Abilities grouping is considered "bad" (even though if we were honest with ourselves we would know it was necessary) because even in second grade kids know if they are in the "smart", "average" or "dummy" class. And believe me, those are the words we used in the 70s.

Can't have little Johnny thinking he's a dummy. It would be bad for his self-esteem. Maslow's Pyramid is more important than math, you know.

Anonymous said...

shortwoman,

Don't forget that the struggling students learn just by being in the presence of the advanced students. Osmosis, you know.

KDeRosa said...

I like Engelmann's take the best:

"The notion that the lower performers are humiliated if they are in a homogeneous group with other lower performers is actually backwards. They will suffer far more if they are placed far beyond their level of skill and knowledge, because they will receive an uninterrupted flow of evidence that they are dumber than all the other children in the group."

Anonymous said...

Why is "ability grouping" the great unmentionable? Because the racial and ethnic makeup of the groups is politically intolerable.

Anonymous said...

Anon:

Even though there may be a race/class component in many districts, I went to a lily white suburban school (a local apartment manager was regularly threatened by the locals about what might happen should she rent to any of those **** [insert racial slur here]) with a solid tax base of mostly middle class housing. This school was as culturally homogenous as you can get outside a rerun of some old sitcom.

Even in the absence of stereotypical low-performing minorities, abilities grouping had to be abandoned.

Joanne Jacobs said...

Success for All groups students by reading performance so that the teacher can focus on the skills the whole group needs to learn. I reported on a number of Success for All schools. Teachers said student behavior improved significantly. Poor readers, they thought, had been living in fear of being called on to read in class. So they kept their classes in an uproar to hide their incompetence. In a small group of readers with similar skills, students felt safe from humiliation.

I think "ability" grouping is a poor term. There are many reasons why some kids have trouble with reading or math that have nothing to do with innate ability. For example, they may have been taught poorly.

Anonymous said...

"I think "ability" grouping is a poor term. There are many reasons why some kids have trouble with reading or math that have nothing to do with innate ability. For example, they may have been taught poorly."

This is why many don't like tracking in the lower grades. It's quite understandable. When schools have such trouble with curricula and teaching methods, who can trust their ability to correctly place students? It may just allow them to ignore real problems and blame it on the students.

But just because schools are bad at teaching and selecting curricula doesn't mean that ability grouping shouldn't be done. Perhaps it would sound better if you called it capability grouping. It won't make their teaching problems go away, but it will sure help those who are ready and willing for more.

Look at it this way: if you don't separate kids by capability, that doesn't guarantee better teaching methods and curricula either. Perhaps capability grouping would get parents to question why their child is not in the faster paced group. Schools would have to justify their decisions. Without grouping, it's like Sergeant Schultz ..."I see nothing! NOTHING!"

By high school, everyone does ability grouping, but by then everything gets attributed to external reasons.

TurbineGuy said...

I don't think I buy the arguments against ability grouping. They seem to assume that kids will be put into a "slower" group because they haven't been taught the material earlier, but this doesn't have to be the case.

Ideally kids would be grouped into levels based on current "level" and "speed".

For example, imagine a program that had several levels (roughly equivalent to grades): 1, 2, 3, 4, 5.

Now within each level there would be three groups corresponding to the pace at which that group is able to get through the program, i.e., a for slower kids, b for average, c for faster kids.

This means my 3rd grader might be placed in level 2, but he could also be placed in group 2c which would have him up to grade level within a year… and beyond it in another year.

Also note that, because of the grouping, even kids in the slowest group would still move at a pace that ensured they were progressing through the level in one calendar year.

Summary: I want ability groups based on "potential" not on simply current level.

Yes... some kids will move through the system exponentially quicker... so what... at least the kids who struggled a little more would move through the system at a quicker pace than they do now and not get left behind.

Right now our system takes all kids through to about 6th grade level at the same time. 1/3 are held back from what they are capable of… 1/3 are just right… and 1/3 are completely left behind with no way to catch up.

My daughter is sinking in reading in her current 1st grade class, but she is still tested on and expected to read books way above her current level. She is slowly learning to hate reading, homework is torture for her, and she has recently taken to calling herself a dummy. I would much rather she be placed in a class that is on her level and moves at a pace at which she can master what she needs to.

(Note: she also needs to get out of a classroom that promotes whole language)

CrypticLife said...

When I was in Junior High School, there were 5 levels in 8th grade math. I was in the second highest group, and performed rather poorly there (I believe because of my teacher's rather strict insistence on memorizing formulae -- when we had a substitute for about a week, my performance suddenly shot up).

The teacher decided that due to performance I should be moved down a level -- in 9th grade there were only four levels, so I was moved into the third group (even though those who had been in the third group the prior year went into the second group in the revised system -- I'm not sure, I think there was some confusion somewhere). Yes, being moved from the "pretty smarts" to the "pretty dumbs" did bother me for a bit, though I never actually heard any comments on it from other students.

I had no idea at the time, but this ended up being one of the best things that happened during my high school career.

We went over things laboriously and repeatedly. Over and over, again and again long after I knew the material. Immediately I aced every test, of course, which wasn't a great motivator but made me feel good anyway.

It's also worthwhile to note that it helped me develop more sympathy for those with lesser abilities.

By early 11th grade, I would correct the teacher regularly. By the end of 11th grade, I would see proofs of problems the teacher wasn't aware existed. In 12th grade, I started coming in to school early, talking to the calculus teacher for 15 minutes or so about his lesson, and then going out and helping kids do their calculus homework in the hall. My base of knowledge was solid enough that picking up new information was a breeze.

I disagree with stevenh on changing it to "capability" grouping. In fact, to my ears this sounds even worse -- it's like saying the kids will always be at the lower rungs. Children do progress at different paces at different times, and have spurts in their intellectual growth just as in their physical growth. Ability grouping is just that -- current ability grouping. Joanne, I understand your concerns, but the reasons behind the differing ability are not necessarily at issue (except for the kids). It's not "innate ability" they're being grouped by. No one can judge their innate ability anyway.

Anonymous said...

The sad part about school today is the fact that the people trying all this new, progressive pedagogy like constructivist math don't have a clue what basic education is really about. They spend so much time playing games and doing tricks that the students don't learn the fundamentals.

I have seen from experience the damage done to children who do not learn the critical fundamentals in the early grades. Having been a fourth grade teacher for four years, I have seen the disaster "progressive" education has created for some children. I personally have a student in my EIP class (Early Intervention Program, a class limited to 14 low-performing regular education students) who has been retained once, spent second and third grade being "taught" by two supposedly "good" teachers, and cannot decode at all. She was never taught it. It is not a learning disability. I am now trying to teach this child basic decoding, AFTER BEING IN AN AMERICAN PUBLIC SCHOOL FOR 5 3/4 YEARS.

I have encountered NUMEROUS students who, even to this day, do not know basic addition, subtraction, and multiplication facts entering the fourth grade. Fifth grade teachers lament the fact that students do not know basic facts.

I think the problem is the lack of SYSTEMATIC instruction. In many places, like my current school, some teachers are allowed to do whatever they damn well please. Unfortunately, the students suffer under the tutelage of the "all high and mighty." These all-high-and-mightys are either the ones who insist on games and gimmicks or the ones too lazy to properly teach, who subsist on busy work and worksheets.

I have used a program with components similar to DI (Voyager Time Warp). I like DI because of its simplicity and effectiveness. It takes the mystery out of instruction. It is consistent. It works for both the fresh rookie and the experienced veteran. It engages students. No child is left behind. If these teachers had been using DI instead of doing their own thing, these kids would be able to read now.

Thomas

Anonymous said...

Rather than "ability" or "capability" grouping, why not call it "readiness" grouping. There are all sorts of reasons for being ready, or not ready, for a particular level of instruction. Prior instruction, missing school because of illness, innate ability, emotional upset -- the list goes on.

dweir said...

re: Grouping

I wish this were just a matter of semantics, but the politics of hurt feelings has a powerful lobby. So, we must be careful with our word choice.

Joanne hit the nail on the head. These are skill-based groups. These groups can be formed without assessing fuzzy measures such as potential and readiness, or, ideally, even age. Rather, membership should be based on what skills the child has attained and the rate at which they can acquire new skills.

Skill-based groups should be unique for each subject area, including art, music, and gym. Such grouping should allow fluid movement between groups throughout the year. The name "skill-based" should remind teacher and student that their task is to demonstrate skill mastery through any number of measurable objectives.

An art teacher I once worked with defined talent as the outward representation of acquired skills. Anyone could learn these skills. Some will master them faster or with a greater degree of perfection than others, but we can all improve our skills. The physicist Richard Feynman knew this when he sought out someone who could teach him portraiture.

There should be no shame associated with skill based grouping, whatever it is called. I believe that by ensuring all subjects are so grouped, almost everyone is going to find themselves in a variety of levels as compared to their peers. Most importantly, the requirements for advancing to the next group should be clear and explicit, with all children given the opportunities to stretch themselves or be comfortable as they choose.

Travis Burke said...

Ability grouping happens all of the time in schools. The reason it is frowned upon is that these groups were historically static, keeping children on the same track with the same other students at the same rate of learning.

The newest term is invitational groups, based on specific skills. So, if you had a class with a portion of fifth graders that needed decoding work, you could work with that group for just that skill. Students could move in and out of these groups as skills were brought up to grade level (mastery is such a false term). This flexible grouping allows for students to spend most of their time in independent practice and inquiry, while meeting for specific skill work that is needed.

In ability grouping, students are usually given a large assessment that measures a general level. If students are grouped based on this assessment, they will not receive specific skill work, rather, they will progress at a learning rate at this level with similar peers. This just puts children on a bell curve in a smaller group. This approach is better than the whole class model, but still not efficient or effective for each child.

A teacher who wants to effectively group students homogeneously will look at the results of a large assessment and then get more information about students at levels of need by conducting authentic assessments such as interviews, "kid watching," running records, and informal reading assessments from student-chosen and teacher-chosen text. This will allow the teacher to get a better view of the whole child.

As far as constructivism goes, it is a misconception that it is a pedagogical strategy. Constructivism is Piaget's theory of how students learn. When applied to education, it is very effective, and it takes a masterful teacher and classroom manager. It is not an easy philosophy to be successful with because the way our students are tested, the way schools are organized and designed, and the culture of the public are all in line with traditional practices. In mathematics, children should learn the same content as in traditional courses yet not rely on the same strategies, such as memorization of an algorithm. Instead, they should be coached to acquire more and more sophisticated problem-solving strategies that are elegant, efficient, and practiced. Children should have strategies for solving a multi-digit multiplication problem that are based on their own construction of knowledge, are accurate, and work well for the child.

A child that has been in an effective constructivist setting should view each problem and devise a strategy that is most efficient for that problem (and accurate, logical of course). So for math facts, memory is typically the best first strategy. If memory is not sufficient or developed around these facts, a child should still be able to construct an accurate response in a reasonable amount of time.

The hardest thing to see is the role of the teacher. Teachers should be facilitators, coaches, and learners all in one. When working with a math concept, a teacher can control the direction of the class by having clear learning targets and seeking out mathematical thinking from students to meet these targets. So, for a multiplication lesson, if I wanted to see my class move from counting on fingers to using an algorithm, I would provide time for my students to work independently and together on a situation or problem(s), and then look for a progression of strategies that students were using that I could teach from via student presentations. I would allow the students to share and discuss a few strategies, and use this work to help my class see the connections and practice the more efficient strategies.

This teaching keeps all students in their zones of proximal development and allows them to accelerate by seeing and practicing strategies that are on the cusps of their learning edges.

We have a long way to go as educators as far as professional development goes, but many great things are happening in progressive education that cannot be ignored.