Tougher reading program means low city grades
Let's see what's going on.
Ok, so it wasn't the more rigorous program that caused the drop in grades, it was the switch to a standardized grading system. But it gets better.

Parents of some Pittsburgh elementary school students will find an unwelcome surprise -- unusually low marks in reading -- when their children bring home report cards Nov. 17.
Because the Pittsburgh Public Schools this fall introduced a standardized grading system and what it described as a more rigorous reading program, some students have seen their performance slip on classroom tests.
That will translate into lower grades on report cards than parents are accustomed to seeing, said Susan Sauer, curriculum supervisor for elementary reading, and Barbara Rudiak, executive director for 18 district elementary schools. Some parents already have noticed a drop in their children's test scores.
"This has created a certain amount of controversy with principals, parents and teachers," said Dr. Rudiak, who is project manager for the "Treasures" reading program, purchased from Macmillan/McGraw-Hill for about 13,250 students in kindergarten through grade five. The program is also used in elementary classrooms at the district's K-8 schools and accelerated learning academies.Low grades cause controversy because they show schools aren't teaching. Most professions would respond to feedback indicating widespread practioner underperfformance by trying to self-regulate and improve before the government steps in. What do you think the educrats will do?
Go read RightWingProf's post on academic groupthink first and see if that influences your answer. We'll wait.
While we wait, let me tell you about Pittsburgh's new reading program, "Treasures," a new basal reading program that tries to mimic reading research in an attempt to qualify for Reading First funds. The important thing is that it has not been researched and it has no evidence of success. What it does have is lots of pretty pictures. Many school districts selected the program on that basis rather than on the more rigorous "does it actually work" basis. Let's dub this the Pretty Picture Fallacy.
But even in education, you reap what you sow, pretty pictures notwithstanding, as we're about to find out.
Aligned to state learning standards and the federal No Child Left Behind Act, the program should boost student performance on the Pennsylvania System of School Assessment, said Richard Sternberg, principal of Grandview Elementary in Allentown and president of the Pittsburgh Administrators Association.

Should boost performance? Should boost performance?
Ms. Sauer and Dr. Rudiak said changes in elementary reading complement the district's "Excellence for All" plan, which includes a 2009 deadline for academic improvement.
Based on what, you may ask.
Based on the typical educrat standby -- a wild ass guess. This is how education works in this country.
Educators select new programs based on such tangibles as pretty pictures and other faddish nonsense, and then guess, without evidence, that the program will work. When it turns out five or six years later that it doesn't, the next big thing will be out and they'll switch to that, without remorse and without regard for all the children damaged by the current program.
And just because this is a phonics-based program containing the five essential elements of reading instruction doesn't mean it's any more likely to succeed. Such reasoning is based on the following logical fallacy:
If a dog is a Dalmatian, it has spots.

Therefore, if a dog has spots, it is a Dalmatian.

The first statement is true. The second statement doesn't follow from the first.

The probable response from most readers is that nobody could be naive enough not to recognize this flaw. English setters, some terriers, sheepdogs, and many mutts have spots. Unfortunately, there are many educational parallels to the argument that all dogs with spots are Dalmatians. Here's one:

If a beginning-reading program is highly effective, it has various features: phonics, phonemic awareness, and so on.

Therefore, if a program has these features, it will be highly effective.

Current reform practices revolve around this logic, but the logic is as flawed when it refers to effective programs as it is when it refers to Dalmatians.

It takes a lot more than phonics to make a good reading program. It takes a lot more than reading comprehension, vocabulary, fluency, and phonemic awareness, for that matter.
There are only two ways to know whether a reading program fits the puzzle together correctly: test it beforehand, or inflict it on hapless students and let the chips fall where they may. Option two, by the way, is the typical course of action taken by most publishers.
If we want to find out how well the program works, the fail-safe method is to look at the performance of the students under, say, a fair grading system.
The distinction the journalist was looking for, but failed to produce, was the objective vs. subjective distinction. The old grading scheme was subjective and permitted the schools, through the teachers, to juice up the grades by making easy tests, grading tests more leniently, and/or giving more weight to non-test work (i.e., more subjective criteria).

Before this school year, each teacher decided what constituted an "A" or other grade in elementary reading. Some teachers gave more weight to take-home work than colleagues did, and some teachers designed their own tests because they didn't like those provided with the previous program, Harcourt "Collections," Ms. Sauer and Dr. Rudiak said.
Such discretion has been eliminated as the district standardizes instruction within a school and among schools to promote equal treatment of students, they said. All teachers now use tests provided with the "Treasures" program.
The new grading scheme is more objective and takes power away from the schools to fudge grades by using their own subjective measures.
Educators don't like objectivity and empirical results. You'll find out why this is so soon enough if you don't know already.
Imagine you're a patient being treated for a broken arm with a new experimental treatment. After the treatment, the doctor has you get an x-ray to see whether the bone set properly. The x-ray shows it did not. The doctor, however, tells you that the x-ray machine has been indicating too many mis-set bones under this new treatment, so the practice has adopted an additional criterion for determining whether bones are really set properly: if the patient filled out his insurance forms correctly, then he's been a good patient, and good patients heal properly. Therefore, your broken arm has set properly. You leave the doctor's office relieved that the new treatment has healed you, even though your bone has not set.

Initially, the district decided that the weekly tests would make up 40 percent of a student's grade; unit tests, given about every five weeks, 50 percent; and assignments, 10 percent.
But that formula quickly yielded so precipitous and widespread a drop in grades that officials revised the system. In hindsight, Dr. Rudiak said, the initial formula gave too much weight to tests -- and did so when the format was brand new to students and teachers.
This is what's going on in Pittsburgh. The objective tests made by the curriculum publisher are indicating that kids aren't learning what the publisher intended.
There are three potential reasons for this: one, the curriculum isn't any good; two, the teaching isn't any good; or three, some combination of one and two.
I'm going with option three. The curriculum is untested, its authors have no track record of success, and the Pittsburgh schools have no track record of success teaching kids like those in the district.
Pittsburgh Federation of Teachers President John Tarka said his members advocated for a change in the grading system to keep students from being unfairly penalized. While cautioning that "we have more work to do," he said the revised formula is better for students and restores some of the flexibility teachers need to be effective.
"There's a fine line between there not being guidance and something being too rigid. Neither one is good," Mr. Tarka said.
Could someone explain these two paragraphs to me in a way that makes sense?
How does allowing students to pass when tests indicate that they've failed "unfairly penaliz[e]" the student? Is it that under NCLB, failing students indicate failing schools?
And, how does allowing schools to pass kids who don't deserve to pass enable teachers to be effective? Effective in what? Social promotion?

Assignments now make up 30 percent of the grade, while unit tests and weekly tests each account for 35 percent. Ms. Sauer and Dr. Rudiak said they still expect some students to have lower-than-typical report card grades, in part because of the rigorous "Treasures" program.
While the Harcourt program tested students on passages they had read before, "Treasures" requires students to apply comprehension and other skills to unfamiliar texts. It includes more nonfiction passages than the old program.
So now subjective assignments are worth three times as much, and even that won't mask the failure in the Pittsburgh school district.
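The arithmetic behind that "three times as much" shift can be sketched with a quick calculation. The student scores below are invented for illustration; only the 10/50/40 and 30/35/35 weightings come from the article.

```python
def weighted_grade(assignments, unit_tests, weekly_tests, weights):
    """Combine three percentage scores using (assignment, unit-test, weekly-test) weights."""
    w_a, w_u, w_w = weights
    return assignments * w_a + unit_tests * w_u + weekly_tests * w_w

# A hypothetical student who does well on (easier-to-juice) assignments
# but poorly on the publisher's objective tests.
scores = dict(assignments=95, unit_tests=60, weekly_tests=60)

original = weighted_grade(**scores, weights=(0.10, 0.50, 0.40))  # initial formula
revised  = weighted_grade(**scores, weights=(0.30, 0.35, 0.35))  # revised formula

print(round(original, 1))  # 63.5
print(round(revised, 1))   # 70.5
```

Under these made-up scores, reweighting alone lifts the same performance by seven points, which is the kind of shift that can move a failing grade into passing territory without any change in what the student actually learned.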
Ms. Sauer and Dr. Rudiak said additional adjustments to the grading system may occur in the future. Mr. Sternberg said students are catching on.
"As a matter of fact, they're starting to do better now," he said.
Better than what?
5 comments:
Wait. If assignments are 30% of the grade, and tests are 35%, what makes up the remaining 35% of the grade?
That threw me for a second as well. I resolved the ambiguity by reading it:
Assignments = 30%
unit tests = 35%
weekly tests = 35%
That tracks with the original break down which was:
Assignments = 10%
unit tests = 50%
weekly tests = 40%
Portfolios!
"And, how does allowing schools to pass kids who don't deserve to pass enable teachers to be effective? Effective in what? Social promotion?"
In the "old days", they flunked kids. The teachers knew they had to bite the bullet and make the decision. The kids knew it and the parents knew it. If you got a 'D' or 'F', it was summer school. If you got more, then you didn't get to go on to the next grade. They would see it coming. Four report cards. You could track the red D's and F's across the report card. Schools would call the parents. There was no surprise.
This doesn't mean that the curriculum or teaching methods were great, but there was an incentive on everyone's part to buckle down and work. There was social promotion, but it was rare and more for practical reasons than for pedagogical or philosophical reasons.
Why, exactly, did this change?
With pedagogical social promotion, there is no longer any incentive on anyone's part to make sure that ANY learning gets done. Let's just pretend that the problems don't exist, or just blame it on external forces. The poor (poor) darlings are not developmentally ready and we don't want to harm them by holding them back, so let's design a spiraling system for the curriculum and move them along. If they see the material enough, they will learn it. Of course, it doesn't happen.
With social promotion and spiraling curricula there have to be lower expectations. This spawns crazy ideas like differentiated instruction to magically make things work (or at least sound good to parents). Still, there are no incentives for anyone to work hard and achieve results.
If you do well, it probably has more to do with IQ and parental help at home. Is this what these educators want? Is this any way to close the achievement gap?
Am I missing something?
"Assignments = 30%
unit tests = 35%
weekly tests = 35%"
Oh, of course. Had I written that sentence, I think I would have put an "each" in there, just to avoid possible confusion.
I was wondering if they had the same math class this principal took.
I read your blog with great interest every week. You seem very well informed. What do you know about Scholastic's Read 180? With a pot of money burning a hole in their pockets, administrators in my district are about to purchase this very expensive program. Any insights?