(This is the third post in the Gering Series.)
Thanks to the DI implementation and lots of hard work by the district, Gering Public Schools has managed to close some of the achievement gaps between Hispanic students and white students. In Gering, Hispanics represent a substantial minority of students, nearly 30%.
Here are Kindergarten scores from the DIBELS phoneme segment fluency test for Hispanic and white students before and after the DI implementation.
As you can see, the percentage of Hispanic students passing the test has increased by an amount sufficient to close the achievement gap with the white students, whose pass rate has also increased.
Here are second grade scores from the DIBELS oral reading fluency test for Hispanic and white students.
Again, note that white students have made significant gains compared to previous cohorts, but Hispanics have made even greater gains--gains sufficient to create a reverse achievement gap.
Bear in mind that closing the gap with respect to the number of students passing the benchmark is not the same as closing the achievement gap with respect to absolute achievement scores. It could be that the scores of white students are still higher than the scores of Hispanic students. I don't have data to report either way, at least not yet. But, keep in mind that NCLB is concerned with closing the gap between the percentages of students meeting benchmarks, not with absolute scores.
This is how NCLB is supposed to work. Schools are supposed to be improving instruction such that student achievement is improved for all groups, with the effect that more students from lagging groups will pass the benchmark and close the gap. This is how it is working in Gering.
I am not sure I understand the purpose of the DIBELS phoneme segmentation test in kdg, so I don't know how to interpret this data from Nebraska.
I know many reading folks believe that kids must have phonemic awareness knowledge before formal reading instruction begins. I am guessing that this is the basis for this test in kdg, to see if kids have this knowledge.
Many folks from the explicit reading instruction world believe PA develops from the instruction and does not need to be taught before instruction. Others feel it is a prerequisite for reading instruction and should be taught before instruction begins.
Does anyone know if the DI program explicitly teaches segmentation in kdg as a prereading activity?
Without that information it is hard to interpret the meaning of the higher segmentation scores.
They could be higher just because the skill of segmenting was explicitly taught, or it could mean the reading instruction in kdg helped to develop this skill and the kids are now reading words and have caught up with the kids who came into kdg already reading.
I can't get too excited about higher segmenting scores without knowing how they got higher: good explicit instruction in reading, or segmenting taught as a prerequisite to formal instruction?
Again, the scoring on this test is questionable. There are 24 words on the kdg segmenting test. If the child just said the first sound of each word, he would have a score of 24, which is listed as benchmark for the middle of the year and low risk for the end of the year. The child must say 35 sounds to get benchmark.
If he got the first and last sound in each word, he would be at benchmark. Again, what does this score really tell us about real reading gains?
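The arithmetic behind those two scenarios can be laid out in a few lines. This is only a sketch of the scoring scenarios as the commenter reports them; the benchmark values (24 mid-year, 35 end-of-year) are taken from the comment above, not independently verified against DIBELS documentation.

```python
# Reported figures from the comment above (not independently verified).
N_WORDS = 24              # words on the kdg segmenting test
MID_YEAR_BENCHMARK = 24   # reported mid-year benchmark score
END_YEAR_BENCHMARK = 35   # reported end-of-year benchmark score

# Scenario 1: the child says only the first sound of each word.
first_sound_only = N_WORDS * 1
# Scenario 2: the child says the first and last sound of each word.
first_and_last = N_WORDS * 2

print(first_sound_only, first_sound_only >= MID_YEAR_BENCHMARK)  # 24 True
print(first_and_last, first_and_last >= END_YEAR_BENCHMARK)      # 48 True
```

So a child can reach the reported benchmarks without fully segmenting a single word, which is the commenter's point.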
Can these kdg children read CVC words correctly? Can they read a sentence or short story with CVC words with fluency?
Now this data would be useful.
Kathy, my understanding is that in the DI programs there is no explicit PA instruction before or during formal reading instruction. Students do get lots of practice sounding out words, however. But I do not know exactly what was going on in the Gering implementation.
Here is the last story in Reading Mastery Fast Cycle; all but the lowest of the lower performers should get to this lesson. There are no context-clue pictures accompanying the text:
Leaving the Land of Peevish Pets
Jean had found out fifteen rules. The last rule she found out told about making the wizard disappear. She needed only one more rule. So she sat down and began to think. Suddenly, she jumped up. She said, “I’ve got it. Every time I needed help, the wizard appeared. I think that’s the rule. I’ll find out.” She stood and yelled, “I need help.”
Suddenly, the wizard appeared. Jean said, “I think I know all of the rules. I know how to make you appear. Here’s the rule: if you want the wizard to appear, call for help.”
“Good,” the wizard said.
Then Jean said, “So now I can leave this peevish land of peevish pets.”
“That is right,” the wizard said. “You have found out all the rules. So you may leave. Just close your eyes.”
Jean closed her eyes. Suddenly, she felt something licking her face.
She opened her eyes. She was in bed. Her mom and dad were standing near the bed, and there was a puppy on the bed. He was licking Jean’s face. He was black and brown and white. And he had a long tail. He was very pretty. Jean hugged him.
“Can I keep him?” she asked. “Can I, please?”
“He’s your puppy,” her mom said. Jean hugged the puppy harder. The puppy licked her face again.
Jean’s mom said, “Somebody left this puppy for you. There was a note with him.”
Jean’s dad handed the note to Jean. The note said: “This dog is for Jean. His name is Wizard. And here is the rule about wizard: If you love and play with him, he will grow up to be the best dog in the land.”
Jean was so happy that tears were running down her cheeks. She said, “Thank you, Wizard. Thank you very much.”
She followed the rule, and her dog Wizard did become the very best dog in the land.
The story cited as "last story in Fast Cycle" includes a sufficiently representative sample of the Alphabetic Code and other linguistic conventions to provide a reasonable indicator that a "kid can read." For confirmation one would like to have a heavier weighting of the more complex grapheme phoneme correspondences, but this is close enough. If a kid can read and "tell the plot," the kid can read. There are of course technical lexicon and conceptual matters left to deal with but the kid can "read to learn" these.
If I read the SRA lit right, this would come at the end of Grade 2. There is still the "low group" to be concerned about, and that's not a trivial concern. Likely some quarter of the cohort? But the low group aside, that's accomplishing the NCLB aspiration a year before the Grade 3 NCLB target. Not bad at all. But the rubber-ruler norming of standardized tests inherently precludes crediting the accomplishment. Not fair. Not smart.
DIBELS and TN add nothing useful to this information and actually confound the message. The accomplishments can be made overriding racial/ethnic and SES factors. It appears Gering is doing this, but the test data they've presented don't do their instructional efforts justice.
The reason that standardized achievement tests measure only SES is an inherent artifact of the test construction. I went through this fast in the previous post. IRT forces the data into the bell-shaped curve. If Gering 2nd graders were distributed on the basis of their expertise in reading the cited passage, you wouldn't get a "normal distribution." The scores would pile up at the top. That's the real-world "normal" distribution one gets with effective instruction.
Feds and states have mandated the use of the indefensible tests. They're the unaccountables in the enterprise. The weakness is at the top of the EdChain, not the bottom.
Just throwing out standardized tests won't get the job done. Since Ken cited Reading Recovery in making that point, I'll throw in my 2 cents also. Reading Wreckovery hoodwinked the What Works Clearinghouse by running ungrounded "scores" through the stat ceremony to arrive at statistically significant differences.
The only thing the Clearinghouse "clears" is the ceremony, so Reading Wreck "passed." WWC lends a whole nother meaning to the term "what works." The good news is that their reports are so difficult to interpret that few people give the information the time of day. But the Clearinghouse is another example of "weakness at the top of the EdChain."
My bet is that teachers, who are wholesale beaten for "resisting change," would adopt a defensible instructional accomplishment information system in a DC second. It's the government-academic-publisher complex that's resistant.
Not sure what the SRA lit says, but my understanding is that about 80% of kids can get through RM Fast Cycle in one year, kindergarten. The remaining 20% take RM I and RM II and finish at the same point by the end of first grade.
I'm fairly certain that this is what NIFDI tries to accomplish, and it is no easy task. This is why it takes a few years for the kindergarten teachers to reach a point of proficiency in teaching the DI fast cycle sequence at which they can accomplish this daunting task. The faster the kids learn to read, the more they will read, and the more vocabulary and language concepts they will learn, with the hope that this will counter the effects of low SES or whatever other cause you want to attribute their low language skills to.
If the passages at the end of RM II are in the same ballpark as those at the end of Fast Cycle, and all kids can read these at the end of Grade 1, Gering has taught all kids to read two years ahead of the NCLB target.
I don't know what's in DI "Reading" beyond that, but whatever it is it unduly burdens the instruction, and hides the instructional accomplishment under an informational "bushel basket."
The passages are the same in RM I and RM II; RM II just throws in more stories/instruction for the kids who need it.
In RM III-VI, it's more of the same but with increasingly difficult and less controlled text.
In the last story I posted, almost all of the words have been pretaught in word lists, so students are reading a story of words they've seen before and have been taught how to read/sound out. However, by midway through RM III, the text becomes much less controlled, with only the more difficult words being presented in word lists or used to teach a decoding rule.
By RM V, students are reading the first third of the story out loud so the teacher can check decoding errors, and the last two-thirds they read to themselves. Then the students must answer questions based on what they've read to make sure they understand what they are reading.
The "instruction" consists almost entirely of reading and answering comprehension questions.
Is RM V the first occasion individual children read continuous text aloud? That seems really late for the teacher to be giving/getting feedback on students' acquisition of reading expertise. The SRA lit that I've been able to access provides very little detail re what's going on in the instruction from component to component. SRA isn't alone in this respect and possibly they provide schools with better info, but I'd be surprised if that's the case. The sales pitch is largely rhetorical, testimonial, or ungrounded "gee whiz" relative comparisons.
Even though Nebraska mandates DIBELS, there is nothing to prevent Gering from dumping the DIBELS flaws. The only subtests worth their salt are the Nonsense Syllables and the Oral Reading Fluency (which is Stan Deno's work). DIBELS ruins the validity/authenticity of each measure by reducing the tests to one-minute timed measures and promoting interpretation with grade-based "norms." Neither God, the feds, nor the state says that one has to do this.
If a kiddo can reliably read the nonsense syllables, the kid can handle the Alphabetic Code = read. Likewise, if a kid can read the early passages in the "Oral Fluency Test"--or any other handy text that happens to be around--the kid can read. Report the number of kids who can do this. Aggregate them by grade and whatever biosocial categories may be of interest, and you've got a transparent, readily verifiable "instructional accomplishment information system."
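The proposed reporting is simple enough to sketch in a few lines: record a pass/fail reading judgment per student, then count passes by grade and subgroup. This is a minimal illustration of the idea, not any district's actual system; the field names and sample records are hypothetical.

```python
# Minimal sketch of the proposed "instructional accomplishment
# information system": count students who can read a criterion passage,
# aggregated by grade and subgroup. Records are made up for illustration.
from collections import Counter

students = [
    {"grade": 2, "group": "Hispanic", "can_read_passage": True},
    {"grade": 2, "group": "white", "can_read_passage": True},
    {"grade": 2, "group": "Hispanic", "can_read_passage": False},
]

counts = Counter(
    (s["grade"], s["group"])
    for s in students
    if s["can_read_passage"]
)
for (grade, group), n in sorted(counts.items()):
    print(f"grade {grade}, {group}: {n} can read the passage")
```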
The information system could be refined, but this would do for starters. It addresses the intent of NCLB and provides a straight answer to our President's question, "What is our children learning?" with respect to reading.
I think individual turns begin either at the end of RM II or the beginning of RM III. In a typical lesson, the whole class reads the first 100 words of the story together and must do so with no more than two decoding errors. Then students take individual turns reading the rest of the story, again within a prescribed error limit. Towards the end of RM III, students begin reading the end of the story on their own and decoding is not monitored, though the children are asked comprehension questions to make sure they are understanding what they've read.
Take a look at lesson 68 from RM III linked in this post.
This isn't the place to critique the RM architecture or execution. Suffice to say that "Lesson 68" includes no attention to the "how to" of decoding. Rather, it promotes the memorization of words.
The good news: The story that is the focus of the lesson provides another handy test of whether a student can read. Getting at that matter, however, requires scraping away all of the rest of the lesson and giving individual kids a chance to read the story. If a kid can read it with sufficient expertise to follow the plot, the kid can read. If the kid can't, there's a problem. But the problem is not with the kid; it's with the instruction the kid has received to date. "Scoring" the kid diverts attention from the source of the deficiency and places it on the kid's shoulders--or some place inside the kid.
This is simple logic, but prevailing instructional and testing practice depart so far from the logic that it would be laughable were it not tragic. The "fix" is technically easy, but the political and economic obstacles to changing a belief system that has become ingrained over decades are daunting.
But the problem is not with the kid; it's with the instruction the kid has received to date. "Scoring" the kid diverts attention from the source of the deficiency and places it on the kid's shoulders--or some place inside the kid.
You are obviously very ignorant about DI. One thing that never happens is any blame on the student. All failure is instructional failure. There is no kid failure. The DI motto is, "If the student hasn't learned, the teacher hasn't taught."
Well, that blames the teacher, which is out of the frying pan into the fire, isn't it? I'd say that placing a child in a "lower group" is operationally communicating to the child that it's on the child's shoulders--or someplace within the child. But let's not get distracted with red herrings. I've been trying to respond to Ken's invitation to comment about "tests" and the helpful information he has provided on the "results out of Gering."
My focus is on reliably obtaining and crediting instructional accomplishments. DI and Gering happen to be handy examples. I know DI only from a distance and Gering not at all.
Lesson 68 doesn't give the kid a chance to read the entire passage without interruption by the script and without "scoring" complications, so there is no way to get a square shot at whether a kid can read the story.
I'm submitting that the Lesson 68 passage, the DIBELS Oral Reading Tests or any other convenient passage provides a means of determining what Gering is accomplishing in reading that is much more informative than the ungrounded relative comparisons currently being used.
One can easily refine the protocol, but that's the core logic. "Logic" is actually too fancy. "Transparent common sense" is more apt. It's not limited to Gering or DI. I'm contending that it's applicable to Gering and DI.
Everybody play nice now. No one is behaving like a troll and no one should be treated like one.
Dick, lesson 68 is roughly lesson 248 in the DI sequence. Students are well past the "how to" of decoding by this point in the game. They should be proficient decoders by now. Instruction has shifted from "sounding out" words to letter identification. I'm no expert, but I believe the reason why they do this is because it is more efficient. You might want to check out Mark Seidenberg's work with computer models and how his models learned to read more quickly by transitioning from "sounding out" to letter identification.
Students should not be memorizing the words and efforts are made to reduce memorization by presenting words in lists and the like.
Every ten lessons students read a portion of the preceding story as a timed test like the DIBELS ORF.
The scoring is part of the built-in student motivation component. And the program is designed such that students can answer correctly at a high rate and achieve points.
Zig's paper on Mastery Learning is a good resource, and so is Direct Instruction Reading if you want more info on the DI methodology.
If students are well-past the "how to" of decoding by this time, they should be able to read the passage in the lesson with no difficulty. Reporting the number of kids that can do this, by teacher, school and district would be very informative to parents and the citizenry and would give straight talk about "the results out of Gering."
Seidenberg's work with computer models notwithstanding, the structure of the Alphabetic Code is in terms of letter/sound correspondences, not letter identification. Handling these correspondences and a set of morphemic characteristics is the crux of reading. Some children acquire expertise in doing this with little or no instruction. Others require considerable instruction. In all cases, the proof of the pudding is that a student can read a text with understanding equal to that were the communication spoken. But "letter identification" doesn't lead to such expertise.
I understand the rationale for "points," error bands, and such. And timed tests may have a place, but they are not reasonable indicators of reading expertise; the real world seldom, if ever, raises such requirements.
Reporting the results in terms of normed "benchmarks" that morph by grade uses a rubber ruler that fails to credit the transparent accomplishments of students, teachers, schools, or the District.
I appreciate the heads up on Direct Instruction Reading. I may have read it at one time, but will take a fresh look.
The expectation is that they can read the passage with at least 98% accuracy decoding, based on the error limits given.
I would think that the students are using letter/sound correspondence for word identification. However, for words that are misidentified, in DI at this level of student reading skill I believe they consider it more efficient to have the student focus on the structure of the word by spelling it instead of going through the sounding-out ritual. And I am surmising that the reason is similar to what Seidenberg's research shows with respect to how skilled readers process words. See here at about fig. 12.
I think the original question still remains. Is there an objective, uniform, and cheat-resistant testing instrument to determine if a student has mastered the mechanics of reading and actual reading with comprehension?
“Is there an objective, uniform, and cheat resistant testing instrument to determine if a student has mastered the mechanics of reading and actual reading with comprehension?”
Ah, the phlogiston of reading: "comprehension." As in spoken communication, “comprehension” is situational, specific to a given text. If the lexicon of a given text is not within the kid's spoken receptive lexicon, the kid will not "comprehend" the text any better than if the communication were read to the kid. But it doesn’t make sense to confound the consideration of “reading” in this manner. If one doesn't accept this proviso (which prevailing tests don't), the answer to the test question is "no." But that's at odds with common sense and with everyday life.
And if one brings preconceptions that the "testing instrument" must be multiple-choice, machine scored, and be rooted in Item Response Theory, again the answer is "no." Again that's at odds with common sense and everyday life. But it has come to be accepted and goes unchallenged--other than by those who advocate no tests.
If one puts a period after the word "reading," the answer is "yes." Text passages can be calibrated in terms of the complexity of the alphabetic code and the morphemic characteristics that impact written communication. The passages can be arrayed to constitute a Guttman scale of Reading Expertise. Parallel forms can easily be constructed to preclude cheating and "teaching to the test."
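The defining property of a Guttman scale is that passing a harder item implies passing every easier one. A hedged sketch of checking that property against a student's pass/fail results on passages ordered from simplest to most complex code; the function name and example results are made up for illustration.

```python
# Sketch of the Guttman-scale property: a result set is consistent with
# the scale if no pass occurs after a failure on an easier passage.
def guttman_consistent(results):
    """results: list of pass/fail booleans, easiest passage first."""
    seen_fail = False
    for passed in results:
        if not passed:
            seen_fail = True
        elif seen_fail:  # a pass after a failure breaks the scale
            return False
    return True

print(guttman_consistent([True, True, False, False]))  # True
print(guttman_consistent([True, False, True, False]))  # False
```

When results fit the scale, a single number (how far up the passage ladder the student gets) summarizes the student's reading expertise.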
But achievement tests that are not linked to the means of effecting reading expertise (=product/protocol="program") miss the point of instruction.
I'm contending that a program whose architecture and execution are as well defined as DI's has within it the means to determine the aspired instructional accomplishment, and that reporting this data can provide the kind of information parents and the citizenry are seeking. This information is not now “coming out of Gering” or any other district. But it easily could be.
I appreciate the link to the Harm-Seidenberg article. However, since the computational model does not include any provision for a “spelling check” element, it hardly supports the DI protocol in Lesson 68 of using the spelling of selected words rather than the blending of letter/sound correspondences. The number of words that DI reliably teaches kids to spell would be another piece of “test” information, in and of its own right.
Harm and Seidenberg acknowledge that their studies do not speak to the matter of reading acquisition (although their findings do refute the claim of whole language ideologists regarding the “impossibility” of using grapheme/phoneme correspondences as the basis for reading.) Harm and Seidenberg explicitly state that applying the model to “acquisition” is “future research.” I’m less optimistic than they are about the prospects of such research, but the proof will be in the pudding. “Proofing the pudding” of DI and of the few other instructional product/protocols that have an equally well-defined instructional architecture need not await this or any other research.