May 9, 2008

Today's Question (redux)

I'm cross-posting this question from over at Kitchen Table Math. It is a follow-up to a previous post on reading assessments which did not yield as many satisfactory answers as I expected -- meaning I asked the question poorly. This is take two.

NCLB is vague in its requirements with respect to measuring reading ability. NCLB is concerned with "proficiency in ... reading or language arts." That is the extent of the guidance given to us.

We know that reading is a complex skill comprising various subskills and content knowledge. But, what does it mean to be a proficient reader? What standardized test or battery of tests exist that accurately measure the "reading" ability of children and whether they are proficient?

Further, under NCLB it is the educators whose performance is being measured, even though the students are the ones taking the test. So the testing instrument must not allow educator subjectivity and must not be capable of being gamed by the educator. For example, Elizabeth's example in the post below describes a test that can be gamed by an educator, since students can be taught to memorize the words appearing on the test; thus, the test is not a true reflection of reading ability.

So, pretend you are a new superintendent of a school district who wants to accurately determine the reading ability of the children attending the schools in your district and how well they are being taught. So, for example, you want to know that your third graders are reading at a third grade level and will be capable of reading at a fourth grade level next year. You get to pick the standardized test(s) to be used. You will have non-reading-specialists monitoring the administration of the test(s). The monitors can identify outright cheating by teachers and/or students but nothing more subtle than that, i.e., they are incapable of making substantive determinations related to reading of any kind. Otherwise, the administration of the tests is out of your control. Only the results of the test(s) will be reported to you.

What assessments do you select and why?

7 comments:

Anonymous said...

Excellent observation!!!

I work in a school system where the elementary teachers use the DRA to assess student reading ability. Although there are rubrics to measure the student's fluency and comprehension, the test is actually quite subjective. Teachers regularly fudge the results to make it appear that the student has made progress over the year.

In addition to the DRA as the sole classroom assessment, this system uses Guided Reading as the source of elementary reading instruction. Guided Reading/Balanced Literacy + DRA = A deadly combination

Thomas

palisadesk said...

For a well-informed answer to this question from someone with much experience working on improving reading instruction (successfully) in several districts and states, I suggest you contact the person who posts on the DI list with the handle "redyarrow." (She also uses her name, but I am not sure it would be appropriate to put it here in the blog without her permission). As a subscriber to the list, you can get the e-mail addresses of other subscribers (I'm sure you know how to do this, if not I can send directions).

She may not yet be one of your faithful readers, though I am sure if you contact her she will soon become one.

My experience in reading assessment is considerable but not at the whole-district level, so I think the person mentioned would have much more insightful (and practical) suggestions.

PS Sounds like Thomas works in my district;-( [We are DRA and Guided Reading all the way!!] You would think someone would find it odd that the same kid can be at DRA level 12, then 3, then 28, then 14, then 40, then 16, then 10....

"Subjective" doesn't begin to describe it. Of course after several years of reading the same story over and over again they finally DO get pretty "fluent."

I think we can all agree to eliminate the DRA as the "district-level" assessment. It lacks inter-rater reliability.

Eric said...

How about calculating value added from the Stanford Achievement Test for elementary students and using the LSAT for high school students? (I'm not completely joking...) Won't the LSAT scores be impressive if the kids have mastered Reasoning and Writing?
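To make the "value added" idea concrete, here is a minimal sketch of one common approach: fit a simple regression of posttest scores on pretest scores across the district, then treat each student's residual as the value added beyond what the pretest predicted. The scores and the least-squares approach here are illustrative assumptions, not the Stanford Achievement Test's actual methodology.

```python
# Minimal value-added sketch with made-up scale scores: predict each
# student's posttest score from the pretest via ordinary least squares,
# then treat the residual (actual minus predicted) as value added.

def value_added(pre, post):
    """Return per-student residuals from a linear fit of post on pre."""
    n = len(pre)
    mean_pre = sum(pre) / n
    mean_post = sum(post) / n
    # OLS slope and intercept computed from covariance and variance.
    cov = sum((x - mean_pre) * (y - mean_post) for x, y in zip(pre, post))
    var = sum((x - mean_pre) ** 2 for x in pre)
    slope = cov / var
    intercept = mean_post - slope * mean_pre
    return [y - (intercept + slope * x) for x, y in zip(pre, post)]

# Hypothetical fall and spring scale scores for six students.
pre = [400, 420, 450, 480, 500, 520]
post = [430, 445, 470, 515, 525, 560]
residuals = value_added(pre, post)
# Positive residuals mean a student gained more than the district
# trend predicted; averaging residuals by classroom or school gives
# a crude teacher- or school-level value-added estimate.
print([round(r, 1) for r in residuals])
```

Averaged over a classroom, these residuals say whether students grew more or less than similar students elsewhere in the district, which is the quantity NCLB is implicitly asking about.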

Anonymous said...

Administer an IQ test to determine aptitude and develop a realistic plan for improvement using a reading-specific test. Too many times children who are below the mean are expected to attain unrealistic goals, while students above the mean are allowed to settle at the lowest common denominator when more should be expected.

Anonymous said...

No one has done a good job of answering your question directly, so I will take a shot at it.

One approach:

1) Pick a normed assessment that has separate scores for vocabulary, decoding skills, and critical reasoning (verbal comprehension).

The Woodcock Reading Mastery Test is the gold standard.

The WRMT is not a group administered test, so you wouldn't want to give this test to every student, just a random sample -- say 50 students from each school -- each month.

The assessment should NOT be administered by the classroom teacher, or even school level personnel. Your district needs a roving team of proctors. (Well run franchise businesses can provide some insight here.)

Do this and you will know how well your schools are doing, even in critical sub-skills such as decoding.
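The sampling plan above can be sketched in a few lines. The school names, roster sizes, and student IDs below are invented for illustration; the point is simply that a fresh random draw each month, handled centrally rather than by the schools, is easy to automate.

```python
import random

# Sketch of the monthly sampling plan: draw 50 students at random
# from each school's roster for the roving proctor team to test.
# Rosters and IDs here are hypothetical.

def monthly_sample(rosters, n=50, seed=None):
    """Return {school: sampled student IDs}, one independent draw per school."""
    rng = random.Random(seed)
    return {school: rng.sample(students, min(n, len(students)))
            for school, students in rosters.items()}

rosters = {
    "Lincoln Elementary": [f"L{i:03d}" for i in range(300)],
    "Whitman Elementary": [f"W{i:03d}" for i in range(220)],
}
sample = monthly_sample(rosters, n=50, seed=1)
# Each school contributes 50 students; because the draw is fresh each
# month and made outside the building, teachers cannot know in advance
# who will be tested or prepare those students specifically.
print({school: len(ids) for school, ids in sample.items()})
```

A sample of 50 per school is enough to estimate school-level means with reasonable precision while keeping the proctor team's workload manageable.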

Another approach:

2) If you want to test every student for some reason, you could choose an assessment with a LARGE number of items that are randomly selected each time the test is administered. This makes it impossible for students to be taught the test answers, but it basically requires that the test be given on a computer.

Scholastic and Accelerated Reader both sell these sorts of assessments.
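The item-bank idea can be illustrated with a short sketch. This is a generic illustration of random form assembly, not how the Scholastic or Accelerated Reader products actually work, and the bank size and form length are assumptions.

```python
import random

# Sketch of random test-form assembly: each administration draws a
# fresh subset of items from a large bank, so memorizing one form's
# answers is of little use on the next.

def draw_form(item_bank, form_length, rng=None):
    """Return a random test form: item IDs sampled without replacement."""
    rng = rng or random.Random()
    return rng.sample(sorted(item_bank), form_length)

item_bank = {f"item{i:04d}" for i in range(2000)}  # hypothetical 2,000-item bank
form_a = draw_form(item_bank, 40, random.Random(1))
form_b = draw_form(item_bank, 40, random.Random(2))
# With a 2,000-item bank and 40-item forms, two administrations share
# less than one item on average (40 * 40 / 2000 = 0.8), so there is
# almost nothing for a coached student to recognize.
print(len(set(form_a) & set(form_b)))
```

This is why the approach effectively requires a computer: assembling, delivering, and scoring a different 40-item form for every student is impractical on paper.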


Best approach for younger students:

Adopt Reading Mastery (DI) district wide. Have the in program tests administered by an independent team of proctors.

This won't tell you how well your students read "now", but it will tell you whether they are on track to master the entire DI sequence.

And if they do that, they will certainly read at the 3rd grade level at the end of third grade.

I'm interested: Do any of the above proposed solutions make sense to you?

KDeRosa said...

They all make sense to me, though it sounds like we really don't have an off-the-shelf solution for testing reading achievement. I've heard good and bad about the WRMT.

Anonymous said...

We have a number of off the shelf tests for assessing reading, actually. Your statement that we do not shocks me.

I'm curious: What have you heard that is bad about the WRMT?

I think it is pretty hard to argue that WRMT does not do a good job of assessing reading ability. Maybe you are taking issue with the way it calculates an overall score?

You can ignore the overall score and just look at the student's decoding score (younger students) or passage comprehension score (older students).