I know that a lot of very knowledgeable people read this blog. And, just because I disagree with some readers/commenters doesn't mean that I don't think they aren't knowledgeable. (How's that for a triple negative?)
In any event, I'd like to know what you think are decent standardized tests for academic subjects such as reading and math. I'm especially interested in hearing your opinions on testing the various aspects of reading ability.
What tests are good for measuring the mechanics of reading, such as decoding ability, vocabulary knowledge and fluency? Are there any reliable tests? And how about the end product of reading instruction -- reading comprehension?
What about math? What's a good test for skills students should possess at the end of elementary school and/or to see if they are ready for algebra?
What about tests for history, geography, science?
Be sure to list the tests' weaknesses along with the advantages.
10 comments:
As a private math tutor, I do not create tests. I create problems to gauge the imagination and creativity of my students. Can they "see through a problem" rather than simply attempt to apply the formulas and equations they have been taught to memorize in dull, dull school? Can they find a solution by looking at the problem from different angles? Can they combine observation, logic and math to find a path to the solution?
Here is one of my favorites: Given a strip of paper about one inch wide and precisely 12 inches long. Using nothing but a pair of scissors, cut a length that is exactly 7.5 inches long. Remember, you cannot use any other device or tool except for the scissors. No guessing or estimating allowed.
I'll post the answer later if you or your readers get stuck.
Fold the paper in half 3 times, creasing it into 8 equal segments. Count off 5 segments. Cut.
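The arithmetic behind that answer can be checked in a few lines; a quick sketch (assuming the count refers to eighth-of-strip segments):

```python
# Folding a 12-inch strip in half 3 times creases it into 2**3 = 8 equal segments.
strip_length = 12.0                        # inches
segments = 2 ** 3                          # three half-folds -> 8 segments
segment_length = strip_length / segments   # 1.5 inches each

# Counting off 5 segments and cutting there gives 5/8 of the strip.
cut_length = 5 * segment_length
print(cut_length)  # 7.5
```

This is also why the 5/8-of-a-foot view of the number matters: 7.5/12 reduces to 5/8, and eighths are exactly what repeated halving produces.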
How would you get them to solve a problem like "The positions of a particle and a thin (treat it as being as thin as a line) rocket of length 0.280 m are specified by means of Cartesian coordinates. At time 0 the particle is at the origin and is moving on a horizontal surface at 23.0 m/s at 51.0°. It has a constant acceleration of 2.43 m/s^2 in the +y direction. At time 0 the rocket is at rest and it extends from (−.280 m, 50.0 m) to (0, 50.0 m), but it has a constant acceleration in the +x direction. What must the acceleration of the rocket be in order for the particle to hit the rocket?" which is more along the lines of what they'll be doing in any half-decent physics class.
There are many reliable standardized achievement tests, but the underlying theory by which the tests are developed (Item Response Theory) precludes their instructional sensitivity. IRT is complex statistically, but the logic is simple; it derives from the days of yesteryear when psychologists viewed matters in terms of “traits and factors.” IRT is a means of generating a scale that reflects a “latent trait.” The “trait” is defined only by the test. The scale can be “chopped up” into “grade levels” and into chunks termed degrees of “proficiency,” but the “grade levels” are ungrounded and the degrees of “proficiency” are nothing more than cut-scores on what has been forced into a normal distribution.
The thing is, instruction isn’t a matter of “latent traits”; it’s a matter of effecting specified performance capabilities. Prior to instruction, the student “can’t do it;” “scores” pile up at the bottom. Effective instruction enables specified capability; “scores” pile up at the top. IRT precludes either distribution, because items that students flunk and items that students ace are both thrown out. The “normal curve” inherent to IRT is operationally a function of what students have had an opportunity to learn. That gets you up to the mean of the normal distribution. The downward slope of the bell-shaped curve is a function of what only some students have had an opportunity to learn. The test is sensitive to socioeconomic status, but not to effective instruction.
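For readers unfamiliar with the machinery being criticized here: IRT models each item with a logistic "item characteristic curve." A rough sketch of the common 2-parameter logistic model, with illustrative numbers of my own, shows why items that everyone aces (or everyone flunks) get discarded during test development:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that a student at latent
    trait level theta answers an item with discrimination a and
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An item pitched near the middle of the ability range spreads students out...
mid_item = [round(p_correct(theta, a=1.0, b=0.0), 2) for theta in (-2, 0, 2)]

# ...while an item whose difficulty sits far below every student's level
# barely varies across students, so it carries almost no information
# about the "trait" and is thrown out.
easy_item = [round(p_correct(theta, a=1.0, b=-6.0), 2) for theta in (-2, 0, 2)]

print(mid_item)   # probabilities differ sharply across ability levels
print(easy_item)  # all close to 1.0
```

That discarding step is exactly what the comment means by the test being unable to show scores "piling up at the top" after effective instruction.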
So what’s the alternative? In reading, the aspiration is for a student to read any English text with understanding equal to what it would be were the communication spoken. When this has been accomplished, no further formal instruction in reading per se is required. The student can then use the capability to acquire other capabilities—“read to learn.” In the aggregate, children entering school have a more-than-adequate spoken lexicon to make reading instruction feasible without burdening the task with terms and concepts they don’t understand in spoken communication. Confounding reading instruction and testing with such terms and concepts (under the banner of “comprehension”) is instructionally unjustifiable.
It’s a simple matter to determine if a kid can read. Put a text in front of the kid’s face and say: “Read this and tell me what it says.” Instructional and testing matters arise only when the kiddo can’t read. The instruction, not the kid, should be the focus of the tests involved. This is what we do in every area of life other than “school.” We don’t hold the “end-user” “accountable” for the effectiveness of a “treatment/product.”
Constructing useful indicators of instructional accomplishments is very straightforward. The relevant literature of “business intelligence,” “key performance indicators,” and “executive dashboards” is unknown in EdLand, but it is commonplace in the Corporate World. All that’s required is to specify 5-9 successive indicators that mark the trajectory from beginning to end. In the Corporate World the bottom line is typically a “sale.” In EdLand it’s a specified instructional enablement (or a specified societal service, such as feeding kids or providing the first line of health screening, that delivers important societal benefits but goes unrecognized).
The 5-9 Key Performance Indicators constitute a Guttman Scale; such scales have a very respectable psychometric lineage extending back to Binet’s “intelligence test.” With Key Performance Indicators marking the instructional trajectory in hand, dynamic Executive Dashboards that reflect the instructional status of students, aggregated by teacher, school, district, and biosocial categories of interest, are feasible without any artificial or intrusive “testing.” The “results” are transparent and can be verified “with one’s own eyes,” rather than by chasing “numbers in the air.” Prototype Executive Dashboards for Instructional Accomplishments and for Societal Services can be accessed at www.3RsPlus.net.
The task of generating Key Performance Indicators for Instructional Accomplishments is complicated only by the fact that few instructional “programs” have the structure and substance to deliver the instructional aspirations they specify. Fortunately, DI Reading Mastery does have the requisite elements to make Key Performance Indicators and Executive Dashboards a feasible endeavor. All that is required is a “thinking cap.” As a matter of fact, one could even do the necessary thinking without a cap.
It’s possible that there’s another way to get a handle on instruction, but the route sketched here has proven widely applicable outside of education and appears equally applicable to instruction. The same logic is applicable to “math” and to other instructional matters. Tests for some “subjects,” such as “history” and “geography,” require thinking about what should reasonably be “in the kid’s head” and what is reasonable to “look up on demand.” The advent of the Internet changes that balance.
The sketch here has gone fast. Googling any of the technical terms will generate a Wikipedia link and more information than anyone will need to follow the logic. The logic is a far cry from prevailing instructional testing practices, but it is much simpler, unobtrusive, and to the point.
Dick Schutz
3RsPlus@usinter.net
Good job, Ken. I hope you saw the subtle features en route to the answer that evade most grade schoolers. From my experience, about one in twenty will solve this problem.
Kids are conditioned to think that rulers are used to measure inches, not fractions of feet. They are conditioned to think rulers are structurally rigid measuring tools and overlook that a flexible, unmarked strip of paper can be made into a ruler that measures fractions of feet by merely folding it.
But most interestingly, when they examine the uniqueness of the "seven and one half", if it is expressed as a decimal (7.5) they rarely discover that by using complex fractions, it is also 5/8 of a foot. If the number is expressed as 7 1/2, they are a little more likely (but not much more) to get to the 5/8 since they apparently see that fractional 1/2.
It's a simple, somewhat trivial problem...but it illustrates that, even at such young and tender ages, conventional math instruction gets kids locked into one way of thinking about things, thereby causing them to lose the flexibility they need to solve problems.
I agree that flexibility of knowledge is what is needed, and I think that most elementary school children lack this flexibility, not necessarily due to what or how they are taught, but rather because they have not yet practiced math sufficiently to gain that flexibility.
I probably could not have solved this problem as a child, and certainly not in the five seconds it took me as an adult. As an adult having gone through the rigors of engineering school I have a certain amount of flexibility in basic math up to algebra one, because I was forced to use that math in solving thousands of math problems from age 5 to 23.
As a private math tutor, I do not create tests. I create problems to gauge the imagination and creativity of my students.
This is appropriate if you want to test the imagination and creativity of your students.
However, if you want to test something else, like have they learnt a specific technique, then a different sort of test is needed. For example, if you want to know if they have learnt the Order of Operations correctly, then testing their imagination and creativity won't help.
Now...about that physics problem.
For starters, I would get the student to recast the problem in terms they could understand. In this case, drawing a diagram on a Cartesian coordinate grid would be very helpful.
At time zero, the "particle" is at the origin (0,0). The "rocket" is 50.0 meters above the "particle" with its nose at the coordinates (0, 50.0 meters) and its tail at (-.280 meters, 50.0 meters).
(By the way, a rocket 0.280 meters long? An 11-inch long rocket?
And by the way #2...a "thin line"? A line has no thickness. It is a path of points. Only our pencils give it thickness.)
At the same moment, 1) the "rocket" begins to move horizontally at an unknown but constant acceleration, and 2) the "particle" is launched on a 51 degree trajectory at a velocity of 23.0 meters per second, with a constant vertical acceleration of 2.43 meters per second per second.
Next, I would have the student rephrase the question in terms that they understand.
In this case, what must the acceleration of the "rocket" be so that its nose and the "particle" occupy the same co-ordinates (the "rocket" hits the "particle") at the same time?
We already know the y-coordinate of both the "rocket" and the "particle" at impact is 50 meters.
So the question reduces itself to: what must the acceleration of the "rocket" be so that its nose and the "particle" have the same coordinates (an unknown but identical number of meters, 50.0 meters) at the same time?
Lastly, I'm afraid I'm not much of a physicist. So the student would be on his own to use the applicable relationships of velocity and acceleration to solve the problem.
But the initial strategy I described...re-phrase the problem and re-state the question...both in understandable terms...I believe would go a long way in helping the student solve the problem.
Good start.
The only physics you need to know is the distance formula (distance = initial distance + initial velocity x time + 0.5 x acceleration x time^2) and how to decompose a vector into x and y components, which I think is taught in algebra.
The rest is algebra I, trig, and basic math.
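Putting those two formulas together, the whole solve boils down to one quadratic and a division. A rough sketch in Python (variable names are mine, and I treat a hit anywhere along the rocket's 0.280 m length as valid rather than requiring the nose exactly):

```python
import math

# Givens from the problem statement.
v0, angle_deg = 23.0, 51.0   # particle launch speed (m/s) and angle
ay = 2.43                    # particle's constant +y acceleration (m/s^2)
height = 50.0                # rocket's altitude (m)
rocket_len = 0.280           # rocket length (m); nose starts at x = 0

# Decompose the launch velocity into x and y components.
vx = v0 * math.cos(math.radians(angle_deg))
vy = v0 * math.sin(math.radians(angle_deg))

# Time for the particle to climb to y = 50:  0.5*ay*t^2 + vy*t - height = 0.
t = (-vy + math.sqrt(vy**2 + 2 * ay * height)) / ay
x_hit = vx * t               # particle's x-coordinate at that moment

# The rocket starts at rest, so its nose sits at 0.5*a*t^2 and its tail
# 0.280 m behind.  The particle is hit if it lies between tail and nose,
# which brackets the acceleration:
a_min = 2 * x_hit / t**2                   # nose just reaches the particle
a_max = 2 * (x_hit + rocket_len) / t**2    # tail just reaches the particle
print(round(t, 2), round(a_min, 1), round(a_max, 1))
# roughly 2.4 s, with an acceleration of about 12 m/s^2
```

The narrow gap between a_min and a_max is just the thin-rocket assumption showing up in the algebra.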
Here's a post I did on this problem back in 2006.
Tracy...what you say is quite true. If a student doesn't know the Order of Operations, no amount of imagination will replace teaching it and then checking to see if the student learned it.
However, because I have the luxury of one-to-one relationships with my kids (not one-to-twenty like in a classroom), I can not only teach math, more importantly, I can show them how to think...how to combine observation, logic and math tools to solve problems. This is where that "paper strip" problem fits in.
At least, this is what I try to do. It doesn't always work out, but when it does...it's something to behold.
However, because I have the luxury of one-to-one relationships with my kids (not one-to-twenty like in a classroom), I can not only teach math, more importantly, I can show them how to think...how to combine observation, logic and math tools to solve problems.
I don't think that this sort of teaching is confined to one-to-one teaching relationships. I went through engineering school, where we were taught this en masse.
The course consisted of a combination of lectures, labs and assignments, culminating in a year-long project in the final year (we also had standard courses and exams then). The lectures taught us the basic skills and conventions, and also had some explicit teaching in problem-solving techniques, such as how to design a testing programme, likely sources of faults, how to design a product process, etc. The labs and assignments got more and more complicated.
Of course a lot of the work was learning solutions that had been previously developed. But once those solutions are in your head, you can then apply them to new situations.