Here's a good example of what I was talking about in this post when I wrote that "it's a lot of hard work educating children when you actually care about what those children are learning. Better to focus on stuff that can't be measured like the process of education and other nonsense." Many educators seem to care more about the process of educating than about the end result of that process. These process-oriented educators inevitably share common characteristics. One is that they aren't getting good results, so they want to ignore or downplay them. Another is that they practice educational methods that are known not to produce good results--constructivism, child-centered education, discovery learning, and the like. Instead of jettisoning these failed methods that are producing low student achievement, they want to redefine education.
Today's example comes from Borderland's Doug Noon (Ed: the link is to a Google search, since cowardly Doug has redirected any links from this site to his):
My classroom doesn’t work the way I want it to. In the Age of Accountability, I focus on process, and see product as a secondary concern. I’m an ill-fitting peg, uneasy about participating in what, for me, amounts to a charade - emulating archaic practices designed for kids from bygone eras.

Yeah, those basic reading and math skills are as obsolete as the buggy whip in today's high tech world. Nowadays, we all have calculating and decoding robots that follow us around to perform basic math calculations for us and decode written text so we can focus on higher-order thinking.
Do you know any skilled reader whose decoding skills aren't automatic? Do you know anyone skilled at mathematics whose basic math facts aren't automatic? Of course not. These skills are as important today as they were 200 years ago and probably always will be. The only dispute is how best to teach those skills, not whether they are necessary. It is unfortunate for our new-fangled educators that those "archaic practices designed for kids from bygone eras" continue to work best, but wishing your new processes worked better isn't the same as their actually working better.
Looking at the group I’m with now, thinking about them, and not the generic, bloodless beings called Students, statistical incarnations of demographically catalogued learners, I feel more strongly than ever that I owe each of them more than mere delivery of the curriculum, and concern for where they stand relative to a standard that I don’t endorse.

First of all, we don't just expect "mere delivery of the curriculum," we expect students to actually learn the curriculum. Students don't need a tour guide through the curriculum, they need someone to teach it to them. It's all well and good if you think they need more, but don't shirk your fundamental obligation to make certain they've learned the curricular requirements.
And what's this business about a standard that you don't endorse? Whether you endorse it or not is irrelevant; your job is to ensure that your students meet the standard.
Being a teacher means too many things for me to say that I know how to do it well. I surely don’t know how to move a group of kids to universal competence when their needs span the curricula for 4 different grade levels, and when they come with varying interests, talents, and beliefs about themselves and about school.
I don't think it's the four-grade-level span that's causing the problem. I'm sure that if three of those levels were above grade level, we wouldn't have this gnashing of teeth. No, the problem is that your class has low performers who have not learned and are not learning the material being presented. So maybe focusing on the process is the solution, so long as that focus results in discarding the failed process.
Discussions about teaching and professionalism on both The Education Wonks’ and Jenny D’s recent posts illustrate the difficulty we have in making headway on the question of what we Should be doing, because a critical perspective is altogether overlooked in discussions about How we should measure teacher effectiveness.

This one is easy. Teacher effectiveness should be measured by how effective you were in getting the students to learn at least the minimal material required by that standard you don't endorse. Otherwise you wind up in the odd situation unwittingly highlighted by this parent, whose school isn't making AYP:
Grand Ledge parent Shari Burg approves of the new teaching techniques. Her third-grade son, John, can easily identify Van Gogh and Monet paintings because of Willow Ridge Elementary School's teachers.

Growing up, Burg never received that kind of learning.

That's great, a tiny art connoisseur who can't read or do basic math. Who endorsed that standard?
12 comments:
The problem I see is that many teachers view the problems of education as what walks into their classroom. What are they supposed to do if kids come into their sixth grade classroom not knowing their times table? Parents properly see the problems as teacher effectiveness, school effectiveness, curriculum, and year-to-year standards. Many teachers are just trying to fix their own problems and do whatever they think is best.
I think it's less moving the goal posts than it is having no goal posts at all.
In my case, I am interested in the product of my teaching, and I am interested in the process I need to follow to get there. The difficulty lies in the method used to measure success. I have yet to see a multiple choice test that can meaningfully measure success in a skill such as writing.
"The difficulty lies in the method used to measure success."
This doesn't mean it can't be done. There are perhaps two levels of testing. Testing for basic skills and knowledge can be done with simple tests. For more advanced skills, the evaluation becomes more difficult, but it still can and should be done. I assume that you have some specific criteria for pass/fail in your classes.
If you are talking about standardized tests, then I don't understand the problem. Are there some advanced knowledge and skills that would make it OK not to do well on the simple, perhaps multiple choice, tests?
This is not a problem in math, but you'd be surprised how many teachers complain that standardized tests can't check for "higher-level thinking". In other words, they think it's OK to fail the basics.
"I have yet to see a multiple choice test that can meaningfully measure success in a skill such as writing."
Perhaps you're unaware that standardized exams, such as the SAT, have essay components.
Basic information (not skills such as writing) can be measured by the multiple choice tests -- not a problem. Standardized tests have writing components -- of course, and again not a problem.
Higher skills can be measured by means other than multiple choice testing, but that is seldom done.
In my state, the tests attempt to measure writing skills by using extremely awkward and convoluted multiple choice questions. That is a problem -- and the one I stated in my comment.
Measuring skills, such as writing, with a tool that works for basic knowledge, such as multiple choice tests, is like measuring liquids with a linear measure. Yes, it can be done, but does it give an accurate and useful picture of what you have?
What I don't understand is that all the so-called higher-order skills at the K-12 level are the basic skills at the undergrad level and they, for the most part, are easily testable with multiple choice questions.
"In my state, the tests attempt to measure writing skills by using extremely awkward and convoluted multiple choice questions."
Actually, you said nothing about "convoluted" in your post.
This is an interesting process to see in our state. Let's see if I can explain what is happening. The federal government mandates (via NCLB) state testing. The state gets to choose the test. Our state's educational administration is given free rein to select the test. Having a fuzzy, progressive idea of education, they don't like testing, but they see the writing on the wall and want to be proactive. It's better that they select the test rather than someone else. Therefore the tests are very easy. However, they want to keep their mantle as champions of "higher-order thinking", so they select tests that include many questions that try, through perhaps "convoluted" logic, to test for these qualities. Rather than just test for the easily-determined basic knowledge and skills and leave it at that, they try to find questions that see how kids are thinking. Of course, they can't really define what "higher-order thinking" is, so they are not going to succeed.
In any case, however many of these "convoluted" questions there are on the test, that still doesn't explain why the results on the tests are so incredibly bad. As a parent, I find standardized tests meaningless because they are so simple. However, they are now the maximum goal that schools strive for.
"Higher skills can be measured by means other than multiple choice testing, but it seldom is done that way."
You're going to have to define fairly specifically what you mean by higher skills.
"In my state, the tests attempt to measure writing skills by using extremely awkward and convoluted multiple choice questions. That is a problem -- and the one I stated in my comment."
That's odd. ETS has shown in several studies that grammar section scores -- which are tested with multiple choice questions -- correlate strongly with both reading scores AND writing scores. So you're correct that you can't directly assess writing with multiple choice questions, but you can indirectly assess it with them.

Ideally, of course, you'd want to assess any skill directly, but there are practical considerations that can favor an indirect assessment.
NCLB requires that the goal posts move every year, until every school arrives at the same end-point in 2014 - 100% proficiency.
Here in Massachusetts, the definition of proficiency and our scoring system mean that every school will have to match the test scores posted by Boston Latin Academy last year.
Boston Latin is an exam school, and only lets in the highest scoring kids in the city. It was one of only a few schools across the state that came close to scoring 100% on the AYP proficiency scale last year.
A few percent more might squeak by because the state allows an error margin of 2.5 to 4.5 percent, which shifts the mandate from "100%" down to "95.5 to 98.5%" proficient. That still cuts out all but about 2% of subgroups across the state, and the combinatorial reaper (all subgroups must pass both the ELA and MATH thresholds) would narrow that down a lot more, as the sketch at the end of this comment illustrates.
Massachusetts' chosen standards reflect an inability to comprehend the math and statistics implied by requiring 100% of students to score at the "proficient" MCAS level. The choices simply could not have been made with competent mathematical and statistical reasoning.
That leaves only two logical conclusions:
a) The Mass Department of Education wanted a scoring system that would label the majority of schools as inadequate, or
b) The Mass Department of Education does not have competent statisticians able to model how their scoring system works in the context of the No Child Left Behind proficiency mandate.
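To make the "combinatorial reaper" concrete, here is a minimal sketch (not from the original comment) that assumes hypothetical, independent per-subgroup pass rates and shows how requiring every subgroup to clear both the ELA and math thresholds compounds into a much lower school-wide pass rate. The 97% figure below is an illustrative assumption, not an actual MCAS or AYP statistic.

```python
# Hypothetical illustration of the "combinatorial reaper":
# even if each subgroup clears each subject's threshold with high
# probability, requiring *every* subgroup to clear *both* subjects
# drives the school-wide pass probability down quickly.
# The per-subgroup pass rate is assumed for illustration only.

def school_pass_probability(per_subgroup_pass: float, n_subgroups: int,
                            n_subjects: int = 2) -> float:
    """Probability a school clears AYP if every subgroup must pass every
    subject, treating each subgroup-subject outcome as independent."""
    return per_subgroup_pass ** (n_subgroups * n_subjects)

if __name__ == "__main__":
    per_subgroup_pass = 0.97  # assumed chance one subgroup clears one subject
    for n_subgroups in (1, 3, 5, 8):
        p = school_pass_probability(per_subgroup_pass, n_subgroups)
        print(f"{n_subgroups} subgroups x 2 subjects -> "
              f"school passes with probability {p:.1%}")
```

Under these assumed numbers, a school reporting eight subgroups would clear every hurdle only about 61% of the time, even though each individual subgroup-subject hurdle is cleared 97% of the time.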
Massparent, NCLB only requires proficiency from the students who are required to be tested. Currently, 1% can take an alternative assessment, and up to 5% can be exempted due to absence.
To understand just how fair Mass's exam is, you'd need to evaluate what actual grade level is being tested in each grade and where the cut point is set for a passing grade.

For example, the math portion of the 8th grade NAEP exam has questions that are mostly drawn from material that should have been learned in grades 1-5. Even if Mass's test is of a similar difficulty level (it is doubtful it is more difficult), it is likely that they've set the cut point sufficiently low that many more of its students pass than pass the NAEP exam.

The reality is most likely that the Mass exam only requires students to be proficient at a level far below actual grade level.
This is the general rule; it's possible that Mass is the exception.