February 7, 2009

From the Department of Huh?

Comes the conclusion of this study out of Ohio State.

A study of college freshmen in the United States and in China found that Chinese students know more science facts than their American counterparts -- but both groups are nearly identical when it comes to their ability to do scientific reasoning.


But when you look at the researchers' description of the underlying study, you see that this conclusion isn't supported, which leads me to question the researchers' own ability to reason scientifically, at least in the domain of education.

The researchers administered three tests to incoming college freshmen from China and America who had just enrolled in a calculus-based introductory physics course.

The first test, the Force Concept Inventory, measures students' basic knowledge and understanding of mechanics and forces. "The Force Concept Inventory is not 'just another physics test.' It assesses a student’s overall grasp of the Newtonian concept of force. Without this concept the rest of mechanics is useless, if not meaningless." (Force Concept Inventory, Hestenes, Wells, and Swackhamer, The Physics Teacher, Vol. 30, March 1992, 141-158).

The second test, the Brief Electricity and Magnetism Assessment, measures students’ understanding of electric forces, circuits, and magnetism.

The third test, the Lawson Classroom Test of Scientific Reasoning, measures generic science reasoning skills. You can see the kinds of questions on the exam in the appendix of this study.

The tests were given to Chinese students and American students. According to the researcher, in China, "every student in every school follows exactly the same curriculum, which includes five years of continuous physics classes from grades 8 through 12" and "schools emphasize a very extensive learning of STEM content knowledge." In the United States, "only one-third of students take a year-long physics course before they graduate from high school. The rest only study physics within general science courses. Curricula vary widely from school to school, and students can choose among elective courses" and "science courses are more flexible, with simpler content but with a high emphasis on scientific methods."

Keep those descriptions in mind because they'll be important for the conclusions drawn by the researchers.

Now let's turn to the results.

On the FCI, "[m]ost Chinese students scored close to 90 percent, while the American scores varied widely from 25-75 percent, with an average of 50." Clearly the Chinese students understand mechanics better than their American counterparts. On the BEMA, "Chinese students averaged close to 70 percent while American students averaged around 25 percent -- a little better than if they had simply picked their multiple-choice answers randomly." I guess all those physics courses helped the Chinese students understand physics, whereas all that emphasis on scientific methods at the expense of content didn't pan out so well for the Americans. These results are hardly surprising. Knowledge is domain specific and transference between domains is generally minimal.
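The "better than random guessing" comparison is simple arithmetic: under pure guessing, the expected score is just the average of 1/(number of answer choices) across the items. A minimal sketch (the item and option counts below are purely hypothetical, not taken from the actual BEMA):

```python
def expected_guessing_score(option_counts):
    """Expected percent score if a student guesses uniformly at random
    on every multiple-choice item; option_counts lists the number of
    answer choices for each item."""
    return 100.0 * sum(1.0 / n for n in option_counts) / len(option_counts)

# Hypothetical 30-item test where every item has five answer choices:
print(expected_guessing_score([5] * 30))  # -> 20.0
```

So on a hypothetical all-five-option test, chance performance would be 20 percent, which is why a 25 percent average reads as only "a little better" than guessing.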

On the Lawson Classroom Test of Scientific Reasoning, "[b]oth American and Chinese students averaged a 75 percent score." So, the Chinese students were just as capable as the American students even though their courses supposedly didn't emphasize "scientific methods" like the American students' courses did.

The researchers, however, concluded:

Lei Bao, associate professor of physics at Ohio State University and lead author of the study, said that the finding defies conventional wisdom, which holds that teaching science facts will improve students’ reasoning ability.

“Our study shows that, contrary to what many people would expect, even when students are rigorously taught the facts, they don’t necessarily develop the reasoning skills they need to succeed,” Bao said. “Because students need both knowledge and reasoning, we need to explore teaching methods that target both.”


What? This isn't the conventional wisdom. The conventional wisdom is that learning facts in a domain will improve the ability to reason in that domain. This wasn't tested in the study. What was tested in the study, via the FCI and the BEMA, was the students' understanding in the domain (physics), which was significantly higher for the Chinese students compared to the American students. Not surprisingly, the American students didn't understand much physics since they didn't learn many physics facts and their "scientific methods" instruction failed to fill the void. Constructivists take heed.

What the study also showed is that learning facts in one domain will not necessarily lead to transference to a different domain and an improvement in reasoning skills in general, whatever they may be (assuming they exist). Again, not a surprising outcome. But, the researchers' spin obscures this conclusion.

And here's the kicker.

Bao explained that STEM students need to excel at scientific reasoning in order to handle open-ended real-world tasks in their future careers in science and engineering.

Ohio State graduate student and study co-author Jing Han echoed that sentiment. “To do my own research, I need to be able to plan what I’m going to investigate and how to do it. I can’t just ask my professor or look up the answer in a book,” she said.

The irony is that this physicist didn't do a very good job conducting an investigation in a foreign domain (education). If he wanted to know who was more capable of "handl[ing] open-ended real-world tasks," he should have tested this in a domain-specific way. He should have given both groups open-ended, real-world physics problems and determined which group handled them better. I'm thinking it would have been the Chinese students.

And then we have the most unsupported conclusion of the study:

“The general public also needs good reasoning skills in order to correctly interpret scientific findings and think rationally,” he said.
How to boost scientific reasoning? Bao points to inquiry-based learning, where students work in groups, question teachers and design their own investigations. This teaching technique is growing in popularity worldwide.


The American students, who presumably were instructed in inquiry-based techniques, fared no better than the Chinese students in general reasoning ability. Inquiry-based teaching once again failed to show results. And it certainly did the students no favors when it came to their understanding of physics, in which they performed poorly.

I see nothing in this study that shows any benefits for inquiry learning. If anything, the study supports the notion that you can't teach general reasoning directly; both methods of teaching failed. What the study also clearly shows is the continuing importance of learning content if you want to understand something.

27 comments:

Stephen Downes said...

When your long commentary concludes that the investigators misinterpreted the results of their own study, I know something is amiss - with your commentary.

Let's recap:

The Chinese students were given detailed, even grueling, instruction in the facts of the discipline, specifically physics, to the point where they were scoring in the 90 percent range, on average.

The American students' knowledge of the basic facts in that discipline was basically non-existent (how ironic that we see Republicans campaigning to eliminate education funding today - but I digress).

When evaluated for scientific reasoning skills, however, the Chinese students barely passed, showing no ability over and above their American counterparts.

I think we can say pretty conclusively: fact-based education does not improve reasoning ability. You can have a class that has 90 percent average grades in a domain, and they still can't reason in it.

Ah, but wait. How is De Rosa going to reframe this?

"The conventional wisdom is that that learning facts in a domain will improve the ability to reason in that domain."

So - because the reasoning test is of reasoning in *generic* science, not physics specifically, we shouldn't expect the Chinese knowledge of facts to be of any help.

Wait... huh?

First of all, more than half of the sample questions in the generic test were physics problems. Fact-based knowledge should have resulted in *some* bump. Physics knowledge should improve physics reasoning. But there was *no* bump.

Secondly, improving one's reasoning skill in one discipline alone is pretty useless. Almost no form of employment or production involves only one discipline. Reasoning ability needs to be general.

So - even if fact-based knowledge improves reasoning in a particular domain (something that is refuted by this study) it is still insufficient for what the Chinese *actually* want, which is general reasoning skills.

You say such skills don't exist. So much the worse for your theory. Out there in the wider fact-based reality, people believe they exist, and that is what they want their students to learn.

And - as we always knew would be the case - teaching the facts of a discipline for years on end, until near perfection is achieved, does not improve general reasoning one whit. Worse, it doesn't even improve domain-specific reasoning one whit.

No wonder you're so keen to reframe this study.

Anonymous said...

fact-based education does not improve reasoning ability.

Neither does discovery learning (or whatever you would call having "science courses [that] are more flexible, with simpler content but with a high emphasis on scientific methods"), according to the study:

Neither group is especially skilled at reasoning

and later,

in a test of science reasoning, both groups averaged around 75 percent -- not a very high score, especially for students hoping to major in science or engineering.


So, nothing being taught currently seems to result in high scores on "scientific reasoning."

Chinese students scored high on content, and averaged 75% on "scientific reasoning."

American students scored low to average on content and *also* averaged 75% on "scientific reasoning."

Why would you dilute the one method that was shown to have *some* positive results?

If the goal is to find a way to improve "scientific reasoning," I would think the first step would be to figure out how to do that, not combine the method that had some positive results with a method that had *no* positive results.

Anonymous said...

Educationese again fogs up what is going on. "Facts" don't add up to the substance and structure of a discipline, which was what the Chinese focused on and which the US universities involved didn't.

I didn't realize that constructivist instruction had trickled up and is now popular in at least some University Physics instruction. My, Oh, My! They even bring in Piaget and his discredited "stages" for cryin out loud.

The test labeled "Scientific Reasoning" is a naive caricature of the "ability to do scientific reasoning."

http://en.wikipedia.org/wiki/Scientific_reasoning

The test embodies the fatal flaws in using Item Response Theory to construct achievement tests.
--Posit a latent trait.
--Use a common template to construct a set of items that are highly correlated and have superficial face validity.
--Reify the latent trait and set out to "teach" it.
--Run statistically adjusted correlations pre and post instruction which are one step above gibberish, and that are mindlessly interpreted.

As Ken says, it's ironic that the study purports to be about "scientific reasoning."

Each one of the items in the "scientific reasoning" test involves a straightforward physics principle. To get perfect scores on the test, one would make sure that the student was taught this background information if it was not already known.

Would the student be any more capable of engaging in the complicated endeavor of scientific reasoning? Of course not, silly.

Anonymous said...

Isn't it weird to compare an intervention administered over 5 years with one administered over - at most (usually) - 1 year? How are we supposed to conclude anything at all about the relative merits of the interventions?

Anonymous said...

Point well-taken, Paul. Even worse, there was no "intervening" -- just ill-defined instruction and grab-bag comparison groups.

The most intriguing data, in my view, is the fact that the correlations for Harvard differed so much from the patterns for the other universities. This the authors ignored altogether. Researchers these days seem to have forgotten the importance of descriptive statistics and have never learned how to run analyses of variance. Correlational analyses rule.

KDeRosa said...

Stephen, not unsurprisingly, I disagree with most of your comment:

The Chinese students were given detailed, even grueling, instruction in the facts of the discipline, specifically physics, to the point where they were scoring in the 90 percent range, on average.

There's no indication that their instruction was just in the facts of physics. It was more likely a traditional quantitative physics course with an emphasis on problem solving. Note that the FCI was a qualitative test, not a quantitative one, that supposedly measures understanding. Also note that the Chinese scores on the BEMA were only 70%.

When evaluated for scientific reasoning skills, however, the Chinese students barely passed, showing no ability over and above their American counterparts.

They scored the same as their American counterparts, who had received specific instruction in scientific methods in lieu of content instruction in the (mistaken) belief that it would promote scientific reasoning abilities. It didn't. And, to boot, they didn't learn any content either.

I think we can say pretty conclusively: fact-based education does not improve reasoning ability. You can have a class that has 90 percent average grades in a domain, and they still can't reason in it.

The Lawson test was not a test of domain reasoning. It was a test of general scientific reasoning ability. None of the administered tests measured domain (physics) reasoning ability unfortunately.

So - because the reasoning test is of reasoning in *generic* science, not physics specifically, we shouldn't expect the Chinese knowledge of facts to be of any help.

We'd expect some transference but not much. I'm not sure there's much crossover between physics and biology, statistics, chemistry, and physical science, which were among the topics tested on the reasoning test.

First of all, more than half of the sample questions in the generic test were physics problems. Fact-based knowledge should have resulted in *some* bump. Physics knowledge should improve physics reasoning. But there was *no* bump.

You have an odd understanding of what physics comprises. I only see about 20%-25% of the questions being related to physics, and I'm not sure that appendix is an actual exam, so I'm not sure that mix of problems was accurate anyhow.

Physics knowledge should improve physics reasoning. But there was *no* bump.

Bump from what? There was no pre-test that measured their reasoning ability prior to physics instruction. The only comparison was between Chinese students who were taught physics and American students who were taught scientific methods.

Secondly, improving one's reasoning skill in one discipline alone is pretty useless. Almost no form of employment or production involves only one discipline. Reasoning ability needs to be general.

This is wrong. We have many specialties in the sciences because we need people who can reason well within those domains. The ability to reason within and across more than one domain is an important skill, but I wouldn't characterize this as some general reasoning ability.

So - even if fact-based knowledge improves reasoning in a particular domain (something that is refuted by this study)

Actually, this wasn't refuted in this study because no test of physics reasoning was administered. Moreover, the qualitative physics tests administered clearly indicated that the Chinese students understood physics better. Understanding of physics is what the first two tests sought to measure. Read the research on the FCI in particular.

it is still insufficient for what the Chinese *actually* want, which is general reasoning skills.

What everyone wants is experts who can reason well at least in their domain of expertise. What is apparent is that it takes more than five years of high school level physics before a student knows enough science to think scientifically. This shouldn't be surprising. What also shouldn't be surprising is that a few years of content-free constructivist heavy courses don't do any better in this area.

And - as we always knew would be the case - teaching the facts of a discipline for years on end, until near perfection is achieved, does not improve general reasoning one whit.

More accurately, it doesn't improve the ability to reason (by much) outside of the domain which is what I've been saying all along.

Worse, it doesn't even improve domain-specific reasoning one whit.

This study doesn't answer this question because no domain specific tests of reasoning ability were administered.

Anonymous said...

"Isn't it weird to compare an intervention administered over 5 years with one administered over - at most (usually) - 1 year? How are we supposed to conclude anything at all about the relative merits of the interventions?"

I must be missing something. We can conclude that the Chinese students are taught more physics than the American students and that they remember/know more physics than the American students. No?

I might be tempted to leap to the conclusion that we *can* teach more physics to our high school students than we are currently doing IF WE WANT TO DO SO.

-Mark Roulo

KDeRosa said...

Isn't it weird to compare an intervention administered over 5 years with one administered over - at most (usually) - 1 year?

I was under the impression that the American students received more than one year of science instruction, it just wasn't necessarily physics.

There wasn't an indication that there was an equivalent amount of instruction, so this could be a confounder.

Anonymous said...

Am I the only one who wonders how much one can generalize a study on college students who were ready to take Calculus as freshman onto a general population?

With 50% of the CalState freshman requiring remedial math or English (or both), I'm pretty confident that the majority of college freshmen are not prepared for Calculus.

So maybe we know something about STEM majors, but not about anyone else.

Right?

-Mark Roulo

Anonymous said...

From the article:

"We need to think of a new strategy, perhaps one that blends the best of both worlds."

If I understand the study correctly, the Chinese either outscored the Americans (on content) or scored the same (on reasoning).

What "best of both worlds" is the American approach supposed to contribute? If we had outscored the Chinese on reasoning, I would understand the statement, but according to the article, we didn't.

My read would be that the Chinese are pretty good at teaching their kids "facts", and neither approach is terribly good at teaching "reasoning." Isn't that what the study shows?

-Mark Roulo

Anonymous said...

"I was under the impression that the American students received more than one year of science instruction, it just wasn't necessarily physics."

The typical US high school science sequence goes: Biology -> Chemistry -> Physics.

It is rare for a US high school student to take physics without also taking biology and chemistry first.

[NOTE: This sequence appears to be logically inverted from the actual dependencies ... knowing physics helps understanding chemistry, but not the other way around. Still, this is what we do.]

-Mark Roulo

KDeRosa said...

What "best of both worlds" is the American approach supposed to contribute? If we had outscored the Chinese on reasoning, I would understand the statement, but according to the article, we didn't.

That's my read, Mark. I don't see where the call for balance is coming from. I see no advantage to the American method coming out of this study at least.

My read would be that the Chinese are pretty good at teaching their kids "facts", and neither approach is terribly good at teaching "reasoning." Isn't that what the study shows?

Or it simply takes more instruction to make an expert who can reason critically at least in the domain of expertise.

Anonymous said...

"...it simply takes more instruction to make an expert who can reason critically at least in the domain of expertise."

YES. "Reasoning" is not anything that can be taught as such. It falls in the category of empty reified abstractions that also includes--
--comprehension
--critical thinking
--higher order thinking
--problem solving
--appreciation
--awareness
--others that don't come to mind at the moment.

Individuals differ in general ability, and they differ in motivation and other personality and social characteristics that can be measured as such. But the measurement of these general factors is beside the point when it comes to the reliable accomplishment of specified expertise. We teach the students we have, not the students we'd like to have. The same holds for teachers.

All organisms, including human beings have greater capacity to learn than they're typically given credit for. But "raising standards" or "raising expectations" is idle talk.

Unless there is focus on minimizing the prerequisites for taking on instruction, an operational specification of what constitutes the expertise that will be delivered, and a transparent means of monitoring the delivery of that expertise, it's "talking through your hat."

DI and a few other instructional architectures aside, today's instruction is all hat and no delivery.

Sure, some students learn. But some students learn without any instruction and some learn despite mal-instruction. And there are ingenious teachers, principals, and administrators that one can point to as exemplifying "best practice." The thing is, the practices don't export because they are idiosyncratic to the individuals.

About the only thing we can reliably replicate is "Departments of Huh?" And we have a plethora of those, as Ken keeps demonstrating.

Tracy W said...

Dick - 1. IRT does not "posit a latent trait". Physics knowledge, or ability to do scientific reasoning, is a latent trait as defined by the psychometricians in that it can't be directly measured. You can't look at a person and decide if they can do a physics problem, you actually need to ask them a physics problem. That makes physics ability a latent trait by their definition. If you have some way of measuring a student's physics knowledge or scientific reasoning without them needing to do anything related to physics or scientific reasoning (be that answering pencil and paper test questions or answering verbal questions or constructing a tool or aiming an artillery gun), do tell us.

2. What do you mean by a common template? And how precisely does IRT require this?

3. I don't know what you mean by saying that items are highly correlated. Is it possible that you are referring to the use of characters in the test questions, eg tabulating the number of "a"s, "b"s, etc used in each question and seeing how they correlate with each other? Is not that level of correlation merely a result of the English language and the subject area? Anyway, I will assume that you are at least attempting to say something sensible and that you meant to say that the answers are highly correlated.

Whether you want items to have high correlation depends on the sort of exam you are designing. If you want your exam to produce a normal distribution of answers at the end (something you tend to blame standardised achievement tests for doing) you want items that are not highly correlated, so only a few students get most or all of the answers right.

Personally I can't think of a reason for designing an exam in which you want the answers to the questions to be highly correlated - even for a threshold exam in say "knowledge of high explosives" where you only want to pass people who you are really sure genuinely know a lot about high explosives, you'd want questions that test as many different aspects of the relevant knowledge as possible, so as to catch out someone who is weak in one area. (Obviously not every threshold exam needs to be as tough as "knowledge of high explosives".)
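The point about item correlation and score distributions can be made concrete with a small simulation: independent items produce a bell-shaped (binomial) spread of total scores, while perfectly correlated items collapse scores to all-or-nothing. A rough sketch under purely illustrative assumptions (the student count, item count, and success probability are invented, not drawn from any real test):

```python
import random

def simulate_scores(n_students, n_items, correlated, p=0.5, seed=0):
    """Simulate total scores on an n_items test.

    If correlated is True, all items share one underlying per-student
    'knows it or not' event (perfectly correlated items); otherwise each
    item is an independent success with probability p."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_students):
        if correlated:
            knows = rng.random() < p
            scores.append(n_items if knows else 0)
        else:
            scores.append(sum(rng.random() < p for _ in range(n_items)))
    return scores

# Perfectly correlated items: every student scores either 0 or full marks.
# Independent items: most students land somewhere in the middle.
```

With perfectly correlated items, the distribution is bimodal (0 or 100 percent); with independent items, scores cluster around the chance level, which is the binomial shape a test designer gets when items do not all measure the same thing.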

4. There is nothing in IRT that stops people from properly validating achievement tests.

5. How are you defining reifying in this context?

6. Achievement tests do not set out to "teach" anything. Taking an achievement test may result in the testee learning things, because of practice (eg my driving test meant that I did get a bit of extra practice parallel-parking, driving on motorways, etc), or because of exposure to new information (eg a test of historical reasoning ability that asks the student to read and interpret some source material that in turn exposes the student to some new historical facts), but achievement tests try to measure achievement, not teach, or even "teach".

7. Whether statistical correlations are one step above gibberish or many steps above gibberish depends on the quality of the scientific experiment in general, not on IRT. IRT is about the test questions. It is entirely possible to administer only once an achievement test designed using IRT, and never run any pre and post instruction correlations. Avoiding gibberish in pre-and-post instruction correlations is a matter of choosing suitable control groups, avoiding spurious correlations, etc.

8. There is nothing in IRT that requires people to mindlessly interpret things, any more than a car requires you to drive badly. Despite what you think, IRT is not some monster out of a SF movie that has magical abilities to make the characters do completely stupid things, it's a mathematical process. It can't make people be mindless, it can't magically churn out normal distribution curves regardless of the make-up of the test questions, etc. If anyone mindlessly interprets things, it's their own fault, not that of IRT.
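To make concrete the claim that IRT is "a mathematical process": the simplest item response model, the one-parameter (Rasch) model, expresses the probability of a correct answer as a logistic function of the gap between a test-taker's latent ability and an item's difficulty. A minimal sketch:

```python
import math

def rasch_p(ability, difficulty):
    """One-parameter (Rasch) item response function: the probability that
    a test-taker with the given latent ability answers an item of the
    given difficulty correctly. Both parameters live on a shared logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability exactly matches item difficulty, the model predicts 50/50:
print(rasch_p(0.0, 0.0))  # -> 0.5
```

Nothing here forces any particular score distribution; the distribution of observed totals depends on the distribution of abilities and difficulties the test designer actually has.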

Anonymous said...

My post was not about IRT. I'd regret that I ever introduced the theory and practice to the D-Ed Reck dialog, were it not so important and so pandemic in current education. However, this is not the place to provide DI to straighten out misconceptions re the detailed workings of the theory. Get thee to an advanced psychometrics course for that.

The authors of the "Scientific Reasoning Test" likely knew very little of IRT, but they did speculate that the abstraction (which they consider "real") might function as a "hidden variable." That's a lay way of viewing a "latent trait."

If anyone has any evidence that any of the constructs I've listed can be instructed directly, flagging the evidence here would help the Department of Huh.

Ken has aired this matter from several angles. To date, no weapons of latent trait instruction have been identified. But the evidence could be hidden unpublished in someone's desk drawer.

Anonymous said...

I agree with Dick Schutz in his first comment above when he says that " 'Facts' don't add up to the substance and structure of a discipline . . ." Content knowledge is not just a collection of facts. It never was, not in the twentieth century, not in the nineteenth, and not today. Content knowledge in most any subject includes a lot of connections, which includes a lot of cause and effect connections, as well as other kinds of connections. Proponents of "inquiry based learning" seem to think that if we do not adopt their language and methods, then we must be teaching facts in isolation. Thus if students listen to a physics lecture, rather than discussing physics in a small group, then they must not be engaged in critical thinking. They must be learning facts by rote. I think that view is totally unfounded. Indeed it seems more the opposite to me. A lecture in physics, or most any subject, involves carefully guiding the listeners through the development of connected ideas. An expert can present those connected ideas in a coherent and directed way, pointing out and emphasizing those connections among the ideas. A small group discussion, though beneficial and appropriate in some ways and for some purposes, is not good at that.

It is hard to learn facts in isolation. Why would anyone try? Why would anyone think that other teachers try to teach facts in isolation? It is true that a single isolated fact may be easily committed to memory, but to take many facts, without connections, and reliably commit them to memory, is not so easy. Normally we do not try. We teach science, and math, as clusters of ideas, concepts, and connections that form structures of knowledge. The various parts are connected, and those connections are important. This is not to say that facts are easy to learn in intricately connected structures of knowledge. The parts of those intricately connected structures of knowledge must be carefully tweezed apart and analyzed. The connections must be carefully developed. Both teaching and learning are hard work. But it is satisfying work when it is done right, and well worthwhile.

It does seem to me that this study presents an important question. If American students are behind the Chinese students in content knowledge, then why would they perform approximately equally well on a test of scientific reasoning? Are, indeed, American students getting something that compensates for this deficiency in knowledge? And could it be instruction in critical thinking? I am very skeptical. How could we establish that American students are indeed getting instruction in critical thinking? How do we know what actually goes on in American classrooms? It's not hard to know what the current educational fads are, but we have always had plenty of evidence that those fads are rejected by many teachers. And when it is attempted, how do we know that it has any benefit? Indeed, how do we know how to teach critical thinking? Some people think we teach it by doing projects and discussing things in small groups. I think we do it by explaining carefully, assigning well chosen problems as homework, by giving feedback on that homework, and by testing and having a system of incentives for achievement. Call me old fashioned, but I think it works.

How do we know that tests for scientific reasoning are either valid or reliable? And how do we know that various forms of selection, self or otherwise, are not more important than instruction?

How much can we trust a test of scientific reasoning? I went to the link http://myweb.lmu.edu/jphillips/PER/ajp-12_05.pdf and looked at the test questions. Some of them seem similar to what Piaget did, and aimed at elementary school students. Some seem just a matter of logic. Some depend on a knowledge of probability. And some seem to depend on having some knowledge.

But all of them, it seems to me, are heavily weighted in language. Are the problems being interpreted correctly? And if not, does that say more about the test makers, or the test takers, or the nature of the task, or the nature and limitations of language? Was the test translated into Chinese for the Chinese students? Are the Chinese so good at English that it didn't need to be? If the test is translated, how do we know the translation is good, and if it's given in English, how do we know their English is that good?

And since language is involved, then of course culture is involved, and cultural expectations, and cultural perspectives.

I long ago decided to take all educational research with a lot of skepticism. We should not dismiss it all by knee jerk reflex, of course, but I think it makes sense to treat it all as merely suggestive. My own view, admittedly subjective and intuitive, is that the tests for content knowledge are meaningful. The test of scientific reasoning is not.

Tracy W said...

Dick: My post was not about IRT. I'd regret that I ever introduced the theory and practice to the D-Ed Reck dialog, were it not so important and so pandemic in current education.

I agree with you that it's important. If we can't properly test achievement, I am extremely doubtful about our ability to improve instruction in order to improve achievement. This is why I keep responding whenever you say something silly about IRT - I'm scared that your misconceptions will lead people to abandon standardised achievement tests or to badly design them. And the idea that a mathematical theory can make people mindlessly interpret correlations is just so outrageously wrong that I can't resist jumping on it.

However, this is not the place to provide DI to straighten out misconceptions re the detailed workings of the theory. Get thee to an advanced psychometrics course for that.

Dick, I think you need to start with some basic university mathematics and statistics courses before you head off to an advanced psychometrics course. Your surprise that I expected a mathematical proof to back up your claim that standardised achievement tests always produce a normal distribution, your idea of a "proof", and your description of the mathematics of IRT as "arcane mathematics" make me think you'd be totally lost in an advanced psychometrics course.

More generally, I don't get you. You say things like "Reify the latent trait and set out to 'teach' it". But when I ask you even what you mean by "reify", you ignore the question and claim this isn't the time or place. If it was worth saying in the first place, why isn't it worth explaining?

The authors of the "Scientific Reasoning Test" likely knew very little of IRT, but they did speculate that the abstraction (which they consider "real") might function as a "hidden variable." That's a lay way of viewing a "latent trait."

"Hidden variables" I don't think I would call lay, just from a different area of science. That sort of thing happens a lot. Anyway, it gets across the same point, that scientific reasoning, or physics knowledge, is not measurable in the way that weight or height are.

As for your remarks about scientific reasoning, are you arguing that there is no detectable difference between a person who can reason scientifically, and a person who can't? Eg that there is no difference between someone who knows when a control group is needed versus someone who doesn't?

And to repeat my earlier questions -
What do you mean by a common template? And how precisely does IRT require this?
What do you mean when you say that items are highly correlated?
How are you defining reifying in this context?

You were happy to post these statements here, I expect you to explain and defend them.

Anonymous said...

Brian Rude asks: "If American students are behind the Chinese students in content knowledge, then why would they perform approximately equally well on a test of scientific reasoning?"

Answer: Because "scientific reasoning" doesn't exist as such. It's as fictitious as a three dollar bill, Santa Claus, and the Tooth Fairy.

Proof: Run a thought experiment. --Look at the test items. Each presupposes background information (commonly upgraded in usage as "knowledge") of a physics principle.
--Prepare a lecture, write a DI script, or list a Wikipedia link for each of the principles.
--In less than an hour American college students will surpass the Chinese and any other nation in Scientific Reasoning.
--Chuck the stimulus package for Education. IRT has solved the problem. Our Scientific Reasoners are world class and have the smarts to solve global warming, straighten out the Mideast, and so on. All we need are more IRT-generated tests.

Tracy W said...

Dick Schutz, you propose a solution that doesn't include IRT at all, and then say "IRT has solved the problem". What is it about you and IRT? One moment you're accusing it of causing mindless interpretation of statistical correlations, the next you're claiming that lecturing, writing a DI script, or listing a Wikipedia article all invoke IRT to magically create instruction. It's just maths. Maths isn't magic.

Secondly, I don't believe you could teach the average American college student, or any country's average college student, all the scientific principles involved in less than an hour. You might be able to find a country in which the average college student already knew them, but that's not the same thing. If you have managed to come up with such a course, please provide some data on it. As for the Wikipedia links, how do you plan to get the students to read and understand them all?

Thirdly, how does having scientific reasoning ability solve global warming and straighten out the Mideast? Those aren't scientific problems, they're political ones. The process of solving or failing to solve those two problems can disprove some hypotheses in economics and in political science, but both problems are so unique that I can't see how they can be solved by any scientific method I know of.

To be blunt, Dick, just because you say that scientific reasoning doesn't exist doesn't mean that scientific reasoning doesn't exist. You may be ignorant of scientific reasoning, as indicated by your "thought experiment", and your response to my requests for a mathematical proof of your claim that standardised achievement tests always produce a normal distribution, but that merely shows that you are ignorant of scientific reasoning, it doesn't prove that everyone else is. Just like my inability to run a four-minute mile doesn't mean that no one can run a four-minute mile.

Finally, to repeat my earlier questions -
What do you mean by a common template? And how precisely does IRT require this?
What do you mean when you say that items are highly correlated?
How are you defining reifying in this context?
As for your remarks about scientific reasoning, are you arguing that there is no detectable difference between a person who can reason scientifically, and a person who can't? Eg that there is no difference between someone who knows when a control group is needed versus someone who doesn't?

Anonymous said...

I thought everyone knew what a "thought experiment" involves.

Wrong!

Tracy W. for one doesn't know. Or if she does, she didn't run the experiment I suggested.

A "thought experiment" saves you the time and effort of running a real experiment, because the outcome is a forgone foolish conclusion.

Let me try explaining this from a slightly different angle. The test of "Scientific Reasoning" is a pseudo measure. One can "teach to the test" in a variety of ways and in a very short period of time reliably get perfect scores on the test. This is so transparently evident that there is no need to run an actual experiment to prove the point.

Tracy W. is trying to drive the thread from the Department of Huh? to the Department of Huh? squared.
Such is life on the Internet.

When are you going to give us something new to think about, Ken?

RMD said...

I don't have time to delve into the details of this discussion at this moment.

However, one thing strikes me .. .

Why are people continuing to argue that American students should do less than their foreign counterparts? Shouldn't we be striving for excellence? What is lost by striving to make sure our students know as much as possible when they graduate?

I still can't understand the argument for less in education.

Tracy W said...

Dick Schutz:
I thought everyone knew what a "thought experiment" involves.

This strikes me as overly optimistic about the state of philosophy education. However, you have had the misfortune to irritate me, someone who is not impressed by fancy words like "thought experiment". It's not enough to know what a "thought experiment" is; you actually have to construct a convincing one, and one that proves the point you originally tried to make.

The test of "Scientific Reasoning" is a pseudo measure. One can "teach to the test" in a variety of ways and in a very short period of time reliably get perfect scores on the test.

Dick, you are committing the fallacy of equivocation, in that you are redefining terms. In response to Brian you said that "scientific reasoning" doesn't exist as such and presented your "proof". Here however you are switching from scientific reasoning to a particular test of scientific reasoning. Frankly, I don't believe that within one hour you could teach the average American college student, or any country's average college student, to reliably get perfect scores on the "Lawson Classroom Test of Scientific Reasoning". But even if you could, that would not prove that "scientific reasoning" does not exist - your earlier claim. It is entirely possible that students who could get perfect scores on the "Lawson Classroom Test of Scientific Reasoning" would suck at a different test of scientific reasoning aimed at the same level of ability. Despite your insinuations here, test developers are well aware that teachers could just teach exactly to any standardised achievement test and produce students who can churn out the right answers without actually improving in their underlying ability. There are two ways of dealing with this:
- The apparent American system of keeping the exact test questions secret.
- The NZ system of publishing the old tests and writing a new test every single time with different questions in it.
Both systems have their problems (keeping something secret over many years is harder than keeping something secret just for the development time, and secrecy reduces the incentive to write good test questions; on the other hand, writing a new test each time is expensive and raises comparability problems between years). However, test developers, even IRT guys, are well aware of the learning problem you describe and seek to avoid it.

In this case, your thought experiment is not a valid criticism of the study in question. Both Chinese and American students averaged a 75% score on the Lawson Classroom Test of Scientific Reasoning. Therefore they haven't been taught how to get perfect results on this test.

And also, even if you could, within one hour, instruct students in scientific reasoning so well that they could get perfect scores on any set of questions about scientific reasoning at the skill level measured by the "Lawson Classroom Test of Scientific Reasoning", this would not say anything about IRT or standardised achievement tests. All that would tell us is that you're a damned fine curriculum designer/teacher who should be enticed away from Internet arguments about IRT to working full-time on designing curricula to teach scientific reasoning.

This is so transparently evident that there is no need to run an actual experiment to prove the point.

Three things:
First, your claim that within one hour you could instruct the average American college student (or the equivalent in any country) well enough to get a perfect score on the Lawson Classroom Test of Scientific Reasoning is not transparently evident.
Secondly, even if you could, this would not show that the student would therefore get a perfect score on any test of scientific reasoning just after an hour's instruction.
Thirdly, the results of this experiment, if actually run, even if they came out as you predicted, would not support your claim that scientific reasoning doesn't exist.

The test of "Scientific Reasoning" is a pseudo measure.

I do not know how valid the "Lawson Classroom Test of Scientific Reasoning" is, but you have not shown any better way of testing those attributes that psychometricians describe as "latent traits" (like scientific reasoning or reading ability or driving ability, as opposed to things like height or weight) than a well-designed and validated standardised achievement test.

And accusing the "test of scientific reasoning" of being a pseudo-measure is a very different thing to claiming that scientific reasoning itself doesn't exist.

Tracy W. is trying to drive the thread from the Department of Huh? to the Department of Huh? squared.
Such is life on the Internet.


Dick, please answer my questions below:
What do you mean by a common template? And how precisely does IRT require this?
What do you mean when you say that items are highly correlated?
How are you defining reifying in this context?
As for your remarks about scientific reasoning, are you arguing that there is no detectable difference between a person who can reason scientifically, and a person who can't? Eg that there is no difference between someone who knows when a control group is needed versus someone who doesn't?
How does having scientific reasoning ability solve global warming and straighten out the Mideast?

Anonymous said...

Oh boy. You've got too many things garbled, Tracy. Just delete the terms you don't understand and the sentences you don't like. I'll be glad to redact them, because they're tangential to the two points I've been trying to make here:

One, "scientific reasoning" is among a set of constructs that cannot be taught directly. It IS possible to teach matters of science. And with such expertise an individual can make conjectures and draw conclusions--within the field of expertise--that can be called "reasoning." But "scientific reasoning" is not a general trait, task, or skill. It's discipline-specific.

Two, The Lawson "Test of Scientific Reasoning" is a pseudo- measure. In a very short time aggregate high school students could be taught the rudiments of the principles necessary to get near-perfect scores on the test. The kids wouldn't learn all that much physics, and they'd likely forget what they were taught in short order. But the test results would chalk them up as at the top of the scale of "Scientific Reasoning ability."

The Ohio State profs who Ken called out were misinformed on both these points. That's my contention. But it's conceivable that the two points are flawed. If so, I'd like to be straightened out.

The larger point is that there are a number of other pseudo-measures in (mis)use. The tests stem from IRT and from general psychological theory that was long ago abandoned except by the educational testing industry. The tests are counter-productive pedagogically. But I have no interest in delving further into the workings of IRT--at least not on the D-Ed Reck watch.

Tracy W said...

Dick: Oh boy. You've got too many things garbled, Tracy.

Nice use of the ad hominem there.

Just delete the terms you don't understand and the sentences you don't like.

No. You write sentences I don't like (particularly when they say silly things about IRT and standardised achievement tests) and I will do my best to refute them. You use terms I don't understand, or I suspect that you don't understand, and I will ask you to explain them. I know this is tough for you, Dick, but if it's not worth defending or explaining your statements, I suggest not making them in the first place.

Dick, before you type in another instruction to me along these lines, think about it from my point of view. What interest do I have in not arguing with you? What can you offer that might induce me to stop this debate?

I'll be glad to redact them

The definition of the word "redact" I am aware of is "prepare for publication or presentation by correcting, revising or adapting." If you wish to correct your ideas because I have convinced you that they are wrong, I am very happy about that. If you wish to revise or adapt them, please let me know what to. But I don't think your ideas are ready for publication or presentation yet.

Perhaps this is a typo for retract?

One, "scientific reasoning" is among a set of constructs that cannot be taught directly. ... "scientific reasoning" is not a general trait, task, or skill. It's discipline-specific.

This is again different from saying that it doesn't exist. I assume that you now withdraw your earlier statement that scientific reasoning doesn't exist. Your earlier statement I will quote here: "Answer: Because "scientific reasoning" doesn't exist as such. It's as fictitious as a three dollar bill, Santa Claus, and the Tooth Fairy."

You have now apparently introduced a new hypothesis that scientific reasoning does exist, and is discipline-specific - am I right? This to me implies that if we want to teach "scientific reasoning" we need to teach and test it across a variety of different scientific disciplines, like the driving licence test I did, which tested a variety of different skills, such as driving on the motorway, parallel parking, hill starts, driving in inner-city traffic, and 3-point turns (and before the test I was taught and practiced all those skills separately, except for 3-point turns and hill starts, which I practiced together, as both were necessary to get my mum's car out of the garage and facing the right way up the driveway).
Now it may not be worthwhile teaching scientific reasoning in a variety of different disciplines to many students. The time it takes may be too long. But that's not the same as saying that scientific reasoning doesn't exist.

In response to your point about the Lawson Test of Scientific Reasoning, yes, it is plausible that you could teach students to get really high scores in a short period of time on that specific test, if you had access to all the questions (although I am heartily skeptical about less than one hour). I very much doubt though that they would thereby get good scores on slightly different tests of scientific reasoning aimed at the same skill levels. And if you could, that would merely show how good you are at teaching scientific reasoning. This is the point I made previously.

As for your statements about pseudo-measures, your blaming of all these problems on IRT is just silly. IRT does not cause people to mindlessly interpret statistics or badly design experiments. The point of IRT is to be able to estimate a student's underlying ability independently of the difficulty of the test (which, incidentally, it can't do if the student gets all the questions wrong or all of them right).
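To make concrete what "estimating ability independently of test difficulty" means, here is a minimal sketch of the two-parameter logistic (2PL) model commonly used in IRT. The item parameters and the response pattern below are invented for illustration; they have nothing to do with the Lawson test or any real instrument.

```python
import math

def p_correct(theta, a, b):
    """Probability that a student of ability theta answers correctly
    an item with discrimination a and difficulty b (2PL model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Five hypothetical items, ordered from easy (b = -1) to hard (b = +1).
items = [(1.0, -1.0), (1.2, -0.5), (0.8, 0.0), (1.5, 0.5), (1.0, 1.0)]

# One student's answers: 1 = correct, 0 = incorrect.
responses = [1, 1, 1, 0, 0]

def log_likelihood(theta):
    """Log-likelihood of the response pattern at a given ability."""
    ll = 0.0
    for (a, b), r in zip(items, responses):
        p = p_correct(theta, a, b)
        ll += math.log(p if r else 1.0 - p)
    return ll

# Maximum-likelihood ability estimate by simple grid search over theta.
grid = [i / 100.0 for i in range(-400, 401)]
theta_hat = max(grid, key=log_likelihood)
print(f"estimated ability: {theta_hat:.2f}")
```

Because the estimate is computed through the item parameters, a student's theta can in principle be recovered from any calibrated set of items, easy or hard - which is the sense in which it is independent of the test's difficulty. And an all-correct or all-wrong pattern pushes the likelihood maximum off to infinity, so no finite estimate exists in that case.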
And what precisely is the general psychological theory that was long ago abandoned except by the educational testing industry?
And "the tests are counter-productive pedagogically" - this is a mysterious claim. If you don't use standardised achievement tests you can't say anything reliable about the success of instructional methods. Standardised achievement tests are essential pedagogically because of the problems with observer bias in scientific research.

But I have no interest in delving further into the workings of IRT--at least not on the D-Ed Reck watch.

Well if you tell me where else on the world wide web you would prefer to discuss this, we can go and talk about it there if I can get access to it.

As for my questions from earlier:
What do you mean by a common template? And how precisely does IRT require this?
What do you mean when you say that items are highly correlated?
How are you defining reifying in this context?
How does having scientific reasoning ability solve global warming and straighten out the Mideast?

Are you now redacting all these statements about reifying, IRT requiring a common template, your claim that items are highly correlated, and that scientific reasoning can solve global warming and straighten out the MidEast? And if so, what to? Or are you retracting all these statements? If neither of these two options, can you please answer my questions?

Anonymous said...

--The American students, who presumably were instructed in inquiry-based techniques, fared no better than the Chinese students in general reasoning ability. Inquiry-based teaching once again failed to show results. And it certainly did the students no favors when it came to the students' understanding of physics, in which they performed poorly.

But why presume that? Really, why presume that high school physics classes are inquiry based?

All of the data to date on the FCI shows that traditional classrooms at both the high school and university level are abysmal at teaching physics. Teachers have no idea that their students have grasped nothing of the basics of mechanics, even as their students succeed at performing the mechanical problem solving correctly. (Non traditional classrooms have higher variance in their physics teaching...so some are terrible and some not.)

I'd say what this study shows is something else: the Americans don't know how to teach physics. The Chinese do. Teaching physics doesn't teach reasoning per se, and Americans don't know how to teach that either. But drawing conclusions about inquiry-based physics is really a bridge too far.

Anonymous said...

Now, the interesting question is WHY Americans don't know how to teach physics, and largely the answer is because they don't know any.

This is true of the professors in the universities too. Every issue of the physics teaching literature (AJP, The Physics Teacher, etc.) is filled with more assessments of what university students don't know; most of the best theorists have memoirs or journal articles that admit how many of their own colleagues can't solve problems; most profs admit at least one teaching prof in their department has taught Special Relativity entirely wrong; etc. etc. etc.

This might be mind-boggling, but the simple answer is: Newtonian physics is not as "common-sensical" as you might expect. We humans don't recognize inertial frames; we don't know how to tell when forces are acting on an object; we don't know how to tell when something is accelerating. Our intuition is wrong, and the way physics is taught, we practice our wrong intuition until it's cemented in, and most profs never notice, because they think they and their students share an overlap of consciousness re: what their common vocabulary means.

It appears that eventually, some students find out their mistakes, and correct their misconceptions. Apparently, this happens in a counter intuitive way.

The students learn to do the problems to mastery. They do them so well that they know they've done them right. Somewhere in grad school, their oral exam or their thesis prof or some undergrad students of theirs force them to examine their misconceptions; they blurt out the wrong idea, but later work the problem on paper AND TRUST THEIR RESULT. THEY PAY ATTENTION to the result, and realize their intuition was wrong--the paper answer is right. Eventually, they do this until their intuition matches their mastery.

This implies something more interesting I think---we often have the idea that we need good intuition to solve problems well. In physics at least, though, the best students who go the farthest don't appear to have better intuition at all; they have better skills, though, and can rote solve a problem and TRUST THEIR SOLUTION IS RIGHT.

This is formal mastery without intuition---the opposite of discovery learning, in one sense. Yet this is obviously not optimal, since the student doesn't get to the discovery until grad school. And then, eventually, through extensive questioning, they are led to correct their intuition. Not discovery either, as it's strongly Socratic and usually guided by an instructor. Think of all the years they did the problem right without realizing their intuition was wrong!

The Chinese students have mastery and intuition. Which came first? Did they just learn earlier the same way?

And lastly, does this mean our notion that mathematical intuition needs to be built up is wrong too? Is it better just to rote-teach for years and then hit them over the head with a hammer and say "the answer is telling you you're wrong!"? Or could something be done in between, to teach the intuition and the rote problem solving simultaneously?

Tracy W said...

Hmm, Allison, interesting points about Newtonian physics.

I think I'm getting more convinced that scripting lessons is the way to go for much of schooling. Many concepts need to be explained so precisely that we just don't have enough people who know them well enough to really be qualified to teach them if we want to reach the maximum number of students. Not just Newtonian physics, but the details of learning reading, or basic arithmetic. I can read fine and I can do basic maths fine, but when I read what university-level linguists write about the details of reading and speaking (see http://itre.cis.upenn.edu/~myl/languagelog/archives/001382.html), or what Hung-Hsi Wu (http://math.berkeley.edu/~wu/) writes about the details of mathematics, I realise I don't know anything like enough to properly teach those subjects. And it sounds like education schools aren't teaching their students the details of how to read or a really fundamental understanding of arithmetic, let alone all the other subjects that schools wind up teaching.