Stephen Downes, I asked you before, in a comment on one of Ken's posts, whether there is any evidence that could convince you that content knowledge is necessary for thinking. I suppose I was too late and you were no longer reading that thread, so I'll try again here, running the risk, of course, that you might not be following this one either.
Is there any evidence that could convince you that content knowledge might be necessary for reading and critical thinking?
I pulled Tracy's comment up front because I'd also like to hear a straight answer from Stephen to this question, and the best way to attract an answer from him is to write a blog post, which he'll see in his feed reader. Typically, he'll be the first to comment.
Of course, Tracy is trying to thread the needle by expecting a concise, clear, and (fully) supported answer from Stephen that answers the precise question asked. Tracy expects this type of answer because that's how she would answer (I know this from the numerous comments Tracy has left on many blogs). Though there is a slight chance he will provide such an answer (just to prove me wrong), it is far more likely that he will respond in one of two ways.
Either he will provide a concise answer based on at least one re-defined term of the question. In this case, he'll re-define the term "content knowledge" to mean "only facts taught by rote," and he'll provide an answer along the lines of "Facts are almost never necessary for critical reading and thinking skills." Technically, his answer is accurate based on his re-definition. The problem is that he often doesn't tell you about the re-definition, leaving it implicit. It took me two years to determine that he was operating under this particular re-definition of "content knowledge."
Or he'll provide the classic shaggy-dog answer. I often get to the end of one of Stephen's lengthy blog posts only to find that the conclusion doesn't necessarily follow from what's preceded it. This doesn't mean that he's necessarily wrong; all it means is that I cannot follow his argument as written. That's a failure of advocacy. (My day job entails quickly understanding and analyzing arguments based on technical descriptions written by experts for other experts in domains in which I am not necessarily an expert, so if anyone should be able to make sense of Stephen's arguments, it's me.)
We actually don't need to reinvent the wheel on this issue. Stephen has already provided partial answers.
Let's start with this one.
[Y]es, you need domain knowledge to answer requests for facts within that domain.
That answer came in response to four specific questions whose answers depended on domain knowledge.
But notice the qualification: "requests for facts within that domain." It's superfluous. The questions did request factual answers, because that's the easiest and most common way of demonstrating one's knowledge, but that knowledge could have been demonstrated without the use of facts. This is readily apparent in the last two questions.
1. Jones sacrificed and knocked in a run. Where are Jones and the runner?
2. When John walked out onto the street, he nictitated rapidly. Where might John have just come from?
Now imagine that the reader has been presented with four pictures or diagrams for each question. One of the pictures/diagrams is a depiction of a correct answer. The other three do not depict the correct answer. The reader is instructed to select the correct picture/diagram.
The reader can now demonstrate his knowledge without knowing any facts. However, in both cases (the fact-based answer and the picture/diagram-based answer) the reader must still be able to successfully discriminate between examples of the higher-order concepts "sacrificed and knocked in a run" and "nictitate" and non-examples of those domain-specific concepts. The reader doesn't even have to be able to articulate an accurate definition of those domain-specific concepts to perform a successful discrimination. (And, to make sure we're clear, I'm using the term "concept" to mean something that has a defining feature.)
There was no need for Stephen's qualification. The form of the answer is irrelevant to the need for the reader to possess domain-specific content knowledge when the reader is asked to discriminate between examples and non-examples (or to identify an example) of a particular piece of domain-specific knowledge. That is the salient point underlying the argument I presented to Stephen.
I made this same point (though less elegantly) in my response to Stephen.
That these answers can be provided in the form of declarative knowledge does not detract from the premise that the [reader] must understand the underlying knowledge to activate the declarative knowledge needed to answer the question.
But what happens is not that the person had the fact, but rather, that the person created the fact out of the complex mass of previous observations and experiences, and presented it to you.
I don't disagree, although it is possible that the answerer also knew the definition or fact associated with the concept needed to perform the discrimination/identification underlying the answer. And as far as that "complex mass of previous observations and experiences" goes, that is domain or content knowledge. The reader, to use Stephen's words, must be able to use that "complex mass of previous observations and experiences" to perform the underlying discrimination/identification needed to answer the questions presented. Of course, you might not have caught that, since it was buried in a comment trying to refute my argument. Compounding the confusion, these accurate statements are tangled up with inaccurate ones, like this one:
I've said on numerous oc[c]asions, simply responding to a demand for some fact does not constitute reasoning. These questions may resemble high school tests - but they do not resemble inference or reasoning.
As I explained above, responding to a request for a fact (such as on a high school test) often does require reasoning. Specifically, it requires simple deductive reasoning, something all preschoolers possess. For example, the reader knows a general idea (a concept, rule/proposition, or routine) drawn from his "complex mass of previous observations and experiences"; he uses that general idea to examine an unknown potential example; he decides whether the new example fits within the boundaries defined by the general idea; and he treats the example accordingly by indicating an answer.
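The deductive steps just described can be sketched in a few lines of code, purely as an illustration. Everything here is a hypothetical stand-in of my own (the function names, the toy "dark place" concept, and the candidate answers are not from the exchange); the point is only that "indicating an answer" rides on a prior discrimination between examples and non-examples of a concept.

```python
# Illustrative sketch: answering a "fact" question still involves a simple
# deductive step -- checking a candidate against a concept's defining feature.
# The concept definition below is a hypothetical stand-in for the reader's
# "complex mass of previous observations and experiences."

def fits_concept(defining_feature, candidate):
    """Deductive check: does the candidate satisfy the concept's defining feature?"""
    return defining_feature(candidate)

# Toy concept for the nictitation question: rapid blinking on stepping
# outside suggests John came from a dimly lit place.
def is_dark_place(place):
    return place in {"movie theater", "cave", "basement"}

# Multiple-choice candidates (the pictures/diagrams in the thought experiment).
candidates = ["movie theater", "sunny beach", "basement"]

# Discriminate examples from non-examples, then "indicate an answer."
examples = [p for p in candidates if fits_concept(is_dark_place, p)]
print(examples)  # the candidates that fit the concept
```

Whether the reader bubbles in (c) or points at a picture, the same discrimination runs underneath: no stored verbal definition is required, but the defining feature must be in the reader's prior knowledge.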
That seems to resemble reasoning to me, even when the end product of the process is a bubbled-in (c). But apparently not to Stephen, so I provided a clearer example that required both content knowledge and reasoning. Stephen responded by chiding me for repeating myself and then got all cryptic on me.
When you represent 'knowledge' as 'answering questions' you get a trivial, stunted version of knowledge.
He leaves this unexplained. It might be profound, or it might be simplistic. But it certainly demands an explanation.
So, like Tracy, I'm interested in getting a straight answer to Tracy's question, or better yet, a more precise question:
When is domain-specific content knowledge (i.e., the "complex mass of previous observations and experiences") not needed for evaluating a proposition made in that domain, once the proposition has been filtered for trustworthiness (i.e., derived from evidence rather than opinion; the author/speaker is qualified, trustworthy, and not biased; bereft of logical fallacies and faulty arguments)?
Then perhaps we can get on to the more interesting questions related to learning and teaching.
(Commenters, feel free to further clarify my question for Stephen.)