October 30, 2009

How to teach a student to write a point that is supported by evidence

The following is one way to teach a student to write a simple paragraph in which he agrees with one of a pair of arguments presented in a fact pattern, based on a comparison of those arguments with a source of evidence.  The lesson comes from SRA's Reasoning and Writing, Level D (Lesson 82, exercise 1).  It is suitable for a student who can copy words at a rate of 15 words per minute and possesses basic paragraph-writing skills, as determined by a placement test.

The student reads the following passage:
Fran Dent wanted a raise.  She told her boss why she deserved a raise.  She told the boss that she worked hard, that she worked fast and accurately, and that she was always on time.

Her boss said that he didn't think she should get more money because she wasn't always on time.  He said, "You're late most of the time."

She said, "I'm never late to work."


Teacher: The place where Fran worked had a time clock that showed the time everybody came in each morning. Fran and her boss decided to get the records to show how often Fran was late.



(Below the box is the student's prompt.)

Teacher:

Fran indicated that she was never late.  The boss said that she was late most of the time.  You'll write a paragraph that tells whose claim agrees with the time clock record.  The equal-box prompt shows what your paragraph will say.  You'll start by saying what the person who was right indicated.  Then you'll tell that the time clock record supports that person's claim.  Then you'll give enough facts to make it clear that one of the persons is right and the other is wrong.

The time clock record shows some of the employees.  The first column shows their names.  The next column shows the time they are supposed to be at work.  They're all supposed to be at work at 8 a.m.  If they come later than 8 a.m., they're late.  The next column shows the number of days they were absent during the whole year.  The last column shows the latest time they arrived during the whole year.

Some of the information in the table is relevant to the disagreement between Fran and her boss.   Look over the table carefully.

Write your paragraph.  Tell when she's supposed to arrive.  Then give any fact that's relevant.

Here's what the student (a nine-year-old fourth grader) actually wrote:

      Fran indicated that she was never late for work.  The time clock record supports this claim.  The time clock record indicates that Fran's latest time for work was 7:56 a.m., four minutes before work.

The student would receive feedback to correct the vague wording of the last sentence.  Then a good example paragraph would be read to the student:

     Fran indicated that she was never late for work. The time clock record supports this claim. The record indicates that she was supposed to get to work by 8 a.m. and she was never later than 7:56 a.m.

In subsequent lessons the prompts would be faded until the student could construct a suitable paragraph for a similar type of problem.

Position Papers from SLA

A position paper is a classic, basic form of argument that every high school student should know how to write well--even the 21st-century variety.  Here's how a tech-savvy 21st-century student might learn how to write a position paper.

It's not rocket science, but it does take lots of practice to do well.  Sadly, many high school students never learn how to write basic position papers well, if at all.

Chris Lehmann, principal of Science Leadership Academy (SLA) in Philadelphia, has asked his students, mostly seniors, in his Modern Educational Theory class to draft a position paper.  Chris has posted the assignment on his blog, Practical Theory, and wants you to take a look at the students' position papers and comment.

We've visited SLA before.  SLA is a magnet high school that proudly holds itself out as an "inquiry-driven, project-based 21st Century school with a 1:1 laptop program." Last time we looked at one example of a student's writing assignment, probably one of the better examples since it was picked for the Family Handbook.  This time we have the writings of an entire class of (mostly) seniors.

Let me state at the outset, I'm sure Chris and SLA mean well and care about their students.

Here's the assignment.

We, at this point, looked at several different views of education, from Deborah Meier's vision of democratic education, to Robert Pirsig's "Church of Reason," to Diane Ravitch and E. D. Hirsch's views of core knowledge, to Nel Nodding's ethic of care, to President Obama's speech on the first day of school.

Now, it is your time to take your stand.

You are to write a two page position paper creating your vision of what school should be.

Your paper should consider the following points:

  • Clearly define your vision of school:
    • What is its purpose?
    • Why is it good for the individual?
    • Why is it good for socie[t]y?
    • What does your vision of school value? Prioritize?
  • Given this vision of school -- what differences would you see in the structure of school when compared to a "traditional" school?

Readers of this blog should be sufficiently familiar with the differing views on education to evaluate the students' work on the merits and to determine whether well-supported positions have been taken on their views of what school should be.

Here is Chris' take on the students' position papers:

I'm really thrilled with much of the thoughtfulness that the kids display in the essays. It is, obviously, clear that the kids have been at SLA for years, but I don't think that's their only vision of what school can be -- which is important to me. The kids have their own thoughts, and I'm really interested to see how these visions continue to evolve.

I'm not sure I understand the purpose of this assignment.  It comes at the beginning of the course, before the students could have learned much about modern educational theory.  Is the important thing to actually learn and understand modern educational theory, or to learn how to write a position paper?  I'm going to assume the object was to accomplish both.

According to my view of education and learning, I would not expect most students to have acquired a deep understanding of modern educational theory after just a few weeks of exposure.  I would expect only a superficial understanding that is closely tied to the examples (i.e., the specific pundits' opinions) the students were exposed to.  And that is exactly what we see in the students' work.  This isn't meant to be a criticism of the students' work.

I made this same observation in the last SLA assignment that I reviewed.  Then, my criticism was directed at SLA because SLA was overselling (and continues to oversell) these projects  that supposedly "can only be completed by showing both the skills and knowledge that are deemed to be critical to master the subject and demonstrate that deep level of understanding." (2009 Family Handbook, p. 4)  And, the primary assessment of student knowledge continues to be these projects:

At SLA, there may be multiple assessments – including quizzes and tests – along the way, but the primary assessment of student learning is through their projects. Id.

Last time I got pushback from Chris and Tom Hoffman.  Both of their arguments basically attempted to redefine deep understanding downward to mean the ability to express an opinion. No doubt they'll try the same gambit again.  Chris thinks the papers were thoughtful and that the kids had their own thoughts.  That's not exactly a challenging standard.  But, we don't need to go there this time because I would not expect most students to have a deep understanding of the subject matter yet.  Time will tell if this situation improves by June.

So, let's turn to the position paper part of the assignment.  A well-written position paper at the high school level should follow the traditional format of introduction, body, and conclusion.  At a minimum, the body should explain why the position has been taken and should contain supporting evidence for the position.  A good position paper will have a thesis and a concluding summary of the main points.  The body should also include counterarguments and their rebuttals.

Chris' prompt seems at odds with the standard definition of a position paper.  Chris apparently is just looking for the students' opinions (or visions) of what school should be, provided that those opinions state a purpose, the benefits to the student and to society, the values and priorities, and the differences with respect to the structure of traditional schools.  And that's largely what Chris got -- mostly opinion.  As far as support for the opinion goes, most students provided more of their own opinion and occasionally a tie-in to one of the pundits' opinions.  Most of the essays go off point; some stray far off point.  All the essays could use a good editor, at least one rewrite, and considerable tightening.

Chris calls these essays a first draft.  A first draft of the students' opinions, maybe, but more like a zeroth draft of a properly written academic position paper.  I call Chris a brave man: publishing these very raw essays on the internet and then calling attention to them on his blog takes quite a bit of professional bravery, since these essays are a reflection on SLA's teaching ability.

I am assuming that no teacher reviewed or made editorial comments on these essays prior to their being published.  The essays are full of language usage problems, grammatical mistakes, informalities, and colloquialisms.  Does SLA really want the world to see the essays in this form?

Apparently so.

I must be missing something.  Most of the students have formed an opinion that school should be just like SLA; but, their very own essays demonstrate that school should not be just like SLA if basic writing skills are one of the goals.

This is not an indictment of the kids or their abilities.  Clearly, these kids want to learn.  They have stuck it out this long, overcoming whatever adversity was in their way.  No, it's an indictment of their schooling, only a part of which SLA is responsible for.  If these kids are college bound, remediation is in their future.

But what I really don't understand is this: given these students' demonstrated abilities, why are they wasting their time learning Modern Educational Theory when they should be learning basic writing and language skills?  They're already getting a painful lesson in the pitfalls of some elements of Modern Educational Theory (ironically enough, the ones they largely favor); they just won't realize it until next year.

October 28, 2009

Murray on Curriculum

[I]t’s time for me to get in touch with my inner optimist. We can’t make our kids much smarter than they are naturally, but we can do a hugely better job of teaching them stuff. If you get away from the worst schools in the big cities, I think the central problem with the public schools is not poor teachers, but the curriculum teachers are given to teach, especially in elementary and middle school.


- Charles Murray

Murray goes on to tout the Core Knowledge sequence, as he did in his last book, Real Education.

I'd say the second biggest obstacle is teacher preparation and training.  Teachers largely do not have the necessary skills to effectively teach the most effective curricula.  They have difficulty teaching from a scripted curriculum in the absence of a considerable amount of training.  Needless to say, they aren't getting this training in Ed school.

Some Clarity on NCLB

Below is a lengthy quote from Judge Sutton in the recently decided School District of the City of Pontiac, et al. v. Secretary of the United States Dep’t of Educ (back-story). The opinions in the case, which deadlocked, are long and mostly concern boring legal issues of statutory construction and other procedural matters which no one, apart from lawyers, cares much about.  The excerpted quote, however, is a clearly written analysis of NCLB and the basic bargain it made with the states: federal funds for achieving progress, along with substantial flexibility for achieving and defining that progress.  Read the whole thing; it is worth your time. (The quote starts at about page 48.  I've redacted some non-relevant sections and bolded the important bits.):

[T]he No Child Left Behind Act clearly requires the States (and school districts) to comply with its requirements, whether doing so requires the expenditure of state and local funds or not. A contrary interpretation is implausible and fails to account for, and effectively eviscerates, numerous components of the Act.

The basic bargain underlying the Act works like this. On the federal side, Congress offers to allocate substantial funds to the States on an annual basis—nearly $14 billion in 2008 for Title I, Part A, a 60% increase in relevant federal funding since 2001—exercising relatively little oversight over how the funds are spent. On the State side, the States agree to test all of their students on a variety of subjects and to hold themselves and their schools responsible for making adequate yearly progress in the test scores of all students. In broad brush strokes, the Act thus allocates substantial federal funds to the States and school districts and gives them substantial flexibility in deciding how and where to spend the money on various educational “inputs,” but in return the schools must achieve progress in meeting certain educational “outputs” as measured by the Act’s testing benchmarks. As the Supreme Court recently explained:

NCLB marked a dramatic shift in federal educational policy. It reflects Congress’ judgment that the best way to raise the level of education nationwide is by granting state and local officials flexibility to develop and implement educational programs that address local needs, while holding them accountable for the results. NCLB implements this approach by requiring States receiving federal funds to define performance standards and to make regular assessments of progress toward the attainment of those standards. 20 U.S.C. § 6311(b)(2). NCLB conditions the continued receipt of funds on demonstrations of  “adequate yearly progress.” Ibid.

Horne v. Flores, __ U. S. __, 129 S. Ct. 2579, 2601 (2009). The school districts’ position—that they can accept the federal dollars, spend them largely as they wish, yet exempt themselves from the Act’s requirements if compliance would require any local money—undoes this bargain by nullifying some provisions of the Act and undermining several others.
Accountability. Accountability is the centerpiece of the Act, and a plausible interpretation of the legislation cannot ignore that reality. Instead of focusing on how much money school districts spend on each child or “dictating funding levels,” the Act “focuses on the demonstrated progress of students through accountability reforms.” Id. at 2603. The Act begins with a “Statement of Purpose” that drives home Congress’s interest in establishing accountable public schools: “ensuring . . . high-quality academic assessments [and] accountability systems”; “holding schools, local education agencies, and States accountable for improving the academic achievement of all students”; “improving and strengthening accountability”; and “providing . . . greater responsibility for student performance.” 20 U.S.C. §§ 6301(1), (4), (6), (7).

Flexibility. The school districts’ interpretation is inconsistent not only with the Act’s accountability requirements but also with the flexibility the Act gives States and school districts in return for increased responsibility for student achievement. As the Act’s Statement of Purpose makes clear, that is the central tradeoff of the Act: “providing greater decisionmaking authority and flexibility to schools and teachers in exchange for greater responsibility for student performance.” id. § 6301(7) (emphasis added); see also Horne, 129 S. Ct. at 2601 (the Act “reflects Congress’ judgment that the best way to raise the level of education nationwide is by granting state and local officials flexibility to develop and implement educational programs that address local needs, while holding them accountable for the results”). Unlike most spending programs, this one comes with few strings telling the States how they should comply with its conditions. Under the Act, States develop their own curricula and standards, 20 U.S.C. § 6311(b)(1), their own tests to assess whether students are meeting those standards, id. § 6311(b)(3), and their own definitions of progress under those standards, id. § 6311(b)(2)(B), so long as the progress culminates in near-universal proficiency by 2014, id. § 6311(b)(2)(F).

This flexibility extends to spending as well. As the school districts rightly acknowledge, the Act “provide[s] school districts with unprecedented new flexibility in their allocation of Title I funds.” Final Reply Br. of Pontiac Sch. Dist. at 3 (internal quotation marks omitted). Some federal funds, to be sure, must be spent in certain ways. See, e.g., 20 U.S.C. § 6303 (reserving some Title I, Part A funds for school improvement); id. § 6317(c)(1) (same); id. § 6318(a)(3)(A) (reserving some funds for parental involvement programs); id. § 6319(1) (reserving some funds for professional development). And the Act strictly confines the use of Title I funds to geographic areas with heavy concentrations of low-income students. See id. § 6313(a). But within these areas and with respect to these priority students, the Act gives States and school districts substantial flexibility in choosing how to spend the money. For instance: Section 6314 gives school districts wide discretion to consolidate funds from various sources and to focus them on certain schools in whatever ways will improve student performance there; § 6313(b) gives school districts discretion to transfer funds between schools within certain guidelines; and § 7305b allows States and school districts to transfer up to 50% of the funds allotted to other education programs to supplement their funds under Title I, Part A.

The substantial flexibility the Act gives recipients over federal funds is surpassed by the near-complete flexibility they retain over their own funds. The only limitation is that participating States cannot reduce their own spending and offset it with federal funding but must use the Act’s federal dollars to supplement, not supplant, their own. 20 U.S.C. §§ 6321, 7901. Beyond that basic requirement—a prohibition on fiscal cheating, really—the States can use their dollars however they see fit, whether for teachers or for computers or for facilities or for whatever else they think will help their students the most.

The express and unprecedented flexibility the Act gives to the States in prioritizing the spending of federal dollars—especially in Title I, Part A—cannot coexist with an interpretation of the statute that allows school districts to exempt themselves from the accountability side of the bargain whenever their spending choices do not generate the requisite achievement. Were the school districts correct, a State could use this flexibility to focus its federal and local resources almost exclusively on improving, say, teacher quality—a legitimate goal no doubt, but one that would allow the State to sidestep the Act’s mandatory assessment requirements by contending that it lacked the funds to administer them or to make progress under them. Sch. Dist. of City of Pontiac v. Sec’y of United States Dep’t of Educ., 512 F.3d 252, 284 (6th Cir. 2008) (McKeague, J. dissenting). That is not what Congress had in mind. It gave the States a clear and consequential choice: between taking the bitter (accountability) with the sweet (unprecedented flexibility in spending federal and state dollars) or leaving the money on the table.

Costs of Compliance. Not surprisingly, in view of the expansive flexibility that the Act gives States in spending federal and local funds, the Act says nothing about the bill of particulars at the heart of the school districts’ complaint: the costs of complying with the Act’s requirements. How could it be otherwise? The Acts’ spending flexibility necessarily makes it impossible to calculate or even define the costs of complying with the Act’s requirements.

The primary formula for allocating Title I, Part A grant money does not say a word about costs of compliance. See 20 U.S.C. §§ 6313(c), 6333(a), 6334(a), 6335(a)–(c), 6337. While the Act asks States to submit plans to the Secretary, id. § 6311, and asks school districts to submit plans to the States, id. § 6312, it does not require either entity to estimate the cost of compliance. Nor, in fulfilling their various reporting responsibilities under the Act, must the States or school districts estimate the costs of compliance. See, e.g., id. §§ 6311(h), 6316(a)(1)(C). If, as the Supreme Court recently explained, the Act “expressly refrains from dictating funding levels,” Horne, 129 S. Ct. at 2603, why would Congress exempt failing school districts from the accountability requirements based on inadequate “funding levels”? The school districts have no answer.

But even if Congress wished to make costs of compliance a legitimate excuse for, say, inadequate yearly progress, how would it do so? Once Congress decided to measure accountability by educational outputs (gauged by tests scores), as opposed to educational inputs (gauged by dollars), it made objective measurements of compliance costs virtually impossible. Any effort to measure these costs surely would vary from school to school, if not from student to student, and they surely would vary from year to year. The phrase “costs of compliance” has no discernible meaning in this context, as the Act leaves it to the States, no matter how little or how much funding Congress provides, to make discretionary cost choices about how to make meaningful achievement-related progress.

Take a cost estimate for adding an extra hour to the school day, for lengthening the school year or for hiring more math or reading teachers—all plausible ways to improve a school’s achievement scores. Each innovation has an estimable cost, to be sure. But that does not establish that the estimate would lead to the requisite progress. And if it did not, then what? Perhaps extending the school day by one more hour, extending the school year by one more week or hiring one more math or reading teacher would do the trick. But maybe not. What works for one school district might not work for another. What, indeed, works for one classroom might not work for the classroom next door, given the correlation between great teachers and great teaching—and the occasional operation of that principle in reverse. Even more discrete costs like developing and administering tests cannot be accounted for in advance given the considerable flexibility States have under the Act in implementing those requirements. Within certain general limits, a State may develop whatever curricular standards and tests it wants. 20 U.S.C. § 6311(b). The State may use pre-existing standards that meet the Act’s requirements, id. § 6311(b)(1)(F), or it may create new ones.

In their complaint, to use one example, the school districts say that Brandon Town School District “estimates that . . . it needed to spend $390,000 more than it received in NCLB Title I funding to ensure that the school makes [adequate yearly progress].” Compl. ¶ 65. The school district may be right, and we have no license to say that it is not at this Rule 12(b)(6) stage of the case. The issue, however, is not whether the school districts can fairly say that compliance with “adequate yearly progress” requires more federal dollars than the Secretary has allocated to them. It is whether a State could tenably think that the Act excuses non-compliance whenever a school district maintains that it has insufficient resources to make the required progress. Surely every school district could do more with more money. And if that is the case, every failing school district could do more with more federal money—and maybe enough to make adequate yearly progress. It is hard to imagine when—or, for that matter, why—a failing school would ever concede that it was getting sufficient federal funds to make such progress.

“Reflecting a growing consensus in education research that increased funding alone does not improve student achievement,” the Act moves from a dollars-and-cents approach to education policy to a results-based approach that allows local schools to use substantial additional federal dollars as they see fit in tackling local educational challenges in return for meeting improvement benchmarks. Horne, 129 S. Ct. at 2603 & n.17. The Act, in short, rejects a money-over-all approach to education policy, making it implausible that the heartland accountability measures of the law could be excused whenever schools, exercising their flexibility over how to spend federal and local dollars, decided they cost too much.

* * * * *


Depending on whom you ask, the No Child Left Behind Act might be described in many ways: bold, ground-breaking, noble, naïve, oppressive, all of the above and more. But one thing it is not is ambiguous, at least when it comes to the central tradeoff presented to the States: accepting flexibility to spend significant federal funds in return for (largely) unforgiving responsibility to make progress in using them. The theme appears in one way or another in virtually every one of the Statements of Purpose of the Act, and it comes across loud and clear in the remaining 674 pages of legislation.

That said, I have considerable sympathy for the school districts, many of whom may well be unable to satisfy the Act’s requirements in the absence of more funding and thus may face the risk of receiving still less funding in the future. Yet two Presidents of different parties have embraced the objectives of the Act and committed themselves to making it work. So have a remarkably diverse group of legislators. If adjustments should be made, there is good reason to think they will be. But, for now, it is hard to say that the judiciary will advance matters by taking the teeth out of the hallmark features of the Act. It is the political branches, not the judiciary, that must make any changes, because the Act’s requirements are clear, making them enforceable upon participating States and their school districts.

October 25, 2009

Today's Quote

Why the belief that SES is causal is so deep and wide is perplexing and astounding. The only explanation I can come up with is that it lets publishers, professors and other "authorities", who ARE causally responsible, off the hook.
- Dick Schutz

A Good Example of Why Education Professors Shouldn't Blog

Education Optimist, Sara Goldrick-Rab, must not get embarrassed very easily judging by this post criticizing a mostly dopey Kristof column on education and poverty:


Social science researchers across the nation are scratching their heads. Where in the world did Kristof get this one? For decades, solid analyses have demonstrated that while aspects of schooling can be important in improving student outcomes and alleviating the effects of poverty, the effects of factors schools cannot and do not control are much greater (for a place to start, read Doug Downey's work). Kristof emphasizes teachers and improving teacher quality by taking on the teachers' unions because he reads the data to mean that "research has underscored that what matters most in education - more than class size or spending or anything - is access to good teachers." Simply put, wrong. Access to good teachers is the most important factor affecting student achievement that is under schools' control (or as many put it, the most important school-level factor). What matters most in educational outcomes is the poverty felt by students' families. And to my knowledge, no study has ever rigorously compared the effectiveness of interventions based on cash transfers, housing subsidies, and teacher quality improvement-- what's needed to reach the kind of conclusion with which Kristof drives his argument. At the same time, a simple glance at the relative effects of programs like Moving to Opportunity, New Hope, etc which target poverty itself rather than how adults interact with children from poverty (the aim of improving teacher quality), should convince anyone than his target is misplaced.
(emphasis added)


That is one densely packed paragraph of bad reasoning.  One more poorly reasoned sentence and it might have collapsed upon itself into an educational black hole, if you will.  Fortunately, the following hasty call for censorship, another Goldrick-Rab staple, was placed in the next paragraph.
 
Experts who think daily (Ed -- how about the ones that only think fortnightly?) about how to end poverty could, and undoubtedly will, inform the next steps taken by Democrats. Dems should listen to them, and not to Kristof.
 
First of all, there are few if any solid analyses of the effects of poverty on educational outcomes.  I don't think there is any scientifically sound research upon which anyone could draw the causal conclusion that alleviating poverty has educationally significant effects on student outcomes.  But why don't we take a quick glance at the two pseudo-research studies Goldrick-Rab cites:
 
The Moving to Opportunity (MTO) for Fair Housing Demonstration Program interim report found:
 
MTO had no detectable effects on the math and reading achievement of children.

OK.  Let's move on to the New Hope Project:

One of the most striking findings from the earlier evaluation reports was New Hope’s positive effects on children’s academic achievement at the two-year mark, in the form of increased teacher-rated academic skills, and at the five-year mark, in the form of higher standardized reading test scores (these tests were not administered at Year 2) and higher parent-reported grades in reading. However, these effects did not persist to Year 8, at least for the full child sample, although there were continued small effects on reading test performance for boys. No effects on math test performance were found. Overall, there was a tendency for impacts to be greater for boys than for girls. (emphasis added)
(See p. 29 and Table 5)

Do you remember the scene in the movie My Cousin Vinny where Vinny looks through all the bad pictures his girlfriend has taken to help him win the case he is defending for his cousin and his cousin's friend?

Okay, you're helping. We'll use your pictures. Ah! These *are* gonna be - you know, I'm sorry, these are going to be a help. I should have looked at these pictures before. I like this, uh, this is our first hotel room, right? That'll intimidate Trotter. Here's one of me from behind. And I didn't think I could feel worse than I did a couple of seconds ago. Thank you. Ah, here's a good one of the tire marks. Could we get any farther away? Where'd you shoot this, from up in a tree? What's this over here? It's dog shit. Dog shit! That's great! Dog shit, what a clue! Why didn't I think of that? Here's one of me reading. Terrific. I should've asked you a long time ago for these pictures. Holy shit, you got it, honey! You did it! The case cracker, me in the shower! Ha ha! I love this! That's it!


The MTO and New Hope studies are the case crackers of poverty interventions on education outcomes.

Both studies show what most studies typically show for poverty interventions on education outcomes:  small or undetectable effects that tend not to persist past adolescence.

And, here we have Goldrick-Rab citing them as conclusive proof of just the opposite.  Simply amazing. The woman has no shame.  Where's Bracey's rotten apple award when you need it most?

Good Night, Sweet Hack

Education gadfly Gerald Bracey died this week.  He had a sharp analytic ability.  It is sad that he applied it only to refute some of the shoddy evidence, like international testing comparisons, against the things he believed in.  For all the other shoddy evidence supporting the things he believed in, like poverty/school-outcome research and whole language research, he consistently failed to use his sharp analytic abilities.  This curious dichotomy and his penchant for cherry-picking evidence permitted him to be an apologist for America's public school system.  It takes quite a bit of cognitive dissonance to be one of those, and Jerry was a full-throated one.  He deserved but never received one of his own bad apple awards.

October 22, 2009

Actually, That's Not What the "Research" Shows Either

In her third post of her new blog, The Educated Reporter, Linda Perlstein makes an (all-too-common) error that reporters should not be making, especially reporters claiming to be educated.

Perlstein starts off well by criticizing the inane trope President Obama repeated about "teacher effectiveness" in a recent speech.

In his education speech to the Hispanic Chamber of Commerce in March, President Obama said, “From the moment students enter a school, the most important factor in their success is not the color of their skin or the income of their parents. It’s the person standing at the front of the classroom.”


To put it bluntly: “He’s wrong.”

Indeed.  First of all, the research on this issue isn't really research in the conventional sense that there are properly conducted controlled studies on point.  There aren't.  The "research" consists merely of correlational studies, which do not rise to the level of real scientific research.  At best, these studies might suggest profitable avenues to pursue for conducting real scientific research.

Furthermore, these correlational studies are on teacher effectiveness, not "the person standing at the front of the classroom."  There are a lot of variables tied up in the term "teacher effectiveness": the teacher, the pedagogy, the curriculum, the classroom environment, and the like.  Only some of these variables are under the control of the person standing in front of the classroom, that is, the teacher. Moreover, the correlational studies are incapable of teasing out which variable(s) are responsible for the correlation with student outcomes in any event.

Nonetheless, this trope gets trotted out all the time in the dopier quarters of the edusphere.  And Perlstein is right to jump on it.

Perlstein, however, steps in it when she tries to state what the research actually shows:

Of the various factors inside school, teacher quality has had more effect on student scores than any other that has been measured. (emphasis in original)

To put it bluntly: “She’s wrong.”

First, to the extent that the studies are correlational in nature, they are incapable of showing that a variable, in this case teacher effectiveness, had "more of an effect" on anything, including student scores. The studies only show that "teacher effectiveness" (however the study attempted to define the variable) is correlated with student scores by some small amount.  Correlation is not causation.

Second, "teacher effectiveness" is not the school factor with the largest effect on student outcomes.  See here and here.  I don't know how this particular trope got started, but it is amazing how often it gets uncritically trotted out by education reporters and bloggers.  Look at Perlstein's conclusion:

But for now, just remember: When you read that teachers are the most important school factor, you can’t drop the “school” and pass it on.

Regardless of whether you drop the "school" caveat or not, you should not be passing it on--because it's not accurate.

Any educated reporter commenting on education should know this.

Welcome to the edusphere, Linda.

October 14, 2009

Insurance

Your spouse asks you, "Will you wash the dishes tonight?"

You respond, "Sure."

Knowing you're a slacker, your spouse asks, "Promise?"

You respond, "Yes."

Is your affirmative response an example of insurance or its equivalent?  Why or why not?

You certainly need to know something to determine an answer and provide a correct explanation.  I don't care how finely honed your reasoning skills are; you aren't reasoning your way to an answer unless you understand that something. (Although I'm happy to entertain a counterexample, Stephen.) That something is content knowledge.

One thing you need is receptive language to understand all the words I used in the example.  You also need enough expressive language to articulate an answer.  That language makes up part of the content knowledge needed for a successful answer.  But let's take language out of the equation and assume that humans have evolved direct mind-reading abilities and are now able to directly access the thought processes of others.  We are now free of the burdens of language.  Language will therefore form no further part of this example. (Though, sadly, in the real world I must continue to use language to explain things to you; hopefully you won't be confused.)

Your knowledge of "insurance" (the concept, not the word) probably comes from your experiences and observations with examples (and possibly non-examples) of insurance--auto insurance, life insurance, health insurance, home and property insurance, disability insurance, insurance in the game of blackjack, an "insurance" run in baseball, and the like.  Those are all imperfect examples of insurance, but you may have been able to tease out the defining features of insurance.  Then you could use those defining features to determine if the above example is an example of insurance.

Your thought process is basically pattern recognition.  Do you recognize the common pattern inherent in all the examples of insurance you've experienced or observed in your life?  Does this new form of insurance fit the pattern?

This is a difficult problem because the defining features of insurance, assuming you can even identify them, are rather nebulous and complicated themselves.  In fact, the defining features of insurance are all higher-order concepts, just like insurance is.  Let me give you a hint:  Insurance has four main defining features.  Now, you have four patterns to wrestle with.  Yikes.

The four defining features of insurance are concept knowledge.  Notice how I haven't used the word "definition" so far.  Knowing the definition of insurance, or of its four defining features, isn't going to help much.  So go ahead and close down that tab you're using to google the word insurance.  And for those of you I didn't catch in time, did the definition you located help much?  Probably not.  You're probably already googling one of the words in the definition that you also aren't quite sure of.  So stop right there.

I don't need Google to tell you the definition of insurance.  But mine is a sad story, and I will make a brief digression in the hope that it serves as a cautionary tale for you.

The year was 1979.  Jimmy Carter was president (shudder).  Margaret Thatcher had just been elected prime minister.  Rocky II was in the theaters.  A little gadget called the Walkman had just entered the market.  I was in seventh grade and had just learned the definition of insurance.

And by learned I mean I was forced to memorize the definition. My seventh grade teacher (whose name, ironically, I can't remember) made us memorize the definition of only one word.  That word was insurance.

Why insurance?  I have no idea.  We didn't do anything with the definition afterward.  Perhaps we were being punished for something we had done.  To this day I don't know.  But here's one thing that has stuck with me ever since:  the definition of the word insurance.

Insurance is a contract which guarantees against risk or loss.

It's seared into my brain much like Senator John Kerry's trip to Cambodia.

But, I still didn't understand what insurance was.  You can probably figure out why.

In fact, I can give you four reasons why I didn't understand what insurance was: 1. contract, 2. guarantee, 3. risk, and 4. loss.  To understand the concept of insurance I needed to understand the concept of contract, the concept of guarantee, the concept of risk, and the concept of loss.  These are all higher-order concepts.  Insurance itself is a higher-order concept -- a higher-order concept whose four defining features are all higher-order concepts.  Yikes.  Oh wait, I already said that.

And as you premature googlers probably found out, even looking up the definitions for the four defining features probably makes matters worse.

So, what's the problem? The problem is that knowing all those definitions does not constitute concept knowledge.  Which is not to say that these definitions aren't useful to read or learn, at least initially.  But concept knowledge requires something more or something else.  Think about it.

And we'll get to that something else in the next post.  So take off the disco albums and platform shoes because we're returning to 2009.

(And, Dick, I promise we're almost at the dinosaurs.)

October 12, 2009

On Content Knowledge and Stephen Downes

(Update: I've edited this post (twice) to correct some typos and clarify my argument.  The substance has not changed. 3:28 pm EST)

Tracy Won't Get Her Answer.

At least not from Stephen.

Stephen has taken his ball and gone home.

Dick was right.  I shouldn't have bothered.

Sad.

Here's the problem as I see it. (And if you are familiar with the problem or simply don't care to read my explanation, just skip to the warning at the end of the post.)

Stephen fancies himself an expert in this area and has a strong desire to maintain that status.  This means that he can't be viewed as being wrong or mistaken, or, heaven forfend, admit as much, in a disagreement with a non-expert. I've understood this for quite some time.  So, to the extent you get any information from Stephen, it will be in the form of a disagreement. You take what you can get.

Stephen apparently takes the position that "kontent knowledge" (I've used a variant spelling because Stephen's definition of that term remains unknown) is not needed for critical reasoning, reading, learning, or whatever the issue happens to be when someone states that "content knowledge" is needed for something or other.  Stephen has become a bit of a gadfly or contrarian in this regard.  Stephen, apparently, also values this gadfly status.

I and many others, on the other hand, do not see how "content knowledge" (my definition isn't necessarily the same as Stephen's) can be foregone. To think, you need something to think about.  However, we also recognize that learning "content knowledge" is a long, laborious process, that instructional time is a scarce resource, and that, to the extent "content knowledge" need not be learned, this would provide a good opportunity to re-focus instruction elsewhere to address other student deficiencies.  Thus, what constitutes content knowledge and whether it is important to teach is a consequential question.

This is why I chose to jump into the thickets with Stephen--not to prove him wrong, but to see if he was right, or at least to determine if there was a misunderstanding.  (Hey, who wouldn't want to forgo the tedious learning of content knowledge?  That's win-win as far as I'm concerned.)  And, to the extent he was right, I would have been happy to adopt his view.  And, to the extent there was a misunderstanding, I would have been happy to determine where there was agreement and disagreement.  And to the extent the disagreements were irrelevant to instruction and learning, they could be safely ignored.

I tried.  I failed.  I tried to be nice, but it didn't matter. I bent over backwards to be clear; it didn't matter.  I tried to concede any point I could to get a straight answer; it didn't matter.  I pulled in a neutral third party (Tracy) to eliminate me as the source of any contention; it didn't work. So what went wrong?

I can't be sure. But it appears that Stephen has gotten himself very far down a path based on a mistake or a misunderstanding (I'm being charitable here because other, less charitable alternatives exist) and cannot rectify the situation.  Because rectifying the situation might seem to indicate that Stephen has made a mistake or has misunderstood, and by golly, Stephen Downes does not make mistakes or misunderstand anything, ever. (Stephen, if it'll make you feel better, just blame it on me and my poor communication skills.)

In a nutshell, Stephen has built an elaborate house of cards on a bad foundation.  For whatever reason, Stephen believes that when anyone (other than he) discusses "content knowledge," the term must include:

1) a fact, a verbal association, or language (For example, "content knowledge" of the concept of the color red must include a verbal association to the word "red" (and perhaps a formal definition of the word "red")) along with

2) an association to the concept (i.e., color the eye perceives) whose boundaries are defined by the many examples (of the color red) that a person has observed or experienced.

That is the functional definition he has proceeded under (as best I can tell).  That is the foundation of his argument.  It's also wrong.  I should know what my own view is and it is not how Stephen characterizes it.  And, I'm sure if you polled almost everyone he's ever disagreed with on this issue, they'd tell you the same thing.  His foundation is bad (for whatever reason) and I think he knows it.

Stephen will be glad to engage and argue with you as long as you let him mischaracterize your definition of "content knowledge" (as necessarily including both (1) and (2)).  However, as soon as you correct him and clarify that "content knowledge" can mean only (2), he will immediately disengage (occasionally with an excuse).  You can see this clearly in my last series of posts: when I clarified my definition at the outset as not necessarily including (1), Stephen failed to engage with me at all (except on some minor tangential point where he could express a disagreement).  Instead, he engaged only with Tracy, whose view he could misrepresent in his customary fashion with a very uncharitable reading of Tracy's clarification.  As soon as Tracy attempted to correct him, he disengaged.  (The pretext of "contradiction" that he gave was bogus; when engaging in a productive argument you read your opponent's argument so as to maintain consistency when possible.  In this case a consistent reading is possible; in fact, it relies upon the standard usage of the words.)

This leads me to believe that Stephen does not want to explicitly agree that there is a definition of "content knowledge," one that is a prerequisite to reasoning, reading, and the like (in the domain of that knowledge), upon which we might all agree. (For example, if you want to read or reason about the color red, then you need some understanding of the abstract concept of redness, not necessarily the word "red.") I suspect this is because he believes that by agreeing to a common understanding, he'll be on a slippery slope that might undermine other arguments he favors.  All I know is that there is a severe reluctance to engage in a serious discussion using the arguments that others actually make, rather than the arguments Stephen wishes they made.

I have seen no indication that this condition is going to improve any time soon.  And, as such, it is fruitless to engage further with Stephen on this issue.

Warning

So, let this post be a warning to all those who come across any writing of Stephen Downes on the issue of content knowledge.  His opinions are based on a mischaracterization of the arguments of his opponents.  No serious person holds the views that Stephen is arguing against.  He has erected the classic straw man but has disguised its nature so that it is not readily apparent to the casual reader.  It is, however, apparent to the rest of us.  You are wasting your time.

And, to those fellow-travellers of Stephen's whom he views as friends: I implore you to show Stephen the errors in his ways.  He has lost credibility on this issue, and his reputation is suffering as a consequence.  I have tried to engage with him productively.  I have failed.  I couldn't care less about Stephen's reputation and the fact that many will view his behavior as buffoonish.  But you probably do.  Help him.

October 10, 2009

Tracy Hasn't Gotten a Straight Answer Yet

Tracy asks:
How could we learn language without content knowledge? For example, how could a small child ask their mother for a biscuit without some awareness of what a biscuit is?


When Tracy writes "some awareness of what a biscuit is," I read this as meaning an "awareness of the concept of biscuit," rather than an "awareness of the word biscuit."

If Tracy meant the latter, I'm close to certain she would have written "without knowing the word biscuit." Moreover, does anyone seriously disagree that a small child could "ask their mother for a biscuit without knowing the word biscuit"?



Also, when Tracy writes "ask their mother," I read this as expecting the child to use expressive language to put a question to their mother, rather than expressing a desire to their mother, which could include expressive language and non-verbal communication.

Stephen, on the other hand, interprets the sentence to mean "how could a small child express a desire to their mother for a biscuit without knowing the word biscuit"? Yeah, that's quite the burning question we've been waiting with bated breath to have answered.  I "asked" this question of my six-year-old daughter; she immediately responded with "you could point to it." Stephen, in contrast, broke through Google's 4000 character limit explaining his answer.  An answer to a question no one really asked or cares about.

So let's get back to Tracy's real and far more interesting question: "how could a small child ask their mother for a biscuit without some awareness of the concept of biscuit?"  Or, why don't we go back to an even broader question: "how could a small child express the desire to their mother for a biscuit without some awareness of the concept of biscuit?"  This eliminates the need for facts and receptive/expressive language altogether, which seems to be confusing Stephen, or at least is serving as an excuse for avoiding a direct answer to Tracy's question.

Of course, I anticipated all of this which is why I clarified the issue in my post which Stephen has studiously avoided addressing.

And, while we're at it, the term "concept of biscuit" does not require knowledge of language. Concepts are a form of domain or content knowledge.

And let's also exclude the rare situation in which a person could express a desire as the result of a habitual response or the like.

Lastly, I'm sure Tracy would still like to know the answer to her original question, if we want to stick with the same concrete example:  How does one read about or think critically about biscuits without understanding the concept of biscuit?

Back to you Stephen.

(Prediction:  I will regret not defining (for yet another time) what a concept is and how a concept is known.)

October 8, 2009

Tracy would like a straight answer

Lightning Update: I already provided a potential answer in the comments and dredged up a similar one by Stephen.

Tracy asks:

Stephen Downes, I asked you before in a comment on one of Ken's posts if there is any evidence that could convince you that content knowledge is necessary for thinking. I suppose I was too late and you were no longer reading the thread, I'll try again here, running the risk of course that you might no longer be following the thread.

Is there any evidence that could convince you that content knowledge might be necessary for reading and critical thinking?

I pulled Tracy's comment up front because I'd also like to hear a straight answer from Stephen for this question and the best way to attract an answer from him is to write a blog post which he'll see in his feed reader.  Typically, he'll be the first to comment.

Of course, Tracy is trying to thread the needle by expecting a concise, clear, and (fully) supported answer from Stephen that answers the precise question asked.  Tracy expects this type of answer because that's how she would answer (I know this based on the numerous comments Tracy has left on many blogs).  Though there is a slight chance he will provide such an answer (just to prove me wrong), it is far more likely that he will respond in one of two ways.

He will either provide a concise answer based on at least one re-defined term of the question.  In this case he'll re-define the term "content knowledge" to mean "only facts taught by rote."  And he'll provide an answer along the lines of "Facts are almost never necessary for critical reading and thinking skills."  Technically, his answer is accurate based on his re-definition.  The problem is that he often doesn't tell you the re-definition, leaving it implicit.  It took me two years to determine that he was operating under this particular re-definition of "content knowledge."

Or, he'll provide the classic shaggy dog answer.  I often get to the end of one of Stephen's lengthy blog posts only to find the conclusion doesn't necessarily follow from what preceded it.  This doesn't mean that he's necessarily wrong; all it means is that I cannot follow his argument as written. That's a failure of advocacy. (My day job entails the ability to quickly understand and analyze arguments based on technical descriptions written by experts for other experts in domains in which I am not necessarily an expert, so if anyone should be able to make sense of Stephen's arguments, it's me.)

We actually don't need to reinvent the wheel on this issue.  Stephen has already provided partial answers.

Let's start with this one.

[Y]es, you need domain knowledge to answer requests for facts within that domain.

That answer came in response to four specific questions whose answers depended on domain knowledge.

But notice the qualification: "requests for facts within that domain."  It's superfluous.  The questions did request a factual answer because that's the easiest and most common way of demonstrating one's knowledge.  That knowledge could have been demonstrated without the use of facts.  This is readily apparent in the last two questions. 

1. Jones sacrificed and knocked in a run. Where are Jones and the runner?


2. When John walked out onto the street, he nictitated rapidly. Where might John have just come from?

Now imagine that the reader has been presented with four pictures or diagrams for each question.  One of the pictures/diagrams is a depiction of a correct answer.  The other three do not depict the correct answer.  The reader is instructed to select the correct picture/diagram.

The reader can now demonstrate his knowledge without knowing any facts.  However, in both cases (the fact-based answer and the picture/diagram-based answer) the reader must still be able to successfully discriminate between an example of the higher order concept "sacrificed and knocked in a run" and "nictitate" and non-examples of those domain-specific concepts.  The reader doesn't even have to be able to articulate an accurate definition of those domain-specific concepts to perform a successful discrimination.  (And, to make sure we're clear, I'm using the term concept to mean something that has a defining feature.)

There was no need for Stephen's qualification.  The form of the answer is irrelevant to the need for the reader to possess domain-specific content knowledge when the reader is asked to discriminate between examples and non-examples (or to identify an example) of a particular piece of domain-specific knowledge.  That is the underlying salient point of the argument I presented to Stephen.

I made this same point (though less elegantly) in my response to Stephen.

That these answers can be provided in the form of declarative knowledge does not detract from the premise that the [reader] must understand the underlying knowledge to activate the declarative knowledge needed to answer the question.

Stephen responded:
But what happens is not that the person had the fact, but rather, that the person created the fact out of the complex mass of previous observations and experiences, and presented it to you.

I don't disagree, although it is possible that the answerer also knew the definition or fact associated with the concept needed to perform the discrimination/identification which underlies the answer.  And as far as that "complex mass of previous observations and experiences" goes, that's domain or content knowledge.  The reader, to use Stephen's words, must be able to use that "complex mass of previous observations and experiences" to perform the underlying discrimination/identification needed to answer the questions presented.  Of course, you might not have caught that since it was buried in a comment trying to refute my argument.  Compounding the confusion is that these accurate statements are tangled up with inaccurate ones, like this one:

I've said on numerous oc[c]asions, simply responding to a demand for some fact does not constitute reasoning. These questions may resemble high school tests - but they do not resemble inference or reasoning.

As I explained above, responding to a request for a fact (such as on a high school test) often does require reasoning.  Specifically, it requires simple deductive reasoning, something all preschoolers possess.  For example, somewhere in the reader's "complex mass of previous observations and experiences" he knows a general idea (a concept, rule/proposition, or routine); he uses the information contained within that general idea to examine an unknown potential example; he decides whether the new example fits within the boundaries defined by that "complex mass of previous observations and experiences"; and he treats the example accordingly by indicating an answer.
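
To make that step concrete, here is a minimal sketch (in Python, with names I made up purely for illustration) of that kind of simple deduction, using the baseball concept "in scoring position" discussed in the Acquiring Knowledge post below: the general idea is a one-line rule, and the "reasoning" is just checking whether a candidate example falls inside that rule's boundaries and indicating an answer.

    # General idea induced from the "complex mass of previous observations
    # and experiences": a runner is in scoring position when he is on
    # second or third base.
    SCORING_POSITION_BASES = {"second", "third"}

    def in_scoring_position(runner_base):
        # Decide whether the candidate example fits within the concept's boundaries.
        return runner_base in SCORING_POSITION_BASES

    # "Treat" each example accordingly by indicating an answer --
    # the same sort of deduction a test question asks for.
    for base in ["first", "second", "third"]:
        verdict = "in scoring position" if in_scoring_position(base) else "not in scoring position"
        print(base, "->", verdict)

Trivial, yes--but that is exactly the point: the hard part is possessing the rule, not performing the deduction.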

That seems to resemble reasoning to me, even when the end product of the process is a bubbled-in (c).  But, apparently, not to Stephen, so I provided a clearer example that required both content knowledge and reasoning.  Stephen responded by chiding me for repeating myself and then getting all cryptic on me.

When you represent 'knowledge' as 'answering questions' you get a trivial, stunted version of knowledge.


He leaves this unexplained.  It might be profound.  Or simplistic.  Either way, it demands an explanation.
 
So, like Tracy, I'm interested in getting a straight answer to Tracy's question or, better yet, to this more precise version of it:
 
When is domain-specific content knowledge (i.e., the "complex mass of previous observations and experiences") not needed for evaluating a proposition made in that domain, once the proposition has been filtered for trustworthiness (i.e., it is derived from evidence rather than opinion; the author/speaker is qualified, trustworthy, and not biased; and it is bereft of logical fallacies and faulty arguments)?
 
Then perhaps we can get on to the more interesting questions related to learning and teaching.

(Commenters, feel free to further clarify my question for Stephen.)

October 7, 2009

What would you like it to say?

My first grade daughter, whose class is learning how to read this year, brought home a "helpful" FAQ sheet for parents to help guide their child along the path to literacy.

One of the FAQs asked, "What should you do when your child, while reading, stops and says, 'What's this word?'"

According to the FAQ, I'm supposed to say, "What would you like it to say?"

I thought to myself how fortunate my school was to be educating primarily the children of academics and professionals and, as such, to be able to afford to dispense such bad advice.  Worse still, this bad advice comes after a year of kindergarten in which the school squandered the opportunity to begin formal reading instruction.

It's a guideline like this that confirms the suspicion that many educators charged with teaching reading do not understand the nature of reading.  Despite ample evidence to the contrary, many educators continue to believe that reading is just a matter of making sense of text and that decoding is just a matter of guessing a word from contextual clues.

This is also the primary reason why I taught my children how to read well in advance of when formal reading instruction would begin in their school.

Reading these guidelines is like going to the doctor's office with the flu and the doctor exclaiming that what you need is a good bleeding to get your humors back in balance.  They do not inspire confidence in the profession.

My daughter is still learning how to read, but is reading well enough that she can tackle children's books whose text is not so carefully controlled for decodability. As a result, she now occasionally comes to a word that she cannot read and that she has not necessarily been taught how to decode.

Many of these words are in her oral vocabulary.  She knows the word.  She knows what the word means.  Sometimes she even has an idea of the type of word that fits in the sentence based on the context.  Yet she cannot decode the word.  Decoding is the impediment to making meaning of the text, not the other way around.

My fourth grade son, who is a skilled decoder, will also occasionally come to a word that gives him trouble.  But his problem is the opposite of my daughter's.  He can often decode the unknown word correctly, but the word is not yet in his oral vocabulary.  He doesn't yet know what the word means.  Sometimes he is able to infer the word's meaning from the context.  Other times he cannot, and he has to resort to the dictionary.

Inferring the meaning of words from context is what skilled readers do.  (I believe the eye tracking research bears this out; Dick must know.)  And I have no doubt that words can be identified by unskilled decoders based on context clues, but this is a more cognitively demanding task--one with no commensurate benefit for doing it the hard way.  Yet educators seem hell-bent on teaching reading the hard way, practically ensuring that more kids will struggle to learn how to decode, read less, and fail to acquire all the knowledge that is primarily learned by reading.

I'm not overly fond of characterizing children as customers of education, but I'll tell you, as customers they are getting the short end of the stick.  Reading is not taught this way because it benefits the children; it's taught this way because that's how many educators want to teach.  Their interests are paramount, not the students'.

I gauge the seriousness of any education reform by how well the reform addresses this issue.  Often it does not, which is why most educational reforms fail.

October 5, 2009

Acquiring Knowledge

The Great Pondiscio has a good example of how your general reasoning ability gets short-circuited when you lack needed content knowledge (knowledge, not merely facts; see the comments).

“After failing to move a runner past first base for the entire game, the Giants sent Davis to the plate with the potential tying and winning runs in scoring position. Unfortunately, he hit into a 6-4-3 double play to end the game.”
  • How many outs were there when Davis came to bat?
  • To whom did he hit the ball?
  • Describe the kind of ball he hit (pop up? Line drive? etc.)?
  • What was the final score of the game?
  • How many runners were on base?

Being able to answer each of these questions requires a few things.  You need to be able to decode English text.  You need to have some basic general reasoning ability (mostly deductive reasoning).  You need to know the relevant baseball content knowledge.  And you need to know some declarative facts about baseball to answer some of the questions.  (There's probably lots of other basic stuff (like basic math skills) you need to know as well, but I'm going to ignore those for simplicity's sake.)

Notice how there isn't any baseball-specific reasoning ability needed to answer these questions.  There's a good reason for this: there is no such thing as baseball-specific reasoning ability.  Rather, the kind of general reasoning ability needed to solve these questions, as well as the vast majority of questions most people will encounter in their lifetimes, is well within the ken of a preschooler.

Now, don't get me wrong, sometimes more advanced forms of reasoning (such as formal logic) are required to analyze a problem, and those more advanced forms of reasoning should be learned because they are occasionally needed.  However, those advanced forms of reasoning are hardly of the 21st Century variety, as some would have you believe.

So where do the 21st Century skills come into play?

The answer is simple.  They mostly come into play when you lack the requisite baseball background knowledge, because the first place you are likely to turn in your quest to acquire missing knowledge is the Internet.  You'll google, twitter, blog, and facebook your way to an answer.  And, when those sources are lacking, you'll fall back on your pre-21st Century knowledge acquisition tools.

But, let's step back a second and discuss how knowledge is acquired generally.

First, go read my eponymous post on this topic from back in April so you'll know what I'm talking about.

Now, think about how you acquired your baseball knowledge.  If you're like me, you probably acquired your baseball knowledge using most, if not all, of the ways I set forth in that post.  For example, I played Little League and softball (and many other baseball-like games: wiffleball, stickball, handball, halfball, baseball video games), which predominantly involves learning many of the rule relations, procedures, and underlying facts inherent in the game.  I also watched many baseball games (and listened to the announcer commentary) and read many articles about baseball and baseball games.  This tends to be fact-heavy learning and involves inductive learning.

For example, I learned the meaning of the unknown concept "in scoring position" by observing runners and listening to the announcer say "in scoring position."  Eventually, I induced that the term means a runner on second or third base, but does not include a runner on first base or at home plate.  I could have learned the concept much more easily by being told the relevant fact up front.

It's hard to say I inured any benefit from learning this concept the long and laborious way, through induction and lots of examples, instead of the simple way of being told the fact directly and memorizing it.  In both cases there would have been at least one connection--and most likely many more--stored somehow in my brain linking the fact/concept "scoring position (baseball)" and the fact/concept "runner(s) on second and/or third base."  (Similarly, I just selected and used the word "inure" in this paragraph without being able to verbalize its precise definition (before I Googled it to confirm); I was simply familiar enough with the word to know it was appropriate in the context of the sentence, even with imperfect knowledge.  Who knows how I learned the connection between the word "inure" and the vague concept "to gain an advantage from"?)  There are many ways to skin a cat.  However, some ways are more efficient than others.  Efficiency is the main difference between constructivist and instructivist pedagogies.

In contrast, I had to Google my way to an answer for the "6-4-3 double play" question, mainly because I never learned the position numbering scheme for baseball.  I knew such a scheme existed; I just didn't know exactly how the positions were numbered.  With this critical fact missing from my baseball knowledge, I was unable to answer the question until I acquired the knowledge by looking it up.  Now I know it--at least temporarily--and can use the connection.  I could easily have learned that knowledge by being more observant and thinking while watching televised baseball games.  But, of course, that's more cognitively demanding, and I chose not to engage in such a cognitively demanding task and, as a result, failed to learn the connection.
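
For what it's worth, once the missing fact is acquired it is easy to represent and use.  Here is a minimal sketch (Python again, with illustrative names of my own choosing) of the standard scorekeeping numbers for the nine fielding positions and how they decode "6-4-3":

    # Standard baseball scorekeeping numbers for the nine fielding positions.
    POSITIONS = {
        1: "pitcher", 2: "catcher", 3: "first baseman",
        4: "second baseman", 5: "third baseman", 6: "shortstop",
        7: "left fielder", 8: "center fielder", 9: "right fielder",
    }

    def decode_play(play):
        # Translate a play like "6-4-3" into the fielders who handled the ball.
        return " to ".join(POSITIONS[int(n)] for n in play.split("-"))

    print(decode_play("6-4-3"))  # shortstop to second baseman to first baseman

With that one fact in place, the question becomes answerable by the same trivial reasoning as the others.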

There are a few takeaways from this post.

There are many ways to acquire the same bit of knowledge.

The ability to reason generally is a trivial skill that most people already possess.

Lacking relevant knowledge (or not knowing it sufficiently) will often inhibit (or diminish) your ability to use your general reasoning skills.

Improving your general reasoning skills often won't compensate for a lack of knowledge when that knowledge is needed.

Many 21st Century skills are generally relevant only when you lack knowledge in the first place.