January 24, 2009

Nature vs nurture

Bryan Caplan of EconLog has a good article on the value of parenting in the Chronicle of Higher Education. It contains a solid discussion of the nature/nurture argument and the adoption/twin studies.

There are two kinds of special families: those with twins and those with adoptees. If you want to disentangle the effects of nature and nurture, one approach is to compare identical twins, who share all of their genes, to fraternal twins, who share only half. Another approach is to compare adoptees to members of their adoptive families. If identical twins are more similar than fraternal twins, we have strong reason to believe that the cause is nature. If adoptees resemble members of the families they grew up with, we have strong reason to believe that the cause is nurture.

By using — and refining — these twin and adoption methods, behavioral geneticists have produced credible answers to the nature-nurture controversy. To put it simply, nature wins. Heredity alone can account for almost all shared traits among siblings. "Environment" broadly defined has to matter, because even genetically identical twins are never literally identical. But the specific effects of family environment ("nurture") are small to nonexistent. As Steven Pinker, a professor of psychology at Harvard University, summarizes the evidence:

"First, adult siblings are equally similar whether they grew up together or apart. Second, adoptive siblings are no more similar than two people plucked off the street at random. And third, identical twins are no more similar than one would expect from the effects of their shared genes."

The punch line is that, at least within the normal range of parenting styles, how you raise your children has little effect on how your children turn out...

Recent scholarship does highlight some exceptions [but] the fact remains that people tend to greatly overestimate the power of nurture.

If family environment has little effect, why does almost everyone think the opposite? Behavior geneticists have a plausible explanation for our confusion: Family environment has substantial effects on children. Casual observers are right to think that parents can change their kids; the catch is that the effect of family environment largely fades out by adulthood. For example, one prominent study found that when adoptees are 3 to 4 years old, their IQ has a .20 correlation with the IQ of their adopting parents; but by the time adoptees are 12 years old, that correlation falls to 0. The lesson: Children are not like lumps of clay that parents mold for life; they are more like pieces of flexible plastic that respond to pressure, but pop back to their original shape when that pressure is released.
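
To make the twin logic concrete, here is a minimal sketch (in Python) of Falconer's classic approximation, which backs heritability out of the gap between identical-twin and fraternal-twin correlations. The correlations below are hypothetical, chosen only for illustration; they are not figures from the studies Caplan cites.

    # Falconer's approximation from twin correlations (illustrative only):
    #   heritability              h^2 ~= 2 * (r_MZ - r_DZ)
    #   shared (family) env.      c^2 ~= 2 * r_DZ - r_MZ
    #   non-shared env. + error   e^2 ~= 1 - r_MZ
    def falconer(r_mz: float, r_dz: float) -> dict:
        """Rough variance decomposition from identical (MZ) and fraternal (DZ) twin correlations."""
        return {
            "heritability": 2 * (r_mz - r_dz),
            "shared_env": 2 * r_dz - r_mz,
            "nonshared_env": 1 - r_mz,
        }

    # Hypothetical correlations: identical twins 0.85, fraternal twins 0.60.
    print(falconer(r_mz=0.85, r_dz=0.60))
    # -> roughly {'heritability': 0.50, 'shared_env': 0.35, 'nonshared_env': 0.15}

In these terms, the article's claim is that the shared-environment slice (the part parents control) comes out small by adulthood, not that environment in general is zero.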


Bleak news indeed for the SES warriors.

This is why we see substantial IQ gains for some preschool programs whose effects fade by the end of elementary school. This is also why time should not be wasted in elementary school doing "developmentally appropriate" nonsense. There is a brief window of opportunity that needs to be taken advantage of.

31 comments:

Anonymous said...

Caplan's initiative to extend his knowledge of the dismal science of econ to parenting is a "good article"? Looks like you need some more DI or Simpson's, Ken. And since when did the Chronicle of Higher Education start dispensing parental advice? Meltdowns everywhere.

The nature-nurture question is no more relevant to either parenting or to education than the chicken and the egg question.

You have to parent the kids that you've got. And if you think that "nature" absolves you from responsibility in how you parent, you've been reading too much Caplan.

The same holds for instruction. Both endeavors demand attention to prerequisites and to the structured experiences that enable "learning to use the tools that humanity has found indispensable."
[I didn't make that quote up. It's emblazoned above the stage of (Josiah) Royce Hall at UCLA. It's the best definition of "education" I've come across.]

RMD said...

This doesn't pass my "sniff" test.

If we took two twins, put one in an abusive home and the other in a very supportive environment, their IQ measures might be in the same range, but the chances that one will be successful are much greater than the other's.

We really need to look at the book "Talent Is Overrated" to determine what makes someone successful. The book uses numerous studies to make the case for the role of deliberate practice (i.e., practice with the sole desire to get better) in almost all achievement.

We need to get past IQ and start to look at what makes someone very good at something.

If these types of studies are quoted, people will get the idea that there isn't much they can do to help their kids (or children on the whole) become successful, despite a mountain of evidence that suggests that is not true.

KDeRosa said...

This is the wrong takeaway from the article.

No one believes that an abusive environment will have anything but a detrimental effect on a child. As far as I know, none of the twin/adoption studies had such an environmental condition. What there was plenty of was placing low-SES kids in high-SES households and observing that the higher-SES environments had little effect. Bear in mind that being placed in a high-SES household does not mean that the instruction was any better. If anything, we know that the typical school in a high-SES neighborhood does not produce better outcomes for the low-SES kids in that school. Most schools simply do not provide demonstrably better instruction.

Anonymous said...

"Most schools simply do not provide demonstrably better instruction."

Spot on. With the exception of a few instructional architectures, like DI, academic instruction is ad hoc and ad lib.

Teachers try to "meet individual needs," which by and large means they're teaching to kids who don't need to be taught carefully.

With instructionally insensitive tests, the only signal that comes through the noise is "g", which by definition is innate, and SES and other such non-instructional cultural characteristics, which can't readily be changed.

Because "g" and SES are salient, they are then treated as causal: and the interpretation becomes self-fulfilling.

As long as the role of the teacher is honorifically glorified and kid or family "defects" are viewed as the source of instructional failures, instruction will continue to be out of control.

The thing is, the professional special interests involved, "like it like that." It's a dysfunctional system, but dysfunctional systems are as stable as functional systems.

I rail ad nauseam against standardized achievement tests because they create the fog that sustains the stability.

Introduce a legitimate instructional program into the mix and you get functionality: kids reliably and transparently learn irrespective of SES. That will eventually occur, but a lot of unlearning will have to be effected to get "better instruction."

Malcolm Kirkpatrick said...

Ken: "...within the normal range of parenting styles, how you raise your children has little effect on how your children turn out...
...people tend to greatly overestimate the power of nurture. If family environment has little effect, why does almost everyone think the opposite?"

The key phrase is: "within the normal range of parenting styles...".

If environmental variation is reduced (e.g., by standardizing the curriculum or compelling attendance for a longer age span), heredity will account for more of the observed variation in behavior or morphology. If hereditary variation is reduced (e.g., by studying a population that has been reproductively isolated for centuries), environmental variation will account for more of the observed variation in behavior or phenotype.
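
(To put that in numbers: in the standard decomposition, heritability is the genetic share of total variance, so compressing environmental variation mechanically raises it even though nothing about the underlying genetic effects has changed. A minimal sketch, with made-up variances purely for illustration:)

    # Heritability as a variance ratio: h^2 = Var(G) / (Var(G) + Var(E)).
    # The numbers are made up; the point is only that shrinking Var(E)
    # raises h^2 without any change in the genetic contribution itself.
    def heritability(var_g: float, var_e: float) -> float:
        return var_g / (var_g + var_e)

    print(heritability(var_g=10.0, var_e=10.0))  # 0.5   (varied environments)
    print(heritability(var_g=10.0, var_e=2.0))   # ~0.83 (standardized environments)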

If only heredity matters, why send kids to school at all? Indeed, why feed them? If only environment matters, why not send dogs to school (they are cousins to humans, after all)?

Parry Graham said...

Malcolm with the line of the day: "If only environment matters, why not send dogs to school?"

I agree with jh. We need to be looking at a broad range of outcomes when discussing nature and nurture. Adolescent/adult IQ may appear to be mostly immutable, given "the normal range of parenting styles" (which is a huge given), but IQ is a single construct. Does this same pattern hold true for graduation rates, college completion rates, adult earning power, etc. (i.e., other indicators of academic or life success)?

Building on jh's point, do varying home environments result in varying life habits (e.g., hard work, emphasizing long-term success over short-term success, understanding and using social and political capital, etc.) that impact quality of life and social productivity?

I don't know the answers, but I think they're important questions that don't appear to be answered by Caplan.

Parry

Anonymous said...

"why not send dogs to school"

There are schools for dogs. And the instruction the dogs receive better "meets their needs" than prevailing instruction for kids "meets their needs." Moreover, the proficiency attained is transparent, not derived from arbitrarily-set cut scores on an ungrounded statistical scale.

Some dogs are left behind. But it's not due to the schooling. It's a choice made by the dog and its owner.

There are more genetic differences between varieties of dogs than there are among humans. Yet all can learn. It's in the instruction, not in dog deficits or talents.

Bow.Wow.

KDeRosa said...

Wouldn't this be a fair statement on the state of our knowledge: Within typical parenting styles and with typical school instruction, nature, not nurture, plays the dominant role in student achievement?

There is evidence that atypical instruction is capable of improving student achievement, provided background knowledge is controlled.

There is no evidence at this time, however, that placing a low-SES student into a high-SES environment, and all that that typically entails, will appreciably raise student achievement.

RMD said...

Ken,

I understand your point . . . high SES does not dictate, on its own, the abilities of a child (did I get that right?). So busing and other such "remedies" are fundamentally flawed in their attempt to remedy the perceived impact of lower SES. It's a valid point on its own, but I'm not sure that's the point that most people would take away from the article. In fact, I think most people would, after reading the article, say "child rearing really doesn't have an impact."

I mentioned the abusive household as an extreme, to point out that their conclusion (child rearing isn't as important as genetics, or "nature wins") doesn't hold water. Household behaviors at the extremes (i.e., either extremely supportive or extremely abusive) DO have an impact on children that may not be measured in IQ.

The book "Talent is overrated" is the best summary that I have seen on the value of nurture. they cite Mozart, who, despite the rhetoric, was given substantial benefits from his upbringing. His father was a prominent music educator and spent hours with his son teaching him music. Mozart's first symphonies at a very young age were "recopied" in his father's hand. Mozart didn't come into his own as a composer until he was in his 20s, after he had been composing for 20 years (since he started very young).

Here's my point: It's a disservice to emphasize these types of studies because they let people who raise kids (i.e., parents, educators) off the hook. These studies emphasize IQ rather than ability, which is affected greatly by instruction and practice and CAN make an immense difference in the eventual outcome of a child.

RMD said...

Oh, and one more thing . . .

I'm not convinced that the benefits of DI fade out if DI is continued.

Cheyenne Mountain Charter Academy, which uses DI almost exclusively, is able to get a very high percentage of its kids to the "advanced" rating on the CSAP in middle school, and outperforms other schools with much stronger demographics (e.g., Boulder, CO). There is no evidence that their kids' performance drops off.

Perhaps the drop-off in performance after the earlier grades is due to a drop in the quality of the teaching?

KDeRosa said...

jh, I'm not sure that the conclusion doesn't hold water; it's that the conclusions are qualified based on the experimental conditions (which didn't include extreme conditions or parental abuse).

Re Mozart, we shouldn't underestimate the role that success and motivation play in keeping the student willing to put in the practice. Talent is a factor.

These studies emphasize IQ rather than ability, which is affected greatly by instruction and practice and CAN make an immense difference in the eventual outcome of a child.

These studies also measure achievement, and it appears that achievement and IQ are the same. Again, with the proviso that instruction is typical.

Re DI fade, I don't think we have enough data at the middle school level and above to say for sure what's going on.

Re the CSAP, it could be that it is not a good testing instrument to measure achievement with enough accuracy. I don't know.

Anonymous said...

jh says "I'm not convinced that the benefits of DI fade out if DI is continued."

Right, jh. Benefits of learning don't "fade." They are either strengthened or weakened by future experience. The studies that purportedly show "loss" use untenable logic, methodology, and instrumentation.

When you have a handle on what was learned, you can show very reliable long-term effects. But it requires a longitudinal study, and these are few and far between. See, however:

"The long-term effect on high school seniors of learning to read in kindergarten"

www.piperbooks.co.uk/documents/The_Long-Term_Effects.pdf

It's a lengthy doc and takes time to download, so be patient.

RMD said...

Dick,

thanks for the citation!

it's hard to find good research

jh

Tracy W said...

Parry Graham, there has been some work on adult earning power. Some research was done by a Bruce Sacerdote into the outcomes for children adopted through Holt International Children's Services. That agency, once it had accepted would-be parents, assigned them randomly to children.

Bruce Sacerdote found that the adult income of the biological children was correlated with their parents' income, but the adult income of the adopted children wasn't.

Now, this isn't a complete study: would-be parents had to be accepted into the adoptive programme, so hopefully very bad parents were excluded (this being JH's point). Also, adopted children differ from biological children in their life experiences (many of these children were Korean by birth but not placed with Korean families), but no one appears to have come up with a plausible reason why adoptee effects would offset the effect of adoptive parents' income.

See http://www.marginalrevolution.com/marginalrevolution/2004/11/nature_nurture_.html

Tracy W said...

Dick Schutz,

Your citation is fascinating. Although I can't help but notice that the research included applying "The AIMS Reading Test", described as a "standardized test of reading comprehension", and a "Reading Vocabulary Test". I am surprised that you recommend such a resource.

Malcolm Kirkpatrick said...

Schultz: "There are schools for dogs. And the instruction the dogs receive better 'meets their needs' than prevailing instruction for kids 'meets their needs'."

Missing my point. Dogs are cousins to humans, yet no amount of environmental equalization will yield Alg I classrooms where 10-year-old collies and 10-year-old Koreans are equally represented.

Who introduced the term "need" to this discussion, anyway? "Need" is a rhetorical trick, which disguises an idiosyncratic desire as an ostensibly objective fact. I believe it was Lao Tzu who said: "The wise man does not need to live."

RMD said...

Tracy,
What are the issues with the AIMS reading test?
jh

Anonymous said...

Tracy W is "surprised" that I would endorse a study that used a standardized reading comp test and a vocab test.

The investigators used every dependent variable they could think of to test the effects of the early learning. They found statistical and practical differences on every one of the 16 variables. I find this pretty amazing.

But let me try to clarify a point. Item Response Theory is VERY relevant and ALTOGETHER legitimate for SOME purposes. Vocabulary, for example; the SAT, for another.

Standardized achievement test batteries before the 1960's regularly carried the warning "Not to be used to measure individual achievement."

When the Feds got into the act, the govt very rightly wanted to know the bang for the buck. The rest is history and it's not a happy history.

"Proficiency" is now being defined in terms of arbitrarily-set cut scores on ungrounded tests. That is flat out stupid, but it serves the purposes of several powerful interests.

Tracy W said...

Jh - Dick Schutz and I have had two long debates about standardised achievement tests. Dick maintains that standardised achievement tests are fundamentally flawed, in that they will always produce a normal distribution of outcomes, with the outcome related to SES (is that a fair summary, Dick?).

I maintain that it is entirely possible to validate standardised achievement test results, and there is nothing inherently in them that requires a normal distribution of outcomes. I am prepared to believe that there are a lot of problems with standardised achievement tests in practice, but none of the problems are inherent to standardised achievement tests.

Tracy W said...

Standardized achievement test batteries before the 1960's regularly carried the warning "Not to be used to measure individual achievement."

Maybe in the USA. But in NZ and the UK, and I am reasonably confident in all the ex-British colonies as well, standardised achievement tests were used to measure individual achievement in the 1960s and before. In NZ, it was School Certificate and University Entrance, the equivalents of the UK's O-levels and A-levels. My grandmothers and one granddad went through those exams (the other one had to drop out at age 14 to work). The results of School Certificate determined for each individual if they could proceed to the next year; the UE determined if they could go to university and what courses they could take there. And indeed my parents and I and my brothers had to sit similar exams. The system was most clearly used to measure individual achievement, e.g. the universities would publish what results you needed to get assured entry to each course, and if you got those results it didn't matter if the rest of your school failed utterly. (The exams have been dropped, but replaced by NCEA levels, which are also individual assessments of learning.)

Tracy W said...

Actually, if pre-1960s, the USA wasn't using individual results on standardised achievement tests for things like determining university entrance, what was it using? Standardised aptitude tests?

Anonymous said...

I don't want to hijack this thread with more 'tis-'taints about standardized tests, but the thread has about run out of steam (to mix metaphors) and the testing matter is fundamentally important.

Tracy and I have been on different wavelengths.

Tracy: "Dick maintains that standardised achievement tests are fundamentally flawed,"

Me: The flaw is in the inappropriate application of the theory and practice underlying the tests.

Tracy: "they will always produce a normal distribution"

Me: Item response theory SEEKS to produce a normal distribution

Tracy: "outcome related to SES"

Me: Standardized tests correlate highly with SES. This is commonly interpreted to mean that SES causes achievement; schools can't do anything until we change society.

When you administer tests that measure what has been taught the correlation with SES drops to near zero and the differences in instructional "treatments" are large.

It's easy to get lost in the mathematics surrounding IRT, and there is no reason to go there. The math is sound. It's the application of the math that is doing real-world damage.

Tracy W said...

Dick - yes, it is possible we have been talking past each other. So now you are saying that a standardised achievement test could be validated, or grounded, and could identify actual instruction-caused improvements?

(NB, IRT doesn't seek to produce anything about the distribution. It requires assuming that there is an ability, and that more of that ability increases the probability of getting the answer right. The actual underlying distribution on each question is to be estimated from data on the development population; the resulting distribution of test scores depends on the selection of test items. If the development population shows a negative relationship between ability and the probability of getting the answer right, IRT, or classical test theory, says chuck the question, it's badly written.)
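
(To make that concrete, here is a minimal sketch of one common item response function, the two-parameter logistic model; the choice of model and the parameter values are purely illustrative, not something specified anywhere in this thread.)

    import math

    # Two-parameter logistic (2PL) item response function.
    # theta: latent ability; b: item difficulty; a: item discrimination.
    def p_correct(theta: float, a: float, b: float) -> float:
        """Probability of a correct answer, given ability theta."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # More ability means a higher probability of getting the item right.
    for theta in (-2.0, 0.0, 2.0):
        print(theta, round(p_correct(theta, a=1.0, b=0.0), 2))
    # -> -2.0 0.12 / 0.0 0.5 / 2.0 0.88

(An item whose estimated discrimination came out negative, i.e. higher ability predicting a lower chance of success, is exactly the "chuck the question, it's badly written" case.)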

Anonymous said...

"So now you are saying that a standardised achievement test could be validated, or grounded, and could identify actual instruction-caused improvements?"

The thing is, the entity "standardized achievement test" today is operationalized without any grounding. There is no way to ground the entity. The rigmarole that generates the test can't yield anything other than defining "proficiency" in terms of arbitrarily-set cut scores on a statistical scale.

Let's try coming at the matter from a slightly different direction.

Identifying a reliable relationship between an antecedent and consequences (an "if-then" statement) demands a specified product/protocol for the "if" of instruction and a means of observing instructional accomplishments (achievement) that is as transparent as possible.

Filling in bubbles on items designed to "foil" students, and reporting a "number" as "achievement," is ludicrous on its face.

The test items are "confidential," and the means of generating the number involve arcane math that few other than the "computer" understand.

The fuzzy fog of Educationese has no difficulty incorporating the bogus "proficiency" into discourse.

Tracy W said...

Dick - You appear to be making two different claims here - in the first sentence you claim that today standardised achievement tests are done without grounding, a wording that implies to me that they could be grounded (or, to use my terminology, validated). But then you go on to make the flat statement that "There is no way to ground the entity." To me there is a big difference in meaning between "something isn't being done" and "something can't be done". What do you mean about grounding, or validating, standardised achievement tests?

Anonymous said...

Tracy says:"in the first sentence you claim that today standardised achievement tests are done without grounding, a wording that implies to me that they could be grounded (or, to use my terminology, validated)"

The wording does NOT IMPLY the "could." The next sentence makes that clear and the rest of the post tries to explain why.

"Grounding" and "validation" are NOT synonyms in TestTalk. There is much talk of validation but little walk of the talk. And there is no talk of grounding. Why not? Because IRT is grounded
only in a "latent trait."

As I stated at the very beginning of what has turned into a tedious exchange, academic achievement is NOT a latent trait. The whole matter is really as simple as that. But the fact is hidden in gobbledygook and arcane mathematics.

I've stuck with the exchange because standardized achievement tests are unwittingly an instructional hazard of toxic proportions.

I say this as the founding editor of the Journal of Educational Measurement and former president of the National Council on Measurement.

The field lost its way. I've explained why and how this happened during the course of the exchange.

Tracy W said...

Dick, thanks for making your meaning clear. Now when you say that something can't happen, I expect a mathematical proof, or at least a vast amount of experimental evidence similar to that backing up the laws of thermodynamics. You have never supplied a mathematical proof of your claims, and I can see nothing in the mathematics to support your claims, which is why I don't believe your claims.

I was using "grounding" as you appear to be using it like I use "validation" about tests. If you don't think that grounding is equivalent to validation, please supply your own definition.

As for academic achievement not being a latent trait, if you don't like calling it a latent trait then call it something else. The definition the psychometricians use, though, is quite clear. The fundamental problem driving the use of standardised achievement tests is that academic achievement is not something that can be measured the way height or weight can. If you know I'm 5'2" you know I can walk under a bar that's 6' high and one that is 7' high, but would have to bend to get under a bar that's 4' high. But if you know I can read and comprehend the front-page story of a major English-language newspaper, you don't know if I can read and comprehend Robert Burns; you have to test that skill separately. I have explained to you before why academic achievement is called a "latent trait", and I have repeated this explanation here. Which part of this explanation involves gobbledygook or arcane mathematics? I have been stating heights in feet and inches on the basis that this blog generally has an American audience; if you find imperial units "arcane mathematics" I can re-state in metric.

Incidentally, IRT is not grounded in a latent trait, or a "latent trait". IRT is designed to deal with those attributes that can't be directly measured like height or weight can, but it could perfectly well be applied to height or weight, if someone really wanted to waste their time and do things the hard way for no benefit. I described in our earlier debate how that could be done.

I say this as the founding editor of the Journal of Educational Measurement and former president of the National Council on Measurement.

If you can't back up your argument with mathematics, I'm not going to believe appeals to authority. If you adopted this attitude of just expecting people to believe your assertions about what can and can't be done in your career as founding editor and president, I think you really harmed the Journal of Educational Measurement and the National Council on Measurement; science and mathematics are not based on authority but on demonstration. Authorities have shown themselves to be very wrong in the past.

The field lost its way. I've explained why and how this happened during the course of the exchange.

The trouble is that your explanations consist of assertions about what standardised achievement tests can do that fly in the face of my understanding of the mathematics, and indeed in the face of the descriptions of it in the textbook you provided a link to. I don't see how a standardised achievement test consisting entirely of reading comprehension questions that could easily be answered by a reader of a given newspaper would not result in scores piling up at the top end if it was given to a test population of readers of said newspaper, but that is what you expect me to believe. I have occasionally been surprised and convinced by extremely counter-intuitive ideas (e.g., that 0.9999.... = 1), but at high school and at university my science, mathematics and engineering lecturers never expected me to believe their outlandish assertions; they always either provided mathematical proofs or lab demonstrations, or both. That experience is now too engrained in me; extraordinary claims require extraordinary evidence. I do not regard the tendency of a random bloke on the internet to say something repeatedly as extraordinary evidence.

You have also implied that I was being dishonest in your abstract statement about cherry-picking quotes, and you have neither backed up this implication nor withdrawn it. And now you expect me to believe your assertions? The conclusion I draw from this is that not only do you not know anything about mathematics, you don't know anything about human nature. If you want me to believe you unquestioningly, I advise flattery. I know I shouldn't fall for flattery, but hey, the flesh is weak, and at the moment I'm emotionally biased against you, as believing you without any good reason on IRT would require me to believe you without any good reason on your implication about my arguing techniques.

Anonymous said...

Let me restate my original point. "Proficiency" today is being defined in terms of arbitrarily-set cut scores on ungrounded statistical scales. This is not OK. If anyone thinks it is OK, let's hear the argument.

This is not the place to resolve differences of opinion about the history or theory of educational measurement. It was not my intent to get into that and I'm not going to get into it any further.

Tracy W said...

"Let me restate my original point. Proficiency" today is being defined in terms of arbitrarily-set cut scores on ungrounded statistical scales.

Dick, I am slightly less likely to believe your assertion here than I was before I started arguing with you, but this is because I now have serious doubts about your honesty in debate. However, the points I really disagree with are when you assert things like that standardised achievement tests *can't* be validated, or grounded, to use your terminology, this being a very different statement from saying that they currently aren't validated.

This is not the place to resolve differences of opinion about the history or theory of educational measurement.

Dick, as far as I can tell, the reason we are not resolving our differences is that your idea of a debate is to make implausible assertions and continue making them while ignoring any arguments to the contrary, and on my side, I don't believe your assertions just because you made them. I don't see how changing the place will change any of this. For example, you haven't answered my question from earlier: which part of my explanation of the difference between what psychometricians call "latent traits" and directly measurable events did you find gobbledygook or arcane mathematics?

As for your intention to stop getting into that, that is of course your choice. When you actually stop making statements about standardised achievement tests, I will perforce stop arguing back.

Anonymous said...

I happened to stumble across an article that is relevant to several D-Ed Reck threads:

"Exams in Algebra in Russia:
Toward a History of High Stakes Testing"

It's in a relatively new, free, on-line journal, "The International Journal for the History of Mathematics Education"

If you want to access the article (and JHME has several other good articles for the math-inclined), it's best to start by googling. The only way I could get the article to download was by first registering.

Anyway, here are a few excerpted paragraphs:

"The education system changes extremely slowly... The country changed, society changed, education was faced with
entirely new challenges, but the writers of problems continued trying to do everything as in the good old days.

"Centralization was often seen as a rapid means of overcoming that which had become out of date, and more generally, of creating order (among other things, by offering everyone equal opportunities, the lack of which had been deplored
by Zimmerman). It turns out, however, that centralization and
micromanagement can also increase confusion, even to the point of provoking people to break with and deviate from established procedures, in ways that the
Ministry of Education simply could not control, even if it wished to do so.

"Directly using exams (with students’ grades and the standards by which their problems were graded) to assess the effectiveness of education at a school may,as we have seen, lead teachers “not to notice” cheating—which calls into question their use for this purpose as well as their use to assess individual
students. In general, how exam results can be used and what kinds of information can be derived from them is an important and persistent problem.

"Centralization vs. local decision-making is one of the fundamental oppositions that we observe in the history of exams, with each side having its own strengths and shortcomings.

--------
It all sounds very familiar, doesn't it? If we can't learn from our own experience in the U.S., maybe we can learn from the Russian experience. Sounds like a better bet to me than repeating their experience.

Tracy W said...

Okay, this teacher cheating thing in serious exams always sounds insane to me. Who were the supervisors who were so stupid as to not even think of the possibility that teachers might cheat?

When we did the big national exams at high school (which were individual assessments of learning, but were also published by school), independent examiners came in to administer the exam, from distributing the papers to watching us, and then to collecting the papers and taking them away. We only wrote numbers on the exam papers, not our names, and the papers were sent to different parts of the country for marking.

This system may not have been immune to cheating, but at least it made life more difficult for any dishonest teacher.

The American system of administering exams, from the descriptions I have read, strikes me as profoundly stupid. Not merely the general absence of protections against the odd dishonest teacher, but the failure to publish exam questions afterwards; who knows how many badly-written questions get through?