Doesn't put me to sleep -- Jay Mathews
Why the belief that SES is causal is so deep and wide is perplexing and astounding. The only explanation I can come up with is that it lets publishers, professors and other "authorities", who ARE causally responsible, off the hook.
I think the discussion of SES is simplifying things. SES is just a proxy for a whole host of stuff: parental levels of education, nutrition, access to healthcare (e.g., eye care, dental care), likelihood of one or two parents in the home, neighborhood (e.g., likelihood of gang activity), peers, background experiences, neonatal exposure to drugs or alcohol, vocabulary exposure, access to high-quality schools, etc., etc. To suggest that none of those things can have any type of causal impact on a child’s education strikes me as silly.

Here is a thought experiment. Let’s say that 1000 children were born at the same instant to single mothers living in abject poverty, and another 1000 children were born at that same instant to two-parent, upper-middle-class families. As the result of an unexplained cosmological anomaly, the two sets of babies were switched at the moment of birth: the 1000 babies born into poverty were suddenly in the hands of the upper-middle-class families, and vice versa.

Do you believe that both sets of children would have identical educational outcomes to what they would have achieved had they never been switched (i.e., the two groups don’t have identical outcomes to each other, but to themselves in their original circumstances)? Clearly their native cognitive abilities (i.e., the way their brains were programmed to be wired from conception) would play a large part in their ultimate educational attainment, but do you really believe that their circumstances (as described broadly by the SES proxy) would have no causal impact whatsoever?

Parry
Everything is related to everything else. Of course "bad conditions" have a causal impact. The issue is whether or not they are "deal-breaker" obstacles. And they aren't. So long as the orientation is to try to "fill in deficiencies," it can never be done. The Matthew Effect will come into play every time. The only way to get on top of instruction is to focus on the minimal prerequisites a student needs to make the instruction feasible and to build on those assets with transparent accomplishments. With very few exceptions, the "poorest and sickest" kids enter school with the assets necessary for schools to teach them how to read, compute, and accomplish the other academic aspirations that they, their parents, and the citizenry share.

Are these other societal matters of concern? Most certainly. But the reverse also holds. Education isn't going to solve the problem of drugs, crime, nutrition, or even international economic competitiveness. From the National Defense Education Act early on to today's Race to the Top, fed ed expenditures have been based on these quasi-phony rationales. That also protects the "unaccountables" at the top of the EdChain from having to justify the expenditures in terms of their educational merit.
Agreed. Low SES children are "teachable," just as are children from higher SES backgrounds (i.e., SES is not a deal-breaker). Nevertheless, low SES students often bring with them challenges that schools have to address in order to help them achieve.

And I differ with you in your belief that K-12 educational success is unrelated to international economic competitiveness. The Race Between Education and Technology by Goldin and Katz convinced me that educational attainment levels and economic competitiveness are intertwined.

Parry
Parry, we do have some evidence from various adoption studies, and it appears that the benefits that accrue from placing a low-SES child with a high-SES family are much less than one would expect from simple causality stemming from the various poverty factors you list. Why might that be? Let me suggest the following.

High IQ + toothache = low student outcomes. Remedy the toothache and you improve student outcomes.

Low IQ + toothache = low student outcomes. Remedy the toothache and student outcomes remain low because of the low IQ.

The low IQ factor is swamping the other factor(s), so remedying the other factors doesn't improve student outcomes. And this assumes the other factors are readily remedied. In reality, many of the other factors are also intertwined with the IQ factor.

Now, I'm not saying that this is the way it definitely is. We don't know yet. But what I do know is that the extant evidence does not preclude this theory. That's why no one can presently claim that there is simple causality between poverty/SES and student outcomes.
"The low iq factor is swamping the other factor(s), so remedying the other factors doesn't improve student outcomes. ANd, this assumes the other factors are readily remedied. In reality, many of the other factors are slo [also]intertwined with the iq factor."That's pretty close. The thing is, technological matters override these human differences.An example. Kindle and kin can read now. Books can read to kids, even if kids can't read to books. In the foreseeable future "machines" will be able to translate text in any major language and an individual can choose in what language to read or listen to the communication. In view of this inexorable (short of killing the planet) or ourselves) future, it's pretty silly aiming to "Race to the Top" or arguing whether SES is related to economic competitiveness.
Dick, firstly, even if a Kindle or some other tool can read in any known language, that only solves the decoding part of reading. If the vocabulary used is sufficiently unfamiliar to the person listening then, if the cognitive scientists like Dan Willingham are right, the person is still not going to understand what they hear. Secondly, even once you've read something, there's still the question of how you make use of that information.
True, on both counts, Tracy. But these are not "reading" or "comprehension" matters. They are background information/Technical Lexicon matters, which can and should be dealt with instructionally, independently of SES.
Dick, why have you put reading and comprehension in quote marks? Secondly, why are you making a distinction between comprehension and background information/Technical Lexicon matters? I thought the results from cognitive science and KenDeRosa's own informal tests here were pretty clear that in order to comprehend a text you need a lot of background information/technical lexicon. Is my understanding of that literature wrong? I agree that instruction should be done at whatever level the individual student is at, independently of their SES. I'm just not following how this is relevant to KDeRosa's argument that IQ matters.
The language of education makes for fuzzy communication. Let's try to de-fuzz some terms.

The popular definition of reading is "extracting meaning from text." This confounds intelligence (essentially "background information") and "comprehension" (also essentially "background information") with reading and reading instruction. What Willingham and E. D. Hirsch have been trying to convey is that "comprehension" = background information.

Lexicon ("vocabulary") can be divided into General Lexicon and Technical Lexicon. GenLex expands at a very rapid rate, through at least the time a person finishes formal education. Formal instruction in GenLex is a waste of time: instruction cannot possibly expand it at the rate it expands just from staying alive.

Technical Lexicon is a different matter. TechLex includes all terminology that isn't in a person's GenLex. So for little kids, TechLex is an extremely important consideration. The "gap" in the TechLex of low SES kids entering K can be eliminated in a few days or weeks of instruction. That is, if a kid doesn't know primary colors, common shapes, concepts like first-last and so on, the kid won't understand what the teacher is talking about. The TechLex isn't "hard," it just hasn't been within the experience of some kids.

Each school subject and each academic, professional, and technical discipline has a structure of TechLex. Specifying the TechLex involved is an operational way to get at "background information."
Ken,Some of the research I've read on neuroplasticity and brain development suggests that poverty (which really means the factors that commonly accompany poverty) can swamp IQ and academic achievement. Kind of like taking a group of 100 kids, throwing 50-pound packs on their backs, and then saying "Play basketball". With the added weight, it's tough to distinguish between the skill levels of the players. Remove the weights, and natural skills are able to show through.A couple interesting articles I found on the subject:Summary of Turkheimer reportPoverty goes straigh to the brainPG
"And though some details remain incomplete, she said, evidence of connections between poverty and neurobiology are strong enough to justify real-world testing."Don't hold your breath.
Dick, I am still entirely lost as to why you put the words reading and comprehension in quote marks. Furthermore, you are now also putting quote marks around the words hard, gap, and vocabulary. Why?

Your definitions are also unclear. You say that GenLex instruction is a waste of time (I note that you provide no supporting evidence for this claim). But then later on you say that "The 'gap' in the TechLex of low SES kids entering K can be eliminated in a few days or weeks of instruction. That is, if a kid doesn't know primary colors, common shapes, concepts like first-last and so on, the kid won't understand what the teacher is talking about."

But if things like common shapes, concepts like first-last, and primary colours are not part of GenLex but instead are TechLex, then what is there left to be part of GenLex? And given that you appear to agree with Willingham and Hirsch that comprehension is a matter of background knowledge, including what you call TechLex and GenLex, I don't see on what basis you think a Kindle will get around the comprehension problem of reading.
Parry, Turkheimer's study, aside from being too early (at age 7, before the shared environmental component has gone), appears to be an outlier.

In Britain, the exact opposite of Turkheimer’s result was found in over 2,000 pairs of 4-year-old twins (N = 4,446 children), with greater heritability observed in high-risk environments. A re-analysis of the Hawaii Family Study of Cognition also found results contrary to Turkheimer’s: Nagoshi and Johnson found no reduction in the relationship between parental cognitive ability and offspring performance in families of lower as opposed to upper levels of socioeconomic status. In the 1,349 families they studied, the relationship remained the same across tests, ethnicity, and sex of offspring. Finer grained analyses show the same within- and between-race heritabilities for Blacks and for Whites (Rowe and Cleveland).

Asbury, K., Wachs, T. D., & Plomin, R. (2005). Environmental moderators of genetic influence on verbal and nonverbal abilities in early childhood. Intelligence, 33, 643-661.
Nagoshi, C. T., & Johnson, R. C. (2005). Socioeconomic status does not moderate the familiality of cognitive abilities in the Hawaii Family Study of Cognition. Journal of Biosocial Science, 37, 773-781.
Rowe, D. C., & Cleveland, H. H. (1996). Academic achievement ... Are the developmental processes similar? Intelligence, 23, 205-228.
Tracy, I overuse quote marks because that's the only way, short of FULL CAPS, to get a typographical distinction. Oops. It is possible to get typographical distinctions. A statement below the Leave your comment box clearly states this. But until a few minutes ago, the term HTML tag was not within my TechLex, so I ignored the statement. I find that HTML tags aren't complicated. That's true of a lot of terms. Now the term is in my GenLex, although at the moment I'm a bit shaky with it.

The reference for my statement that GenLex instruction is a waste of time is Diane McGuinness, Early Reading Instruction. But a thought experiment should convince you. It takes at least 5 or 10 minutes to teach a term, and if one doesn't have occasion to use or encounter the new term, it will be forgotten. Spoken communication uses a much smaller lexicon than written language, so unless one is writing, one really doesn't need a huge lexicon. And if one is reading, the meaning of new terms can be acquired through the context if it isn't defined in the text, or through looking it up, as I did with HTML tags.

"But if things like common shapes, concepts like first-last, primary colours, are not part of GenLex but instead are TechLex, then what is there left to be part of GenLex?"

GenLex and TechLex are individual specific. Teachers assume that all the kids know what they or the text is talking about, but that's not so a good deal of the time. Kids don't know enough to know what to ask, or they don't want to appear stupid, so the instruction goes right past them.

Kindle won't get around the comprehension problem in reading, because there is no such problem. The problem is a background information problem that can best be addressed via TechLex.
What about schools like City Springs or other full DI models?Don't they help address these issues?
Yes. As Ken has shown, DI can override SES. There are a few other architectures for reading instruction that also do this. The architectures are being used in the UK and France, but not prominently in the US.
The School District to keep an eye on is the Gering School District in Nebraska. They have implemented the full-immersion ELA program with NIFDI's help districtwide.This school year we should get data from the first cohort which has gotten DI since K. They are now in grade 5.
I think you're confusing two separate, but related, arguments.

1.) SES influences school performance.
2.) We can end poor school performance by ending poverty.

The first one is true; the second one may not be. There's really no question that living in a low-SES family negatively impacts a child's performance in school to some degree. To argue otherwise is silly.

But to make the leap from the first statement to the second may also be silly. You're right to question 1.) to what extent we can ameliorate the effects of low SES, and 2.) how well it would work if we did. Research on such attempts is decidedly mixed. It may well be the case that non-school factors more heavily influence a child's school performance but that in-school reforms are more efficient at remedying such problems.

In short: by all means, debate people who use argument #2, but don't conflate that with argument #1.
Ken,We are starting to range way outside of my area of expertise, so I'm struggling to interpret all of the study results, but what I have found suggests that Turkheimer's study is not really an outlier, but also not definitive.In this study, there is a nice summary of the existing research, which suggests that there is evidence supporting GxE interaction in both socio-economic directions. The author mentions the Asbury study, but also points to multiple studies -- Turkheimer, Harden et al., Rowe et al. (1999), and Kremen et al. -- that suggest that the higher you go up the SES ladder, the more you see manifestations of cognitive heritability (and vice versa). That makes Turkheimer look less like an outlier and more like a data point, and it also suggests to me that the scientific evidence in this area is not definitive at this point.PG
Corey, I think it is safe to say that both nature and nurture (both in school and out of school) play a role in school outcomes. And, since nurture is a function of SES, I think it's safe to conclude that SES plays a role in school outcomes. The more important issues are how much of a role it plays and how effective interventions relating to the poverty aspect of SES are in mitigating the resulting effects.

The existing research, such as it is, gives us a hint. In recent studies, SES correlates with school outcomes at about 0.3, so the amount of variance attributable to SES (under the most generous of interpretations) is, at best, low; therefore, the improvement we'd expect from programs directed at improving SES will be small and educationally insignificant.

That's the main reason why I remain sceptical of SES-related interventions: the predicted effect size will likely be small for any intervention.
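For readers who want the arithmetic behind that "at best, low" claim, here is a minimal sketch. The 0.3 figure is the correlation cited in the comment above; the squaring step is just the standard coefficient-of-determination calculation for a simple linear relationship:

```python
def variance_explained(r: float) -> float:
    """Fraction of outcome variance accounted for by a correlate.

    For a simple linear relationship, the proportion of variance a
    predictor explains is the square of its correlation coefficient
    (the coefficient of determination, r squared).
    """
    return r ** 2

# An SES-outcomes correlation of roughly 0.3 (the figure cited above)
# accounts for only about 9% of the variance in school outcomes.
print(round(variance_explained(0.3), 2))  # 0.09
```

This is why a modest correlation implies an even more modest ceiling on what SES-targeted interventions could move, under the simple-causality reading of the correlation.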
Parry, I agree that the Turkheimer study should not be dismissed as an outright outlier. But I do think the results are not fairly representative of SES effects.

The research on the heritability of IQ as it relates to SES is a fruitful area, but the results are far from conclusive at this point. Turkheimer's results most likely under-estimate IQ heritability because the research subjects were young, and genes start to express later in life.
For anyone interested in reading a good overview of IQ and SES, I suggest Jensen's and Rushton's Race and IQ: A Theory-Based Review of the Research in Richard Nisbett’s Intelligence and How to Get ItThat's where I pulled the quote from above, but the cite got mangled/excluded somehow. Jensen and Rushton have their biases and so does Nisbett, who they are critiquing. So read both and form your own opinion.
I'd second Ken's response to Corey's points 1) and 2).

Re the Jensen-Koretz divergent positions: it begs the question of instruction. We teach the kids that parents give to us, not the kids we'd like them to give to us. They're not holding back any better kids.

Re Gering: fifth grade is too late. If a district still has kids with reading problems after grade 3, the "problem" is in the earlier grades. Purported "reading comprehension" tests aren't tapping reading; they're tapping background information by means of manipulating the "foils" of multiple choice questions.
Ken,Agreed. The relationship between environmental and genetic factors is complex, with no definitive answers. Genes clearly play a large role in determining cognitive capabilities and educational performance, but impoverished conditions can throw an extra weight onto the back of any kid as they enter and move through the school system.Especially for those students coming from impoverished backgrounds, high-quality learning experiences are critical.Good conversation and information.PG
Dick, like it or not, we are stuck, at least for the time being, with reading tests that implicitly define reading as decoding ability plus background knowledge. By now, most of us understand the inherent problems with this definition.

NIFDI thinks it can largely overcome the obstacles presented by the definition by teaching decoding well and quickly so that students start getting practice reading difficult text, by hitting language development hard and early (TechLex and GenLex, from what I gather), and by moving the students into content areas as quickly as their reading ability permits.

Whether they are able to compensate for the deficient background knowledge of their low-SES students by the middle grades remains an open question, and Gering will hopefully represent a good data point for a well-implemented DI program with a dedicated school district (both leadership and teachers) and with a poor, but non-inner-city, population and a middle-class population.
"We are stuck, at least for the time being, with reading tests that implicitly define reading as decoding ability plus background knowledge."

Not exactly. The tests are constructed in ignorance of the Alphabetic Code, which is the technical foundation of "decoding." The incremental difficulty of the tests by age/grade is obtained by manipulating the foils of the multiple-choice test items. The tests tap background information (more precisely, declarative knowledge, which is a form of crystallized intelligence).

"By now, most of us understand the inherent problems with this definition."

The limitation is not in the definition. The limitation is in the tests. And very few understand this.

"NIFDI thinks it can largely overcome the obstacles presented by the definition by teaching decoding well and quickly so that students start getting practice reading difficult text, by hitting language development hard and early (TechLex and GenLex, from what I gather), and by moving the students into content areas as quickly as their reading ability permits."

Although the terminology is garbled, I understand what you are saying. When kids have been taught how to handle the Alphabetic Code and the other linguistic conventions involved in reading, they can read and "comprehend" any text that they would understand were the text read to them.
The thing is, if a kid hasn't been taught/learned how to do this by the end of grade 3, a lot of instruction is going to go right past the kid, and the kid is highly likely to have acquired maladaptive reading techniques and broader behavioral syndromes that are very difficult to extinguish.

"Whether they are able to compensate for the deficient background knowledge of their low-SES students by the middle grades remains an open question."

The tests being used in Gering and the reporting methodology are inadequate to answer the question. If the upcoming Gering results are in line with past Gering results, the instruction will have nudged the normal distribution upward, but will not have turned it into a lazy-J shaped distribution. When all kids have been taught to read, the lazy-J distribution is what one gets. The dispersion (standard deviation) in previous Gering results indicates that kids have been left behind. Similar variability in this year's results is inevitable. In part, the test forces a normal distribution. In part, the distribution is due to the fact that the test is tapping "g," not reading. But in part, very likely some kids have indeed not acquired reading expertise. None of the three considerations has anything to do with SES.

I strongly agree that Gering offers an ideal (N=1) opportunity to untangle these matters. They are matters of the utmost importance.
Look at Dick using his fancy new HTML coding tricks. I agree with much of your comment, Dick.

"The thing is, if a kid hasn't been taught/learned how to do this by the end of grade 3, a lot of instruction is going to go right past the kid, and the kid is highly likely to have acquired maladaptive reading techniques and broader behavioral syndromes that are very difficult to extinguish."

I agree that not learning how to read proficiently by third grade will be detrimental. We have a fair amount of data up to the third grade level from other studies. We'll get more from Gering. However, what I'm most interested in is seeing how these students fare in later grades. There we have little data.

I'm not crazy about existing reading tests either, but to the extent they are largely testing background information, that's OK, since background seems to be THE factor for general reading ability, and general reading ability is what we care about (or at least what we think we care about).

"When all kids have been taught to read, the lazy-J distribution is what one gets. The dispersion (standard deviation) in previous Gering results indicates that kids have been left behind."

Give it time, Dick. In the real world, there are many factors that need to be adjusted before a curricular intervention really starts producing significantly increased results for at-risk kids. For example, though fifth grade teachers at Gering have been teaching DI for about five years now, this is the first year they've gotten students who've had DI for their entire years of schooling. They should be able to do more with these kids, but since they are the first cohort, the teaching is unlikely to be maximized for a few more years.

You might recall from City Springs that some of the teacher-leaders were expert DI teachers with many years of experience. It takes time to get novice teachers up to speed.
"I'm not crazy about existing reading tests either, but to the extent they are largely testing background information, that's OK, since background seems to be THE factor for general reading ability, and general reading ability is what we care about (or at least what we think we care about)."

This is muddled, Ken. General reading ability is independent of background information. A capable reader will not "comprehend/understand" text that has a large proportion of TechLex the reader is unfamiliar with. When the measurement of "reading ability" is thoroughly contaminated in this way, we don't get a measure of reading expertise. We get a measure of "declarative knowledge/crystallized intelligence." g is very difficult to change.

Confounding g with reading instruction gunks up both the measurement and the instruction. The serious measurement begins in grade 3, which is after serious reading instruction ends. And the measurement continues on through high school, making deficiencies in all academic domains appear to be reading deficiencies.

"In the real world, there are many factors that need to be adjusted before a curricular intervention really starts producing significantly increased results for at-risk kids."

That should read, "In the minds of many educators and persons concerned with education . . ." The effects of an instructional product/protocol (= curricular intervention) should and can be immediate and transparent. What we're looking for is replicability of instructional accomplishments. If the accomplishments aren't there to be seen transparently and immediately now, they won't be there later. That's a prime fallacy of NCLB 2014, but let's not go there.

Sure, if a district sticks with DI, the personnel get better at it over time. But they don't get that much better. And over time, other things also happen. A new Superintendent comes in and wants to change things. Or the Feds change their mind. With Reading First funding removed, it will be interesting to see how Gering adapts.
"When the measurement of 'reading ability' is thoroughly contaminated in this way, we don't get a measure of reading expertise. We get a measure of 'declarative knowledge/crystallized intelligence.' g is very difficult to change."

Understood, but it appears that lots of declarative knowledge plus lots of practice reading is what makes a good general reader. To the extent that declarative knowledge correlates highly with g, I'd suggest that the reason is 1. the teaching is bad (the materials are capable of being put into meaningful relationships, and the instruction has either failed to display the relationships or has given an explanation that is hard to follow) and/or 2. the vocabulary and background knowledge have not been taught. At least that's the theory set forth by Becker et al. In the Gering grades 4-8, the theory is now being put to the test.

"The effects of an instructional product/protocol (= curricular intervention) should and can be immediate and transparent."

That depends on your definition of immediate. Intervention implementations need some time to stabilize. In PFTY, I believe it was the 4th cohort that was measured and was the subject of the study. Gains should start coming quickly in K, but it will take four years before these kids reach fifth grade. Before this cohort reaches fifth grade, prior cohorts will only have had access to a partial intervention. And this is only the first cohort. I wouldn't expect stabilized results for at least another year or two.

The prime fallacy of NCLB was thinking that 14 years was enough time to fix all twelve grades, send a cohort through the entire 12 years, and expect proficiency. Up until this first cohort showed up, teaching in the latter grades would have been in remediation mode for many (though hopefully increasingly fewer) students.
We're getting tangled in terminology, Ken. "Declarative knowledge" is the technical term for "background information," and "crystallized intelligence" is a theoretical term for the same thing. All these are one and the same, commonly abbreviated as g.

"It appears that lots of declarative knowledge plus lots of practice reading is what makes a good general reader."

This view contaminates "reading" with "declarative knowledge." It takes more than "lots of practice reading" to reliably teach kids to read. This is the discredited "Whole Language" view that you've knocked down so often.

"Intervention implementations need some time to stabilize."

That depends on the intervention. I'm using "immediate" and "transparent" in their common usage. The mandates of NCLB wouldn't result in teaching all kids to read before hell freezes over. But the fault is with the mandates, not with kids, teachers, parents, or the citizenry.
Dick, I don't think those three terms are synonymous. I'd say that declarative knowledge is the verbal version of background/content knowledge and crystallized intelligence is the ability to use background/content knowledge.
Dick, thanks for explaining your use of quote marks. In my experience, quote marks around single words mean one of two things: either the author is trying to talk about the word itself, e.g. the word "cats" rather than cats as in felines, or the author is doubtful about the use of the word; this second use is known as scare quotes. (Obviously quote marks around multiple words can just be a quote as well.)

I am still not sure what you think is in GenLex, given that you put things like first-last, primary colours, etc., in TechLex. It sounds to me like GenLex as you define it is an empty set, in which case I agree that no time should be wasted teaching it. As far as I know, I agree with you that reading comprehension is a matter of background knowledge; we can even define decoding as a matter of needing the background knowledge to link the symbol "A" to the sound a.

"General reading ability is independent of background information."

I don't think it is. For example, I can read English very well, but I read French only poorly, and I read Arabic not at all. If there was a general reading ability, independent of background information, surely I could read any language with equal ease?

"A capable reader will not 'comprehend/understand' text that has a large proportion of TechLex the reader is unfamiliar with. When the measurement of 'reading ability' is thoroughly contaminated in this way, we don't get a measure of reading expertise."

What you call contamination I call the essence of reading. When I think about the gains that come from reading, I think of things like the ability to comprehend a newspaper written in the country the student is living in.
Decoding is necessary to have, but if a kid can decode the English alphabet in the sense of turning letters into intelligible words, but doesn't have enough background knowledge to comprehend, say, an article in their local newspaper about the latest state elections, and they can't understand the instructions on a bottle of medicine in any language, they can't read, by my definition. And what's the point of defining reading as only the ability to decode words the student already knows?

"We get a measure of 'declarative knowledge/crystallized intelligence.' g is very difficult to change."

If by g you mean "declarative knowledge/crystallized intelligence," then surely g is very easy to change? If g is hard to change, how come you figured out how to use HTML tags so quickly? How come babies normally learn to cope so well with whatever culture they are born into, if g is hard to change?

Intelligence may be hard to change, some people have learning disabilities that make some or all learning hard, and some forms of learning are hard work for everyone (e.g. times tables, because it's not enough to get the gist right but the exact details), but for normal human beings, learning things like vocabulary, stories, relations between people, etc. strikes me as quite easy.
Your distinctions are close enough for school use, Ken. Let's see if we can clarify the terminology without further entangling. I'm not making these terms up. My reference is the Cambridge Handbook of Expertise and Expert Performance.

Knowledge is categorized into two parts: declarative and procedural. The declarative doesn't have to be verbal. It can be tacit or implicit. It's the knowing part. The procedural usually involves motor performance of some sort. It's the doing part.

Intelligence is also categorized into two parts: fluid and crystallized. To oversimplify, fluid is the innate part and crystallized is the learned part.

Tracy: Applying these distinctions may or may not help you straighten out your remaining confusion, but it's the best I can do.

One more time on General vs. Technical Lexicon. GenLex is anything but an empty set. It's what most people refer to as "vocabulary." The reason for dividing Lexicon into General and Technical is that these differ for individuals. Most kids have the Technical Lexicon of Kindergarten in GenLex. But for kids who don't, it's TechLex and has to be taught.

Of course, there is no general reading expertise that extends to all languages. Each language has its own code. If you can handle that Code and if you have the cognates in another language and syntax you'll be able to comprehend what you read. There are some terms, like savoir faire, that have no strict counterpart in English. They constitute TechLex that may be converted into GenLex.

All of these matters can be addressed instructionally, independent of SES. But if the distinctions are made, lower SES kids will unintentionally but inherently be disadvantaged instructionally.
Oops. Make that read: "if the distinctions are not made..."
So, Dick, you are saying that a word always starts off in someone's TechLex, and then with familiarity moves into GenLex? If that's your definition, then I can agree with you that there is no point in teaching GenLex. In which case I can understand your definition; it just appears to have absolutely no implications for teaching.

If you can handle that Code and if you have the cognates in another language and syntax you'll be able to comprehend what you read.

I don't believe you, and I notice that you don't bother trying to provide any evidence for your assertion here. The reason I don't believe you is that I couldn't answer the baseball questions in the Core Knowledge inferencing test, even though I knew every single word taken individually and the grammar was right. Again, if there were a general reading ability that was independent of background information, I would be able to answer the baseball questions easily. Instead I run into the same problem as with French or with Arabic (transliterated into the English alphabet): I don't have the background information. And this is the experience of cognitive psychology too; see, for example, this summary of the research by E. D. Hirsch. Can you explain why you think the cognitive scientists have got it wrong? Or has Hirsch mis-summarised the material?
Dick you are saying that a word always starts off in someone's TechLex, and then with familiarity moves into GenLex?

Unfortunately, that's a very awkward way of putting it. A General Lexicon for aggregate kids can be compiled. It's the everyday language kids acquire at home, in their everyday experience, watching TV, having books read to them, and so on. A Technical Lexicon can also be compiled by analyzing the communication of teachers and textbooks. These Lexicon compilations are "there."

For any individual kid, a given term or concept may or may not be "there." By using high-frequency words and then adding a rule of thumb such as "speaking in full sentences," we can be sure (+/- a tad) that kids will have the minimum GenLex prereqs on K entry. But even K has a TechLex. If the terms aren't in a kid's GenLex, the terms don't start in the TechLex and move. They have to be taught, and then they are a part of the kid's GenLex.

As instruction increases in complexity, we can be more sure of the TechLex that will require instruction. This TechLex has been compiled for each elementary school subject and is accessible on the Social Science Research Network: http://ssrn.com/author=1199505 I've further suggested that focusing on TechLex is the best way to get at the operational definition of "background information."

I'm not contending there is a general reading ability. I'm contending, in the words of the cognitive psychologist D. B. Elkonin, "He who, independently of the level of understanding of words, can correctly recreate their sounds, is able to read." I'm also contending, in the words of the philosopher/linguist/cognitive psychologist Steven Pinker, "any successful reading technique has to begin with an understanding of the logic of our alphabet."

My writing on "comprehension" over the years has been consistent with the current position of Dan Willingham and Don Hirsch. So you are barking up the wrong straw man in that regard.
Dick, I think it's safe to say that I am entirely confused by your definitions of GenLex and TechLex. As we are apparently agreed that kids who enter school not knowing concepts or words like primary colours or first-last should be taught them by the school, I don't see any point in trying to understand further.

As for your other points, your quote from Elkonin is illuminating: it appears that you and Elkonin are defining the word "read" distinctly differently from how I use it, and I think from how much of the general public defines it, given that so many reading tests are purely pencil-and-paper rather than the oral ones that would be necessary to test whether kids can correctly recreate sounds. What's more, I think that schools should be teaching reading comprehension as well as Elkonin's reading; most of the benefits of reading appear to come from being able to understand something, not just correctly recreating sounds.

Thank you for saying "I'm not contending there is a general reading ability." I appreciate an honest arguer who is willing to change their mind in the middle of a debate. I probably don't do that enough myself, so I am impressed by your intellectual honesty on this point.

I notice that you still have provided no explanation of how I could fail the baseball inferencing test, nor have you explained why what the cognitive scientists say about reading conflicts with your claim that "If you can handle that Code and if you have the cognates in another language and syntax you'll be able to comprehend what you read." Instead you have simply added another unsupported assertion that your position is consistent with the current position of Dan Willingham and Don Hirsch, which I also don't see any reason to believe.