The post just before cites an example of a score of 228. Since 228 is 128 points above the mean of 100, a symmetric distribution would also have to allow a score of -28, which is not possible, so the distribution cannot be normal. There is at least one counterexample.
Granted that the vegetative coma was perhaps a wrong Gedankenexperiment, I understand if they don’t count.
When Vos Savant took an I.Q. test as a child, there was a different definition (at least for some researchers) of what an I.Q. score meant. It was literally a quotient: the mental age divided by the actual age, times 100. That definition is no longer used; the normal-curve definition is now used. It’s ridiculously impossible to have an I.Q. of 228 under the normal-curve definition. If you were the smartest person among all 108,000,000,000 people who have ever lived up to now, your I.Q. would be about 200 (which means you’re six and two-thirds standard deviations above the mean). 228 would mean you’re eight and a half standard deviations above the mean. That would mean that if the human race continues to exist for gazillions of years, with gazillions of people living at any one time, you would still be the smartest person ever among them.
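To put rough numbers on that, here is a minimal sketch, assuming the usual mean-100, SD-15 convention and using scipy for the normal tail:

```python
from scipy.stats import norm

# IQ under the normal-curve definition: mean 100, standard deviation 15.
for iq in (200, 228):
    z = (iq - 100) / 15                 # standard deviations above the mean
    p = norm.sf(z)                      # one-sided tail probability
    print(f"IQ {iq}: z = {z:.2f}, roughly 1 in {1 / p:,.0f}")
```

An IQ of 200 already corresponds to roughly one person in tens of billions; 228 corresponds to a rarity many orders of magnitude beyond the number of humans who have ever lived.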
That is what I was claiming, only not just for that reason but also because of the impossibility of measuring it. But keeping the debate at the extremes of the curve, limiting it to the range from 0 to 200: how can you distinguish or measure whether someone has an IQ of 7 or of 11? This person would be dumber (if that is the correct term in this context) than a chimpanzee (postulated above as scoring about 20-25, I believe). How can you assign a meaningful value above 160 or below 40 with any semblance of precision?
You can’t. The general rule is that you should never assign anyone an I.Q. above 160 or below 40. Those scores are already four standard deviations away from the mean. You would have to give the I.Q. test you’re using to hundreds of thousands of people even to claim that one of the people who took the test has an I.Q. of 160 or 40. It’s just too difficult to distinguish the I.Q. for people who are supposedly above that or below that.
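As a rough back-of-the-envelope check (again assuming the mean-100, SD-15 convention):

```python
from scipy.stats import norm

p = norm.sf(4)                                    # one-sided tail beyond 4 standard deviations
print(f"P(IQ > 160) = P(IQ < 40) ≈ {p:.1e}")      # about 3.2e-05, i.e. roughly 1 in 31,600
print(f"Expected scorers above 160 per 100,000 test-takers: {100_000 * p:.1f}")
```

So even a norming sample in the hundreds of thousands contains only a handful of people that far out, which is why scores beyond that range rest on almost no data.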
What is any IQ score supposed to be reflecting with precision, aside from a person’s performance relative to other people who took the same test in the same year?
In a lot of ways, these tests become self-fulfilling prophecies. 6-year-old Bobby shows talent for basketball and, as a result, spends the next 12 years getting lots of extra basketball attention to make sure he has the chance to nurture those skills. Then we nod sagely and say, “You see? That test was right. Bobby is a truly gifted basketball player.”
That’s not to say that some people aren’t naturally inclined to be good at sports, but these tests are often used to gatekeep enrichment opportunities which would benefit anybody who had access to them.
An IQ test sorts people from lower to higher scores, so if somebody has an IQ score of 160, and the next person is “better”, his or her IQ score must be higher. So you have to put a value on that, at least 161. My hypothesis is that the usual tests are not subtle enough to do that reliably at the extremes. There are too few questions on them, and too few people have taken them, to be able to distinguish between the very smartest (and also between the very lowest scorers). Therefore I doubt that the IQ distribution curve is a normal curve at the extremes.
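One way to see why the ranking breaks down at the top is a toy simulation (not any real test; the item difficulties and the logistic response model below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def raw_score(true_iq, n_items=40):
    # Hypothetical 40-item test: each item is answered correctly with a
    # probability that rises with true ability (a logistic, IRT-style model).
    difficulty = np.linspace(70, 150, n_items)             # item difficulty in IQ units
    p_correct = 1 / (1 + np.exp(-(true_iq - difficulty) / 10))
    return rng.binomial(1, p_correct).sum()

for iq in (100, 130, 160, 175):
    scores = [raw_score(iq) for _ in range(2000)]
    print(f"true IQ {iq}: mean raw score {np.mean(scores):.1f} (SD {np.std(scores):.1f})")
```

Once the true ability clears the hardest items, everyone bunches up near the maximum raw score and the test can no longer tell a 160 from a 175.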
I am not doubting the usefulness of the IQ test for most people, one or two standard deviations away from the mean. And IQ is certainly useful for diagnostic purposes for values below 60 or 70. Call them “extremely low” and help those people as well as you can. But putting a numerically precise value on their IQ makes no sense. The same goes for people above 160 or so, only instead of therapy and help you give them a pat on the back and call them geniuses.
Thing 1) There are batteries of tests which can be given to measure a cognitive ability / intelligence. You sit down with a psychologist and do lots of different things which measure lots of different skills and traits. You receive individual scores in each of these categories.
Thing 2) An “IQ score” represents a composite of all those different categories. So the IQ score is not at all useful as a diagnostic because it is, itself, the reflection of diagnostics.
The problem - and misunderstandings - come from assuming that Thing 2 reflects the same amount of information as Thing 1. Thing 2 can be useful in acknowledging that most people who approach certain points of the bell curve are increasingly likely to exhibit certain characteristics. That’s it.
But the thing is that you have it backwards in the quoted portion. The diagnostics (Thing 1) are at their most useful for individuals on either end of the bell curve because they’re almost always going to need different educational approaches. If you only tell a school that Johnny has an IQ of 70 or 75 or 80 or 85, that’s not categorically useless information but it’s pretty close.
The meat of the diagnostics that informed the score are going to have lots of useful information, although even that information needs to be combined with many, many other observations and assessments to get the full picture.
Where I wrote “usefulness” in the first line of your quote I meant precision; I got ahead of myself, as the usefulness comes in the second line. So I am saying that IQ scores and rankings are most precise in the middle, where they matter less, since a person with an IQ of 95 or 105 will be equally at ease in society performing most tasks, and fuzzier at the extremes, where they have the most value (therapeutically), since a person below 70, be it 67 or 63, needs the most help (and special classes for those above 140, be it 142 or 153, so they don’t get bored and misdiagnosed with ADHD). I don’t think we disagree that fundamentally, although you seem to be better informed about the theory and methodology behind IQ tests than I am. I just dabbled in them as a hobby for a while because I liked to solve them (the word “solve” is probably revealing, and a professional might not use it).
And I still suspect that the IQ distribution is not normal at the extremes, but that is not a matter of life or death, just an observation I made, a suspicion.
In the linked video she says that it is her real name (at 1:35), and she doesn’t even translate it very accurately (“it’s supposed to mean wise man or something like that.” Duh). But yeah, I thought it was her pen name too, to begin with.
When I was in 5th grade (1959), we took IQ tests in school. The score was supposed to be secret, but I was a teacher’s pet and snuck a look at my file. Fast forward to 1977. I was in graduate school taking Ed Psych. We took a bunch of tests in a testing class, including an IQ test. My number was exactly the same, 143. So there’s that.
My similar story: A school teacher once lambasted me with “What’s a kid with an IQ of 1xx doing fooling around like this?” A few retests over the years were remarkably consistent.
A man who is a genius and doesn’t know it, probably isn’t. (Stanislaw Jerzy Lec)
IQ does have extremely high test-retest reliability. That is one of the things that makes it useful in psychometrics. Scores still change over time, but there is a high correlation across time points.
Originally IQ tests were specifically designed to predict “success.” Any questions that weren’t predictive of success were discarded. Of course that also meant questions that favored white men of means were retained.
This is statistics, so it deals in probabilities, not exact values. A distribution can still be normal even if there isn’t exactly the same number of cases at the corresponding +/- values. Normal distributions can also be truncated. Extreme cases of this show up in measures of things like mental health or drug use. The number of symptoms of depression may look like half of a normal distribution, because the lowest value on the test is zero symptoms, and resistance to depression is not measured, just symptoms of depression.
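A quick illustration of that truncation point (a sketch with made-up numbers, not real clinical data):

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical latent "distress" trait that is normally distributed, and a
# symptom count that cannot go below zero: the observed scores pile up at 0
# and look like roughly half of a normal curve.
latent = rng.normal(loc=0, scale=3, size=100_000)
symptom_count = np.clip(np.round(latent), 0, None)
print("share of people scoring exactly 0:", np.mean(symptom_count == 0).round(2))
```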
I think in cognitive psychology there is a subtle difference between IQ and the g factor. When many tests are given that measure different things, such as fluid intelligence, crystallized intelligence, memory, etc., the g factor comes from a factor analysis and represents the latent trait that is responsible for the positive correlation between the measured things.
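A minimal sketch of that idea, using simulated subtest scores and scikit-learn's FactorAnalysis (the loadings and subtests here are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

n_people = 5_000
g = rng.normal(size=n_people)                        # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5])       # how strongly each subtest reflects g
noise = rng.normal(scale=0.6, size=(n_people, 5))    # subtest-specific variation
subtests = g[:, None] * loadings + noise             # five correlated subtest scores

fa = FactorAnalysis(n_components=1, random_state=0)
g_estimate = fa.fit_transform(subtests).ravel()
print("correlation of extracted factor with true g:",
      round(abs(np.corrcoef(g_estimate, g)[0, 1]), 2))
```

The single extracted factor tracks the latent trait closely even though no individual subtest measures it directly.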
IQ is often measured directly on a test like the Stanford-Binet or WAIS. Being a latent factor, g cannot be measured directly by definition.
IQ and g are highly correlated. This can be important when it comes to the practical aspects of testing. A brief 20-question or so IQ test can capture a tremendous amount of the meaningful variance in IQ, or g, without burdening the subject (and researchers) with hours of testing. The purpose isn’t to tell the difference between 105 and 108, but between 105 and 120.
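One way to express that trade-off is through the standard error of measurement, SEM = SD * sqrt(1 - reliability); the reliability value below is just an illustrative assumption for a brief test:

```python
import math

sd = 15                 # IQ scale standard deviation
reliability = 0.90      # assumed reliability of a brief test (illustrative)
sem = sd * math.sqrt(1 - reliability)
print(f"standard error of measurement ≈ {sem:.1f} IQ points")   # ~4.7
# A 3-point gap (105 vs 108) sits inside one SEM; a 15-point gap (105 vs 120) is ~3 SEMs.
```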
You’re right, now I wonder what I was remembering. The history goes back to tests originally developed by Alfred Binet to screen “slow” children for the French education system.
I agree that IQ scores of doctors are not going to be a random distribution, but I also think that it’s at least possible that someone with a 90 IQ could be a doctor. The abilities that an IQ test measures and the abilities required to get into and succeed in med school are pretty highly correlated but not perfectly.
The one that I’m aware of is that IQ tests generally don’t require any particular ability to memorize things, but medicine does quite a lot. Someone with a really astounding memory and lower-than-average IQ could plausibly lean hard on their memory to be a successful medical student and probably a pretty good doctor too.