What if there is some sort of intelligence drift that makes people progressively dumber like Idiocracy? Or smarter? The “average” IQ would not be fixed then. Wouldn’t you have to tweak tests so the average score was 100, which would not then be a real measure of absolute intelligence over time, creating a chicken and the egg scenario?
The average score is by definition 100. Give the test to lots of people, look at the average, that is what 100 is.
BTW, IQ tests are at best a pseudo-science and at worst racist, classist, total claptrap. (A great book on the topic.)
They do tweak things. When they update the tests, they test a sample of people and standardize the scores so the average is 100 and the standard deviation is about 15.
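For concreteness, here is a minimal sketch of what that standardization amounts to, assuming a simple linear rescaling of raw scores and a made-up norming sample (the `renorm` helper is just a hypothetical convenience; real test publishers use more elaborate, age-banded methods):

```python
# Minimal sketch of renorming: linearly map raw scores onto the IQ scale
# (mean 100, SD 15). The sample below is invented for illustration.
import statistics

def renorm(raw_scores):
    """Map raw scores onto the IQ scale: mean 100, standard deviation 15."""
    mean = statistics.mean(raw_scores)
    sd = statistics.stdev(raw_scores)
    return [100 + 15 * (x - mean) / sd for x in raw_scores]

# Hypothetical norming sample: number of right answers out of 60 questions.
sample = [22, 28, 31, 35, 35, 38, 40, 44, 47, 52]
print([round(iq) for iq in renorm(sample)])
```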
Unstandardized, the average IQ has increased over time. This is known as the Flynn effect.
James Flynn himself gave an informative TED talk about this.
A wide variety of very different sorts of tests give results which are very well correlated with each other. Clearly, there’s some trait that those tests are actually measuring. An individual’s score on those tests at different ages, before, during, and after schooling, correlate very well. So whatever it is they’re measuring isn’t just a reflection of education. And scores on those tests correlate fairly well with various measures of life success, so whatever it is they’re measuring, it’s something that we probably should care about.
Except that it’s not clear what it is, nor what can or should be done with whatever we do know about it.
Yes, it measures something, we just can’t really tell what for sure, except for how one performs on IQ tests. The pseudo-science is in the people who think they know what that means and what should be done with it. Otherwise, it’s a little unfair to call it a pseudo-science overall because they use sound procedures for collecting and evaluating data. They do a very good job of measuring how well people perform in an IQ test.
It’s a sobering thought that, by definition, half the people test below 100 on an IQ test.
But Chronos noted that these tests do correlate with things like life success. So it is more than just being born a good test taker. It seems to suggest how well you may perform at other tasks.
Of course, we all know idiots who are successful and really smart people who are failures (at least, I have personally seen both). But these things are broad-stroke measures.
I will say, if you look at the most successful people in the US as measured by fame and/or fortune they are almost never the smartest people out there (or if they are, they hide it well).
Not sure that is true. With whole-number scores on a normal curve, about 3% of people will get an **exact** score of 100. The remaining 97% are split equally between the strictly lower and strictly higher categories.
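A quick back-of-the-envelope check of that 3% figure, assuming IQ is normally distributed with mean 100 and SD 15 and that an "exact" 100 means anything that rounds to 100 (the `cdf` helper below is just a hypothetical convenience function):

```python
# Rough check of the "about 3% score exactly 100" claim, under a normal
# model with mean 100 and SD 15, counting anything between 99.5 and 100.5.
from math import erf, sqrt

def cdf(x, mean=100.0, sd=15.0):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

p_exactly_100 = cdf(100.5) - cdf(99.5)
print(f"{p_exactly_100:.2%}")  # about 2.7%
```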
Sure it is, and it’s straightforward: there is a strong positive correlation in performance for different types of cognitive task. It explains roughly half of the variance in performance between individuals on a specific cognitive task, and it’s not remotely controversial or pseudoscientific that it exists, that it’s highly heritable, and that it’s well correlated with various social outcomes.
Of course, how to measure it and what you might choose to do with the information are far more controversial subjects.
I don’t dismiss that, except that it may be measuring nothing but how good people are at taking tests in general, which would lead to better performance in school, which would lead to better success in life under current economic conditions. There are plenty of other ways that could suggest how someone will perform other tasks, and those may not correlate with IQ at all. I don’t deny that IQ is measuring something, it’s just not clear what outside of the context of the test. Clearly if you do well at the test problems involving spatial relationships you’ll probably be able to use that ability in other ways, but it doesn’t necessarily translate to any other skill.
The fact that new I.Q. tests have to be created (or old ones have to be renormed) every 10 to 15 years doesn’t just mean that a new average score has to be found so that it’s possible to say what score is equal to an I.Q. of 100. It also explains why it’s impossible to say that someone has been measured on an accurate I.Q. test as having an I.Q. well above 160 or well below 40. To norm a new test or renorm an old one, you first give the test to a large group of people whose intelligence you have some reason to believe is spread the way it is in the overall population you’re designing the test for. So if you’re designing a test for all American adults, you need a large group of American adults whose intelligence you believe is spread the way you’d expect for American adults in general. To design a test for American 10-year-olds, you need a large group of American 10-year-olds whose intelligence you think is spread the way you’d expect for American 10-year-olds. And so forth for any other population.
How do you get such a group? You have to find a large group of people willing to take the test. Suppose you have 100,000 people in your group. You give them your new test. You find the score such that 50,000 people get more right answers than that and 50,000 people get fewer. By definition, that score corresponds to an I.Q. of 100. You then find the number of right answers that only three or four people in your group of 100,000 score above. Those people by definition have an I.Q. of 160 or more, because only about one person in 31,000 is expected to have an I.Q. at least four standard deviations above the mean. Four standard deviations on the I.Q. scale is four times 15 points above 100, which equals 160. It was an arbitrary choice long ago that one standard deviation would be 15 points. (Look up standard deviations if you don’t know about them.)
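Here is a toy version of that norming procedure, with a made-up sample of simulated raw scores just to show where the cutoffs come from (the numbers and the parameters of the simulation are invented for illustration):

```python
# Toy sketch of norming: simulate a hypothetical sample of 100,000 raw
# scores, then read off the raw score that maps to IQ 100 (the median)
# and the raw score that only the top handful exceed (IQ 160, i.e.
# roughly 4 standard deviations above the mean).
import random
import statistics

random.seed(0)
# Pretend raw scores (right answers) cluster around 60 with spread 12.
raw = sorted(round(random.gauss(60, 12)) for _ in range(100_000))

iq_100_cutoff = statistics.median(raw)  # half score above this, half below
iq_160_cutoff = raw[-4]                 # only about 3 in 100,000 score above this
print(f"IQ 100 ~ {iq_100_cutoff} right answers; IQ 160 ~ {iq_160_cutoff} right answers")
```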
Being able to give a new test to 100,000 people is already hard enough. Expecting that the three or four highest scorers in your particular group really represent what you’d expect on average from a group of 100,000 people is already shaky. There’s no way to get a group large enough to use an accurate test to say that someone in it has an I.Q. of 200. You will frequently find claims that someone has an I.Q. of 200 or some such. That is nonsense; it’s impossible to measure such an I.Q. 200 is six and two-thirds standard deviations above 100, and a score that far above the average is expected only about once in every 75 to 100 billion people, which is comparable to the number of people who have existed in the entire history of mankind.
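Those rarity figures can be checked directly against the normal distribution, again assuming the usual mean-100, SD-15 scale (the `one_in` helper is a hypothetical convenience function):

```python
# Upper-tail rarity of high IQ scores under a normal model (mean 100, SD 15).
from math import erfc, sqrt

def one_in(iq, mean=100.0, sd=15.0):
    """Return N such that roughly 1 in N people score above `iq`."""
    z = (iq - mean) / sd
    upper_tail = 0.5 * erfc(z / sqrt(2.0))  # P(score > iq)
    return 1.0 / upper_tail

print(f"IQ 160 (4 SD):     about 1 in {one_in(160):,.0f}")  # roughly 1 in 31,600
print(f"IQ 200 (6 2/3 SD): about 1 in {one_in(200):,.0f}")  # roughly 1 in 76 billion
```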
There are many flaws in the detailed implementation of certain tests, stipulated. But the correlation is observed so universally that it strains credulity to suggest that all tests of cognitive ability in scientific studies are totally unrelated to some other kind of “real” cognitive ability.
And your last sentence is simply the straw man fallacy that if correlation isn’t 100% then it doesn’t matter at all. Explaining roughly half the variance means just what it says: the other half of people’s performance on specific cognitive tasks is explained by other factors.
I recall some claims that this is not the case for a small rural town in central Minnesota.
I didn’t mean it to sound that way; I only meant to suggest it can be misinterpreted, not that it would be to such an extreme extent. We can break down the problems on the test and see many ways they correspond individually with specific real-life skills; it’s projecting how they apply elsewhere that is less clear.
Well yes, it did sound that way, fair criticism. Other means of measuring ability don’t nullify the results of IQ tests.
Aside: It is actually possible to have a distribution of some variable such that every specimen is above-average in that variable. Though it won’t happen with a Gaussian distribution (which is what IQ is calibrated to be), and it won’t happen with the average of a population.
Fascinating video. So an “average person” today taking a test from 100 years ago would score 130, and an “average person” from 100 years ago taking today’s test would score 70. And apparently this IS due to education (which shouldn’t figure into an IQ test), because the average person back then was not as educated in the practice of abstract thought as we are.
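The back-of-the-envelope arithmetic behind those numbers, assuming the commonly cited Flynn rate of roughly 3 IQ points of un-renormed gain per decade:

```python
# Back-of-the-envelope Flynn-effect arithmetic, assuming the commonly
# cited rate of roughly 3 un-renormed IQ points gained per decade.
FLYNN_POINTS_PER_DECADE = 3
decades = 10  # 100 years

gain = FLYNN_POINTS_PER_DECADE * decades
print(f"Average person today, on 100-year-old norms: about {100 + gain}")   # ~130
print(f"Average person of 100 years ago, on today's norms: about {100 - gain}")  # ~70
```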
I have watched a lot of old game shows recently (60s and before) and was rather shocked at how dumb they seemed. That probably WAS related to education, and maybe they purposely were selecting dummies to make it funnier. But it did get me thinking about this.