The Wikipedia page proposes a number of possible explanations. Education is one of them, but they point out that the effect exists even when comparing groups who had similar education levels - that is, take people who earned four-year degrees in 1920 and people who earned four-year degrees in 2020, give both groups the same test, and the latter group will score higher.
I think there’s been a cultural shift over the years and decades in how people think, and how much people think, and it’s made us better at identifying patterns than we used to be. The Wikipedia page notes that “the general environment today is much more complex and stimulating,” and people today do a lot more abstract intellectual work than people of the past.
Nutrition also seems like a pretty compelling factor. If you grow up malnourished because your parents couldn’t find or afford a nutritious diet for you, then your brain is less likely to be capable of great intellectual feats. In this way, poverty really is a trap.
Just because the average (mean) is, by definition, 100, doesn’t mean that this is also the median.
If I understand correctly, IQ scores tend to be normally distributed, which would imply that, yes, half the scores would be below average. But this is not true in general.
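To see why it’s not true in general: in a skewed distribution, one extreme value can drag the mean well above the median, so far more than half the values sit below the mean. A minimal sketch with made-up numbers:

```python
# Sketch: in a right-skewed distribution, well over half the values
# can fall below the mean. The scores here are made up for illustration.
from statistics import mean, median

scores = [10, 12, 13, 14, 15, 16, 18, 95]  # one extreme high value
m = mean(scores)    # pulled upward by the outlier
md = median(scores)  # unaffected by how extreme the outlier is

below_mean = sum(1 for s in scores if s < m)
print(f"mean={m:.1f}, median={md:.1f}, {below_mean} of {len(scores)} below the mean")
# → mean=24.1, median=14.5, 7 of 8 below the mean
```

With a symmetric distribution like the normal, the mean and median coincide, which is why the “half below average” quip works for IQ in particular.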
Another objection: Just because 100 is the mean for IQs in general doesn’t mean that 100 will be the average on “an IQ test.” It depends on who is taking the test, and why. A test to get into Mensa, for example, would probably self-select for people with higher-than-average IQ.
But making a strictly true statement, with all the qualifiers, would ruin the brevity and the punch. Tom Weller did this in his Science Made Stupid when he worded it as “Half the people have below-median intelligence”. It’s more accurate, but as humor it doesn’t work as well as “below average”.
And he ignores the case of special testing, as you imply! Just like me!
He or I shoulda said “It’s a sobering thought that half the people in any representative sampling of the population ought to have a below-median intelligence,” which would be a real thigh-slapper.
Chronos, this is clearly impossible barring infinities, in which case the average may not be defined. It might be possible if you are speaking of some strange type of average. But every average that I know of has the property that it weakly exceeds the minimum value.
Suppose I offer to play a game with you: The jackpot starts at $1, and you flip a fair coin. If you ever flip a heads, the game is over, and you win the current value of the jackpot. If you flip a tail, then you double the jackpot and repeat the process. What is the expected value of this game?
There you go; a distribution where every outcome is below average. Now just take the negative of it, and you have one where every outcome is above average.
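The game described above is the classic St. Petersburg paradox: you win 2^k dollars with probability 2^-(k+1), so each possible outcome contributes exactly $0.50 to the expectation, and the sum diverges. A quick sketch of the partial sums:

```python
# Sketch of the game above (the St. Petersburg paradox): winning
# 2**k dollars has probability 2**-(k+1), so every term of the
# expectation contributes exactly $0.50 and the total diverges.
def partial_expected_value(rounds):
    """Expected value counting only games that end within `rounds` flips."""
    return sum((2 ** k) * (0.5 ** (k + 1)) for k in range(rounds))

for n in (1, 10, 100):
    print(n, partial_expected_value(n))  # grows as n / 2, without bound
```

Since the expectation is infinite, every actual (finite) payout is below average, which is the point.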
On another topic: IQ does not “tend” to be a normal distribution. It is one, by definition. The test score results, whatever they are, are mapped onto IQ numbers in such a way as to make the distribution normal.
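The final step of that mapping is easy to sketch: once the norming sample tells you what percentile a raw score corresponds to, the IQ score is just that percentile read off a normal curve with mean 100 and SD 15. (The norming tables of a real test are empirical; this only shows the last conversion.)

```python
# Sketch: mapping a percentile rank onto the IQ scale, which is defined
# as a normal distribution with mean 100 and SD 15. The empirical part -
# turning a raw test score into a percentile - is what renorming does.
from statistics import NormalDist

iq_scale = NormalDist(mu=100, sigma=15)

def iq_from_percentile(p):
    """Map a percentile rank (0 < p < 1) to an IQ score."""
    return iq_scale.inv_cdf(p)

print(round(iq_from_percentile(0.50)))  # 100, by construction
print(round(iq_from_percentile(0.98)))  # about 131, roughly +2 SD
```

This is also why the Flynn effect requires renorming: if raw scores drift upward, the percentile-to-IQ table has to be rebuilt so the mean stays pinned at 100.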
Standardized IQ tests measure both crystallized (school-based or learned knowledge) and fluid (application of strategy to novel tasks) intelligence. Some subtests are timed and some are not. Some IQ tests include more tasks that include a kinesthetic component. The idea is to use a range of subtests that load for g, which is a statistical construct for general intelligence. The variety of subtasks helps to statistically balance for unusually great or poor performance on a particular subtest (for example, people who quilt do better on a subtest that uses shapes to make patterns). Renorming has a culture-based component (which is also a reasonable critique of IQ test construction in general). Therefore, I would not expect to score 30 points (2 SD) higher than an “average person” from 100 years ago, because some of the items would be inexplicable or not reinforced by my education or culture. In fact, I would score lower on a British version of the WAIS than on the US version, because some of the questions differ and reflect British culture and linguistic differences.
Exactly. IQ tests measure ability with vocabulary, reading, and especially reasoning and problem solving. The thing that tends to expand the human brain’s power is a variety of experience.
People who are widely read have encountered more and more varied situations - ditto for people who have watched a lot of media (and what’s the one thing modern humans do a lot of?). Of course, what that content is also matters. You won’t get as smart reading nothing but romance novels and watching sitcoms.
I thought of this one day watching a movie and realizing “I’ve never been to a major racetrack horse race. In fact, it’s a dying business. But I cannot count the number of times over the years I’ve seen horse races on TV and in movies.” This applies to so many of life’s experiences - parachuting, rock climbing, Formula 1 races, roaming many of the classic cities of Europe or Asia, visits to a myriad of UNESCO World Heritage sites. Talk shows, news shows, and even trivia game shows develop our store of knowledge and how it is applied to life. The internet takes it up a notch - you can watch some guy on YouTube make all sorts of objects from mud and other material found in the jungle, for example. Simple math is just the most basic of these learn-and-apply processes.
There’s a process in our brain that takes information and integrates it. A good IQ test measures how well you apply this integration to novel questions. A bad one (like the SAT) is simply a measure of your memory retention.
Yes, a real IQ would be like a strength test - some people can run marathons, others bench press hundreds of pounds, some people have amazing grip strength. Similarly, mental capability involves many tasks, and a blend of those measurements is only an approximation of “how smart are you?”
And finally, success in life depends on things other than smarts, or problem solving plus memory. Humans are social animals, and successful managers - who get paid the most - have skills over and above the technical.
And let’s not forget Freakonomics - you can’t argue with statistics. The best indicator of your probable success in life is how educated your parents were - I assume an indicator of (a) the genetic component of brainpower and (b) the chance you will be properly pushed and motivated to a good education and resulting good career.
OTOH - “odds” is not a guarantee. Some very successful people came from humble roots. Obama’s grandparents were farmers, his mother had a PhD, and he was also educationally outstanding. Bill Clinton, a Rhodes scholar, came by his autobiography from an almost white-trash background. Ronald Reagan came from humble roots. Jimmy Carter’s dad was a peanut farmer. Etc.
Most IQ tests don’t measure vocabulary at all. Many of them are completely nonverbal, beyond the minimum amount of language (any language) needed to give the instructions, which can be given entirely using language a barely-speaking child would understand.
To be more precise, vocabulary does correlate with the other various measures of IQ. And it’s standard practice in the sciences, when you have a bunch of tests that all correlate with something that you can’t measure directly, to do all of those tests and take a weighted average of them. So if you want to measure IQ, you do a whole bunch of tests, including vocabulary tests, and you average them together. But if you have some reason to expect that, in one particular case, vocabulary won’t correlate well with IQ (for instance, a person whose native language is one for which no standard vocabulary test is available), then you can leave the vocabulary components out for that case, or weight them very low in the averaging, and still get a good measure.
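The reweighting idea is simple to sketch: combine the subtests with weights, and when one component is dropped, renormalize the remaining weights so they still sum to one. The subtest names, scores, and weights below are made up for illustration - real batteries derive their weights empirically from factor loadings.

```python
# Sketch of the reweighting idea: a weighted composite of correlated
# subtests, renormalized when one component (here "vocabulary") is
# excluded. Names, scores, and weights are hypothetical.
def composite(scores, weights, exclude=()):
    kept = {k: w for k, w in weights.items() if k not in exclude}
    total = sum(kept.values())
    return sum(scores[k] * w for k, w in kept.items()) / total

scores  = {"vocabulary": 110, "block_design": 120, "matrices": 118}
weights = {"vocabulary": 0.4, "block_design": 0.3, "matrices": 0.3}

print(composite(scores, weights))                          # all subtests
print(composite(scores, weights, exclude={"vocabulary"}))  # vocab left out
```

Because the remaining weights are renormalized, the composite stays on the same scale whether or not the vocabulary component is included.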
Also the WAIS and Stanford-Binet. There are a number of other IQ and IQ-like measures that use vocabulary and math, such as the Wonderlic. In fact, a quick proxy for g is to use just the vocabulary and block design subtests of the WAIS.
ETA: The WAIS breaks out factors as well as generating various IQ groupings; it’s not just a matter of averaging raw scores.