Is intelligence distributed in a normal distribution?

We have been having the usual periodic spate of IQ threads lately, but this is a narrowly focused question with, I hope, a factual answer.

IQ tests all operate on the assumption that whatever they actually measure (call it “intelligence” for the duration of this thread) is distributed in a Gaussian (normal) manner. Is that so? Was it ever so?

My suspicion is that it is not currently so for most Western populations, if for no other reason than the Flynn effect, which has moved the lower half upwards while affecting the upper half to a lesser degree. The result of such a transformation is a very non-normal curve.

Please, no discussion about race, religion, ethnicity, genetics, culture, or nationality. The question is merely asking whether there is any evidentiary basis that the distribution of “intelligence” is now, or ever was, actually normally distributed, or whether that has always just been an assumption made for statistical convenience.

Thank you.

By the modern definition, IQ is normally distributed by construction. That is, the IQ number is scaled so that there is a Gaussian distribution of people along the scale.

But in order to achieve this, the scale has to be “distorted”, so that one point at the tail does not equal one point at the centre.
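That norming step can be sketched in a few lines. This is a stdlib-only illustration, not any actual test publisher's procedure; the raw scores and the function name `deviation_iq` are made up. Each raw score is converted to a within-sample percentile and then pushed through the inverse normal CDF with mean 100 and SD 15, so the reported scores come out Gaussian no matter what the raw distribution looked like:

```python
# Hypothetical sketch of deviation-IQ norming: rank each raw score,
# convert to a mid-rank percentile, then map through the inverse normal
# CDF with mean 100 and SD 15. Any monotone raw-score distribution
# comes out looking Gaussian by construction.
from statistics import NormalDist

raw_scores = [12, 15, 15, 18, 22, 22, 23, 29, 31, 40]  # made-up test data
n = len(raw_scores)
iq_scale = NormalDist(mu=100, sigma=15)

def deviation_iq(score):
    # fraction of the sample at or below this score, nudged by half a
    # rank so the percentile stays strictly inside (0, 1)
    rank = sum(s <= score for s in raw_scores)
    pct = (rank - 0.5) / n
    return iq_scale.inv_cdf(pct)

for s in sorted(set(raw_scores)):
    print(s, round(deviation_iq(s), 1))
```

Note how the “distortion” falls out automatically: a one-point gap in raw score near the middle of the pack moves the IQ number much less than the same gap out in the tails.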

According to the “old” meaning of the term “IQ” (a ratio of mental age to chronological age), the scale was constant, and the measured result was that the distribution was not exactly “normal”.

In the original definition, a person with less intelligence than a newborn babe would have a negative IQ, less than zero. In the current definition, a person with “negative intelligence” is not scaled in relation to a zero-year-old at all. But either way, it’s quite meaningless.
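The “original definition” referred to here is the ratio IQ: mental age divided by chronological age, times 100. A minimal sketch (the function name is mine, and “mental age” is itself a test-derived estimate, not a direct measurement):

```python
# Original ("ratio") IQ: mental age over chronological age, times 100.
# Under this definition the scale has fixed units, and nothing about
# the formula forces the population distribution to be normal.
def ratio_iq(mental_age, chronological_age):
    return 100 * mental_age / chronological_age

# a 10-year-old performing like a typical 12-year-old
print(ratio_iq(12, 10))
```

Zero on this scale corresponds to a mental age of zero, which is why the poster above notes that the far-low end of the scale has no measurable meaning.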

By both definitions, the zero end of the scale refers to values that can’t be measured by the standard test and are meaningless in comparison to the central values. (This is different from merely saying the central values are meaningless in comparison to other central values.)

However, to the extent that it was measurable, the original scale was measurably non-Gaussian at the tails. Using the current definition, to the extent that it is measurable, the values are scaled, or the test is designed, to give a Gaussian distribution.

If there were an absolute measure of “intelligence”, then you wouldn’t be able to choose which scale you wanted to use for your distribution. The fact that you can choose any distribution you want just points to the fact that there is no agreed, or even proposed, absolute scale of intelligence.

I can picture that working for a raw-score distribution that is mostly bell-shaped but skewed toward one end or the other … but it seems like there would be a limit to how much you could distort a scale to fit a distribution that varied widely from that bell-shaped form. Perhaps that merely represents the limits of my thinking about monotone transformations of data.
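For what it’s worth, the distortion is less limited than intuition suggests: a rank-based (“quantile”) transform can reshape essentially any continuous distribution, however skewed, into a bell curve, because it only needs to preserve ordering, not spacing. What it cannot fix is ties — everyone with the same raw score must get the same scaled score, so a lumpy or coarsely discrete raw distribution stays lumpy. A stdlib-only sketch with simulated right-skewed raw scores (the function names are mine):

```python
# Quantile-transform a heavily right-skewed sample to a normal scale and
# check that the skewness disappears. Ordering is preserved; only the
# spacing between scores is "distorted".
import random
from statistics import NormalDist, mean, stdev

def skewness(xs):
    # third standardized moment: positive for a right-skewed sample
    m, s = mean(xs), stdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def to_normal(xs, mu=100.0, sigma=15.0):
    # replace each value by the normal quantile of its mid-rank percentile
    nd = NormalDist(mu, sigma)
    ranked = sorted(xs)
    return [nd.inv_cdf((ranked.index(x) + 0.5) / len(xs)) for x in xs]

random.seed(0)
skewed = [random.expovariate(1.0) for _ in range(1000)]  # right-skewed raw scores
scaled = to_normal(skewed)

print(round(skewness(skewed), 2), round(skewness(scaled), 2))
```

The transformed scores are symmetric about 100 by construction, which illustrates the earlier point: you can choose essentially any target distribution you like, which is exactly why the observed normality of reported IQ tells you nothing about the underlying quantity.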

Anyway, thanks. So: not a normal distribution in terms of raw scores, and possibly never was; it is normal only in terms of the reported score, with lots of transformation applied to make it so by best possible fit. If I got that right.