Do IQ tests "Top Out"? If so, how?

I was reading this IQ-related thread and the thought occurred to me, how high do those durned things go anyway?

In practical experience, just noodling about with the online tests, I scored a 138. After that I took the test again, using a cheat sheet to make sure the answers were all as correct as I could make them and it seemed that the test itself topped out at 138.

How do I know that I would (should) not have scored higher than that?

How can a potentially infinite level of intelligence be measured with a test that is restricted to a finite number of right answers?

Or, in the case of more advanced (Mensa) testing, how can we be sure that the scale is the same?

(I can claim that 6" is really 10" but that don’t make it so :wink: )

They use a normal distribution for scoring, with 100 being the 50th percentile. Since the curve asymptotes there isn’t really a maximum or minimum, but higher scores become extremely unlikely; only around 0.1% of people score 150 or above. The ability of the test to distinguish at the extreme ranges also becomes a problem: the validity (the ability of the test to measure what it intends to measure) tapers off, since there is almost no one else to compare you to at the extremes. There is a very standardized way of interpreting the results. I haven’t taken one since 8th grade, but everything I did was either timeable or based on accuracy, etc., not at all like a Rorschach (which I’ve also had, and they are all butterflies, ftr).
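
If it helps to see the numbers, here’s a quick sketch in Python (assuming a mean of 100 and an SD of 16; that’s an assumption, since tests vary):

```python
# Quick check of the figures above, assuming mean 100 and SD 16
# (some tests use 15, which makes the tail thinner still).
from scipy.stats import norm

mean, sd = 100.0, 16.0
for iq in (100, 115, 130, 150):
    pct_below = norm.cdf(iq, loc=mean, scale=sd)   # percentile
    pct_above = norm.sf(iq, loc=mean, scale=sd)    # fraction scoring higher
    print(f"IQ {iq}: {pct_below:.1%} percentile, {pct_above:.3%} score higher")
```

IQ 150 comes out to roughly 0.09% scoring higher, which is where the “around 0.1%” figure comes from; with an SD of 15 it’s closer to 0.04%.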

If you take a real one and suddenly realize you are incredibly intelligent, you may enjoy a high-IQ society (they have one for the 99.9999th percentile; enjoy your 26 friends =).

Thanks, but I think I’ll avoid the high-IQ societies; they seem more than a bit elitist, with a central focus more on politics than anything else (at least the ones that break off and form another competitive faction do).

Thanks again

This is actually a good question to understand because it comes up so often in life. Inflated IQ scores are one of the greatest lies of our time. I have seen them many times on this board and in real life.

Most IQ tests have a standard deviation of either 15 or 16 and a mean of 100. You can see what I mean from this chart.

To find a percentage for a given IQ score you need to calculate what is called a z-score (this is a fundamental and common concept in statistics).

Let’s say that someone has an IQ score of 130 and you want to convert that to a percentile score. Note that the tests were built so that you can do this across a large range of values.

Z-Score = (130 - 100)/15 = 2

where 130 is the score, 100 is the test mean and 15 is the test standard deviation.

You can look up that z-score in charts or statistics books or the link above.

Z-score = 2 is already at the 98th percentile.
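
If you want to check it yourself, here is the same worked example in a couple of lines of Python (a sketch using SciPy’s normal CDF):

```python
# The worked example above: IQ 130 on a mean-100, SD-15 test.
from scipy.stats import norm

score, mean, sd = 130, 100, 15
z = (score - mean) / sd              # 2.0
percentile = norm.cdf(z) * 100       # 97.7, i.e. roughly the 98th percentile
print(f"z = {z:.1f}, percentile = {percentile:.1f}")
```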

That is what most people don’t realize when they talk about z-scores or other data that is normally distributed. There is a huge clump in the middle and things drop off very fast.

You will often hear that someone has an IQ of 130, then another has 145, and another has 160. It doesn’t sound that bad to add thirty points but you have just gone up two whole standard deviations from a score that was very high to begin with.

Once you get up past 3 standard deviations, those scores get extraordinarily rare very fast. We had a person here once who claimed an IQ of 180. The chance of that: somewhere between one in a few million and one in twenty million, depending on the test’s standard deviation.
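
To put rough numbers on “rare” (a sketch, assuming a 15-point SD; a 16-point SD fattens the tail a bit):

```python
# "1 in N people" odds for a few scores, assuming mean 100 and SD 15.
from scipy.stats import norm

mean, sd = 100, 15
for iq in (130, 145, 160, 180):
    tail = norm.sf((iq - mean) / sd)   # fraction of people scoring that high or higher
    print(f"IQ {iq}: about 1 in {1 / tail:,.0f}")
# Roughly: 130 -> 1 in 44, 145 -> 1 in 740, 160 -> 1 in 31,600,
# 180 -> 1 in 20 million (about 1 in 3.5 million if the SD is 16).
```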

Once you get beyond 150 or so, isn’t the score pretty meaningless anyway? Is there going to be a meaningful real-world difference in useful intelligence between someone who scores 150 and someone who scores 170 or higher?

link

There won’t be much empirical data on levels of “giftedness”. Hard to round them all up.

Forgot the particularly relevant part:

The other thing that’s important to remember is that the IQ score *by itself* is meaningless. You also need to know which test was given, when it was given, and how it was normed. The only place where all IQ tests (theoretically) match up is 100.

You could make any kind of IQ test that you like but I am pretty sure the standard ones don’t go anywhere near that high.

Even if you could somehow design a test that measured that high, you would need to norm it and have the scores follow the normal curve.
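
For what it’s worth, norming more or less means forcing a reference sample’s raw scores onto that curve. A toy sketch, where the 60-question test, the 1,000-person sample, and the function name are all made up for illustration:

```python
# Toy norming: map raw test scores onto an IQ-style scale (mean 100, SD 15)
# by converting each score's percentile rank in the norming sample to a z-score.
import numpy as np
from scipy.stats import norm, rankdata

def norm_raw_scores(raw_scores, mean=100.0, sd=15.0):
    raw = np.asarray(raw_scores, dtype=float)
    n = len(raw)
    ranks = rankdata(raw)             # tied scores get their average rank
    pct = (ranks - 0.5) / n           # percentile rank, strictly between 0 and 1
    return mean + sd * norm.ppf(pct)  # rescale onto the IQ metric

# Example: 1,000 simulated raw scores on a 60-question test.
rng = np.random.default_rng(0)
raw = rng.binomial(60, 0.55, size=1000)
iq_like = norm_raw_scores(raw)
print(round(iq_like.min()), round(iq_like.max()))
# With only 1,000 people, the scale can't go much past about 3 SD either way.
```

With a thousand people in the norming sample, the best score you can hand out is about 3.3 SD above the mean (roughly 150), no matter how smart the top scorer really is, which is one way tests “top out”.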

z-score calc:

(200 - 100)/15 ≈ 6.7 SD

That z-score errors out as 1 in infinity in all of the calculators I tried. I think it is safe to say that scores of 200 would be nonexistent if the test had the same standard deviation as the more common ones (and you can’t norm it, because you can’t find people who can score anywhere near that high).
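
For the record, the tail isn’t literally zero; it’s just smaller than those calculators can display. SciPy’s survival function keeps the precision (same 15-point SD assumption as above):

```python
# The far tail for a score of 200 on a mean-100, SD-15 scale.
from scipy.stats import norm

z = (200 - 100) / 15                 # about 6.67 SD
tail = norm.sf(z)                    # about 1.3e-11
print(f"z = {z:.2f}, tail = {tail:.1e}, about 1 in {1 / tail:,.0f}")
# Roughly 1 in 78 billion: more than ten times the world's population,
# so there is literally no one to norm such a score against.
```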

To clarify, the paper is discussing the Stanford-Binet scoring method as opposed to the standard deviation scores. I just used the quote to show that anecdotally there are differences between levels of giftedness.

Achieving an SD score of 200 is perfectly plausible, just highly unlikely. What I said in my first post still goes: the results don’t mean much if you have no context to place them in. The only context here is other humans who score similarly to you.

Marilyn vos Savant was put in the Guinness Book of World Records for an IQ of ~186 (SD scoring), which is at the 99.999997th percentile. I think that means there are around 200 humans who would score that high.
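
Taking that quoted percentile at face value, the arithmetic roughly checks out; the ~6.5 billion world population and the 16-point SD below are my assumptions:

```python
# Rough check of the vos Savant numbers quoted above.
from scipy.stats import norm

percentile = 0.99999997
world_pop = 6.5e9                                      # assumed world population
tail = 1 - percentile                                  # about 3e-8

print(f"z needed: {norm.ppf(percentile):.2f}")         # about 5.4 SD above the mean
print(f"people at or above: {tail * world_pop:,.0f}")  # about 195, i.e. "around 200"
# On a 16-point SD scale, 100 + 5.4 * 16 is roughly 186, matching the score quoted.
```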

True, but those decimal points get to be pretty sensitive once you get up that high. A person goes from being the smartest person in the county to the smartest in the whole world very quickly if the normal curve is adhered to.

If the standard deviation were converted to a variable and equalized with the z-number (z = sqrt(score - mean) & sd = sqrt(score - mean)), the ratio of the difference tops out at a score of about 182.12452, which coincides (actually falls just a little lower) with the highest “verified” score of Marilyn vos Savant. I would think that anything higher than that would have to alter the mean. Considering that 50% of the population is always 50% of the population (and assuming that a single person is represented as a 1:1 ratio, which is why I did not try to go beyond the lowest whole number), would any score with a deviation from the mean greater than 82.12452 be relevant (possible)?

I’m focusing strictly on the math here, not the psychology involved. I may have missed something in the math or in the translation; I’m usually pretty good with the beginning and ending bits, but the work in the middle too often eludes me ;).

About the online test: I think they have a ceiling of 140. There was at least one incident of 11 bored students in a gifted class taking the test one afternoon. All of them had demonstrated IQs in the 99.9th percentile, but nobody scored above 140. :wink:

… and a floor of about 85. If you intentionally get every question wrong, you can drop below 100, but if you guess randomly, you have a decent chance of being “above average”. :rolleyes: Ask yourself this question: would you forward an IQ test to your friends if you scored a 65 on it?

For a normal IQ test, there’s going to be only a small disparity between people who have a 150 IQ and people who have a 170 IQ, as more likely than not they’ll both score perfectly on the test. High-IQ societies will just make up harder tests so that people with an IQ of 150 score 10% and people with an IQ of 180 score 90%.
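
One way to read what those harder tests are doing, sketched under the assumptions that the society only admits people above some cutoff and that both tests measure the same thing (the cutoff and rank below are made up):

```python
# Chaining a rank on a "harder test" back to a general-population rarity:
# P(IQ > x) = P(IQ > x | IQ > cutoff) * P(IQ > cutoff).
from scipy.stats import norm

mean, sd = 100, 15
cutoff = 145                                     # hypothetical admission floor (~3 SD)
p_above_cutoff = norm.sf((cutoff - mean) / sd)   # about 1 in 740 of the general population

within_group_rank = 0.90                         # beats 90% of the members
p_above_x = (1 - within_group_rank) * p_above_cutoff
print(f"General-population rarity: about 1 in {1 / p_above_x:,.0f}")  # about 1 in 7,400
```

Whether that chain of assumptions holds is exactly what the next post questions.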

Aaackkk…Thbbbtt… :smiley:

Seriously though, this pretty much addresses the last question in my OP. If high-IQ societies create a harder test (roughly per your estimate of 150 scoring 10% and 180 scoring 90%), then essentially they are doing one of two things. Either they are stepping outside the accepted guidelines and scoring systems, adjusting the standard deviation until the z-number computes (which would be like designing a scientific study around the end result instead of the subject of the study), or they are establishing a variable equilibrium between the deviation from the mean and the z-number, essentially rendering the stability of the mean irrelevant (which is more or less the same as re-marking the numbers on a scale to fit the needs of whatever is to be measured). Changing the standard of measurement makes any comparison with anything previously measured (e.g., the other 99.9999% of us) irrelevant.

What does that mean? I’ve never seen the z-score and the standard deviation defined in this manner. If you start with a false premise, you can conclude anything.

I was fiddling around and found out that my IQ can be expressed as (sqrt(weight in pounds) * pi * sqrt(age in years)) / (sqrt(height in inches - length of feet in inches)).

Try it yourself. It is a lot faster and cheaper than taking one of those tests.

With the added effect of making fat, old, tall people with giant feet feel really intelligent.

Oh did I get that wrong? I meant short obese centenarians with tiny feet.

I think the flu was kicking my head around this morning. Otherwise, if I had been a little more lucid, I would simply have asked why the deviation from the mean needs to be set at a standard level across the board. Wouldn’t a variable deviation, set at a standardized rate of variation, better serve to evaluate things at the extreme ends of the spectrum?

So, if I eat this bag of Cheetos it will lower my IQ? I always suspected as such.