What is the qualitative difference between IQ scores

So if IQ follows a bell curve distribution with an average of 100, what does that mean regarding how much ‘intelligence’ is gained or lost based on how far you deviate from the average?

How much ‘smarter’ is someone with an IQ of 130 compared to someone with an IQ of 100, and how much smarter is someone with an IQ of 160 compared to someone at 130?

Can you quantify the degree of pattern recognition, problem solving, working memory, reasoning, spatial visualization, etc., and compare them to each other? Or are IQ scores basically just a way of signifying where you fall on a bell curve distribution?

Yes. That quantification is called an “IQ score”.

OP, you might find the original Stanford–Binet categories helpful.

More interesting information here.
http://www.wilderdom.com/intelligence/IQWhatScoresMean.html

If someone does better than 50% of all other test-takers and worse than 50% of all other test-takers on an I.Q. test, then their I.Q. is 100. If they do better than 75% of all other test-takers and worse than 25% of all other test-takers, then their I.Q. is 110. If they do better than 91% and worse than 9%, their I.Q. is 120; if better than 98% and worse than 2%, their I.Q. is 130; if better than 99.6% and worse than 0.4%, their I.Q. is 140; if better than 99.96% and worse than 0.04%, then their I.Q. is 150; if better than 99.997% and worse than 0.003%, then their I.Q. is 160. (Yes, I had to do some rounding.)
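
If anyone wants to check that rounding, here’s a quick sketch of my own (assuming the usual mean-100, SD-15 scaling) that reproduces those percentiles from the normal CDF:

```python
from math import erf, sqrt

def iq_to_percentile(iq, mean=100.0, sd=15.0):
    """Fraction of test-takers expected to score below `iq`, given the usual scaling."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

for iq in (100, 110, 120, 130, 140, 150, 160):
    print(iq, f"-> better than {iq_to_percentile(iq):.4%} of test-takers")
```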

And that’s all the scores mean. Those words like:

Genius or near genius
Very superior intelligence
Superior intelligence
Normal or average intelligence

are just arbitrary ways of labeling people’s intelligence. The reality is that they scored above a certain percentage of people and below a certain percentage of people. Those percentages were then converted (using something called “standard deviations”) to I.Q. scores. There is no permanent measuring stick for intelligence. I.Q. is just a way of reporting how well someone does on one I.Q. test.

IQ scores are calibrated such that they follow a normal distribution, the mean is 100, and the standard deviation is 15. What that means is basically what Wendell said.

Yes but what does it qualitatively mean to have an IQ 130 vs IQ 100? Are those numbers just based on the concept that 1 standard deviation = 15 points?

Does someone with an IQ of 130 have 30% better spatial visualization, reasoning, etc.? Or 20% better, or 100% better? Is it a linear improvement, or does it change as IQ changes?

For example, maybe the qualitative improvement going from 130 to 160 is bigger than the improvement going from 100 to 130.

Is the number ascribed to IQ actually tied to how your intellectual skills scale, or just to where you fall along a bell curve distribution? Could a person with an IQ of 130 have cognitive skills twice as good as someone with an IQ of 100, despite an IQ only 30 points higher?

I know people with an IQ of 75 and people with an IQ of 130. Based on the numbers, the higher IQ is 73% larger. However the degree to which the higher IQ person can reason and see patterns seems larger than a 73% difference.

If you look at this chart in that article, people who earn a bachelor’s degree in education have about 1/4 the visual, math and verbal aptitude of people who earn a PhD in the physical sciences or engineering. However, I don’t know how that is gauged. But the IQ difference between the groups, in numbers, is not that huge. I’d wager the average IQ of people who earn a B.A. in education is about 110, while the average IQ of someone who earns a PhD in engineering or physical science is probably closer to 130. However, that implies the people earning the PhD have 4 times the specific aptitude and 2.6x the general aptitude despite having an IQ only 18% higher.

But I don’t know if I’m reading that chart correctly. It may just be the scores relative to each other, probably not raw aptitude in those fields.

Wesley Clark writes:

> Are those numbers just based on the concept that 1 standard deviation = 15 points?

Yes.

> Does someone with an IQ of 130 have 30% better spatial visualization, reasoning,
> etc. 20% better, 100% better, etc?

No, there’s no way to convert I.Q. scores into statements about one person being X% better than another person at anything.

> I know people with an IQ of 75 and people with an IQ of 130. Based on the
> numbers, the higher IQ is 73% larger. However the degree to which the higher IQ
> person can reason and see patterns seems larger than a 73% difference.

Again, the number 130 being 73% larger than 75 says nothing about a person of I.Q. 130 having 73% more of anything than a person of I.Q. 75 has. I.Q. doesn’t work that way. I.Q. scores are not measurements of a substance that a person has. The score tells you where the person is on a normal curve and nothing else.
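
To make that concrete, here’s a small sketch of my own (again assuming the standard mean-100/SD-15 scaling). The scores only encode rank on the curve, and if you put the very same ranks on a different arbitrary scale, the “73% larger” ratio turns into a different number entirely:

```python
from math import erf, sqrt

def percentile(iq, mean=100.0, sd=15.0):
    """Where a score sits on a normal curve with the given mean and SD."""
    return 0.5 * (1.0 + erf((iq - mean) / sd / sqrt(2.0)))

for iq in (75, 110, 130):
    print(f"IQ {iq}: roughly the {percentile(iq):.1%} point of the distribution")

# Re-express the same ranks on a hypothetical mean-500 / SD-100 scale:
# the ratio between the two scores changes completely, because the zero
# point and the unit are both conventions, not amounts of anything.
for iq in (75, 130):
    z = (iq - 100) / 15
    print(f"IQ {iq} is the same rank as {500 + 100 * z:.0f} on a mean-500/SD-100 scale")
```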

Can’t those scores be used to quantify the abilities that go into IQ? Can’t someone say that an IQ of 130 means having pattern recognition abilities that are 200% better than someone at IQ 110? Since humans are the ones creating and grading the tests, can’t they quantify ability relative to other IQs, rather than just saying ‘this person has a higher IQ than 99% of the population’? Can’t they actually show what that means regarding the quality of their cognition relative to other people on the distribution curve?

The key about IQ is that it is often a necessary condition, but never ever ever a sufficient condition.

For instance, if you take a group of eminent theoretical physicists, some of the most famous and respected in their field and still at the top of their game, and you gave them IQ tests, they would all score off the fucking roof. You wouldn’t see any scores below the 90th percentile, and probly not any below the 99th. (This has actually been done, although I think a long time ago.) What we’re looking at here is IQ as a cutoff. People below a certain point just can’t hack it in certain intellectual fields.

But that does not mean that people who score above such a level will succeed at any given intellectual task. There are many, many ingredients that go into success. Admissions officers at elite universities, for instance, use test scores as a cutoff (nobody in the 50th percentile is going to Harvard), but past a certain score, success is just a crapshoot. They absolutely have to look at other things, because IQ is never ever ever sufficient for demonstrating future success.

And the tests are re-normalized regularly. 100 is the center of the curve. So if you’re looking for practical, real-world results, you’d have to specify a score for a particular year on a particular test. And once you had that, you’d have to ask other more specific questions, like “If a student scored 115, how quickly could they work their way through this particular calculus textbook without any assistance?” If anybody wanted to spend a bazillion research bucks, they could probly answer that question, so that one additional point on the IQ test predicted – on average! – two fewer days of self-study necessary, or something along those lines. But even then it wouldn’t be perfect. You could very easily have extremely high-scoring people be unable to complete the task because of other issues. Never a sufficient condition.
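
Just to illustrate what that kind of (entirely imaginary) finding would look like, here’s a sketch with made-up numbers: the slope holds “on average”, while individual results scatter all over.

```python
import random

random.seed(0)

def simulated_days_of_self_study(iq):
    """Hypothetical: 200 days at IQ 100, two fewer days per extra point, plus lots of noise."""
    expected = 200 - 2 * (iq - 100)   # the hypothetical "on average" slope
    noise = random.gauss(0, 40)       # everything the score doesn't capture
    return max(expected + noise, 0)

for iq in (100, 115, 130):
    runs = [simulated_days_of_self_study(iq) for _ in range(10_000)]
    print(iq, round(sum(runs) / len(runs)), "days on average, with a wide spread either way")
```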

But those kinds of questions? They aren’t much researched. They’d be hellaciously expensive. Much easier to give a test one day, then check income five or ten years later. That’s plenty expensive enough as it is, since you have to ensure that you hunt down accurate data points later in life. Any more detail than that, like how much more quickly – on average! – a person gets through a particular long-term mental task like learning a language or whatever, is just beyond the scope of what social science research can generally manage.

Different tests measure somewhat-different abilities. No matter how hard the test-makers try, those abilities are trainable and get inadvertently trained in different ways in different cultures/schools/families; people who were never taught to read a map have more difficulty following squiggly line-bolt-dot-dot-dot-squiggly line than those who were taught as soon as they were reasonably fluent reading place names.
Two people may be able to achieve the same results given enough time, but the tests have a time limit. Someone who does not run out of time will score higher than someone who does, even though their actual reasoning ability is equally good in terms of final results.
One of the biggest factors in how long it takes someone to settle on an answer is self-reliance/insecurity. Someone can be able to reach the best result but then waffle; others may know that “my first idea is usually my best” and will go faster. This seems to have more to do with nurture than nature - it’s not a matter of natural ability but of what has been rewarded (or simply less discouraged): doing your thing, or making puppy eyes at the teachers because “oh gosh I am not sure”?
That (in)security thing will also influence how likely someone is to change their initial response.

My results on exams have often had more to do with what kind of responses teachers wanted than with my actual ability to learn and remember things. The immense majority of my exams have been essays, problems and demonstrations: when teachers wanted the response to be an exact regurgitation of the material taught, I did badly; when they wanted to see understanding, I did well. That’s because my learning is logic-based. People whose learning is memorization-based do well the other way around. We’d score very differently in psychological tests, but give us an even distribution of both types of teachers and our GPAs would be the same.

I took “the Mensa test” back when it was the Raven. Very quickly. The proctor asked me what I had answered to one of the questions; I said “x but it’s ugly”. “Ugly?” “Yes, it’s the only logical answer but it’s ugly. The series would have worked better with the next item, which would have been [description].” He gave me a funny look and said “ok, everybody who answers x passes, so my money is on you being in, but what’s funny is that all of us find it ugly!”
Are the aesthetics of dots and bars and squiggly lines a measurement of someone’s pattern-recognition ability? Could we have skipped the majority of the questions and just asked that one: if the person makes an “ugh” face, he passes?

Wesley Clark writes:

> Can’t someone say that an IQ of 130 means having pattern recognition abilities that
> are 200% better than someone at IQ 110?

No. Intellectual abilities (like pattern recognition or whatever) are not quantities of some substance. You can’t weigh them or find out how long they are or find out how hot they are or find out how much area they take up or measure how bright they are or measure how loud they are or whatever. They are not a substance that can be measured. There are no units of intellectual abilities to be measured. All you can say is that one person is better at some particular intellectual ability than another person is.

$$ >@#years = IQ?
I’m a genius! :eek:

Obviously there’s a point where things get ridiculous, but I don’t think this is really true. In my experience, with enough training, people with a very average IQ can do pretty high-level stuff.

The example of high-level science isn’t the best one, since it also requires a lot of creativity, which is not the same as intelligence.

It’s a type of intelligence, and frankly some of the Chemistry PhDs I’ve known were about as creative as a flower pot without the flowers. Are you also going to discount all the other STEM fields, plus anything requiring the ability to express one’s thoughts clearly and understandably? Because if we discount all those, there aren’t many fields left.

I’d have to trust the data that I’ve seen (old as it was) over your impression. Modern GRE scores aren’t a perfect proxy, but they correlate and they say similar things.

And high level science is extremely important to my point for two related purposes. The first is how stark the cutoff is. This emphasizes the nature of the necessary condition vs the sufficient condition. Your point about “creativity” perfectly backs up my own post. You are making my argument for me. There are, exactly as you say (and exactly as I said), many more things involved. And despite all those many more things… IQ remains a stark cutoff. Which implies it’s a necessary, but not a sufficient condition.

The second and related point is about the direction of causality that the necessary condition implies.

When you look at IQ as a predictive variable for something like individual income, the effect is present but very weak. When you aggregate people into larger groups like deciles, the effect of IQ on income is much stronger. (Larger groups wash out some of the individual variation.) Now, we know for a fact that culture plays a hugely important role in IQ. The Flynn effect is proof of that. So when we’re looking at a relationship like income and IQ, we might wonder which way causality works. Maybe higher skilled jobs, which pay more, make the people who do those jobs do better on tests?

Probably no, or at least not primarily, for a couple of reasons. The first is that one of the best predictors of performance on a cognitive skills test is previous performance on a test – despite the undeniable cultural influence, numbers are still fairly stable over time for adults. It’s the culture in general that’s pushing up scores, not specific jobs. And the second is that stark cutoff mentioned above. That necessary condition simply would not exist as strongly as it does if causality went primarily the other way. That’s why we can learn so much from an extreme example in a field of pure intellectual work. We get a better feeling for which way the wind is blowing.
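
A toy simulation (synthetic numbers, not real income data) of that aggregation point: the same weak individual-level relationship looks very strong once you average within deciles, because the averaging washes out the individual variation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
iq = rng.normal(100, 15, n)
income = 30_000 + 400 * (iq - 100) + rng.normal(0, 25_000, n)  # mostly noise

print("individual-level correlation:", round(np.corrcoef(iq, income)[0, 1], 2))

# Group people into IQ deciles and correlate the group means instead.
cuts = np.percentile(iq, np.arange(10, 100, 10))
decile = np.digitize(iq, cuts)
iq_means = [iq[decile == d].mean() for d in range(10)]
income_means = [income[decile == d].mean() for d in range(10)]
print("decile-level correlation:", round(np.corrcoef(iq_means, income_means)[0, 1], 2))
```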

What would it even mean to be “twice as good at tasks of type X”? Does it mean that the person who’s twice as good is twice as likely to be able to accomplish the task? But then, suppose that one person already has a greater than 50% chance of accomplishing it: What can the twice-as-good person do? Does it measure how quickly a person can accomplish it? There are some tasks that I can perform that most people can’t do at all (and vice-versa, of course)… Am I infinitely better than them?

Of interest:
James Flynn offers his thoughts on the Flynn Effect.

I’m not entirely sure.

With physical strength, I’m guessing humans follow a bell curve distribution. Take an easy-to-measure metric like the bench press for adults, and the numbers probably go from 10 lbs up to 800 lbs. So the people at the extreme ends differ by a factor of 80 in how much they can bench press.

I do not know if there is some kind of metric for IQ that can do the same thing. Since we (humans) are the ones writing these tests, why can’t we determine some kind of linear progression in how efficient people are at the skills that make up IQ? Why can’t someone determine that an IQ of 130 means you are 300% better at pattern recognition or reasoning than someone with an IQ of 100?

To my knowledge it hasn’t been done, but what do I know.