Sounds like you are saying it. You also haven’t actually presented an argument, either originally or now in your apparent retort; you have simply made assertions without presenting evidence. This is hardly likely to convince anyone who isn’t already convinced.
I do not know my IQ, but I am confident that I am no genius, and yet I can tell when I am speaking to someone who is as smart as or smarter than I am versus someone who is still within the broad definition of normal IQ but not all that bright. Can’t you? I’d assume that the “typical” individual who has managed to become well educated and is intellectually curious enough to hang out here has to be around 125, at least on average. No, I do not think most Dopers are 130 or higher, but 125, sure. Still within 2 S.D. of average. 95 is pretty average, with definitely adequate life skills, not “retarded”, but as a brighter normal you’d know in conversation that they are not as bright as you. It seems irrational for me to assume that someone 30 IQ points brighter than I am couldn’t tell, with the same ease, that I am not on the same level.
Perhaps I hang out with the wrong people, but that is an unfamiliar phenomenon to me: obtaining such an impression merely from random conversation, rather than from discussing or viewing some kind of academic work or puzzles and so on. How do you know it tracks test-measured IQ, at any rate? If all you are saying is “Some people seem to me markedly more intelligent than myself, while others do not”, well, fine, sure, I have no reason to doubt that you feel that way, but that’s no evidence of anything concerning IQ.
I am assuming more than a small-talk conversation. And assuming that we are discussing IQ as a (flawed) proxy for real intelligence, whatever that is.
And no, I do not know from testing results that my impression that this guy is a bit of an idiot or that this gal is a lot brighter than I am is at all accurate; even if I knew their IQs it wouldn’t matter, as I do not know, or have any desire to know, mine. (I do the best with what I’ve got and that’s all that matters to me.) If, in the course of your routine existence, professionally and socially, you do not experience a range of people, all of whom can cope with the demands of day-to-day life to some adequate level, some of whom you realize are not very bright and some of whom have intellects that impress you, then that is your experience. I am surprised, but not for the first or last time. If you do have that experience, and if it is correlated with actual intelligence, which in turn IQ does indeed track, then you have the same experience as I do.
No one said some people aren’t smarter than others, nor did anyone say we can’t tell that some people are smarter than us or less so. What I am specifically questioning is the idea that the difference between 130 and 160 is qualitatively the same as the difference between someone at 100 and someone at 70. I’d say the gap is much smaller between the higher two than between the lower two, simply because at a certain point the ability to demonstrate “intelligence” becomes more and more difficult: you can’t do it with parlor tricks, math speed/skill, vocabulary, etc., because those things, at high levels, don’t correlate with intelligence as IQ defines it.
I’m sorry, Sister Vigilante, but the more of your story you tell, the more improbable it sounds. So there was supposedly one student in your class with an I.Q. of 160 (yourself), another with an I.Q. of 165 (your friend), and another of 170 (the top student in the class)? I really think that you are misremembering, or that these I.Q. tests your school administered were no good.
You suggest that the way I.Q. scores converted to probabilities on this test was different from other I.Q. tests, but that makes no sense. That would be like measuring someone’s height and then telling him that the feet and inches used weren’t the same as the ones everyone else uses in measuring heights. Look, here are the probabilities of some I.Q. scores:
A score of 130 or above: about 2.28%, or about 1 person in 44
A score of 145 or above: about 0.135%, or about 1 person in 741
A score of 160 or above: about 0.00317%, or about 1 person in 31,500
A score of 165 or above: about 0.000745%, or about 1 person in 134,000
A score of 170 or above: about 0.000151%, or about 1 person in 662,000
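(For anyone who wants to check these figures: here is a minimal sketch that reproduces them, assuming I.Q. scores follow a normal distribution with mean 100 and standard deviation 15, which is the convention most modern tests use. The scipy library does the tail-probability arithmetic.)

```python
# Tail probabilities for IQ scores, assuming a normal distribution
# with mean 100 and SD 15.
from scipy.stats import norm

MEAN, SD = 100, 15

for score in (130, 145, 160, 165, 170):
    # Survival function: proportion of the population at or above `score`
    p = norm.sf(score, loc=MEAN, scale=SD)
    print(f"{score}+: about {p:.5%}, or about 1 person in {1/p:,.0f}")
```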
I can believe that you scored 145 on the I.Q. test administered by Mensa. I can’t believe that three people in a single class scored 160, 165, and 170. If your school told you that those scores were what some I.Q. test that they had administered gave, I can only suspect that the I.Q. tests were worthless or deliberately jiggered by the school administration.
I’m skeptical. Maybe things have changed since he joined American Mensa, but today they do not reveal the test score or I.Q. to the test taker, only whether he or she qualifies for membership (I.Q. of 130 or higher).
The difference is equal by definition; that is what the numbers mean.
Try looking at it this way: a person with a low-normal IQ of 90 could easily see the difference between two people with IQs of 60 and 30.
An IQ of 60 used to be called Moron and 30 was Imbecile; these days they are referred to as Mildly and Severely retarded. The differences are so noticeable that the category Moderately retarded sits between them.
Differences between the mental abilities of retarded people are pretty easy for someone of even dull normal intelligence to measure and distinguish: can he speak, can he count, can he tie his shoes, can he use the toilet, can he feed himself … these things are observable and quantifiable.
Since the IQ curve is a normal distribution, there are plenty of people around who are capable of devising ways to test and categorize the low end of the scale up through Average; since the Average range tops out around 110, roughly 75% of people are Average or lower, by definition.
Higher IQs, particularly the very high end, are harder to measure because there are so few people who are smart enough to measure and categorize them, but that doesn’t mean the differences aren’t there or aren’t qualitatively equivalent to the differences on the lower part of the scale.
What IQ do you assign to somebody who gets a perfect score on the SAT and answers every question on an IQ test correctly? What if he correctly points out that none of the multiple-choice answers were actually correct, so he chose the one he figured the test makers were looking for?
I guess what it comes down to, and this is not meant in any way to be snotty, haughty, pretentious, condescending, derogatory, or in any way personal or unkind, is that it is not very difficult to see differences in intelligence among people who fall lower on the scale than oneself, but it is very difficult to categorize people higher on the scale … except for those who stand out as being really noticeably smart.
So yes, the difference between 60 and 90 is equivalent to the difference between 120 and 150, or for any other specified spread, subject, of course, to some degree of measurement error.
Walloon: I can verify that in the 1960s Mensa did indeed give you your score. Actually, I can’t imagine that they don’t do so now; seems silly to me.
A minor nitpick to your post, Turble: the SAT is not an I.Q. test, and no one, including the people who created it and administer it, claims that it is. It measures how well you have acquired certain mathematical and verbal skills at that point in your life (just before applying to college). It’s claimed that the SAT, along with high school grades, is a passable (but not great) predictor of college grades. No one thinks that good SAT scores are strictly a measure of one’s intelligence. They’re a measure of some combination of one’s intelligence, how good a high school one went to, how much personal reading one has done, and probably several other things.
'Tis true. From the American Mensa website:
The equality of the numerical difference spanning two IQ score-intervals means only that the two intervals contain an equal number of standard deviations. That is a statement about the frequency with which certain scores are achieved; it says nothing about whether the two kinds of differences would be equally noticeable in any way beyond this particular calculation. In particular, it says nothing about the ease with which such differences could be noted in conversation, the effect they would have upon such interaction, or even whether they are determinative of, or even correlated with, the ability to carry such interaction out.
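To put numbers on that, here is a small sketch under the usual normal(100, 15) model: the intervals 70-to-100 and 130-to-160 each span exactly two standard deviations, yet the frequencies of the scores involved differ by orders of magnitude.

```python
# Equal score gaps, very unequal rarities, under a normal(100, 15) model.
from scipy.stats import norm

def one_in(score):
    """Roughly 1 person in N scores at or above `score`."""
    return 1 / norm.sf(score, loc=100, scale=15)

for lo_score, hi_score in ((70, 100), (130, 160)):
    print(f"{lo_score}->{hi_score}: a 2 SD gap; "
          f"{lo_score}+ is 1 in {one_in(lo_score):,.0f}, "
          f"{hi_score}+ is 1 in {one_in(hi_score):,.0f}")
```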
Let’s get this straight: I.Q. is not a quantity that arithmetic can be used on. Consider the difference between a quantity that arithmetic works on, like height or weight, and a pure ranking system like temperature or I.Q., where arithmetic doesn’t make any sense. One can add quantities, so a stick that’s three feet long and a stick that’s five feet long can be placed end-to-end, making them the same length as an eight-foot stick. A three-pound weight and a five-pound weight can both be put on a scale and will weigh the same together as an eight-pound weight.
On the other hand, that doesn’t work with temperature or I.Q. If you combine something with a temperature of 30 degrees Fahrenheit and something of 50 degrees Fahrenheit, you don’t get an object of 80 degrees Fahrenheit. The same is true of Centigrade or any other temperature scale. If you combine someone with a 70 I.Q. and someone with an 80 I.Q., you don’t get someone with a 150 I.Q. There’s simply nothing meaningful in comparing the difference between a 60 I.Q. and a 90 I.Q. with the difference between a 120 I.Q. and a 150 I.Q. You can’t do subtraction with I.Q. scores and say that a 30-point difference means the same in one case as in another. An I.Q. is a ranking, not a quantity that arithmetic can be done on.
Sigh. Yes, IQ is normally distributed. But IQ is supposed to be a measure of intelligence. On its own, an IQ means nothing except that you scored in a certain percentile on a certain test which was intentionally normalized. You are confusing the two concepts, unless you truly believe that IQ is an accurate measure of intelligence (and unless you have a valid working definition of intelligence, which my bet is you don’t, since that’s a very difficult question). And that raises the question of whether intelligence (whatever intelligence is) is normally distributed. Just because the test is normalized, it doesn’t mean that what it measures is.
Look at the LSAT, for example. To move from a 150 (the “average”) to a 151 you need to answer 4-7 more questions right, IIRC. If you want to move from a 170 to a 171, you need only answer one or two additional questions correctly. There is less qualitative difference between a 170 and a 180 than there is between a 150 and a 152. IQ works the same way.
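As a toy illustration of that compression (the numbers below are invented, not an actual LSAT conversion table):

```python
# Hypothetical raw-score-to-scaled-score pairs, invented for illustration:
# mid-scale, one scaled point costs several extra correct answers;
# near the top, it costs only one or two.
hypothetical_conversion = [
    (55, 150), (60, 151),   # ~5 extra correct answers per point mid-scale
    (97, 170), (98, 171),   # ~1 extra correct answer per point at the top
]

for raw_correct, scaled in hypothetical_conversion:
    print(f"{raw_correct} questions right -> scaled score {scaled}")
```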
Both Wendell Wagner and Indistinguishable are making this same point.
The other question I wonder about is what IQ does with age. (Other than when you hit 90…)
I look back on myself at 15 or 20 or even 25 and wonder “how the heck did I ever manage to get by thinking things that stupid?” Also, while I found myself about average among smart university-science-level crowds, when I was working in a non-high-tech industry, everyone thought I was really smart. Partly this was because I have a really good education in programming, and a decent amount of experience, and was interested in computers, so I excelled at my job; partly, it was because I was a voracious and omnivorous reader, so I probably knew more minutiae than others in a wide range of topics.
For example, remember those stupid tests, the “Read all the instructions first…” kind, followed by stupid stuff like “say your name out loud”, where the final instruction was “do only question 2”? Or the one where they tell you about an elevator or bus and how many people got on or off at each stop; people are tracking how many passengers, then the punch line is “how many stops?” Or what letter comes next after “OTTFFSSE…”? I had encountered a lot of these puzzles before leaving high school; when anyone tried to spring them on us, I already knew the “catch”. Was I smart, or did I just have a good memory?
When my wife had to take an aptitude test, we searched the internet for pattern questions of the “which one follows next…?” type, plus math sequence questions, plus those “hand is to glove” type questions. Happily, the verbal ones she needed no coaching on; but seeing and working through a few of the others showed her how to find patterns. Certainly a person familiar with the test and what it’s looking for will be quicker to solve the puzzles.
Presumably most IQ questions don’t rely on your never having seen a relatively common trick before…? Or do they give you more than one example (with or without explanation)?
Wendell Wagner, you seem to know a lot about IQ tests, and I know absolutely nothing other than what I read on some web site about celebrities and their IQs. That page mentioned that children’s IQ measurements are not the same as adults’, and that a person can have a very high score as a child which will come down some when retested as an adult (the scale is different, apparently). Seems like this would support Sister Vigilante, wouldn’t it?
Children don’t always take the same IQ tests as adults – there’s a Wechsler test for children called the WISC (Wechsler Intelligence Scale for Children) and one for adults called the WAIS (Wechsler Adult Intelligence Scale) – and it’s my understanding that a child’s score is determined by comparing their performance to that of other children in the same age group. That is, a perfectly average 6-year-old will have an IQ of 100, and the same perfectly average kid will still have an IQ of 100 at age 12. If we compared the 12-year-old’s performance to his performance six years earlier then he’d have improved, but if he’s still perfectly average for his age then he should still receive a score of 100.
In my experience with taking IQ tests, the tasks were not things that could be completed using “tricks”. For instance, on the WISC there’s a task where the tester reads you a series of numbers and you have to repeat them back in reverse order. (This is to test your short-term memory and ability to focus on a task.) There’s a task where you’re given cards with pictures on them and have to arrange them to form a coherent story. There are some general knowledge questions like “What are hieroglyphics?” I think there’s a task where you’re shown a made-up word like “lorg” and asked to read it aloud, but that may be on another test.
This seems like a strange thing to say, but I had a lot of experience taking IQ tests growing up. My mother was a grad student in Psychology for much of my childhood, and I served as a practice subject for her and other grad students who were learning to administer the tests. For official purposes my IQ was tested twice when I was in early elementary school, but informally I must have had at least another four or five testing sessions between the ages of about 9 and 15.
ivn, someone who has an IQ of 120 is likely smart enough to get through college and succeed moderately in most usual jobs but will in general be a regular person. Someone with an IQ of 150 is a very rare bird. You don’t think that these rare birds can, in some sense, recognize that they are in the presence of each other, as opposed to being in the presence of someone with the relatively more bread-and-butter 120 or less; I do. I doubt either of us will be able to bring any cites to bear on the question, so we will have to just leave it at that.
RaftPeople, if you want to know more about I.Q., start by reading the Wikipedia article on it:
It is not true anymore that children’s I.Q.s are measured differently from adults’ I.Q.s. Before the 1960s that was occasionally true, but it isn’t any longer. Unless Sister Vigilante is a lot older than it appears, her I.Q. as a child was measured in the same way that adults’ I.Q.s are. Let me explain how the old scoring was done: a test was created for each age of children. It was given to many children of that age, and the average score for those children was calculated. If a child of age X scored about the average on his age’s test, then by definition he had a mental age of X. So now we have many children who score about average. The 3-year-old who scores about average on the 3-year-old test is said to have a chronological age of 3 and a mental age of 3. The 10-year-old who scores about average on the 10-year-old test is said to have a chronological age of 10 and a mental age of 10.
But what about the children who score much higher or much lower? What is done is that a child who scores much higher is then given the tests for older children. Suppose that a 10-year-old scores high on the 10-year-old test. He is then given each of the tests for older children until he can’t pass any higher test. Suppose he passes (i.e., gets at least the average score on) the tests for 11-year-olds, 12-year-olds, 13-year-olds, 14-year-olds, and 15-year-olds. However, he doesn’t get the average score on the 16-year-old test. He is then said to have a mental age of 15, even though his chronological age is 10.
Similarly, if a 10-year-old child does much worse than average on the test, he is given the tests for younger children. If he doesn’t pass the tests for 9-, 8-, 7-, and 6-year-olds but passes the test for 5-year-olds, he is said to have a mental age of 5, even though his chronological age is 10.
Now let us define the ratio I.Q. The ratio I.Q. is (Mental Age/Chronological Age) times 100. The average 10-year-old who just barely passes the 10-year-old test thus has a ratio I.Q. of (10/10) times 100, which makes his ratio I.Q. 100. The very smart 10-year-old who just barely passes the 15-year-old test thus has a ratio I.Q. of (15/10) times 100, which makes his ratio I.Q. 150. The very slow 10-year-old who just barely passes the 5-year-old test thus has a ratio I.Q. of (5/10) times 100, which makes his ratio I.Q. 50. These thus sound (at first impression) a lot like the modern I.Q. scores.
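In code, the ratio I.Q. formula is a one-liner, and the three 10-year-olds from the example come out exactly as described:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Ratio I.Q. = (mental age / chronological age) * 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(10, 10))  # the average 10-year-old: 100.0
print(ratio_iq(15, 10))  # passes up through the 15-year-old test: 150.0
print(ratio_iq(5, 10))   # passes only up through the 5-year-old test: 50.0
```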
The problem is that when I.Q. is measured this way, it tends to vary a lot depending on the age at which the test is given. So a better way of measuring I.Q. was developed: the deviance I.Q. An I.Q. test was developed for each age of children, plus an I.Q. test for adults. These tests were given to large groups (probably something like 100,000 people for each age). For each age group, the scores were placed on a normal curve, and the average and the standard deviation were calculated for each age group’s test. Someone who scores at the average is said to have an I.Q. of 100. Someone who scores one standard deviation above the average is said to have an I.Q. of 115. Someone who scores one standard deviation below the average is said to have an I.Q. of 85. In general, the average is defined as 100 points and each standard deviation is 15 points.
Notice that this means that it’s completely impossible to have an I.Q. above about 195 and it’s also completely impossible to have an I.Q. below about 5, since each of those numbers is 95 points away from 100. To have such an I.Q., you would have to be six and one-third standard deviations away from the average, since 95 is six and one-third times 15. But there are only six and a half billion people alive, which means that the highest and lowest scores are only about six and one-third standard deviations away from the average. (There are tables where you can look up what proportion falls within a given number of standard deviations.) Furthermore, the tests are only normed by being given to a group of about 100,000 people. This means that you can only really give scores in the range 40 to 160, since 100,000 people only allows for about four standard deviations at each end. This is why you shouldn’t believe people who claim to have an I.Q. above 160.
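Here is a sketch of the rarity arithmetic from the last two paragraphs, under the mean-100, SD-15 convention just described; scipy’s norm.isf returns the z-score whose upper-tail probability is a given value:

```python
# How high an IQ a population of a given size can plausibly support,
# under the deviance definition: mean 100, SD 15.
from scipy.stats import norm

def max_expected_iq(population_size: float) -> float:
    """Highest score we'd expect roughly one person in the population to reach."""
    z = norm.isf(1 / population_size)   # SDs above the mean for a 1-in-N score
    return 100 + 15 * z

print(max_expected_iq(6.5e9))  # world population: about 194, i.e. the ~195 ceiling
print(max_expected_iq(1e5))    # a 100,000-person norming sample: about 164, i.e. ~4 SD
```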
Notice that this means that although the ratio I.Q. and the deviance I.Q. both have averages at 100, their spreads are different. The reason that there are claims that Marilyn Vos Savant has an I.Q. of 228 is that her I.Q. was measured back in the 1950s, when she was a child. They still used the ratio I.Q. then for children. When she was seven, she passed all the tests up to those for sixteen-year-olds. That made her I.Q. (16/7) times 100, which is about 228. (At least the story is something like that. There are various contradictory claims about her I.Q., and I can’t be bothered to figure out which one is true.) But on the modern deviance definition of I.Q., she couldn’t possibly have scored so high. I presume that Sister Vigilante’s I.Q. was tested sometime since the 1960s, so a deviance I.Q. was used.
Would it be untoward of me to say that none of the jokes offered in this thread as “high IQ jokes” are very sophisticated? They seem pretty pedestrian. Like the “my newt” or Descartes jokes. Those are very simple word play that an average 15-year-old would get. The only thing less complicated would be fart jokes or slapstick.
I know that the SAT is not an IQ test. Given similar educational levels, however, there is a correlation between IQ and SAT scores. Those who score, say, 1100 on the SAT will tend to cluster around a certain IQ (given similar educational backgrounds), and those who score 1400 will tend to cluster around a higher IQ number. My point was that you can’t say much about those few who get a perfect 1600 (I’m old, considering 1600 as perfect … I know it’s changed now) except that they probably have some specific (and likely discoverable) minimum IQ; those with an IQ of 110 simply don’t get a perfect SAT score regardless of the quality of their education.
I’m not talking about spotting the differences during very brief contacts. You likely wouldn’t know whether the guy who served your dinner last night had an IQ of 95 and was beginning his lifelong career or was a philosophy major with an IQ of 155 working his way through school. But if you worked with them for a while, you would come to realize that they were of very different levels of mental agility, not just from the subjects they talk about but in the way they use language when they speak and in the way they accomplish certain tasks. The more intelligent person learns the subtleties of the job much more quickly, and likely shows that he has put more thought into mundane things like how to load and carry his tray, how to get his side work accomplished more efficiently, or how to handle the tantrum of the temperamental chef so as to please the customer, even if the customer doesn’t actually know the difference between medium and medium well.
Retardation levels are categorized by IQ scores as 50-69 = Mild, 35-49 = Moderate, 20-34 = Severe, and below 20 = Profound. These categories are distinct and narrower than the 30-point spread we’ve been discussing, yet I don’t hear anyone disputing that the differences between them are real and observable.
There are also named ranges for above-average IQs. I really think the problem is that it is fairly easy for someone of average IQ to recognize the differences in the lower ranges, but it simply may not be possible for someone of average intelligence to recognize differences in the higher-than-average ranges, other than recognizing that a certain person is very smart … in much the same way a person with an IQ of 45 might recognize that his friend with an IQ of 65 is smart.
I maintain that the differences between IQ levels are just as distinct in the above-average ranges as they are in the below-average ranges, but the distinctions between higher levels are more difficult for most people to see. The difference between “can he tie his shoes or not” is easily observable, while (discounting quality-of-education factors) things like the complexity of a person’s language skills (which become evident before formal education begins) and the ease with which he grasps concepts that people of lesser ability typically find difficult are nonetheless just as real and distinct, just harder to observe and/or measure.