This is SCIENCE?? RO: Stupid media and IQs

Google news is all atwitter with Moron Media - um, I mean Mainstream Media - reports on a newly published study claiming that firstborn children have higher IQs than their later-born siblings.

I should pit this because I think IQ is bullshit - at best it’s an extremely coarse filter to identify children who need academic help in grade school. At worst, well, read Stephen Jay Gould’s The Mismeasure of Man.

But I’m pitting this because the grand conclusion of the study is that the IQ difference between oldest and youngest children is … drum roll please …

2.3 points.

What the fuck?!?!

I spent ten seconds looking up IQ on Wikipedia, and it confirmed what I knew from my one psych course twenty-seven years ago. IQ test raw scores are normalized to a mean of 100 points and a Standard Deviation of 15 points.

So anyone who is educated enough to know that “Standard Deviation” refers to statistics and not oral sex should immediately understand that 2.3 IQ points difference with an SD of 15 points is not significant, not worth talking about, not worth the ink it’s printed with, not worth the paper it’s printed on, not worth your time, and certainly not worth the electrons I’m expending now.

2.3 points! I could have a bigger difference between two tests taken on successive days.
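To put a number on how small that gap is (back-of-the-envelope on my part, using the textbook normal model with SD 15 - nothing here comes from the actual study):

```python
import math

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

MEAN_GAP = 2.3   # the reported firstborn advantage, in IQ points
SD = 15.0        # standard deviation of IQ scores

# Cohen's d: the gap expressed in standard-deviation units (~0.15,
# conventionally a "small" effect).
d = MEAN_GAP / SD

# Probability that a randomly chosen later-born outscores a randomly
# chosen firstborn (the difference of two independent N(., 15) scores
# has SD 15 * sqrt(2)).
p_later_wins = normal_cdf(-MEAN_GAP / (SD * math.sqrt(2.0)))

print(f"Cohen's d = {d:.3f}")
print(f"P(later-born outscores firstborn) = {p_later_wins:.3f}")
```

In other words, even taking the study at face value, a random later-born beats a random firstborn nearly 46% of the time. Some "advantage."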

It’s bad enough the Moron Media repeated this story. Every election cycle when the MM reports that candidate A has a “firm” lead of 2 points in an opinion poll with a 4 point margin of error, I give up anew on their understanding of, and their ability to explain, statistics. It makes the baby Carl Friedrich Gauss cry. The Mainstream Media is dumb. I know this. It still pisses me off.

But then I saw the study was published in Science magazine.

This is Science? This got past peer review at the American Association for the Advancement of Science?? You have got to be fucking kidding me! I’m not saying psychology isn’t, or can’t be, scientific. I am saying that researchers who claim that a 2.3-point IQ difference shows anything are kidding themselves, and wasting everyone’s time. At best the study can claim there is NO significant difference in IQ due to birth order. At fucking BEST.

Instead, between the exaggerated claims of the researchers and Moron Media’s endless parroting, millions of younger sibs are going to feel a bit stupider tonight. They aren’t. This “study” that supposedly shows they are dumber than their elder sibs shows exactly the opposite - they are every bit as smart, well within the ability of the tests to measure.

Dopers, if even Science magazine is not immune to this, fighting ignorance is going to take a LOT longer than we thought. Or feared.

I don’t understand. Are you saying that their sample size wasn’t large enough to draw any significant conclusions from an observed 2.3-point difference on a measure with a 15-point standard deviation in the population at large? Because you haven’t mentioned anything about sample size in your OP, or anything else like that. And it’s certainly possible that there exists a genuine, significant difference that is small compared to the standard deviation.

Lacking any older sibs, I can personally attest that the oldest child is marginally smarter than the younger ones. (I think the second child gets the marginally better propensity for wealth and the third child gets the marginally better looks.)

I once read a newspaper article about some “study” where they found that kids who watched MASH on T.V. got better grades in school. The “conclusion” made in the article was that MASH made kids smarter. :smack:

BTW, I don’t think everyone’s getting what “R.O.” means. An R.O. thread is anything along the lines of “Some random dude did some random fucked-up thing and I’m outraged about it”. Extra R.O. points if the random act was done to a puppy or kitten. Even more R.O. points if the O.P. describes his fantasy acts of retribution in graphic detail.

<snip>
I love it.

“Oh, Honey, would you like to engage in some “standard deviation” this evening?”
“+ or - 15?”
“No, I was thinking along the lines of half an hour at least”

This is what I was going to say. If the sample size is large enough, a 2.3 point difference can certainly be statistically significant.
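Rough numbers, if anyone wants them (my own sketch, assuming both groups have the textbook SD of 15): the standard error of the difference between two group means shrinks like 1/sqrt(n), so the exact same 2.3-point gap goes from statistical noise to overwhelming as the groups get bigger.

```python
import math

SD = 15.0   # assumed population SD of IQ scores
GAP = 2.3   # hypothetical true difference between the group means

def se_of_difference(n):
    """Standard error of the difference between two sample means,
    each from an independent group of size n with SD 15."""
    return SD * math.sqrt(2.0 / n)

for n in (30, 1_000, 10_000, 100_000):
    se = se_of_difference(n)
    z = GAP / se  # z-statistic if the observed gap is exactly 2.3
    print(f"n = {n:>7}: SE = {se:5.2f}, z = {z:6.1f}")
```

With 30 people per group the gap is well inside one standard error (hopeless); with 10,000 per group it’s more than ten standard errors out, which is about as significant as statistics gets. Significant, mind you, not *large*.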

I will note that most psychologists and those employed in psychometric testing agree that, when it comes to IQ, a difference of about 2 points results in no functional difference between people. Meaning someone with an IQ of 125 will not do measurably worse than someone with an IQ of 127 on anything other than an IQ test.

Ah, but this can’t hold transitively forever, or even for very long, can it? Where IQ 123 is functionally equivalent to 125, and 121 is functionally equivalent to 123, etc.

It’s not a matter of sample size. Not even the makers of IQ tests would claim that their tests can detect a 2-point difference.

For what it’s worth, one of my younger brothers scored 1 point higher than me on his IQ test. He’s never let me forget it.

Do the makers not suppose that, over two large populations with some particular difference in mean IQ, the mean test scores will reflect this difference? If it’s really impossible for multiple tests/repeated tests to show any difference at all between IQs X and X+2, then, applying this sort of reasoning inductively, it becomes impossible to test for any difference at all between any arbitrarily large-sized IQ gaps.

I think you are a bit mistaken here.

It was scientifically shown that there is a small correlation between birth order and IQ. Assuming that they did the math and experimentation properly, that is undeniable. And it’s a perfectly valid conclusion for a scientific paper to draw. There is nothing faulty about their logic. They’ve shown that first-born children do slightly better on IQ tests.

What you seem to be worked up about is the question of whether a 2.3 point IQ difference really amounts to increased intelligence, or whether it is due to some other factor. The authors of the paper did not attempt to address that question at all. They took the IQ test as given, and said “What pattern can we see in the results of this test?”

Sure it can.

123 isn’t much different from 125, but 121 IS different from 125, and 119 is different, and 117 is different. That doesn’t mean that the 117 person is a moron and the 125 person is brilliant - it just means that on some measure OTHER than an IQ test the 125 person will score higher than the 117 person.

If you’re trying to suggest that functionally there’s no difference between someone with an IQ of 70 and someone with an IQ of 130, I’m going to go right out on a limb and tell you that you’re wrong.

It wouldn’t be significant if two people were tested and one scored 2.3 higher than the other. It might be significant (though not very) if two groups of 10,000 people were tested and one group averaged 2.3 points higher than the other.

So chill out, beanbrain.

I’m just saying, it can’t be that 123 and 125 are exactly functionally equivalent, and same for 121 and 123, and same for 119 and 121, all the way down. We’d have a soritical problem. If any IQ gap can make a difference, then it must be that even very small IQ gaps can make a difference, albeit only very small differences.

No, what he’s very clearly worked up about is that the IQ difference is significantly smaller than the standard deviation of IQ itself. So you really can’t use the tests in any predictive capacity, and it makes him wonder if the effect is even really present.

That doesn’t seem right to me. If I have a ruler that is only marked in whole inches, and I measure two groups of items, both containing a million items, and all the items in one group are 1/4 inch longer than those in the other group, my measurements will not reflect this, no matter how many items I measure. This does not mean I can’t measure anything at all, such as using my ruler to tell the difference between an item that is, say, 3 inches and an item that is 11 inches.

Perhaps it is time for a coin-flipping analogy. :slight_smile:

Well, your ruler is a quantized measurement system, and gives almost entirely error-free quantized measurements (“This thing is almost certainly between 3 and 4 inches”, “This thing is almost certainly between 1 and 2 inches”).

I was assuming IQ tests worked differently; that they reported fairly precise measurements, but the problem was that the reported measurement might be off by an error term from the actual value, with the probability distribution of the error term having mean 0. In this case, repeated testing or testing over a large population mitigates (eventually, to negligibility) the effect of the error terms.
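A quick simulation of that model (the noise SD here is an arbitrary stand-in I picked for illustration, not a real test-retest figure):

```python
import random

random.seed(42)

NOISE_SD = 7.5   # assumed zero-mean measurement noise (my invention)
N = 100_000      # people per group

def observed_scores(true_mean):
    """Each reported score = true value + zero-mean measurement error."""
    return [random.gauss(true_mean, NOISE_SD) for _ in range(N)]

# Two groups whose true means differ by 2.3 points.
mean_a = sum(observed_scores(100.0)) / N
mean_b = sum(observed_scores(102.3)) / N

print(f"recovered gap: {mean_b - mean_a:.2f}")  # close to 2.3
```

The noise on any one score swamps a 2.3-point gap, but averaged over a hundred thousand people it washes out and the gap comes right back.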

Or, rather, another better response would have been this: Your ruler can, theoretically, detect some very small differences (between 2.9 inches and 3.1 inches, say), which is key to its ability to detect larger differences. If all small differences are well and truly perfectly indistinguishable, then, adding it up, we get that larger differences are perfectly indistinguishable as well.

Would have been ETA (but missed the window): Now, as it happens, with your ruler, there’s only a differing effect with very particular small differences (ones near special border values). But, presumably, IQ scores act more uniformly than that.
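A sketch of why the spread matters (assuming normally distributed item lengths, which is my own simplification): the natural variation acts like dither, so the whole-inch rounding errors average out and the quarter-inch shift reappears in the group means.

```python
import random

random.seed(0)

N = 200_000
SPREAD = 1.0  # natural variation in item lengths, in inches (my assumption)

def rounded_mean(true_mean):
    """Measure N items with a ruler marked only in whole inches:
    every individual reading gets rounded to the nearest inch."""
    readings = [round(random.gauss(true_mean, SPREAD)) for _ in range(N)]
    return sum(readings) / N

# Two populations whose true mean lengths differ by a quarter inch.
gap = rounded_mean(3.25) - rounded_mean(3.00)
print(f"quarter-inch gap recovered from whole-inch readings: {gap:.2f}")
```

If every item in a group were *exactly* the same length, the rounding really would eat the quarter inch; it’s the spread across the tick marks that rescues the measurement.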