Well what I didn’t understand about what you wrote was the notion that if you can’t measure X+2, then you can’t measure anything.
[And I know that’s oversimplifying what you said, but that’s due to the fact that I don’t understand what you meant]
No. IMNSHO, they’ve shown that if there is any difference between the IQ scores of first-born children and younger children, it is so much smaller than the measurement error that it cannot be significant. We’re talking about a difference of one EIGHTH of the measurement error.
Allow me to repeat the experiment, but use the height records for the children instead of their IQ scores. (Notice I’m not measuring the heights myself but using pre-existing records. I find it hard to believe the researchers gave 250,000 kids brand-new IQ tests under controlled and identical conditions.) Even after compensating for age and gender differences in height, if my study showed a height difference between eldest and youngest siblings of less than a millimeter, it would not be legitimate for me to claim that older sibs are taller than their younger sibs at any given age.
It would be bad enough for me to make this ridiculous claim, and worse if I let the media run with it. It really ticks me off that a peer-reviewed scientific journal let something through with the exact same error, an error a college sophomore should be able to catch.
Since there is not a single aspect of this kerfuffle that affects my life in any way, I put the RO (recreational outrage) label in the title when I started it.
Will anyone think of the children?
Suppose height had been measured instead of IQ. Suppose the average height was 178 cm, with a standard deviation of 7 cm. Then suppose the study found that the difference between oldest and youngest children is 0.5 cm. Would you dispute that? Would you say that’s so small relative to the sd as to be meaningless? Maybe you would, but that does not mean it doesn’t exist. I think you’re right, though, in that the media seize upon the term “significant”, then change its meaning. In psychology it does NOT mean a major difference. It refers to whether the difference found is due to a real difference in the population (as opposed to the sample). If a psychologist states that a difference of 0.5 cm (or 2 IQ points) was found to be significant, it doesn’t necessarily mean that it is important, but rather that it is real, and not merely a fluke due to mismeasuring, etc.
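A quick sketch of that point in Python (all numbers invented to match the hypothetical above): with a large enough sample, a 0.5 cm mean difference can come out statistically significant even though the 7 cm spread of heights dwarfs it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 120_000                          # per group, roughly the study's scale
older = rng.normal(178.5, 7.0, n)    # hypothetical: mean 178.5 cm, sd 7 cm
younger = rng.normal(178.0, 7.0, n)  # hypothetical: mean 178.0 cm, sd 7 cm

t, p = stats.ttest_ind(older, younger)
print(f"observed gap = {older.mean() - younger.mean():.2f} cm, p = {p:.1e}")
# p comes out vanishingly small: the 0.5 cm gap is "real" in the statistical
# sense, which says nothing about whether it matters in practice.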
Typo Knig, what makes you think standard deviation and measurement error have anything to do with each other?
Furthermore, as noted before, the effects of measurement error in a case like this become suitably small with a suitably large sample. Do you have reason to believe the sample was not large enough?
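For what it’s worth, here is a rough sketch of how that works (the per-person error size is made up): the error of a group mean shrinks roughly as sigma over the square root of n.

import numpy as np

rng = np.random.default_rng(1)
true_mean = 100.0
noise_sd = 5.0                      # hypothetical per-measurement error

for n in (100, 10_000, 240_000):
    scores = true_mean + rng.normal(0.0, noise_sd, n)
    print(f"n = {n:>7}: observed mean = {scores.mean():.3f}, "
          f"expected error of the mean ~ {noise_sd / np.sqrt(n):.3f}")
# At n = 240,000 the group mean is pinned down to about a hundredth of a
# point, far finer than any single person's measurement error.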
I’m not a scientist, but I always thought what Typo Knig is saying is right. I always thought you discarded anything that’s below the margin of error. But some people seem to be arguing that if you add up enough data that’s below the margin of error, it becomes statistically significant. Is that really true? I didn’t think it was.
Between this post and the OP, it’s pretty clear that you need a refresher course in statistics. Standard deviation has nothing to do with measurement error. It’s just a measure of the width of the distribution that has some nice properties. With respect to IQ, what it means is that roughly 68% of test takers will fall between 85 and 115, roughly 95% of them will fall between 70 and 130, and almost everyone will fall between 55 and 145.
You also need to be aware that the word “significant” is jargon, and you’re clearly not aware of the specialized meaning. See here for a quick overview.
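To put numbers on the 68% / 95% figures above, a short sketch using the usual normal(100, 15) convention for IQ:

from scipy.stats import norm

iq = norm(loc=100, scale=15)
for lo, hi in ((85, 115), (70, 130), (55, 145)):
    frac = iq.cdf(hi) - iq.cdf(lo)
    print(f"P({lo} <= IQ <= {hi}) = {frac:.3f}")
# ~0.683, ~0.954, ~0.997 -- the usual 68/95/99.7 rule. None of this says
# anything about how precisely any one person's score is measured.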
My apologies for being unclear. What I was saying could perhaps be put clearest like this: If no small difference can affect test behavior in any way, then neither can any large difference. Your ruler thing is no counterexample, because some small differences (between 2.9 and 3.1) do cause different test behavior.
I am rusty on my statistics, but the key assumption here is that measurement of an individual’s IQ is kind of like drawing from a normal distribution, where the mean of the distribution is the true IQ (presuming there is such a thing), but the measurement may come from somewhere else in the distribution. Now we can take sub-populations (first born, second born), and given that these sub-populations are also normally distributed, what the study says is that the one distribution is centered slightly higher than the other. So you have these two sets of data, and you can test whether they truly differ, and with what confidence. Something like a t-test or F-test (again, I am way out of practice). This is NOT the same thing as whether or not the error of the test negates the claimed differences between the two sub-groups.
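Here’s a rough simulation of that last point (every parameter is invented): adding per-person measurement noise to the “true” IQs doesn’t bias the group means, it only widens the spread, so a small real gap between sub-populations can still show up in a t-test with a large sample.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 120_000                                # per sub-population (a guess)
true_first = rng.normal(101.0, 15.0, n)    # first-borns, "true" mean 101
true_later = rng.normal(99.0, 15.0, n)     # later-borns, "true" mean 99

noise_sd = 5.0                             # hypothetical per-test error
obs_first = true_first + rng.normal(0.0, noise_sd, n)
obs_later = true_later + rng.normal(0.0, noise_sd, n)

t, p = stats.ttest_ind(obs_first, obs_later)
print(f"observed gap = {obs_first.mean() - obs_later.mean():.2f} points, p = {p:.1e}")
# The noisy scores still show roughly the 2 point gap, and p is effectively
# zero: the test error widens the distributions but does not erase the gap.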
The 15-point standard deviation of the normalized IQ scores has nothing to do with how accurately IQ can be measured. In fact it seems to me that the OP does not understand IQ and measurement statistics very well. A confidence interval has a few key parameters:
The assumed distribution (usually normal, though other distributions can be used).
How confident you need to be; typically people use 95%, but other levels can be used.
Sample size.
A given test has a certain amount of accuracy. For example, say I take an IQ test and get a score of 99.5, and the makers of the test say it is accurate to ±1.2. They are saying that, based on this test, they are 95% confident the real IQ is between 98.3 and 100.7.
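Working through those numbers (and treating the quoted ±1.2 as the half-width of a 95% interval, which is an assumption on my part):

from scipy.stats import norm

score = 99.5
half_width = 1.2                 # the quoted accuracy, taken as a 95% CI half-width
z = norm.ppf(0.975)              # ~1.96 for a two-sided 95% interval
se = half_width / z              # implied standard error, ~0.61

lo, hi = score - half_width, score + half_width
print(f"95% CI: ({lo:.1f}, {hi:.1f}), implied standard error ~ {se:.2f}")
# -> (98.3, 100.7). Note this is about the precision of one person's score,
# not the 15-point standard deviation of the population.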
I must be compensating. My older brother left college, and I’m working on my second master’s.
In all seriousness, I was always much better at school and “book learning”, and I’m fairly sure I score higher on IQ tests by a significant amount. But he has always been “street smarter” and more at ease in groups than I am, so I think it balances out. How this sort of personality or adaptiveness difference plays out on IQ tests, I don’t know.
I would just reiterate what others have been saying, in that much of your ire is due to the fact that you don’t understand statistical analysis. Given a sample size of 240,000, which is what the researchers had, it is certainly possible to demonstrate that such a small difference is statistically significant. If this appeared in Science, you can bet it’s been thoroughly peer-reviewed, and the analysis itself is OK. I have seen other scientists who complimented the Norwegian scientists who did the work on their “elegant analysis.”
In any case, you are missing the real significance of this study (which may also be being misrepresented in the mainstream media). That first-borns have higher IQs than younger siblings has long been recognized and is not particularly controversial. What has been uncertain is why this effect exists. What this study established is that mere BIRTH ORDER is not the critical factor; rather it is the SOCIAL RANKING within the family. Individuals RAISED as the eldest in the family (because older siblings had died, for example) have higher IQs regardless of birth order.
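On the sample-size point above, a back-of-envelope calculation (the group sizes are my guess, roughly splitting the 240,000 subjects in half) shows why such a small gap can still be statistically significant.

import math

sd = 15.0                                       # IQ standard deviation
n1 = n2 = 120_000                               # guessed split of ~240,000
se_diff = math.sqrt(sd**2 / n1 + sd**2 / n2)    # standard error of the gap
gap = 2.0                                       # roughly the reported difference

print(f"standard error of the difference ~ {se_diff:.3f} points")
print(f"a {gap} point gap is ~ {gap / se_diff:.0f} standard errors from zero")
# ~0.06 points and ~33 standard errors: easily significant in the statistical
# sense, whatever one makes of its practical importance.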
On my hypothetical ruler, 2.9 and 3.1 inch objects would both be measured as 3 inches. I guess you’re thinking of a ruler where each inch mark is placed exactly at 1.0 inches, 2.0 inches, etc… Maybe a better analogy would be estimating distance with the second segment of one’s pinky finger, which is approximately one inch, but doesn’t have exact demarcation points. Differences less than one inch would be indistinguishable, but differences greater than an inch would be distinguishable.
I guess it’s a moot point, because my understanding from what others are saying is that standard deviation has nothing to do with measurement error, which I did not know.
Was there any attempt to understand if conditions peculiar to Norway (or Northern Europe, or…) might be at play? Did the authors caution about saying whether these conclusions would tell us anything about IQ in, say, China or Japan?
Pardon me, I wasn’t clear enough in what I posted.
Psychologists/psychometric testers generally agree that functionally there is no difference between two people whose IQs differ by 2 points. What I mean by that is that if there are two identical people (in mythical LaLa land, where I’m allowed to clone people to clarify a point) and one has an IQ of 123 and one has an IQ of 125, the way they function from day to day is going to be the same. They will typically be able to learn things at the same rate, assimilate new material at the same rate, etc. “No functional difference” is not a statistical term - it’s a psychological term. Mathematically the two will still be different, but experientially (meaning how they experience life), they will essentially be the same.
Of course this post assumes that each of these clones has been tested multiple times; that their reported IQ scores are an accepted amalgamation of a series of tests, not based on a single test; that the 2-point difference is real and not an artifact; etc.
I, too, have probably not been clear enough in my posts. My point is simply academic, but I’m not sure if I’ve made it yet. Off in La La Land, we make a chain of point-clarifying clones, of IQ 125, 124, 123, 122, 121, 120, …, all the way down to, say, 50. I trust you’ll grant that the two ends of this chain are quite noticeably functionally different; therefore, at at least one link of the chain, there must be some sort of functional difference. Presumably, the difference is spread somewhat uniformly throughout the entire chain, so that each particular link involves a tiny, minute, but nonzero functional change, these small ones eventually adding up to the big ones. That’s all my point is; you can say that IQs of 125 and 124 are very, very similar, that they are so close to functionally equivalent as makes no difference in daily life, but there must be some tiny difference between the two, so as to add up to a larger difference, eventually, between, say, 125 and 100.
Of course. However, what I’m saying - and the point that psychologists/psychometric testers are making - is that for the average shmoe, the difference of 2 IQ points is nothing. If Suzie has an IQ of 125 and her brother Timmy has an IQ of 123 that doesn’t mean that Suzie is going to be the princess of the world and poor Timmy is going to be stuck digging ditches all his life.
It’s a bit like the BMI argument - while BMI is useful for predicting trends for a group it does very little for an individual other than serving as a very crude guideline. IQ is the same. This study is interesting because it illustrates “nurture” in action, meaning that being treated as the eldest increases IQ for a group.
It means very little to the average person. If my IQ is 2.4 points higher than my brother’s, or vice versa, it means very little in how we live our lives, or what our overall level of success is going to be.
Ah, yes, I agree with everything here. Having a few more IQ points than your brother is like being a centimeter taller or a pound heavier; it’s a difference, but it means very little; it’s almost swamped out for predictive power by everything else in your life. But it is there, exerting its tiny effect.
(Actually, I’m somewhat of a skeptic of the usefulness of the concept of IQ in the first place [in the sense of there being something like a numerical scale of intelligence], but I’m not nearly well enough acquainted with the matter to make an educated proclamation. At any rate, nothing I’ve said previously in this thread is related to this skepticism.)
I’m not sure that this is relevant. As I understand it, when you take the IQ test and get a result of 125, we’re 95% confident that your true IQ lies within the interval (125 - e, 125 + e). If the confidence intervals for an IQ of 123 and 125 overlap, then the test can’t distinguish between them, even though there is a (small) difference.
Well, in that post, the numbers were meant to refer to the actual, underlying IQ value, rather than that reported by tests.
At any rate, though, certainly the test could distinguish, in some sense, between an IQ of 123 and one of 125, in that the probability distribution on reported values will be different when the underlying value is 123 than when it is 125, such that repeated testing allows us to shrink the size of the confidence interval, eventually removing the overlap between those for 123 and 125. (As before, it seems fair to assume the test is such that, for any particular underlying value, the expected reported value is equal to it).
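A small sketch of the repeated-testing idea (the per-administration error is invented): averaging several administrations shrinks the confidence interval around one person’s true score until 123 and 125 separate.

import math
from scipy.stats import norm

test_error_sd = 3.0              # hypothetical per-administration error
z = norm.ppf(0.975)              # ~1.96 for a two-sided 95% interval

for k in (1, 4, 16, 36):
    half_width = z * test_error_sd / math.sqrt(k)
    print(f"{k:>2} tests: 95% CI half-width ~ +/-{half_width:.2f} points")
# With one test the intervals around 123 and 125 (each roughly +/-5.9) overlap
# heavily; after a few dozen tests the half-width drops below 1 point and the
# two intervals no longer overlap.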
The blurb in my local paper today on this claimed that IQ difference correlated to a 20-30 pt SAT difference. Now, that correlation must have some errors associated with it, and I don’t know if they meant 25 +/- 5, or what. So, does that give this result a more tangible aspect? That’s not a huge increase, but it is an increase, and I believe it has been shown that SAT scores correlate with earning power over the years (which doesn’t imply causation, btw).