If you can first learn to drive at age 16, and many people take lessons at the first opportunity, then a Libran would start learning to drive in late September. Now, at that time of year the sun tends to be low in the sky during the evenings (the most common time for driving lessons, since the learner is in school during the day), so they will be learning to drive whilst squinting into the sun whenever driving towards the west, and with the sun in their rearview mirror when driving to the east. Obviously this will affect how well the learner drives, and possibly also damage their eyesight.
Hmm, as a Libra (insofar as the stupid astrological signs mean anything…) I guess that explains my totally spotless driving record, and my aversion to talking on the bloody cell phone while driving…
Okay, but let’s be fair here: the Dodge Aries WAS a pretty poor car, to be honest, and any still on the road today would be rather unsafe. They’re almost as bad as the Ford Taurus…
Nope, it’d be the opposite. Generally speaking, the larger the sample size, the smaller the likelihood you’d find a statistically significant correlation between variables when there isn’t actually one. IOW, the less likelihood that there would be evidence of a false correlation (i.e., you’d have a smaller sampling error).
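To put rough numbers on the sampling-error point, here’s a quick Python sketch (the 10% accident rate is purely an illustrative figure I made up):

import math

# Standard error of an observed proportion shrinks like 1/sqrt(n).
# Assumed baseline: a true accident rate of 10% (illustrative only).
p = 0.10
for n in (100, 10_000, 1_000_000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>9,}: standard error = {se:.5f} ({se * 100:.3f} percentage points)")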
Having said that, after looking at the article and who commissioned the study I’m pretty sure if we looked at the actual data we’d find a number of methodological errors. Call it a hunch.
I was about to say the same thing until I realized what he was getting at. The larger the sample size, the more trivial a difference can be and still come out statistically significant. A 0.01% difference in accident rates is irrelevant in the grand scheme of things, but with a large enough sample, that 0.01% difference will be statistically significant.
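Here’s a quick sketch of what I mean, using a plain normal-approximation z-test in Python (the 10% baseline rate, the 0.01-point gap, and the group sizes are all made-up illustrative numbers):

import math

def two_prop_z(p1, p2, n):
    # z statistic for the difference of two proportions,
    # equal group sizes, normal approximation.
    p_pool = (p1 + p2) / 2
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    return (p2 - p1) / se

# Hypothetical accident rates differing by 0.01 percentage points.
p1, p2 = 0.1000, 0.1001
for n in (100_000, 10_000_000, 100_000_000):
    z = two_prop_z(p1, p2, n)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n per group = {n:>11,}: z = {z:.2f} ({verdict} at the 95% level)")

The same trivial 0.01-point gap goes from nowhere near significant to "significant" purely because the sample got big enough.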
It is conceivable that there is a minuscule (but significant, given a large sample size) difference based on the sign you were born under. Bippy the Beardless has already provided one possible reason. Others may relate to your early upbringing (being a winter baby or a summer baby). If you are only looking at minuscule differences, the reason for the difference may not be immediately evident.
No. 0.01%, or any similar number, would not be statistically significant. It wouldn’t come close to covering the margin of error for any reasonably sized sample (and the larger the sample size, the smaller the sampling error).
0.01% was just an arbitrary choice. So, call it 1% and it still remains pretty irrelevant in the grand scheme of things. Did you read the article? It’s not often you see a sample size of 100,000 used in a statistical analysis!
One: that is why I said I think we’d probably find other methodological errors if we looked at the study. For example: if you wanted to determine the average white American’s opinion of black people and only sampled members of the KKK, you’d wind up with nearly 100% of respondents having a negative opinion. While that study may be valid for determining the average opinion of KKK members, it is not valid for the general white population, no matter the sample size.
I’m saying that a similar mistake (or another methodological error) is the reason for the results of the study mentioned in the OP. But if done properly the larger the sample size the more valid the study.
Two: 1% still wouldn’t be statistically significant in most cases, as it still wouldn’t cover the margin of error. And even if it did, it wouldn’t necessarily mean anything. If 51% of all speeding tickets were given to men and 49% to women, with a 0% margin of error or sampling error, would that really say men are worse drivers than women?
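For what it’s worth, here’s a rough Python sketch of how that margin-of-error arithmetic plays out for a 51/49 split (the sample sizes are arbitrary; 1.96 is the usual 95% z value):

import math

# 95% margin of error for an observed proportion near 50%.
p = 0.51
for n in (1_000, 10_000, 100_000):
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    verdict = "clears" if abs(p - 0.50) > moe else "falls inside"
    print(f"n = {n:>7,}: margin of error = +/-{moe * 100:.2f} pts "
          f"(a 1-point gap {verdict} the margin)")

Note how the 1-point gap only starts clearing the margin around n = 10,000, and even then it’s still just one point, which is exactly why significance alone wouldn’t prove men are worse drivers.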
You go first… ::bats eyelashes
Anyhoo, I am a Libra and while I have had no tickets, I have made three claims for repair of my right front corner. In six years of driving. And that’s just the accidents that I claimed. :smack: :smack: :smack:
Me, I’m a big fan of the Volkswagen Virgo. The only downside is that the first time you fill the tank, inserting the pump nozzle makes the car bleed all over you.
With a large enough sample size, any number of statistically significant phenomena could be found. I forgot that statistically significant doesn’t automatically mean anything in the social world (and ironically I said that in point two of my last post). In my defense I’d like to mention that this makes a difference in the soft sciences (my background) versus the hard sciences: in the soft sciences, outside factors can more easily explain away statistical significance.
I still stand by my assertions regarding methodology though.
You have a valid point buried somewhere in there, but not the way you put it. Several people, not just you, are making some invalid statements about how statistics work. It is 100% false that differences between groups become easier to find as the sample size grows. As stated earlier, it is just the opposite, which will play into your point in just a second. Saying that something is statistically significant already takes all these factors into account in deciding whether the result is significant or not. Significant in this context means that the differences are unlikely to be due to chance; the most common thresholds are for the statistical test to say it is 95%, 97%, or 99% certain that the difference reflects the population as a whole.
However, to your point, statistical significance tells us nothing about the size of the difference. As the sample size increases, the size of the difference between groups required to meet each significance threshold decreases. That means statistical tests can pick up tiny differences between groups and report them as real group differences (and they may be), even though the difference is a tiny percentage. That can be very misleading, and it is actually a problem in the way much of science is reported in the journals and even in the popular press.
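A small Python sketch makes the shrinking threshold concrete (two-sided z = 1.96, equal groups, and a 10% baseline rate that I simply assumed for illustration; the "smallest gap" here is the gap that just reaches z = 1.96):

import math

# Smallest gap between two proportions that just reaches the 95% threshold.
p = 0.10
for n in (1_000, 100_000, 10_000_000):
    min_gap = 1.96 * math.sqrt(2 * p * (1 - p) / n)
    print(f"n per group = {n:>10,}: smallest 'significant' gap ~ "
          f"{min_gap * 100:.3f} percentage points")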
There was a point earlier about correlation not equalling causation, which is a valid criticism in the vast majority of cases, but not this one. Insurance companies are in the correlation business. Their actuaries care what group you fall into, not the deeper subtleties of your being. If they find that red cars get into more accidents, they have a right to charge a driver with 30 accident-free years more if he drives one, because they work on groups.
There are some easy ways that a result like this could be incorrect even if most of the methodology was done correctly. If you notice above, statistical tests determine significance at a certain level (e.g., 95%, 97%, or 99%). If a researcher sets the threshold at 95%, that still means that 1 in 20 significant results does not reflect a true difference between populations. That is why research is replicated several times before conclusions like this are announced. A related issue is that a sneaky statistician can grab a huge data set and start analyzing it on whims; eventually, the statistical test will report a difference based on chance, just by sheer volume. That is a poor research technique, and it is one reason that good researchers form good hypotheses before they test things.
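That 1-in-20 failure rate is easy to demonstrate by brute force. This little Python simulation (pure noise in, with an arbitrary seed and arbitrary sizes) shows roughly 5% of tests coming up "significant" even though there is nothing there to find:

import random
import statistics

# Run many two-sample tests on pure random noise and count how often
# the difference comes out "significant" at the 95% level.
# Every hit is a false positive by construction.
random.seed(42)  # arbitrary seed, illustrative only
N, TRIALS = 1_000, 2_000
hits = 0
for _ in range(TRIALS):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    se = ((statistics.variance(a) + statistics.variance(b)) / N) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    if abs(z) > 1.96:
        hits += 1
print(f"{hits}/{TRIALS} 'significant' results = {hits / TRIALS:.1%} (expect about 5%)")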