This Just In: Teenage Virgins . . . NOT! Elsewhere, Pope Suspected to Be Catholic.

Apparently, “virgin” is the new “skank.”

I think the data is a bit misleading. Firstly, there is in fact a difference between those who pledged and those who did not - just not a statistically significant one. There is a fundamental misconception among many people that a statistically insignificant difference equates to no difference, but this is not the case. It merely means that the study has failed to establish a difference, not that the study has established that there is no difference. In this case, the STD rates are clearly lower for pledgers than for others (other than among Asians, incongruously), but I guess the sample size wasn’t large enough to make the difference statistically meaningful.
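To make the sample-size point concrete, here’s a quick toy simulation (all rates and group sizes are invented, not taken from the study): a real difference between the groups can easily go undetected at alpha = .05 when the subgroup is modest.

```python
# Toy power simulation (hypothetical numbers, NOT the study's data):
# even when a real difference exists between two groups, a modest
# sample often fails to produce a statistically significant result.
import random, math

def two_prop_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
true_rate_pledgers, true_rate_others = 0.18, 0.22   # assumed "real" difference
n_per_group = 300                                    # assumed subgroup size
trials, significant = 5000, 0
for _ in range(trials):
    x1 = sum(random.random() < true_rate_pledgers for _ in range(n_per_group))
    x2 = sum(random.random() < true_rate_others for _ in range(n_per_group))
    if two_prop_p_value(x1, n_per_group, x2, n_per_group) < 0.05:
        significant += 1

print(f"A true difference exists, but only {significant / trials:.0%} of "
      f"samples of {n_per_group} per group detect it at alpha = .05")
```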

Also, according to this link:

This is superficially at odds with the rest of the data, which show a lower incidence of STDs among pledgers. What this suggests to me is that there is a disproportionate number of pledgers in areas which have higher rates of STDs in general. IOW, that it’s not the high rate of STDs among pledgers that causes the high STD rates, but rather that the pledge is being more heavily (or successfully) promoted in areas that already have higher incidences of STDs. If this is correct, then the entire purported point of the study is invalidated, because pledgers will be people who are already more predisposed (on average) to have STDs than the general population - ergo the fact that they even drew even would be a success for abstinence.
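Purely to illustrate that confounding argument (every number below is made up), here is how area-level differences in base rates can make pledgers look worse overall even when they do better within each individual area:

```python
# Made-up numbers, just to illustrate the area-level confounding argument:
# if pledgers are concentrated in higher-STD areas, they can show a HIGHER
# overall rate even though the pledge lowers risk within every area.
areas = {
    #            (pledgers, pledger STD rate, non-pledgers, non-pledger rate)
    "high-STD":  (800, 0.24, 200, 0.30),
    "low-STD":   (200, 0.08, 800, 0.10),
}

def overall(idx_n, idx_rate):
    """Aggregate STD rate across areas for one group."""
    cases = sum(v[idx_n] * v[idx_rate] for v in areas.values())
    total = sum(v[idx_n] for v in areas.values())
    return cases / total

print(f"pledgers overall:     {overall(0, 1):.1%}")   # higher overall...
print(f"non-pledgers overall: {overall(2, 3):.1%}")   # ...despite lower rates in each area
```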

(One other point - it is unclear to me that all the pledgers were virgins - and free of STDs - at the time they took the pledge - if they were not it might seriously skew the study).

Besides all of the above, it would seem that at the very least the study shows that abstinence is at least as effective as other methods. With one caveat - it may be that some kids are exposed primarily to abstinence teaching but do not take the pledge - they may be at the most risk of all. Some data here would be helpful. Another caveat is that I don’t know the slightest thing about sex education (no data needed here, though :))

You seem to have a tenuous grasp on statistics, my man. There are two options: that the sample size wasn’t large enough, or that there isn’t a difference. You make it sound like there is always some difference, but the sample size just isn’t big enough to catch it.

If you don’t show a significant difference at a reasonable level of confidence (e.g., α = .05), excepting gross bias or inadequate sample size, there is no difference.
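For what it’s worth, here is roughly what that test looks like in code - a sketch on invented counts (not the study’s numbers), assuming statsmodels is available:

```python
# Rough sketch of the alpha = .05 decision rule being argued about,
# using invented counts (NOT the study's data). Assumes statsmodels is installed.
from statsmodels.stats.proportion import proportions_ztest

std_cases = [55, 70]       # hypothetical: STD cases among pledgers, non-pledgers
group_sizes = [300, 300]   # hypothetical subgroup sizes

z_stat, p_value = proportions_ztest(std_cases, group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
print("significant at alpha = .05" if p_value < 0.05
      else "not significant at alpha = .05")
```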

Be careful of drawing conclusions without data! :wink:

I can’t read that article without a New York Times account. Could you post a couple of the salient points? (Post small enough quotes that you don’t violate Fair Use laws, of course!)
Oh, wait, this is the Pit. ahem Could you post a couple of the fucking salient points?

Here’s a link to the version of the story posted on Yahoo news.

But, the sample size is accounted for when you set the level of confidence.

We are not really talking about this, right? Given the divorce rate in this country, do we really expect kids to live up to a promise not to do what comes naturally? Marriage is supposedly one of the most solemn pledges humans can make, yet over 50% of adults who make the pledge later change their minds. It seems pretty simple: a sexually transmitted disease is transmitted by sexual intercourse, and it does not matter what some kid pledges at the behest of some group with an agenda. It’s all well and good to sit around with some adult fanatics with a mission, to sing Kumbaya and drink Kool-Aid in a brightly lit room. It is quite another thing to be face to face with a willing partner in the back seat of a car parked in an out-of-the-way place. So no, I am not a bit surprised that the difference is negligible. What would surprise me is if the difference were significant.

Not confidence intervals, probability levels.

Just a point about statistics…

Even if it were established that there was a significant difference in the incidence of STDs between teenagers who have and have not made pledges of virginity (and this has clearly not been established, so it’s a hypothetical, but anyway…) - this would not prove that the pledge of virginity helped some teenagers to avoid STDs. Because there is always the possibility (and, IMHO, quite a likely one) that those who made the pledge were more likely to remain virgins anyway (for religious or social reasons)! That’s the problem with trying to use comparative statistics on self-selected populations.
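A toy simulation of that self-selection problem (all numbers invented): here the pledge has zero causal effect on anyone’s risk, yet pledgers still show a markedly lower STD rate, simply because lower-risk teenagers pledge more often.

```python
# Toy self-selection simulation (all numbers invented): the pledge itself
# does nothing here, yet pledgers show a lower STD rate because the kind
# of teenager who pledges was lower-risk to begin with.
import random

random.seed(2)
N = 100_000
pledger_cases = pledger_n = other_cases = other_n = 0

for _ in range(N):
    low_risk = random.random() < 0.5                           # underlying disposition
    pledges = random.random() < (0.6 if low_risk else 0.2)     # low-risk teens pledge more
    std_risk = 0.05 if low_risk else 0.25                      # risk depends only on disposition
    has_std = random.random() < std_risk
    if pledges:
        pledger_n += 1
        pledger_cases += has_std
    else:
        other_n += 1
        other_cases += has_std

print(f"pledgers:     {pledger_cases / pledger_n:.1%} STD rate")
print(f"non-pledgers: {other_cases / other_n:.1%} STD rate")
# The gap is entirely selection; pledging never entered the risk calculation.
```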

Dani

IMHO the “pledging abstinence is a bad idea because you don’t stick to it, and by then you’ve ignored all the lessons about contraceptives and STDs” idea is probably right, as that article seems to suggest. However, the statistics they report seem of no use in deciding that - you’d have to see the original study.

Interesting, but knowing how many times would be useful - else for all we know pledgers could have had sex once, as opposed to twice a week for non-pledgers. That’d be a good risk in my book.

Very interesting. What might be useful to know further is (1) whether the difference is statistically meaningful and (2) which group was higher :open_mouth: Also, it has to be viewed in terms of the number of times you have sex.

This seems fairly unambiguous, however.

You see what I mean?

Not here to argue about statistics, but rather to point out that one of the ways the pledge is pushed is the claim that condoms don’t stop STDs and aren’t reliable as a form of contraception.

Now, if my pastor had been telling me every weekend for the past several years that condoms have “pores” in them that are big enough to let viruses through, I wouldn’t put using one that high on my list of priorities should I “slip”.

Also, you have to have some premeditation to get hold of condoms; those who break their pledge are much more likely to justify it to themselves as a “heat of the moment” thing and, again, be less likely to practice safer sex.

Thus, I would expect the STD and pregnancy rates to be higher in those teens who had broken their pledge, rather than those who had sat down, got the facts and made an appointment with their local doctor, free clinic or even bathroom vending machine.

No, actually you have a tenuous grasp on language, my man (or woman, hermaphrodite, whatever). When someone says - as I did - that “the study has failed to establish a difference”, it does not imply that “there is always some difference” - if it did, the statement would not be true. What it does mean is that it cannot be known (based on this study) whether there is a difference, since no difference has been established by this study. What is significant, though, is that it has not been established that there is no difference - there may well be one that was not captured by the study because the difference was too small for the sample size.

Good point. If you were paying attention to my post, or capable of understanding it, you would note that I based my supposition on the incongruity of two sets of statistics emerging from the study. Not conclusive, of course, but suggestive, as I said.

An NYTimes “account” is free and is without strings. I misspelled my name, on purpose, when I signed up NINE YEARS AGO and have NEVER gotten spam with that spelling in it, nor have I ever gotten a single piece of email from the NYTimes.

It’s really one of the great deals of the century.

I’m not quite sure what you are trying to say here, hombre. You seem to be implying that if a study finds no difference, you can’t ever say that there is no difference because the sample size may not have been big enough?

The sample size was 12000. The difference was not significant at a reasonable alpha level. The alpha was most likely .05. So there is a 5% chance that the difference was statistically significant. You cannot state based on that data that there is a difference, just because it looks like a difference is present. You can’t just assume that the difference fell in the 5% range that wasn’t captured. Are the statistics available on that page? How do we know they didn’t use alpha equal to .001, leaving a .1% chance that a significant difference was present?

Bullshit. If you don’t have data, you cannot make those assertions. Well, I found that black people are clearly a different species, but I didn’t find a significant difference. I guess my sample size wasn’t big enough.

Well, your posts suggest that you are Karl Rove. If you don’t have data, don’t draw conclusions.

Sometimes you can, sometimes you can’t. If a study finds two groups to be identical with regards to some parameter, you can assert with various degrees of confidence that the difference cannot be greater than certain levels (these depend on sample size & variance). If it finds a difference of X, the confidence levels apply to X plus or minus these levels.
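As a sketch of that point (counts invented), here is a 95% confidence interval for a difference in proportions; notice how the bounds around the observed difference tighten as the sample grows:

```python
# Sketch of the point above (invented counts): a 95% confidence interval
# around an observed difference in proportions; its width shrinks with
# sample size and depends on the variance of the two samples.
import math

def diff_ci(x1, n1, x2, n2, z=1.96):
    """95% CI for (p1 - p2) using the unpooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

for n in (200, 2000, 12000):                 # same underlying rates, growing sample
    lo, hi = diff_ci(round(0.18 * n), n, round(0.22 * n), n)
    print(f"n = {n:>6} per group: difference lies in [{lo:+.3f}, {hi:+.3f}]")
```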

This is either an error or a misuse of terminology. There is no chance at all that the difference was statistically significant - statistical significance is a measure of this particular test, and clearly the difference was not statistically significant. What I am discussing is the likelihood that there is in fact an actual difference between the two populations (as opposed to the samples), and the extent to which the test is a valid predictor of this.

Exactly, as I’ve said repeatedly.

This repeats your error above. And frankly I am inclined to think that your remarks are based on an ignorance of statistics rather than a misuse of terminology (a bit ironic, in light of your opening remarks).

You may have misunderstood me here. I did not assert that the rates for the population at large were lower for pledgers - that is the whole point that I am saying remains unknown. What I said was that “in this case”, meaning in this particular study and sample, a difference was found, from which it follows that the lack of statistical significance is due to a relatively small sample size.

Your posts suggest that you have a very weak understanding of the subjects about which you comment. Again, my conclusions were drawn from data.

Hey, I read about this back in 2001.

How often do they do these studies? I keep reading about them every few years. And they find the same things.

"The average delay incurred by the virginity pledge, reports the study, tends to be about 18 months – marriage appears not to be a factor. And then there’s the part about how the pledge works best among 15- to 17-year-olds (not so well among 18-year-olds) and that it helps if the pledger is religious, of Asian ancestry, in a romantic relationship or less advanced in pubertal development. (Pause here for the adolescent – pledger or non – to utter, “Duh.”)

And finally – whoops! – when pledgers break their pledges they have a tendency to have unsafe sex. Researchers suggest that since the pledgers promised not to have sex, when they finally do, they haven’t done much planning and are unlikely to use contraception. (Another favorite footnote here: “That pledgers who have sex are likely to be contraceptively unprepared is to be expected, for it is hard to imagine how one could both pledge to be a virgin until marriage and carry a condom while unmarried.”) "

Misuse of terminology. My point is that the likelihood of a difference being present is only as large as the level of certainty set a priori, which is very small.

:confused:

Completely incorrect - actually you’ve just confirmed your complete ignorance of the basic fundamentals of statistics. As such, it would be worthwhile for you to pay close attention to the following.

Suppose a test is run comparing two populations, using a 95% confidence level and finds a difference of 50 points. Suppose the statistician informs us that the 50 point difference is not statistically significant. This does NOT mean, as you claim, that the likelihood of there being a difference between these two populations is 5% or less. What it means is that ASSUMING THERE IS IN FACT NO DIFFERENCE BETWEEN THE POPULATIONS, the likelihood of the 50 point difference appearing in this particular test due to random fluctuation is greater than 5%. For this reason the difference is said to be statistically insignificant, and the test cannot be said to show that a difference exists between the two underlying populations - because there is a greater than 5% chance that the 50 point difference between the two samples came about through random fluctuation.
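If it helps, here is that hypothetical as a small simulation (all numbers are placeholders): assume the two populations really are identical and count how often sampling noise alone produces a 50-point gap. That frequency is what the test is reporting on, not the probability that a difference exists.

```python
# Simulation of the hypothetical above: ASSUME the two populations are truly
# identical, then ask how often random sampling alone produces a difference
# at least as large as the one observed. That frequency is the p-value,
# not "the probability that a difference exists." (All numbers invented.)
import random

random.seed(3)
true_mean, true_sd, n_per_group = 500, 400, 100   # placeholder population
observed_diff = 50                                # the hypothetical 50-point gap
trials = 20_000
as_large = 0

for _ in range(trials):
    a = [random.gauss(true_mean, true_sd) for _ in range(n_per_group)]
    b = [random.gauss(true_mean, true_sd) for _ in range(n_per_group)]
    if abs(sum(a) / n_per_group - sum(b) / n_per_group) >= observed_diff:
        as_large += 1

print(f"With NO real difference, {as_large / trials:.1%} of samples still "
      f"show a gap of 50+ points purely by chance")
```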

However the test, which measured a 50 point difference, can obviously not be said to show that there is no difference with any degree of certainty at all. Understand this.

(Conversely, if the test showed that there was no difference at all between the two samples, this would not typically show that there is no difference at all between the two populations. It would merely say that we can apply some confidence level to the likelihood of the difference being greater than a certain amount. For example, there might be a 95% confidence level for the difference being less than plus or minus 20 points. This means - similar to the above - that assuming that the real difference is in fact greater than 20 points there would be a less than 5% chance of this particular test showing a 0 point difference due to random fluctuation.

But that is not the situation here - in this case the test did in fact measure a difference).

I’ve already addressed your subsequent post.

Indeed, quite right. If you care to take my word for it, I’m not completely ignorant of basic statistics. I actually have a concentration in biostatistics; I just don’t deal with it on a theoretical basis that often. It is inexcusable that I so egregiously confused the basic principles. :o

Does not the confidence level indicate the probability that the sample is representative of the population? In other words, how certain you can be that your data are representative of the population?