You selfish fuck! (Man with possible HIV cure in his veins refuses to undergo tests)

You could use PHP tags, <PHP></PHP>, with brackets instead.

As for everything else, I apologize, because I am not attempting to hide from the debate. I’ve been busy, but will address the arguments expressed in due time. However, I do not feel that I need to retract my stance, but if doing so is necessary until I can better compose myself, so be it. Sorry again, all.

By the way, this tangent is just fine. I certainly don’t mind it when you’re in the Pit and a debate breaks out. :wink: I’m trying to find the time to really digest this info, but I still can’t wrap my mind around it. We’ll see if I have something to offer other than a " :confused: " in the next few days, if not, I’ll just read what y’all have to say.

Let’s look at a few examples.

Suppose the accuracy rate of the test is 99% both ways: the test will give you correct results 99% of the time, whether you’re really positive or really negative.

Suppose we run the test on 1000 people, 500 of whom are positive. Of the 500 positive people, 495 are correctly told they are positive and 5 are mistakenly told they are negative. Of the 500 negative people, 495 are correctly told they are negative, but 5 are mistakenly told they are positive. So, if you are told you are negative, what are the odds that you aren’t really negative? 500 people were told they were negative (495 + 5), and 5 of those results are mistakes, so 5/500, or .01. If you’re told you’re positive, what are the odds that you aren’t really positive? Again 5/500, or .01.

Now, let’s look at a test group of 1000 people where 100 people are positive. Of the 100 positive people, 99 are correctly told they are positive and 1 person is mistakenly told they are negative. Of the 900 negative people, 891 are correctly told they are negative and 9 are mistakenly told they are positive. So if you are told you are negative, what are the odds that you aren’t really negative? 892 people were told they were negative (891 + 1), and 1 of those is a mistake, so 1/892, or 0.00112: about 1/10 of 1 percent. If you’re told you’re positive, what are the odds that you aren’t really positive? 108 people were told they were positive (99 + 9), and 9 of those are mistakes, so 9/108, or .0833: about 8 percent. Even though the test is 99 percent accurate, the share of positive results that are wrong is much greater than 1 percent.

Now let’s see what happens in the extreme case: 1000 tests, and only 1 person is really positive. That person is told they are positive (well, .99 people are told they are positive; we could make it come out to an integer by testing 100,000 people, but that’s not important), and .01 people are mistakenly told they are negative. Of the 999 people who are negative, 999 * .99 = 989.01 are correctly told they are negative and 9.99 are mistakenly told they are positive. If you’re told you’re negative, what are the odds that you’re really positive? 989.02 people are told they are negative and .01 of those are really positive, so .01/989.02, or .0000101. Vanishingly small. But what about the positive results? 10.98 people were told they were positive, but only .99 of those actually are: 10.98/.99 is about 11.1, meaning if you’re told you’re positive, only about 1 time in 11 is that actually true. This test, which is 99% accurate in determining whether you’re positive or negative, gives far more false positives than true positives!
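The three worked examples above can be checked in a few lines of Python (a quick sketch; the function name and layout are mine, not anything from the thread):

```python
def predictive_values(n, n_positive, accuracy=0.99):
    """Expected outcomes for a test that is `accuracy` correct both ways."""
    n_negative = n - n_positive
    true_pos = n_positive * accuracy          # correctly told positive
    false_neg = n_positive * (1 - accuracy)   # mistakenly told negative
    true_neg = n_negative * accuracy          # correctly told negative
    false_pos = n_negative * (1 - accuracy)   # mistakenly told positive
    ppv = true_pos / (true_pos + false_pos)   # chance a "positive" is real
    npv = true_neg / (true_neg + false_neg)   # chance a "negative" is real
    return ppv, npv

for n_positive in (500, 100, 1):
    ppv, npv = predictive_values(1000, n_positive)
    print(f"{n_positive:4d} truly positive: PPV = {ppv:.4f}, NPV = {npv:.6f}")
```

For 500 positives both values come out .99; for 100 positives the PPV drops to about .92 (the 8% of positives that are wrong); for 1 positive the PPV is about .09, the 1-in-11 figure above.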

We’ll also get different numbers if the test isn’t symmetrically accurate; the test might be 99.99% accurate at detecting positive people and 99% accurate at detecting negative people, or vice versa. But if the number of positive people is small relative to the sample size, we’re still going to get more false positives than true positives.

This is the reason you see science/health stories arguing about whether early screening for things like prostate cancer is worthwhile. If the incidence of the disease is really low in the tested population and the test is only 99% accurate, you’ll have lots and lots of false positives. You can’t treat everyone who tests positive, even with a 99% accurate test, because you know that most of those positives are false. You need a second, completely independent test to pick out the true positives from the sea of false positives, and if that test doesn’t exist (or is extremely expensive or invasive), then screening for rare diseases is useless even with very accurate tests.
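The value of that second independent test can be sketched numerically: a positive result raises your prior, so you feed the post-first-test probability in as the prior for the second test (this assumes the two tests really are independent, as the paragraph above requires; the numbers are illustrative):

```python
def ppv(prior, sensitivity, specificity):
    """Bayes: P(really diseased | positive test), given a prior probability."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

after_one = ppv(0.001, 0.99, 0.99)      # ~.09: mostly false positives
after_two = ppv(after_one, 0.99, 0.99)  # second independent test: ~.91
print(after_one, after_two)
```

One positive result means roughly a 1-in-11 chance of disease; two independent positives flip that to roughly 10-in-11.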

Here’s another point that might make it even more clear.

Suppose the test is 99% accurate, and you give the test to 1000 people, but you know in advance that none of them are really positive. Then you’d get NO true positives, but 10 false positives, and you’d know that each person who got a positive test was actually negative. Even though the test is 99% accurate, if you were the one who got a mistaken test you have a 0% chance of having had an accurate test.
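That zero-prevalence case, spelled out (a trivial sketch, floating point aside):

```python
n, accuracy, n_truly_positive = 1000, 0.99, 0

# 1% of the 1000 truly-negative people get a mistaken positive result.
false_positives = (n - n_truly_positive) * (1 - accuracy)  # about 10
true_positives = n_truly_positive * accuracy               # none

# Every positive result in this group is a mistake, so the PPV is 0.
ppv = true_positives / (true_positives + false_positives)
print(false_positives, ppv)
```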

Since the test is already administered and we know by other undefined means the actual status of the test subjects, we know exactly who had an incorrect test. So everyone who had a correct test is 100% guaranteed to have had a correct test and everyone who got an incorrect test is 100% guaranteed to have had an incorrect test.

Knowing the incidence of the disease in the sample before we administer the test gives us extra information that lets us identify which test subjects are more likely to be in the set of subjects that got an incorrect test.

This should do it:



Screening test result          True status                       Total
                               Diseased        Non-diseased
----------------------------------------------------------------------
Positive                       *a*             *b*               *a + b*
Negative                       *c*             *d*               *c + d*
----------------------------------------------------------------------
Total                          *a + c*         *b + d*           *a + b + c + d*

*a* = true positives
*b* = false positives
*c* = false negatives
*d* = true negatives
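The standard screening statistics fall straight out of that 2x2 table. A sketch using the a/b/c/d labels above (the function name is mine):

```python
def screening_stats(a, b, c, d):
    """a = true pos, b = false pos, c = false neg, d = true neg (as in the table)."""
    return {
        "sensitivity": a / (a + c),  # P(test positive | diseased)
        "specificity": d / (b + d),  # P(test negative | not diseased)
        "ppv":         a / (a + b),  # P(diseased | test positive)
        "npv":         d / (c + d),  # P(not diseased | test negative)
    }

# The 100-in-1000 example from earlier: a=99, b=9, c=1, d=891
print(screening_stats(99, 9, 1, 891))
```

Sensitivity and specificity both come out .99, but the PPV is only about .92: the table makes it obvious that PPV depends on the column totals (prevalence), not just on the test.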


Hurricane hit Florida

Orange growers perked up when they heard this news.

On that note: (I presume you read about the same nutcase scientist down here trying to research this stuff)
Doesn’t the practise also damage the vaginal wall, increasing the chance of contracting HIV, or, more likely, other nasties that aren’t killed by acid?
A lot more than lemon juice will kill HIV in vitro; the problem is, what will kill it in vivo?

Thanks for the coding help waterj2.

I think there may be a problem with people looking at sensitivity from within a contained population sample; in fact, it’s measured against the general population. Anyway, the points given are correct for the samples noted, but they aren’t relevant when we discuss false positives in general. In that case, we’re talking about rates measured against the total population with the disease (or, more likely, the estimated total population with the disease), and studies to determine sensitivity are done only on known positives.

All of this is really irrelevant anyway, because the tests done on this guy were guaranteed to have been confirmed by a further Western blot (WB) test, which pretty much eliminates any doubts. That’s why this one is so strange.

As for the citric acid issue, that sounds like Nonoxynol-9, which is a spermicide that is/was commonly used to avoid pregnancy (check a condom box that has spermicide listed on it, chances are good it’s N-9 they’re using). The problem is that N-9 breaks down the vaginal walls, so while it may initially kill sperm, prolonged use (say, by prostitutes) can cause vaginal breakdown that facilitates transmission. Best not to try oranges and just stick to condoms and selective partnering.