# Is the logic behind this HIV myth actually correct?

I remember reading a while back that the chances of a positive HIV test result for a straight male being false were actually 50%, despite an overall false positive rate of something like 1 in 20,000*.
Since:
A: The chances of a false positive on an HIV test are 1 in 20,000*.
and
B: The chances of a straight male actually having HIV are 1 in 20,000*.
therefore
C: You have an equal chance of either A or B describing your positive test result.

\* I’m assuming the numbers from this myth are either outdated, or were incorrect from the start.

Sounds reasonable. Taking the figures as right for the sake of the argument, if you tested about 20,000 men you would expect to randomly produce a false positive for one of them. You would also expect one of them to have HIV and produce a true positive for him. No-one else will test positive. It therefore follows that when someone tests positive for HIV, there is a 50% chance the test is wrong. It’s a Bayesian thing.

Suppose that 400,000,000 straight men are tested for HIV, and let’s assume that your numbers are correct. Of those 400,000,000 straight men, 20,000 of them will be HIV positive and the rest will be HIV negative. You don’t give the chances of a false negative, so let’s assume that the probability of a false negative is also 1 in 20,000. Of the 399,980,000 who are HIV negative, 399,960,001 will show up on the test as true negatives and 19,999 will show up as false positives. Of the 20,000 who are HIV positive, 19,999 will show up as true positives and 1 will show up as a false negative. So in that case, half of the 39,998 who test positive are indeed true positives and the other half are false positives.
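The bookkeeping above is easy to check with a short script. This is just a sketch of the scenario in the post, assuming 1-in-20,000 rates for prevalence, false positives, and false negatives:

```python
# Sketch of the 400,000,000-men scenario, with assumed rates of
# 1 in 20,000 for prevalence, false positives, and false negatives.
population = 400_000_000
infected = population // 20_000              # 20,000 have HIV
uninfected = population - infected           # 399,980,000 do not

true_positives = infected - infected // 20_000    # 19,999 (one is missed)
false_negatives = infected // 20_000              # 1
false_positives = uninfected // 20_000            # 19,999
true_negatives = uninfected - false_positives     # 399,960,001

# Of everyone who tests positive, what fraction actually has HIV?
ppv = true_positives / (true_positives + false_positives)
print(ppv)  # 0.5 -> exactly half the positives are true positives
```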

But there’s no necessary reason for a test to have the same probability of false negatives as false positives. Suppose instead that the probability of a false negative is 1 in 2, while the probability of a false positive remains the same. Then there will be 10,000 true positives as compared to 19,999 false positives. So the probability of a positive being a true positive is a bit more than one-third, not one-half. So it’s important to know the probability of false negatives as well as the probability of false positives.

Definitely plausible, depending on what the actual numbers are. See this article on positive predictive value for more information, or this article which discusses the concept in relation to HIV testing. The actual likelihood of being HIV positive after a positive screening test may well be much lower than 50%. Bear in mind that this only works if you randomly selected and tested someone from that population. If someone was tested because they were symptomatic or their partner had a positive test, their pretest probability is much higher.
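That pretest-probability point is the crux of positive predictive value: the same test gives wildly different PPVs at different prevalences. A small sketch (the rates here are illustrative placeholders, not real HIV-test figures):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same hypothetical test, different pretest probabilities:
low_risk = positive_predictive_value(1 / 20_000, 0.99995, 0.99995)
high_risk = positive_predictive_value(0.30, 0.99995, 0.99995)
print(low_risk)   # ~0.5  -> randomly screened low-risk person
print(high_risk)  # ~1.0  -> symptomatic person or partner of a known case
```

Nothing about the test changed between the two calls; only the pretest probability did.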

I don’t think that’s a good assumption. Screening tests are deliberately designed to have a very low false negative rate: they want to catch each and every real case that is tested. The tradeoff is usually a relatively high false positive rate. That’s why screening tests are usually followed up with a diagnostic test that has a much lower false positive rate.

I think this is important to repeat, lest people start thinking that all their doctors’ ordered testing is for naught. Screening and diagnostic testing are two different things. Screening tests are designed to be cheap, easy to use, and as non-invasive as possible, so that as many at-risk people as possible will be screened. The tradeoff here is usually in accuracy - that is, you get quite a few false positive results from a screening test, but very few false negatives.

**Diagnostic** testing is what your doctor does if you’ve got a positive result from a screening test. Diagnostic testing is usually more expensive, harder to do or have done to you, and may be more invasive and/or uncomfortable. But it’s also much more accurate.

It’s like if I wanted to find out if you watch Game of Thrones but I wasn’t allowed to ask directly. To create a screening test, I might say, “Winter is coming!” and see how people react. Very nearly *everyone* who watches Game of Thrones will recognize the phrase, so there’s a very low false negative rate. If you’ve seen it, you know it. It’s quick, it’s easy, and it’s noninvasive. But some people know the phrase and haven’t seen the show - they’ve heard it from their friends, or seen it mentioned in a review, or maybe they’ve read the books but not watched the show. Those people will also test “positive” with my screening test. Those are our false positives.

So now I take all those positives - true and false - and I do more specific *diagnostic* testing. I may ask if they have HBO or know someone who does, and that will help me determine if that person could have watched Game of Thrones. I may ask them to describe a scene in the show that’s not in the book, and that will weed out those who have only read the book. All that is more specific, and less likely to give me a false positive, but it’s also more “invasive” and more time-consuming/expensive.

> I don’t think that’s a good assumption.

Which was exactly the point I was trying to make. Knowing what the false positive rate is doesn’t tell you what the false negative rate is.

Rate of HIV infection = P(H) = 0.00005
Probability of not being infected = P(not H) = 1 - P(H) = 0.99995
Probability of true positive = P(+ | H) = ???
Probability of false positive = P(+ | not H) = 0.00005

Probability of positive test = P(+) = P(+ | H) * P(H) + P(+ | not H) * P(not H)

To calculate the probability of not having HIV given a positive test, P(not H | +):

P(not H | +) = P(+ | not H).P(not H) / P(+)

If we assume P(+ | H) is very close to 1 (few false negative results), then indeed P(not H | +) will be close to 0.5, or 50%.
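Plugging those figures straight into the formula confirms it. This is a sketch that assumes P(+ | H) = 1, i.e. no false negatives at all:

```python
# Bayes' theorem with the figures above, assuming P(+ | H) = 1.
p_h = 0.00005                # P(H): rate of HIV infection
p_not_h = 1 - p_h            # P(not H)
p_pos_given_h = 1.0          # P(+ | H), assumed ~1 (few false negatives)
p_pos_given_not_h = 0.00005  # P(+ | not H): false positive rate

p_pos = p_pos_given_h * p_h + p_pos_given_not_h * p_not_h
p_not_h_given_pos = p_pos_given_not_h * p_not_h / p_pos
print(p_not_h_given_pos)  # just under 0.5 -> close to a 50% chance the test is wrong
```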

I love this analogy, and am copying it down to show my colleagues. Just so you know.

Thanks Wendell for explaining the logic, Schmendrick for the references and Ticker for showing the math.

So, the conclusion I’m getting is that since we don’t have the rate for false negatives (whether that 1-in-20,000 person who IS positive will test as positive), the logic is faulty or incomplete. It sounds kind of counterintuitive, but one of the links Schmendrick provided shows a logic table that helps for figuring out the possibilities (a 2-dimensional table, with test results on one side and the actual condition on the other).
Incidentally, ticker, there’s a typo in the second-to-last line: "P(+ | not H).P(not H) / P(+)". Is that period supposed to be a /, or a multiplication dot (used in algebra textbooks instead of ×)?

I’ve also heard this argument, but when I heard it, it specified “person not in any risk group for the disease”. In which case the numbers are probably more or less correct, too.