A doctor's livestream said that serological tests are no better than a coin flip if the overall infection rate is 1 percent, even assuming the test is 99 percent sensitive and specific.
I can understand that in the context of nationwide studies (300-million-plus subjects), but does that mean my individual result would not be 99 percent accurate, as the test is supposed to be?
Your test will still be 99% accurate. If you are infected, there’s a 99% chance the test will correctly report that you are infected. But we don’t know that you are infected. Most people (99% of us) aren’t infected. So a positive test of you (absent some other indication - like exposure or symptoms) doesn’t give much confidence that you actually are infected.
Look at it this way. If there are a thousand people in the population and everyone takes the test, 10 people will be detected correctly (I'll assume there are no false negatives) and about 10 people will be false positives. So for those ~20 people who had positive results, there's only a 50/50 chance (a coin flip) that they actually are infected.
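The arithmetic above can be sketched in a few lines. These are the thread's illustrative numbers, not real test data:

```python
# Worked version of the 1,000-person example; all numbers are illustrative.
population = 1000
prevalence = 0.01     # 1% of people are infected
sensitivity = 0.99    # the example assumes essentially no false negatives
specificity = 0.99

infected = population * prevalence               # 10 people
healthy = population - infected                  # 990 people

true_positives = infected * sensitivity          # ~10
false_positives = healthy * (1 - specificity)    # ~10

# Among everyone who tests positive, the fraction truly infected:
chance_really_infected = true_positives / (true_positives + false_positives)
print(round(chance_really_infected, 2))  # 0.5 — the coin flip
```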
TL;DR: check out the Wikipedia page on Bayes' Theorem.
Imagine that ten thousand people are each issued a card, labelled either ‘T’ (true you have corona) or ‘F’ (false). If one percent of the population have it, then 100 people have ‘T’ on their card, and 9900 people have ‘F’. You can’t see what’s on your card. But, with no other information, it’s pretty likely that it’s got an ‘F’ on it.
Now, someone comes along and scribbles either ‘P’ or ‘N’ (positive or negative test) on the back of it. They’re pretty good at face-down-card-identifying, but not perfect. So 99% of the ‘T’ cards have a P on them, and 99% of the ‘F’ cards have an N on them. This is the full distribution of cards.
TP - 99
FN - 1
FP - 99
TN - 9801
You see that your card has a P on it. What that means is … your chance of having a T on the other side of your card has gone WAY up. It used to be only one in a hundred - now it’s 50/50. Big jump! But … not *enough* of a jump to get the chances any higher than 50/50. Because your chance of being T was so very, very low at the start, you’d need a test even more accurate than 99% to push your chance of truly having a T, after seeing a P result, any higher than that.
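In standard notation (TP = true positive, FN = false negative, TN = true negative, FP = false positive), the card tally can be checked directly:

```python
# Tally of the 10,000 cards: 100 'T' cards, 9,900 'F' cards, 99% marking accuracy.
cards = {"TP": 99, "FN": 1, "TN": 9801, "FP": 99}

# Before looking at the back: chance your card is a 'T'
prior = (cards["TP"] + cards["FN"]) / sum(cards.values())   # 100/10000 = 0.01

# After seeing a 'P': only the TP and FP cards are still possible
posterior = cards["TP"] / (cards["TP"] + cards["FP"])       # 99/198 = 0.5
print(prior, posterior)
```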
Let’s say no one actually has the virus. If you test everyone, a test with 99% accuracy (specificity) will incorrectly tell 1% of them that they are positive. In this scenario, the test is totally useless - every positive result is false.
Let’s say instead that only 1% of the population has the virus. The same test with 99% accuracy will correctly detect 99% of that 1% of the population - but just as in the case above, it will also incorrectly tell 1% of the remaining 99% of the population that does not have the virus that they’re positive. And the result is that ~2% of the population has been told they have the virus, but for any given individual it’s a 50/50 shot whether they’re actually positive or whether the test made an error.
So, the test’s accuracy itself is not affected by the population - but the degree of confidence you can have in the results is. Really what’s happening here is that 99% accuracy is just not good enough to yield high confidence in a positive result when the overall prevalence of the disease you’re looking for is low.
The technical term for this quantity - the chance that a person receiving a positive result from the test actually has the disease - is Positive Predictive Value (PPV).
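As a rough sketch (assuming the same hypothetical 99%-sensitive, 99%-specific test), PPV can be written as a small function of prevalence:

```python
# PPV of an assumed 99%-sensitive, 99%-specific test at various prevalences.
def ppv(prevalence, sensitivity=0.99, specificity=0.99):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for p in (0.001, 0.01, 0.10, 0.50):
    print(f"prevalence {p:6.1%} -> PPV {ppv(p):5.1%}")
```

At 1% prevalence the PPV is exactly 50%; it only climbs toward the test's nominal accuracy as prevalence rises.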
This is a problem throughout medicine - it is frequently discussed in the context of breast cancer and prostate cancer screening, for which the statistics are similar.
So if I had suspicious symptoms but not typical, and tested negative via PCR, is the first wave of serological testing worth it for me? I believe I had covid19 and that the PCR was a false negative and want to confirm my suspicions, before exposing myself without antibodies in case my symptoms were coincidental.
I’d hate to make a recommendation - if your doctor thinks a test is worthwhile, he may know some factors that we don’t. But if your doctor doesn’t think a test is worthwhile, at least we’ve explained why.
There would be a 50% chance that you’d had the disease. That is what a 99% test in a 1% population means. Yes, your individual results would be 99% accurate: that means there would be a 50% chance you’d had the disease.
Even knowing nothing else, we can already say that a negative test would indicate less than 50% chance you had the disease. If you want to confirm your suspicions, would that be worth it for you? Your call.
Anyway, now you aren’t in a 1% population. We already know that you are in a population of those who’d had “suspicious symptoms”. We can’t work out the numbers unless we know how many people had “suspicious symptoms”, and how many of those had the disease, but you could take a guess. Is taking a guess worth it for you? Your call.
Let’s say you guess that 10% of the population had “suspicious symptoms”, and that half the people with the disease had “suspicious symptoms”. So the 10% includes 0.5% of the population with the disease (a 1-in-20 rate), and the other 90% contains the remaining 0.5% (a 1-in-180 rate). So before the test, you guessed that you had a 1-in-20 chance, and after testing positive you’d guess about a 5-in-6 chance (5 true positives for every 1 false positive) that you’d had the disease. Your opinion has gone from “maybe” to “probably”. But it’s only as good as the numbers you plugged in.
I get the 5-in-6 number as (1/20)/(1/20 + 1/100), which is close enough, given that my numbers were rubbish to start with.
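The same calculation, with the guessed 1-in-20 prior, can be checked exactly. These are the thread's made-up numbers, not real data:

```python
# Bayes with a guessed prior: 1-in-20 chance of disease given "suspicious symptoms".
prior = 1 / 20
sensitivity = 0.99
specificity = 0.99

tp = prior * sensitivity                # chance of disease AND positive test
fp = (1 - prior) * (1 - specificity)    # chance of no disease AND positive test
posterior = tp / (tp + fp)
print(round(posterior, 3))  # 0.839 — close to the 5-in-6 (~0.833) estimate
```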
But a 50 percent chance of covid19 is a coin flip… Meaning the test could be done now by any reader who has a spare dime. Does that mean a penny saves me a trip to the doctor’s?
How can a serological study be useful, then, if we don’t know the actual percentage of post-infected people carrying antibodies? If that unknown percentage is too low for the test’s sensitivity/specificity, wouldn’t the study have no value beyond a coin flip?
A coin flip would not be 99% accurate (or, at least, not 99% sensitive and specific). The test isn’t perfect, but that doesn’t mean it doesn’t give you information.
Keep in mind that right now, in the scenario we’re assuming, the chance you have coronavirus is about 1%. If you take the test and it’s positive, we now think the chance is 50%. That’s a huge change!
On the other hand, if you test negative, we can be about 99.99% sure that you don’t have it. Whereas right now, we’re only 99% sure.
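That 99.99% figure comes from the negative predictive value under the same assumptions:

```python
# NPV of the assumed 99%/99% test at 1% prevalence.
prevalence = 0.01
sensitivity = 0.99
specificity = 0.99

fn = prevalence * (1 - sensitivity)    # infected but testing negative
tn = (1 - prevalence) * specificity    # healthy and testing negative
npv = tn / (tn + fn)
print(f"{npv:.4%}")  # ~99.99%
```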
If the answer is worth less than a coin flip to you, then there is no point doing the test “to satisfy your curiosity”. If the answer is worth more than a coin flip, then paying to go from 1-in-100 odds to coin-flip odds is worth it for you.
You are at a casino. $1 gets you roughly a 1-in-100 chance of winning $100 at roulette. Or you can do the test, and now you have a 50/50 chance of winning $100 with your $1 bet.
The casino of science doesn’t let your friends flip a coin for $100 on a $1 bet. You only get to flip the coin if you have the test.
In real life, the prizes aren’t as simple as $100 and $1: I don’t know how much the answer is worth to you (the $100?) or how much the test will cost (the $1?). All we know is the odds: 1% becomes 50%.
A 99% sensitive/specific test is a lot better than a coin flip. Namely, it has 99% value for classification rather than 50%. If anyone is saying flipping coins is as good as actual testing, that is arrant nonsense.
Also, another way to look at it is that a coin-flip diagnostic test yields exactly zero information, so maybe a better way to describe it is not to emphasize the 50% detection rate, but a statistic like the (Matthews) correlation coefficient which will be, of course, zero.
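A quick sanity check of that claim, using a hypothetical coin-flip "test" on a 1%-prevalence population:

```python
import math

# A coin-flip "test": calls half of everyone positive, regardless of infection.
prevalence = 0.01
tp = prevalence * 0.5          # infected, flip says positive
fn = prevalence * 0.5          # infected, flip says negative
fp = (1 - prevalence) * 0.5    # healthy, flip says positive
tn = (1 - prevalence) * 0.5    # healthy, flip says negative

# Matthews correlation coefficient of the flip:
mcc = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(mcc)  # 0.0 — the flip carries no information
```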
ETA: doctors are definitely supposed to know how diagnostic tests work, and what false positives and false negatives are.
Years ago I had a coronary scare and ended up under the care of a cardiologist. Assured that my heart was fine he sent me for a stress test. Reviewing my results he told me that my heart seemed fine. He was puzzled that I didn’t seem overjoyed at the news. I told him that my father had repeated stress tests, heard repeatedly that the results were fine and ended up having quadruple bypass surgery.
“Well, if you have a family history I will send you for a thallium scan. The stress test is wrong about half the time.”
“Hang on,” I said, “You sent me for a test that had the diagnostic validity of tossing a coin?”
I would have hoped the cardiologist would have explained it better, what the false positive/false negative rates were, etc., rather than agree his test had the diagnostic validity of a coin flip (i.e., none at all). Assuming the test in question did have some diagnostic value, of course.
If he understands medical testing himself, he should be able to explain it to a patient, at least in broad strokes.
ETA: consider a condition affecting 1 in 20, and a “test” consisting of always returning a negative result. Instant 95% accuracy!
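That "always say negative" trick in numbers, for a hypothetical 1-in-20 condition:

```python
# A "test" that always returns negative, run on a population with 5% prevalence.
prevalence = 1 / 20
accuracy = 1 - prevalence   # every healthy person is labelled correctly...
print(accuracy)             # 0.95 — 95% "accuracy", zero sensitivity
```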
Nicely said. I was going to say that the value of the test is related to the decision you are going to make based on it.
If you’re trying to decide whether to self-quarantine, then going from 1% to 50% is highly valuable (if there’s a 50% chance you’re a carrier, you should be quarantined).
If you’re trying to decide whether to take a possibly dangerous treatment, going from 1% to 50% is not worth it. Get a better test, or wait until you have some symptoms.