False negative tests

All tests have false negatives: some arise from the test itself even under ideal circumstances, some from technique in sample acquisition. Interpreting results, in studies and clinically, requires appreciating what a false negative rate means in the context of any specific set of priors.

With that as preamble: the reported false negative rate for RT-PCR for SARS-CoV-2 appears to be about 30%. The false positive rate is low.

Shared partly for the practical: in the current context of high rates of the disease and low rates of other germs that give similar presentations, believe your symptoms even if a test comes back negative. The diagnosis of COVID-19 can be, and in practical terms right now is, a clinical diagnosis based on a history of symptoms, even in the face of a negative test result. The test “confirms” but is often clinically superfluous.

Shared also for those who think that wide RT-PCR testing is some sort of shield or a gold standard for knowing what rates are. It is neither.

Does this have implications for how quickly social distancing can be relaxed? Or are there other tests to determine that?

Holy shit! When I heard the RT-PCR tests were having a lot of false negatives I assumed the rate was somewhere in the range of 10% to 15%. A ~30% false negative rate is barely better than flipping a coin to get an answer.

Stranger

It can be hard for people to wrap their heads around the numbers, but a high false negative rate can produce some seemingly counter-intuitive results.

If we test 100,000 people at random in a community with a true rate of infection of 2%, then ideally 2,000 would test positive and 98,000 would test negative. But if the false negative rate is 30%, then 600 of those who are truly positive test negative and only 1,400 of those actually positive test positive.

But tests have a false positive rate too. Suppose it is much lower, only 2%; that is, 2% of those who are truly negative actually test positive. Then of the 98,000 in the above example who truly are negative, 1,960 falsely test positive.

So with those figures as an example, only 1,400 out of the total 3,360 who test positive actually have the disease… 41.7%.

You need a false positive rate of about 1.4% to have half of those testing positive actually have the disease. And the false positive rate needs to plummet to about 0.07% to have more than 95% of those testing positive actually have the disease.
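The arithmetic in the example above can be checked with a short script. This is just a sketch of the thread’s worked example; the 2% prevalence, 30% false negative rate, and 2% false positive rate are the figures quoted above, not established properties of any real test.

```python
# Worked example from above: 100,000 people, 2% true prevalence,
# 30% false negative rate (70% sensitivity), 2% false positive rate.
N = 100_000
prevalence = 0.02
fnr = 0.30   # false negative rate
fpr = 0.02   # false positive rate

true_pos = N * prevalence             # 2,000 actually infected
true_neg = N - true_pos               # 98,000 actually uninfected

test_pos_sick = true_pos * (1 - fnr)  # 1,400 infected people who test positive
test_pos_well = true_neg * fpr        # 1,960 uninfected people who test positive

ppv = test_pos_sick / (test_pos_sick + test_pos_well)
print(f"Positive predictive value: {ppv:.1%}")   # -> 41.7%

def fpr_for_ppv(target_ppv, prevalence=0.02, fnr=0.30):
    """False positive rate needed so target_ppv of positives are real.
    Solves ppv = prev*sens / (prev*sens + (1-prev)*fpr) for fpr."""
    sens = 1 - fnr
    return prevalence * sens * (1 - target_ppv) / (target_ppv * (1 - prevalence))

print(f"FPR for 50% PPV: {fpr_for_ppv(0.50):.2%}")   # -> 1.43%
print(f"FPR for 95% PPV: {fpr_for_ppv(0.95):.3%}")   # -> 0.075%
```

Those last two numbers match the ~1.4% and ~0.07% thresholds quoted above.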

Which is why many countries are going by clinical symptoms.

Which is one reason for not testing everyone, but rather limiting testing to those with symptoms or a known close contact (not a casual contact; more like living with someone who tests positive). If your pool contains lots of positive people and not many negative people, the same false negative and false positive rates yield very different predictive values.

But wide testing also has value as part of controlled scientific studies.

Now that the percentage of people who are infected or have recovered is likely high enough to be detected in a random-sample study, we should be doing that too, in a number of countries.

For those sorts of experiments, it doesn’t matter so much that there’s a large false negative rate, as long as you know really accurately what the false negative rate is.
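The point above can be made concrete with the standard correction for surveys using an imperfect test (sometimes called the Rogan-Gladen estimator). This is a sketch, not from the linked article; the 70% sensitivity and 98% specificity figures simply reuse this thread’s example numbers.

```python
# Sketch: recovering true prevalence from a random-sample survey when the
# test's error rates are accurately known. Sensitivity/specificity figures
# reuse this thread's example numbers, not any validated values.
def corrected_prevalence(observed_positive_rate, sensitivity=0.70, specificity=0.98):
    fpr = 1 - specificity
    # observed = prev * sens + (1 - prev) * fpr  ->  solve for prev
    return (observed_positive_rate - fpr) / (sensitivity - fpr)

# In the worked example above, 3,360 of 100,000 (3.36%) test positive;
# the correction recovers the true 2% prevalence:
print(f"{corrected_prevalence(0.0336):.1%}")   # -> 2.0%
```

That inversion is only as good as the error rates fed into it, which is why knowing the false negative rate accurately matters more than it being small.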

I have a question: do we yet know whether false negatives are a shortcoming of the test, a factor of the virus itself, or something about certain individuals? Are there certain people who are infected, but repeatedly test negative, or are negative tests randomly distributed?

Yes, that was the figure my daughter got from her doctor when she tested negative a couple of weeks ago. Her symptoms were: fever (100-102 range), dry cough, some difficulty breathing. Her live-in boyfriend’s symptoms were: fever, gastrointestinal problems. He never got tested.

Given those symptoms, I assume they both had it.

Are the antibody tests expected to be more reliable?

Well, that is crazy high. I was just looking at the devices Canada has approved for testing:

Only a couple don’t mention using the RT-PCR method, one of which was just approved this weekend.

Anecdote alert!

In my daughter’s case, she thinks they didn’t get a good nasal swab (she moved or jerked away or something), and she was already a week in.

So, it might (might!) be a combination of difficulty of testing and being able to get a test while you have the most virii (j/k! viruses, of course). She had to wait four or five days just to get tested.

The linked article mentions several possible sources of error, so my guess is no, they don’t know yet. They haven’t even fully nailed down the error rate; it “may give” 30% false negatives.

An excellent question!

Following the article’s link to the cited “research from China”: no asymptomatic infections were included, but to my read there was no big difference in who had the false negatives:

There were three people of the 213 cases who repeatedly tested negative in upper respiratory samples.

I’d suspect real-world usage leads to poorer results, as it introduces collection errors that likely did not occur as often here.

But yes, the “reported” and “appears” qualifiers for real-world usage are important to have there, as we really have no gold standard for diagnosing mild or asymptomatic cases. My understanding is that the antibody tests are better if not done too early, but for that reason not of much use for early diagnosis. Not sure though if that is true.

A lot of tests are designed to have low false positive rates. But I’d even question whether a test with so much potential for Type II error is worth doing. 30% is pretty useless, and I’m surprised it is that high. How much does that even change pre-test probabilities?
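As a rough answer to that last question, Bayes’ rule shows the shift. This is a sketch assuming the thread’s 70% sensitivity and a 98% specificity (the specificity is an illustrative assumption, not a measured value).

```python
# Sketch: how much a single negative RT-PCR result shifts the probability
# of disease, via Bayes' rule. 70% sensitivity is the thread's figure;
# 98% specificity is an illustrative assumption.
def post_test_prob_after_negative(pre_test_prob, sensitivity=0.70, specificity=0.98):
    p_neg_given_sick = 1 - sensitivity   # the 30% false negative rate
    p_neg_given_well = specificity
    num = pre_test_prob * p_neg_given_sick
    den = num + (1 - pre_test_prob) * p_neg_given_well
    return num / den

# A symptomatic patient with, say, a 60% pre-test probability still has
# roughly a 31% chance of disease after one negative test:
print(f"{post_test_prob_after_negative(0.60):.0%}")   # -> 31%
```

So a negative result does move the needle, but with a 30% false negative rate it comes nowhere near ruling the disease out in someone with a high prior, which is the clinical point made upthread.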