Can anyone explain the math to me?

Or at the very least, we need to use the data from somewhere else that does that.

I mean, it seems to me like the horse has left the barn on universal testing*, but if we used say…the data from Iceland, we might get a better inkling of how things might turn out.

*for multiple reasons, not least of which is that we don’t have an antibody test to identify mild cases that have come and gone already. So if someone got it in say… February, and they’ve recovered from their minimal symptoms, current testing would lump them in the “haven’t had it” category, which is about as misleading as not testing at all.

Ultimately all they’re trying to do with the modeling is take current guesstimates about R0 and population density, hospital numbers, degree of lockdown, etc… and mathematically model how the disease would spread. It’s not exact, but it’s better than licking a finger and sticking it up to see which way the wind is blowing.
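
To make that concrete, here’s a toy sketch of that kind of projection - not any agency’s actual model, and every number in it is invented - that just compounds a guesstimated R0 over successive serial intervals, with a crude reduction factor standing in for lockdown:

```python
def project_cases(current_cases, r0, lockdown_reduction, generations):
    """Compound cases over successive serial intervals with a crude lockdown factor."""
    effective_r = r0 * (1 - lockdown_reduction)  # rough effect of distancing
    projection = [current_cases]
    for _ in range(generations):
        projection.append(projection[-1] * effective_r)
    return projection

# 1,000 current cases, a guesstimated R0 of 2.5, distancing cutting contacts by 40%.
print([round(c) for c in project_cases(1000, r0=2.5, lockdown_reduction=0.4, generations=5)])
```

The real models layer population density, hospital capacity and so on onto that basic compounding, but the core idea is the same.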

For example, statewide, the models I’ve seen predict an early-May peak for Texas, but local officials are talking like it’s going to hit about a week or so earlier than that for DFW, in the next couple of weeks. So the model isn’t exactly accurate for my specific area, but for the state as a whole, it might be.

The nice thing about mathematics is that there are many different ways to arrive at the wrong answer. If all the individual things are both unknown and quite variable, no model will produce a number with great confidence.

One approach is to look at information from various countries - case numbers over time, or better, patients needing hospitalization, ICU or dying. Ignoring important data like testing rates, social structure, medical competence, whether data is honestly reported or is useless propaganda, and degree of interaction - or trying to match some or all of these - you could come up with a “moving average” based on the experiences of other places, assigning weights to the number of cases last week and in the weeks before.
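
As a toy illustration of that weighting idea (the weights and weekly case counts below are made up, not from any real dataset):

```python
def weighted_moving_average(weekly_cases, weights):
    """Weight last week most heavily, older weeks less, to smooth the trend."""
    recent = weekly_cases[-len(weights):]
    return sum(c * w for c, w in zip(recent, weights)) / sum(weights)

# Hypothetical weekly case counts for some region, oldest first.
weekly_cases = [120, 310, 780, 1650, 2900]
print(weighted_moving_average(weekly_cases, weights=[1, 2, 3, 4]))
```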

You could use a historical approach, looking at (less fatal) outbreaks of similar infectivity, how long they lasted, and then extrapolating.

With better data, you could estimate how many people a person of each age group might infect, the chance of needing hospital/ICU/dying, the reduction from better physical distancing, the effect of reinfection, etc.
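
A crude sketch of what that age-structured bookkeeping could look like - every rate below is a placeholder, not a published estimate:

```python
age_groups = {
    # per-case secondary infections and per-case outcome risks (all placeholders)
    "0-19":  {"secondary": 2.0, "hosp": 0.01, "icu": 0.001, "death": 0.0002},
    "20-59": {"secondary": 2.5, "hosp": 0.05, "icu": 0.010, "death": 0.005},
    "60+":   {"secondary": 1.5, "hosp": 0.20, "icu": 0.060, "death": 0.040},
}

def next_generation_and_outcomes(cases_by_age, distancing_reduction=0.0):
    """Expected onward infections plus hospital/ICU/death counts for one cohort."""
    onward = 0.0
    outcomes = {"hosp": 0.0, "icu": 0.0, "death": 0.0}
    for group, n in cases_by_age.items():
        params = age_groups[group]
        onward += n * params["secondary"] * (1 - distancing_reduction)
        for key in outcomes:
            outcomes[key] += n * params[key]
    return onward, outcomes

print(next_generation_and_outcomes({"0-19": 500, "20-59": 2000, "60+": 800},
                                    distancing_reduction=0.4))
```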

The wiki site discusses other forms of modelling, such as SIR and its variants.
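
For anyone curious, a bare-bones SIR run looks something like this (beta and gamma here are illustrative guesses giving R0 = beta/gamma = 2.5, not fitted values):

```python
def sir(population, initial_infected, beta, gamma, days):
    """Forward-Euler integration of the basic SIR equations, one step per day."""
    s, i, r = population - initial_infected, initial_infected, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = sir(population=1_000_000, initial_infected=100, beta=0.5, gamma=0.2, days=180)
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"Peak around day {peak_day}, ~{history[peak_day][1]:,.0f} people infectious at once")
```

The variants (SEIR and so on) just add more compartments to the same machinery.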

Not sure there is great reason to assume it is the same R0 for children and the elderly. In any case, all of the variables are fuzzy and a good article at 538.com explains this effect.

Various simulations can also turn fuzzy data into useless predictions. Whatever model you use, relying on it heavily won’t change whatever really happens.

A good explanation.

And whatever you use, the experts disagree by quite a lot.

I think there may be an underlying truth that comes out of the modelling attempts: it seems that not only are the models poorly conditioned, but there may be a real case that reality itself is poorly conditioned, and that the actual flow of the epidemic is very sensitive to the existing conditions. This makes things all that much harder.
OTOH, the curves across the planet all seem to be reasonably well behaved, despite the variables in testing and diagnosis. It may be that some reporting is deliberately being fudged, but I can’t imagine that it is a general pattern. There will be PhDs written for years on this.
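
As a quick illustration of that sensitivity, here’s how much a small difference in the effective reproduction number compounds over a handful of generations (toy numbers only):

```python
def cumulative_cases(initial_cases, r_eff, generations):
    """Total cases after compounding r_eff over a number of generations."""
    cases, total = initial_cases, initial_cases
    for _ in range(generations):
        cases *= r_eff
        total += cases
    return total

for r_eff in (1.1, 1.3, 1.5):
    total = cumulative_cases(100, r_eff, generations=10)
    print(f"R_eff = {r_eff}: roughly {total:,.0f} cumulative cases after 10 generations")
```

Small shifts in the starting conditions produce wildly different totals, which is exactly why the forecasts keep jumping around.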

Yes. https://jamanetwork.com/journals/jama/fullarticle/2764137

The unusual third hump of this last influenza season countrywide was likely not influenza.

So yes, there are likely significant numbers of uncounted people who have had the infection and who contribute both to its spread and to slowing it down.

Absolutely. I’ve irritated several friends by pointing this out. They say that the experts say it’ll be over in such-and-such weeks, and I rip them apart asking what the experts’ assumptions were, talking about the theory of forecasting and how a model is only as good as the data put into it, explaining how experts at analyzing the present are not necessarily experts at predicting future events, etc. This irritates them because they really just want to believe it’ll be over soon and not think about the reality of the situation that much.

Thanks, I saw this article and even shared it on Facebook! No doubt the friends I irritated previously didn’t take a glance at this article, but a few others did, and it’s nice to know some people out there don’t want to bury their heads in the sand.

P.S. Just checked the forecast for Virginia again, and the peak resource date jumped from May 24 to April 20. How’s that for a volatile model?

Eh, that’s a pretty big leap to make, from some pretty recent sampling. Going back a little further, Stanford retested a bunch of San Francisco flu and pneumonia patients from January and February, and out of almost 2900 found only two cases, both of them later in February.

Hopefully, experts in a field like this will have a much stronger idea about the strengths and weaknesses of various models, will be able to identify bias in their data, and will be able to formulate plans for continuing research in order to narrow the gaps in our understanding. They will understand much better than most people how to calculate, analyze and communicate uncertainty.

Most people don’t know anything about statistics. They don’t understand sampling bias, confidence intervals, anything like that. Experts in a field will understand how those concepts apply to their research and will have a better understanding, critically, of what they don’t know yet.
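
For instance, a seroprevalence result like the 2-positives-out-of-almost-2,900 mentioned above comes with a confidence interval that most headlines skip. A rough back-of-the-envelope version (using a simple normal approximation; a real analysis would use exact binomial methods and adjust for test sensitivity and specificity):

```python
import math

def prevalence_ci(positives, sample_size, z=1.96):
    """Normal-approximation 95% interval for an observed prevalence."""
    p = positives / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return max(0.0, p - margin), p + margin

low, high = prevalence_ci(2, 2900)
print(f"Point estimate {2/2900:.3%}, rough 95% CI {low:.3%} to {high:.3%}")
```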

Experts aren’t necessarily right, but they’re less likely to be wrong.

During January and February, test-positive influenza was common. Test-positive influenza dropped off dramatically at the end of February. The weird national hump of influenza-like illness (ILI) began at the end of February.

Yes, in January and most of February those who had an ILI (a fairly large number, as influenza peaked during January and early February) most likely tested positive for influenza. By early March that was no longer the case: most ILI was no longer testing positive for influenza, yet an unusual third hump of ILI started up.

That study is not inconsistent with there having been very significant numbers of uncounted cases of infections with SARS-CoV-2.