An important point is what you think “science” is.
This is a slippery question. The popular view is that science is like physics. Extremely well known physical constants, precise laws, triumphs of falsifiable theories. Fantastic predictability.
Trouble is, most science is not like that. (This remains a serious subject for philosophical discussion.) Anyway, a lot of science is about applying rubbery or empirical knowledge to a problem, and, most importantly, there is an entire science of evaluating and keeping track of just how rubbery that knowledge is.
Much has been made of modelling. Trouble is that we know a priori that the models are only as good as the input parameters, and in some cases are quite sensitive to those parameters. So it is garbage in, garbage out. The trick is not to discount the models as useless, but to have a very good idea of where the variability in your parameters is and what the resultant variability in the predictions is.

This is no different to making good life choices. None of us know what tomorrow will bring, but we have some idea, and we plan accordingly, balancing some risk of the unknown against some reasonable assumptions. In any time of uncertainty the risks go up. We evaluate the risks and make different decisions. But we have to acknowledge that our knowledge of the likelihood of risks is at best imperfect.

Imagine you are in the middle of a civil war. Some activities become highly risky. But you have very imperfect knowledge. How do you balance your decisions? Moreover, how do you balance advice to others? If you have responsibility for many others, your decisions will be different than if you are only making choices for yourself.
At the start of the pandemic we had a set of parameters that were known with enough certainty to indicate we had a serious problem, even before the virus reached our shores. We knew enough about the virus to know that it killed many people, to estimate the fatality rate, and to estimate how fast it spreads. All estimates, with enough error in the known parameters to yield model results that varied from a bad year of the flu to a Spanish flu disaster. Knowledge of the range of probable transmission modes guides defence against further infection. Knowledge of our limited ability to test, and a best-guess timetable for when tests would become available, guides early decisions. The final call on how to handle the outbreak is guided by all this.

Everything has unknowns. But even at the start there is some useful knowledge about the variability in those unknowns. There is constant commentary that says “we don’t know xxx”. This is unhelpful and not really true. We never know xxx like we know, say, the fine structure constant. We have estimates of xxx. Initially those estimates have a lot of slop in them. We work with what we have. Science gives us tools to work with imprecise values. As time goes on those estimates become better, the slop decreases and we can have higher confidence in what we do. All the time science guides this.
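To make “working with imprecise values” concrete, here is a minimal Python sketch. Everything in it is an assumption for illustration (the parameter ranges, the population figure, the use of the textbook SIR final-size relation), not a reconstruction of any actual early-2020 estimate; the point is only how wide the spread of outcomes is when the inputs are sloppy.

```python
import math
import random

random.seed(1)

def final_attack_rate(r0, iters=200):
    # Classic SIR final-size relation, solved by fixed-point iteration: z = 1 - exp(-r0 * z)
    z = 0.5
    for _ in range(iters):
        z = 1.0 - math.exp(-r0 * z)
    return z

population = 330_000_000  # rough US population, for scale only
deaths = []
for _ in range(10_000):
    r0 = random.uniform(1.2, 3.5)       # assumed plausible range for the basic reproduction number
    ifr = random.uniform(0.0005, 0.02)  # assumed plausible range for the infection fatality rate
    deaths.append(final_attack_rate(r0) * population * ifr)

deaths.sort()
low, high = deaths[len(deaths) // 20], deaths[-(len(deaths) // 20)]
print(f"5th-95th percentile of projected deaths, no intervention: {low:,.0f} to {high:,.0f}")
```

With sloppy inputs the projections span everything from a bad flu season to a catastrophe; as the estimates tighten, so does that range.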
But there is never a cut-and-dried A versus B answer. And science tells us that. Eventually the choices are made by people who use the guidance science provides to make decisions. There will always be a question of prudence. Plus the simple mechanics of how you impose and police any restrictions. It isn’t viable to work through every nuanced special case. You have to accept that there will be warts in the rules. The pandemic isn’t going to wait for you to work through a raft of special exceptions and tweaking. You shut down now. If there are important problems, work them out next. But get the big stuff done. Distancing rules? Assume droplet and not aerosol, OK, 6 feet. There is good evidence from previous experience that this is good. It may not be precise, but it is good enough. This feeds into models with reductions in R as a result. Imperfect, but not arbitrary. And so on.
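As a hedged back-of-the-envelope illustration of how a distancing assumption “feeds into models with reductions in R” (the R0, generation interval and contact-reduction figures below are assumptions, not measurements):

```python
import math

r0 = 2.5                   # assumed basic reproduction number with no distancing
generation_interval = 5.0  # assumed mean days from one infection to the next

def doubling_time(r_eff, gen=generation_interval):
    # Cases grow roughly as r_eff ** (t / gen); the doubling time follows from that.
    if r_eff <= 1.0:
        return float("inf")  # epidemic flat or shrinking: it never doubles
    return gen * math.log(2) / math.log(r_eff)

for contact_reduction in (0.0, 0.3, 0.5, 0.7):
    r_eff = r0 * (1.0 - contact_reduction)  # crude proportional scaling of transmission
    print(f"contacts cut by {contact_reduction:.0%}: "
          f"R_eff ~ {r_eff:.2f}, doubling time ~ {doubling_time(r_eff):.1f} days")
```

Crude, but it shows the shape of the reasoning: cut contacts by enough and the outbreak stops doubling at all.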
Bottom line. This is science in action. Not science that has been already done. Science guides how we operate in real time, and provides the tools to understand the limits on what it can do.
No doubt, John Hopkins is someone the governor looks to for advice; however, I’d be more comfortable with him checking with Johns Hopkins for something like this.
I mean, the infamous Donald Rumsfeld quote starts to hold water: “There are known knowns, and there are known unknowns. And then there’s shit we know fuckall about.” (That may be only somewhat accurate)
We are somewhere in the “known unknowns” fading into “fuckall” range. What is known is that physical distancing is better than standing near each other and breathing on each other, so I’m going to not quibble over exactly how many feet is best.
Your quibble is not with the science of epidemiology. The effectiveness of isolation isn’t in dispute.
What’s non-scientific, and fully disputable, is the political decision to cut corners on isolation measures because our society isn’t prepared to function in full isolation mode. This is calculated on intuition, reason, and politics. Gov Newsom of CA seems to be making good judgments here. Gov Kemp of GA seems to have lost his cotton-picking mind.
The decision to isolate is entirely scientific; the decision on how and when to make exceptions is entirely political.
To paraphrase what I told someone in another thread:
It’s fine and dandy if *you* drive alone in your car to go watch the sunset along the coast because *you’d* never, say, stop for gas and sneeze on the attendant or some other stupid thing, right? Cool!
Count me in with the confused. Somehow there’s a doubt that separating people will slow the spread of a virus? This isn’t exactly new science.
The syllogism is pretty simple, and the premises scientifically well established at this point:
Proximity to a disease carrier increases risk of transmission
Lockdowns reduce proximity to disease carriers
Lockdowns reduce risk of transmission
If you are asking for a % value or specific numbers, those don’t exist - there’s still a lot we don’t know about the virus.
But specific numbers aren’t required to establish general guidance.
The OP asks about the difference between buying a cell phone at Target vs Best Buy. Well, there isn’t one (and Best Buy is open for pick-up in my area at least - functioning electronics are actually rather important in daily life these days). But Target does sell several other items that are necessary for daily life. Ideally, Target wouldn’t sell some items as it normally does, but practically, it’s not possible to police this.
Accepting that perfect enforcement of guidelines (determined by medical professionals) is not possible is not the same as saying those guidelines lack scientific backing. The perfect isn’t achievable, so they are aiming for the very good instead.
The idea that a lack of perfect policy and perfect enforcement implies a willy-nilly approach instead of one guided and balanced between scientific expertise and practicality is bizarre and nonsensical.
As best I understand it, they do have some standardized formulas that come with some specific constants, which can be calculated and shared out, given a period of observation and (perhaps) some lab testing.
The Chinese passed us some starting numbers in January, I believe, and we have adjusted from there as more data came in.
For non-standardized formulas and more complex simulations, I would expect that they guesstimate some numbers and then adjust to match what they see happening in real life, using error bars to indicate the uncertainty in their values. Simulations allow you to more realistically test the likely results of various rules, if enforced on the populace, but it also gets you to a place where you would prefer to have a few different people independently create their own sims and then average out the results.
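For what it’s worth, the “standardized formulas with some specific constants” are usually compartmental models along the lines of SIR. Here’s a minimal sketch; the constants (beta, gamma, population, starting infections) are guesstimates for illustration only, the sort of values you would nudge until the curve roughly tracks reported data, with error bars carried on each one.

```python
def run_sir(beta, gamma, population=1_000_000, initial_infected=10, days=365, dt=0.1):
    # Basic SIR compartments: S susceptible, I infected, R recovered/removed.
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    peak_infected = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / population * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_infected = max(peak_infected, i)
    return peak_infected, r  # peak simultaneous infections, total infected by end of run

# Guesstimated constants: beta = transmission events per infected person per day,
# gamma = 1 / infectious period in days. R0 is roughly beta / gamma.
for label, beta in (("no distancing", 0.5), ("with distancing", 0.25)):
    peak, total = run_sir(beta=beta, gamma=0.2)
    print(f"{label}: peak infected ~{peak:,.0f}, total infected ~{total:,.0f}")
```

Independent groups run their own versions with their own guesstimated constants, which is where the idea of comparing or averaging across several sims comes in.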
Actually, simulations using infection models can show statistically quantitative differences in outcomes between different choices of parameter. This is what epidemiologists do. Obviously, no one ran a model that looked at differences between restricting movement to 4, 5, or 6 miles because there is not sufficient fidelity in the model or assumptions to make useful distinctions, but a difference in spread between allowing movement for five miles versus twenty miles is probably a useful distinction insofar as it prevents widescale transmission and makes it clear to the population how serious the situation is. The same is true for wearing masks; even if they provide no protective value and may offer less in terms of preventing spread than many people would like to believe, wearing them is a constant reminder of the need to maintain distance, although quantifying that in terms of human behavior is difficult, particularly when people are being fed contradictory information and guidance.
If the o.p. is looking for precise, quantifiable estimates that can be verified by measurement, those don’t really exist, in part because the models cannot really account for vagaries in human behavior beyond some gross assumptions about differences in R[SUB]0[/SUB] between different populations based upon prior experience or social factors, and in part because such epidemiology is not a repeatable experiment where you can see what would have happened with different assumptions and factors. But it is certainly possible to make a heuristic model that uses Bayesian methodology, updating priors to adjust the posterior probability for future assessment. And data from past epidemics such as influenza (which occurs every few years at a marginal level) can inform those models, although the SARS-CoV-2 virus appears to be uniquely contagious (in both the rate and the asymptomatic degree of spread), and so existing models and early estimates may have underestimated the replication rate and possibly overstated the infection fatality rate based upon data from China and Italy that were of questionable veracity or not representative of the conditions in the United States.
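A toy illustration of that Bayesian point, with entirely invented numbers: treat a per-contact transmission probability as a Beta-distributed unknown, start from a vague prior, and watch the posterior narrow as (hypothetical) contact-tracing batches arrive.

```python
def beta_summary(a, b):
    # Mean and standard deviation of a Beta(a, b) distribution.
    mean = a / (a + b)
    sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    return mean, sd

a, b = 1.0, 1.0  # flat prior: we admit we know very little at the start
batches = [      # invented contact-tracing batches: (infections observed, contacts traced)
    (4, 20),
    (11, 80),
    (37, 300),
]

mean, sd = beta_summary(a, b)
print(f"prior: mean {mean:.3f}, sd {sd:.3f}")
for infected, contacts in batches:
    a += infected             # infections update one shape parameter...
    b += contacts - infected  # ...non-infections update the other
    mean, sd = beta_summary(a, b)
    print(f"after {contacts} more traced contacts: mean {mean:.3f}, sd {sd:.3f}")
```

The posterior mean settles down and the standard deviation shrinks as data accumulates, which is the formal version of “the slop decreases” over time.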
We cannot know what “would have been” with less strict measures, but a generation of epidemiologists are going to look back at this pandemic and make qualified estimates of whether what was done was sufficient to prevent the unnecessary and avoidable loss of lives from overwhelming the medical system. Given the wide range of different approaches across states and nations that have a medical infrastructure that is potentially capable of saving many lives, they’ll certainly have a lot of data to plug into those simulations. Do we want to be remembered as a country that expended tens of thousands of potentially savable people because we decided it was more important to get a haircut, or the country that utilized effective measures, saved lives, and found ways to keep the economy working without putting people at unnecessary risk? Because California (and other states with strong isolation measures in place) are working at achieving the latter, while states like Florida and Georgia are likely going to fall into the former.
A simple model would be to randomly drop some dots on a plane, with different densities. Draw a gradient radiating out from one dot, black at the center, fading to white at the edge. If any other dots fall in a sufficiently dark gray, add a gradient to them during the next cycle. After X number of cycles of being colored, remove the dot. Go until either nothing changes or all the dots disappear. Count how many cycles have passed. Reduce the radius of your gradient circles, repeat, and compare how many cycles pass.
It’s not necessarily the best model, but you get the idea.
Logically, we know what the effect of shrinking the circles will be: the disease will spread more slowly, and the likelihood of things stalling out increases.
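Here is a rough Python sketch of that dot model; every number in it (dot count, plane size, infectious cycles, radii) is pulled out of the air purely for illustration, and it is a toy, not an epidemiological tool.

```python
import math
import random

def run_dots(radius, n_dots=400, size=100.0, infectious_cycles=3, seed=42):
    rng = random.Random(seed)
    dots = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n_dots)]
    # Per-dot state: None = never infected, int = cycles spent infected, "removed" = off the board.
    state = [None] * n_dots
    state[0] = 0  # start with a single infected dot
    cycles = 0
    while any(isinstance(s, int) for s in state):
        cycles += 1
        newly_infected = set()
        for i, s in enumerate(state):
            if not isinstance(s, int):
                continue
            xi, yi = dots[i]
            for j, sj in enumerate(state):
                if sj is not None:
                    continue  # only never-infected dots can catch it
                dist = math.hypot(xi - dots[j][0], yi - dots[j][1])
                if dist < radius and rng.random() < 1.0 - dist / radius:
                    newly_infected.add(j)  # darker gray (closer) means more likely
        for i, s in enumerate(state):
            if isinstance(s, int):
                state[i] = "removed" if s + 1 >= infectious_cycles else s + 1
        for j in newly_infected:
            state[j] = 0
    ever_infected = sum(1 for s in state if s == "removed")
    return cycles, ever_infected

for radius in (12.0, 8.0, 4.0):
    cycles, ever_infected = run_dots(radius)
    print(f"radius {radius:4.1f}: {cycles} cycles, {ever_infected} of 400 dots ever infected")
```

The intent is just to see how the cycle count and the share of dots ever infected shift as the radius shrinks.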
If you want to make a useful estimate for humankind, though, you will need to look up some numbers: How far do people usually go in a day? How often do people fly? How often do they visit another town? What’s the average distance to a food store? What’s the population density of the area you’re concerned with? What’s the relative population density of a town versus a rural area? How many towns should there be, about?
Making a model is, really, relatively easy and it’s not hard to know what the result would be of making a change. But to make a predictive model of real life, you need to do a lot of research. That’s beyond what I can do with my current free time.
Johns Hopkins, I believe, has open source code. There’s nothing to stop you from downloading it, reading through the source, and fiddling with it. But it does exist and, I presume, has a lot of constants representing things like I said above - population density, etc.
I don’t know what in the hell you’re going on about. What I was asking is whether we could say “this measure went into place on day 2, mobility stats show it was complied with, and starting day 10 covid-19 hospitalizations dropped, compared to this similar community that did not implement the measure and saw a continued increase”. Stuff like that.
The delay between contagion and presentation of symptoms makes any kind of near-real time predictions using data problematic. There is a 2-3 week lag, minimum, between exposure and the presentation of severe illness, not accounting for the lack of testing to verify that any particular illness is actually due to the SARS-CoV-2 virus. Hence, models using the best guesses on infectiousness and replication provide predictions that are statistically broad. It’s just a consequence of not having good representative sample data to tease out trends.
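A crude sketch of why that lag swamps near-real-time evaluation; the growth rates, the 10-20 day infection-to-hospitalization delay, and the 5% hospitalization fraction below are all assumptions for illustration:

```python
import math

days = 60
intervention_day = 30
growth_before, growth_after = 0.15, -0.05  # assumed daily exponential growth rates

# Daily new infections: grow until the intervention, then shrink.
infections = []
level = 10.0
for day in range(days):
    level *= math.exp(growth_before if day < intervention_day else growth_after)
    infections.append(level)

# Assumed flat delay of 10-20 days from infection to hospital admission.
delays = list(range(10, 21))
hospitalizations = [0.0] * days
for day, inf in enumerate(infections):
    for delay in delays:
        if day + delay < days:
            hospitalizations[day + delay] += 0.05 * inf / len(delays)  # assume 5% end up in hospital

peak_day = max(range(days), key=lambda d: hospitalizations[d])
print(f"intervention on day {intervention_day}; "
      f"observed hospitalizations keep climbing until around day {peak_day}")
```

So even a measure that works immediately looks, in the hospitalization data, like nothing happened for a couple of weeks.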
That would be historical analysis rather than computer modeling.
And yes, it exists. I don’t believe that I’ve seen that specific sort of retrospective report but probably there is something like it around, if you get on Google Scholar.
I find it bizarre that people would read reports about computer modeling, scientists looking at historical data, etc. and then assume that they’re all really just making stuff up and that there’s no real science going on. I mean, this stuff is freely available open source code, and Google Scholar has been around for well over a decade now.
Yes. The post directly above mine that was addressed to me.
You also seem to be under the impression that I’m asking for validation of models. I don’t know how I could be clearer in saying I’m talking about evidence for measures actually taken being effective. I am curious about reality, not attempting to disprove some model.
If it is evidence you are looking for, examine the difference in outcomes between countries such as South Korea and New Zealand, which implemented early restrictions, lockdowns and large scale testing, and Italy and Spain, which did not.
Or Japan, which had lax restrictions for months, with its first case back in January but currently very few deaths. I know they’re having a hospital freak-out atm, but it hasn’t been reported that they have a massive increase in ICU patients or deaths. Maybe being an island is the best plan.
I am at a complete loss to understand how you think I am “being flip”. The reality is simply that we do not have enough data to create accurate predictive models nor the empirical knowledge of hindsight to understand precisely how particular restrictions will affect outcomes in order to say, “Measure X will reduce morbidity by Y and mortality by Z”. And we won’t have that for months to come because of both a lack of adequate sample testing and the delay between imposing a measure and seeing a result, which, between the latency period of the virus, the delays in testing and reporting, and just aggregating and interpreting the data for identifiable trends, may be several months.