Statistics - in Physics, in Big Philosophy?

I am trying to figure out how to ask this question - and whether it is even worth asking. Skip to the bottom for an attempt at a TL;DR summary.

  • Quantum Physics - introduced the need for probability theory at the quantum level, based on the wave nature of sub-atomic particles, the Heisenberg Uncertainty Principle, etc. Einstein's famous "God does not play dice" comment is a reference to the probabilistic nature of the universe called for by quantum mechanics - he argued that there is an "absolute something" underlying the universe, not a bunch of probabilistic particles that all seem to prop each other up into existence relative to one another.

  • Big Data - where the Information Age is at, evolutionarily. The ability to source and filter huge volumes of data to reach “probabilistically true” facts. This stuff is being used to forecast elections, search for terrorists, figure out which products and services should get ad space on our searches, etc. We are seeing a HUGE layer of Big Data growing underneath our online presence - and soon we will not be able to imagine life without it, no different than clean water, cheap fire, sliced bread, etc.

So - probability theory / statistics are central to how Reality, i.e. the universe, is structured, and also how our Reality will be structured as we become increasingly immersed in our online selves.

My question: is this probabilistic approach reflected in philosophy and/or religions - or will our conception of the world change?

  • Philosophy - I know that from a philosophical standpoint, probability plays out in arguments about Determinism, Cause and Effect, and other issues related to what Can be Known, i.e., Epistemology.

  • Religion - what I came up with was Zen-based relativistic Dualism - i.e., you can’t know Good without the existence of Evil.

How am I doing here - is there something to be discussed, or am I just allowing my head to be blown unnecessarily?

TL;DR: It feels like Quantum Mechanics and this new Big Data world are not just changing how we can answer questions - but also what questions we can imagine asking. How is that going to change our view of the Universe and our place in it?

I’m not totally sure what you’re asking… But, yes… There have been major “paradigm shifts” in our comprehension of reality, brought about by scientific discoveries: Darwinian evolution, Einsteinian relativity, Heisenbergian uncertainty, Gödelian undecidability, the Big Bang, etc.

Hell, the atomic theory of matter changed our philosophical perception of reality. The discovery that the Earth is a sphere did!

So – unless I’m missing your actual point by a country light-year – and this should perhaps be your null hypothesis – yes, I do believe that the shift to seeing cause and effect as statistical rather than discrete is one of the big world-perception-changing things of our times. In Thomas Edison’s day, scientific discoveries were big and physical. Now, they often appear in the tenth decimal place. Salk’s vaccine all but eliminated polio; today, medical discoveries increase cancer survival rates by small percentages. We will probably not see a Salk-level “cure” for cancer, only continuing increments of improved treatment.

Which is sure better than nothing!

The project to map the brain’s neurons, along with other brain science, will probably lead to the re-definition of “thought” and “self” and “choice” and so on. That we are just “meat computers” shouldn’t be such a great revolutionary idea, but that’s how it will come about. The same issue will be approached from the other side by AI and robots.

Brave or not, it’s a New World.

Yes - that’s it nicely stated.

So - with the emergence of a Probabilistic Reality, how might we approach the Big Questions from new perspectives?

If we are able to glean ever-more-fundamental Truths via a probabilistic approach, how might that fundamentally change questions about the existence of God, Predestination, the limits of what’s knowable, the Why Something vs. Nothing question, etc…?

It rewards a more nuanced approach to things.

You know the “fuzzy set” notion? Instead of bright lines defining sets – as in our old Venn diagrams – sets are now blurry, like a modern shaded color gradation in Photoshop. The center of the set is bright green, fading out through lighter and lighter shades, till it finally is indistinguishable from the white background. At what point can you say with certainty that a point is, or is not, in the set?
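To make that gradient concrete, here is a toy sketch in Python. The `membership` function and the Gaussian fall-off are my own stand-ins for the shaded-green picture, not any standard definition: membership is a number between 0 and 1 that fades out from the center, with no bright line where a point stops being "in" the set.

```python
import math

def membership(x, center=0.0, width=1.0):
    """Fuzzy-set membership: 1.0 at the center, fading smoothly toward 0.

    A Gaussian fall-off stands in for the Photoshop-style gradient;
    at no particular x does the value snap from 'in' to 'out'.
    """
    return math.exp(-((x - center) / width) ** 2)

# Walk away from the center: the value shades off, never hitting a cliff.
for x in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(x, round(membership(x), 4))
```

Classical set membership would be a step function (1 inside, 0 outside); the whole point of the fuzzy version is that the step is replaced by a slope.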

In the thread on mental illness, in the Great Debates forum, Hentor the Barbarian noted that, for a certain illness, there is a multi-item checklist, and if the patient has a certain number of the symptoms, then he is determined to have the specific illness. I forget the numbers, but call it “Five out of Eleven.” If my doctor says “Yes” to five out of the eleven symptoms, I formally have “clinical depression” or whatever.
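The checklist logic is nothing more than a threshold on a count, which is easy to sketch. The "five of eleven" numbers below are the placeholder figures from the post, not a real diagnostic criterion:

```python
def meets_criteria(symptoms, threshold=5):
    """Checklist diagnosis: True if at least `threshold` symptoms are present.

    `symptoms` is a list of booleans, one per checklist item. Note the
    arbitrariness of the cutoff: five of eleven "has" the illness,
    four of eleven does not, though the underlying count is graded.
    """
    return sum(symptoms) >= threshold

print(meets_criteria([True] * 5 + [False] * 6))   # five symptoms clears the bar
print(meets_criteria([True] * 4 + [False] * 7))   # four does not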

This is why we speak of “Autism Spectrum Disorder” and not of “Autism.” You don’t “Have Autism;” instead, you fall somewhere on the spectrum.

You aren’t “gay” or “straight”; you have a Kinsey number. Even an HIV diagnosis isn’t absolutely binary; tests measure antibody levels in a sample of blood. (Just as your breakfast cereal is permitted by the FDA to contain a certain amount of rodent droppings and insect parts.)

Much of this is due to the triumph of our technology. We can build instruments that can measure things to an incredible degree of accuracy. We can now sense distinctions, and magnitudes, and dosages, and the like, far more accurately than our native senses – and our naive philosophical assumptions – can cope with.

If a man with 5,677 hairs on his head is “bald,” what about a man with 5,678? We know enough to reject that sort of sorites reasoning - the fallacy of the heap - but we are also now beginning to approach the ability to make determinations of that nature. We can count the number of hairs on a man’s head!

Did you note the thread asking, “Is a Post-It Note on a Computer Monitor a ‘New Thing?’” (I can’t remember the exact phrasing, but that’s basically what was asked.) If I add one atom to a certain drug molecule, which causes no change at all in the drug’s effects, is it a “new drug?” If I strike one word from “Huckleberry Finn,” is it a different book? (Was “The Wicked Bible” a Bible?) These questions are more amenable to statistical reasoning than to old-fashioned binary Aristotelian logic.

Yep - everything you are saying makes sense.

Okay - how about this: There is the question of Determinism, which asserts that all of our actions are predetermined. In a religious context, it is discussed from the standpoint of “if God is all-knowing, then won’t He already know whether you are a good guy or a bad guy?” From a science standpoint, Darwin, for instance, moved toward a scientific determinism - we are byproducts of evolution and behave in a way that is shaped by our evolutionary nature and environmental nurture.

Okay - now we know we live in a world rooted in a probabilistic structure. Are there schools of thought/inquiry related to Determinism that account for this? E.g., because of our nature and nurture, it is NOT that every precise decision and action is already determined - ah, but we DO have a “probability field” of options around us. So at any given time, there are a number of decisions/actions that could happen, but until there is an interaction with an observer, all of the possibilities remain potential. Schrödinger’s Destiny, perhaps?

Is that a reasonable example of an attempt to take a Big Question and account for our deeper understanding of probability theory?

I’m a biologist which means I’m up to my eyeballs in uncertainty and complexity. As such, I have a certain fondness for Bayesian inference and epistemology. (Warning: listening to a biologist tell you about statistics and philosophy is like handing a loaded gun to a small child.)

You could spend a lot of time reading about Bayesian vs. Frequentist statistics, and Bayesian approaches to philosophy. But in a nutshell, Bayesian inference is a procedure for taking your best guess about some fact (the prior probability) and using data to update your guess (the posterior probability). The beautiful thing about this approach (IMO) is that you can use your knowledge about the world to make an informed guess as to the prior probability. But even if your guess is completely off, with enough data you’ll eventually converge on the same posterior probability.
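That convergence claim can be demonstrated in a few lines. This is a minimal sketch, with invented numbers: the hypothesis is that a coin lands heads 80% of the time, the alternative is a fair coin, and we run the same data past three very different priors.

```python
def bayes_update(prior, observations, p_if_true=0.8, p_if_false=0.5):
    """Sequentially update P(hypothesis) with Bayes' rule.

    Hypothesis: the coin lands heads 80% of the time; the alternative
    is a fair coin. Each observation (True = heads) re-weights the
    probability by the likelihood under each hypothesis.
    """
    p = prior
    for heads in observations:
        like_h = p_if_true if heads else 1 - p_if_true
        like_alt = p_if_false if heads else 1 - p_if_false
        p = p * like_h / (p * like_h + (1 - p) * like_alt)
    return p

data = [True] * 80 + [False] * 20    # 80 heads in 100 flips
for prior in (0.01, 0.5, 0.9):       # wildly different starting guesses
    print(prior, round(bayes_update(prior, data), 6))
```

With 100 flips, the skeptic who started at 1% and the believer who started at 90% end up at essentially the same posterior - the data swamps the prior, which is exactly the point made above.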

In my spitballin’ biologist mode, I’m producing a lot of hypotheses that probably aren’t true. So I can think about “what is the probability that my hypothesis is true”? And then after a number of experiments, I ask “what is the probability that my hypothesis is bogus, given that it is supported by several experiments?”. And I can condition that probability based on “the probability of false positive or experimental artifact, given this type of experiment”.
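The conditioning described above can also be sketched directly. All the rates here are invented for illustration - a base rate of true hypotheses, a detection power, and a per-experiment false-positive/artifact rate - and `p_true_given_support` is my own name, not a library function:

```python
def p_true_given_support(prior, power=0.8, false_pos=0.05):
    """P(hypothesis true | experiment supports it), by Bayes' rule.

    power     = P(supportive result | hypothesis true)
    false_pos = P(supportive result | hypothesis false), i.e. the
                artifact/false-positive rate for this type of experiment.
    """
    tp = prior * power
    fp = (1 - prior) * false_pos
    return tp / (tp + fp)

# Spitballed hypotheses: say only 10% are true going in. Each independent
# supportive experiment feeds the posterior back in as the next prior.
p = 0.10
for experiment in range(3):
    p = p_true_given_support(p)
    print(experiment + 1, round(p, 3))
```

One supportive experiment takes a long-shot hypothesis to roughly even odds; a couple more make it a near certainty, because a string of false positives becomes increasingly improbable.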

Now in practice I can’t accurately quantify those probabilities, but there are plenty of cases where Bayesian inference is directly applicable (genetic sequence analysis comes to mind).

Very cool, lazybratsche - thank you. Sounds like you are using the Bayesian approach to converge on the “probabilistic fact” of a conclusion*.

I think where I am going is: given the way you are applying a Bayesian approach within your scientific discipline, how might that be applied, or is it already being applied, to larger philosophical questions?
*side note: my son is doing an internship this summer - genetic oncology testing; right now he is preparing a sample of a specific section of DNA so he can subject it to a variety of therapies to see if they kill off cancer-related mutations…hmm, I wonder if he will run into that Bayesian approach you describe…