Has modeling the result of something ever improved from probabilistic to deterministic?

It occurs to me that some problems in optimisation almost meet the OP’s requirement.

Many optimisation problems are simply too large to ever compute the “correct” solution exactly; instead, heuristics of many kinds are used to get to a “good enough” solution. Indeed, in many systems some level of randomisation is part of the method (e.g. simulated annealing). However, as computational power has increased and more theory has been brought to bear, there are problems for which exact solutions are now possible.
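To make the randomised-heuristic idea concrete, here is a minimal simulated-annealing sketch in Python (the toy objective and cooling schedule are my own, purely illustrative): it accepts worse moves with a probability that shrinks as the “temperature” falls, so the search can escape local minima.

    # Minimal simulated annealing on a bumpy toy objective with many local minima.
    import math
    import random

    def cost(x):
        return x * x + 10 * math.sin(3 * x)

    def simulated_annealing(steps=20000, temp=10.0, cooling=0.9995):
        x = random.uniform(-10, 10)
        best_x, best_cost = x, cost(x)
        for _ in range(steps):
            candidate = x + random.gauss(0, 0.5)      # random neighbour
            delta = cost(candidate) - cost(x)
            # always accept improvements; accept worse moves with prob exp(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
                if cost(x) < best_cost:
                    best_x, best_cost = x, cost(x)
            temp *= cooling                           # gradual cooling
        return best_x, best_cost

    print(simulated_annealing())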

This was inspired by a recent conversation with an old colleague. Ages ago he had been working on the numerical solution of a particular problem, one that had typically been handled with approximate heuristics. He never bothered to publish the work because, despite his success in dramatically reducing its complexity, it still required a very significant amount of computation. When I quizzed him on how much, I realised that it is now quite a tractable solution. He may even publish it now.

It is easy to get a quite reasonable approximation to pi by nothing but randomly throwing points at a square enclosing a circle and calculating the fraction of throws that fall inside the circle. That is clearly a probabilistic method, and may conceivably have pre-dated evaluation of pi by construction or by algebraic means. So that might meet the OP’s quest as well. (Of course a complete expression of pi in numeric form is impossible, but the advent of algebraic forms meant that computation to whatever accuracy is needed became possible, whereas previously the results were subject to error.)
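In code, the dart-throwing estimate looks something like this (a standard Monte Carlo sketch, not tied to any particular historical method): drop random points into the unit square and count the fraction landing inside the quarter circle of radius 1, which approaches pi/4.

    # Monte Carlo estimate of pi from random points in the unit square.
    import random

    def estimate_pi(n=1_000_000):
        inside = sum(1 for _ in range(n)
                     if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4 * inside / n

    print(estimate_pi())   # wanders around 3.14..., converging only as ~1/sqrt(n)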

Neuronal firing in the brain.

Many neurons fire at a certain average rate (e.g. 10 action potentials (APs) per second), but the exact number of APs in a given time interval, and the specific timing of each AP, appears largely stochastic. This is called “irregular firing”, and there are standard measures to quantify it.
For decades, this was simply described as a random process (usually Poisson).
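One common measure of that irregularity (my assumption about which formulas are meant) is the coefficient of variation (CV) of the inter-spike intervals: roughly 1 for a Poisson process, roughly 0 for a clock-like neuron. A quick sketch:

    # CV of inter-spike intervals for a Poisson-like and a clock-like spike train,
    # both averaging 10 spikes per second.
    import random

    def cv_of_intervals(spike_times):
        intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
        mean = sum(intervals) / len(intervals)
        var = sum((i - mean) ** 2 for i in intervals) / len(intervals)
        return var ** 0.5 / mean

    t, poisson_train = 0.0, []
    for _ in range(1000):
        t += random.expovariate(10.0)       # exponential intervals, 10 Hz average
        poisson_train.append(t)

    regular_train = [0.1 * i for i in range(1, 1001)]   # one spike every 100 ms

    print(cv_of_intervals(poisson_train))   # close to 1 ("irregular")
    print(cv_of_intervals(regular_train))   # essentially 0 ("regular")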

However, if you isolate one of these neurons in a Petri dish and activate it with an electric current, it will fire spikes in a perfectly regular and reproducible manner, indicating that there is no intrinsic “noise generator” in neurons.

Modeling work in the early 2000s studied the properties of networks of deterministic spiking neurons that excite or inhibit each other. Under the right conditions, spiking activity becomes very irregular and looks random: neurons receive excitatory or inhibitory APs at irregular (“random”) times and, as a consequence, emit APs at irregular times themselves, and so on. This accounts for observed neuronal firing very well.
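A very rough sketch of the mechanism (toy parameters of my own, not the actual published network models): a deterministic leaky integrate-and-fire neuron with no internal noise source, bombarded by roughly balanced excitation and inhibition arriving at irregular times, produces an irregular spike train of its own.

    # Deterministic leaky integrate-and-fire neuron; all the "randomness" is in
    # the input arrival times, not in the neuron's own dynamics.
    import random

    def lif_with_irregular_input(t_max=2.0, dt=0.001, tau=0.02,
                                 v_thresh=1.0, v_reset=0.0,
                                 p_exc=0.5, w_exc=0.30,
                                 p_inh=0.5, w_inh=-0.25):
        v, spikes = 0.0, []
        for step in range(int(t_max / dt)):
            v -= v * dt / tau                 # deterministic leak toward rest
            if random.random() < p_exc:       # irregular excitatory arrival
                v += w_exc
            if random.random() < p_inh:       # irregular inhibitory arrival
                v += w_inh
            if v >= v_thresh:                 # deterministic threshold and reset
                spikes.append(round(step * dt, 3))
                v = v_reset
        return spikes

    train = lif_with_irregular_input()
    print(len(train), "spikes in 2 s, first few at", train[:5])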

Note that not all neurons are irregular in the brain. Some fire very regularly in vivo.

AIUI, the models are good enough to predict tomorrow’s weather, but on existing hardware it takes several days to run the model to completion. However, even if computers get fast enough to predict the weather before it occurs, I would think that at some length scale the model must deviate from reality. As the OP suggests, once we get to smaller and smaller factors, it gets harder and harder to account for all of them: your model may predict a big-ass afternoon thunderstorm, but it probably can’t tell you the exact pattern of all the fluffy little clouds in the morning sky, because it doesn’t know the exact local variations in humidity and/or wind.
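A standard toy illustration of that sensitivity is the Lorenz system, a drastically simplified convection model (not part of any real forecast model, just the usual classroom example): two runs that start almost identically eventually bear no resemblance to each other.

    # Lorenz system with a simple Euler integrator. Run b starts one part in a
    # million away from run a; the two trajectories eventually diverge completely.
    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return (x + dx * dt, y + dy * dt, z + dz * dt)

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)   # the "unknown local variation"

    for step in range(10001):
        if step % 2000 == 0:
            print(f"t={step * 0.005:5.1f}  a.x={a[0]:8.3f}  b.x={b[0]:8.3f}")
        a = lorenz_step(a)
        b = lorenz_step(b)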

Years ago I read an article about a robot arm that was tasked with tossing a coin a bazillion times and recording the results. Each toss (or set of tosses) had a prescribed height and spin rate. At the end, they generated a plot of coin-toss result versus toss height and spin rate, with red areas of the plot indicating heads and yellow indicating tails. For low heights and spin rates, the result was 100% predictable: large patches of the plot were filled with a single color indicating heads or tails. As the height and/or spin rate increased, the color patches got smaller and splotchier: turbulence and coin bounce (and at some point even variations in the initial clock-orientation of the coin on the robot arm) were having an effect. At high enough spin rates and toss heights, the plot was a uniform orange color, indicating that the result could no longer be predicted at all. I suspect that if the researchers had accounted for at least some of those other factors, they’d have ended up with an n-dimensional plot that clarified some of the fuzzier portions of the existing 2-D plot, but at some point it would still break down because of tiny random variations in the behavior of the robot arm.
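You can reproduce the qualitative picture with a crude deterministic coin model (a toy of my own, not the article’s actual setup): the outcome depends only on how many half-turns fit into the flight time, and scanning a grid of launch speeds and spin rates gives big solid regions for gentle tosses and ever finer striping as either parameter grows.

    # Toy deterministic coin toss: launched heads-up, it lands heads if an even
    # number of half-turns completes before it falls back down. The text "plot"
    # mimics the article's 2-D picture (rows: toss speed, columns: spin rate).
    G = 9.81  # m/s^2

    def lands_heads(up_speed, spins_per_sec):
        flight_time = 2 * up_speed / G              # time up and back down
        half_turns = int(2 * spins_per_sec * flight_time)
        return half_turns % 2 == 0

    for up_speed in [1.0 + 0.25 * i for i in range(12)]:
        row = "".join("H" if lands_heads(up_speed, 4 + 0.5 * j) else "."
                      for j in range(64))
        print(f"{up_speed:4.2f} m/s  {row}")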

I have to disagree with this distinction. Every prediction has an error associated with it, mainly because the initial (current) conditions can never be measured with absolute accuracy.

For example, sometimes a new asteroid is discovered that looks like it’ll pass very close to Earth. A few days’ measurements may be enough to predict that the asteroid will pass through a million-mile square that happens to include Earth. You can describe that result as “this is the path of the asteroid, with a million-mile error bar,” but more likely you’d announce “this asteroid has a 1-in-10,000 chance of hitting the Earth.”

After a few more weeks of observations, the measurement error goes down and you have a more accurate prediction. Now you can say “this asteroid will definitely hit North America,” or maybe “it has a 1-in-1,000 chance of hitting New York City.”
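To put toy numbers on that (everything below is made up for illustration): treat the predicted closest-approach distance as a Gaussian with some measurement error, and the “probability of impact” is just the fraction of that distribution falling inside the Earth’s radius. Shrinking the error bar changes the stated probability without the underlying orbit being any less deterministic.

    # The same deterministic trajectory, reported two ways: the "impact
    # probability" is really the measurement error bar in disguise.
    import random

    EARTH_RADIUS_KM = 6371

    def impact_probability(predicted_miss_km, error_km, n=200_000):
        hits = sum(1 for _ in range(n)
                   if abs(random.gauss(predicted_miss_km, error_km)) < EARTH_RADIUS_KM)
        return hits / n

    # early, rough orbit: huge error bar, small but non-zero chance of impact
    print(impact_probability(predicted_miss_km=500_000, error_km=400_000))

    # after more observations: smaller error bar, a different stated probability
    print(impact_probability(predicted_miss_km=500_000, error_km=100_000))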

My point is, all predictions have some margin of error, and deterministic vs. probabilistic is just a difference in how you describe that result plus its margin.