Has modeling the result of something ever improved from probabilistic to deterministic?

I’m unsure exactly how to phrase the question, but here goes.

When we try to model what will happen on a die throw, we talk about the result in terms of probability, i.e., there’s a 1 in 6 chance that any particular side of the die will come up, assuming a fair die, a fair roll, etc.

Other things, however, are modeled deterministically, e.g., the rate at which an object in free fall accelerates.

I’ve always thought of probability as another way of saying “it’s not practical to factor in every variable that can influence the result of this action, so the next best way to model this is to work out what the chances are of all various outcomes, assuming the variables/unknowns have a random influence on the result”.

My question is… has there ever been something that used to be modeled probabilistically, but then due to improvements in how we measure or understand said thing, we now model deterministically?

If I understand your question well enough, then I would suggest a number of things in the health care sciences … we can determine the average rate of occurrence of disease X, then twist and choke the data and find that the rate of occurrence of disease X increases dramatically among folks whose grandmother experienced famine during her pregnancy with their mother … now we research the biochemistry and find the smoking gun: malnutrition when the egg was forming causes disease X in the patient …

We once had only a statistical link … now we have the exact mechanism for the link …

However in medicine, even when we think we understand the mechanism, we are still often left with probabilities. Just better probabilities.

We understand pneumonia and infectious disease pretty well and we have good treatments, but I promise you that it is still possible to die from them.

But maybe in genetics, for single gene diseases. It used to be that couples would have to be counseled that there was 25% (or 50%, or whatever) chance that their next child would be afflicted. Nowadays the gene is identified, and the diagnosis can be made in utero, or even shortly after fertilization. But having said that, there are still probabilities (25% of embryos), we can just diagnose and intervene sooner.
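For the curious, those percentages fall straight out of a Punnett-square enumeration. A minimal sketch, assuming the classic textbook case of an autosomal recessive disease with two carrier parents:

```python
from itertools import product

# Each carrier parent passes one allele: "A" (normal) or "a" (disease).
parent1 = ("A", "a")
parent2 = ("A", "a")

counts = {"affected (aa)": 0, "carrier (Aa)": 0, "unaffected (AA)": 0}
for child in product(parent1, parent2):
    if child == ("a", "a"):
        counts["affected (aa)"] += 1
    elif "a" in child:
        counts["carrier (Aa)"] += 1
    else:
        counts["unaffected (AA)"] += 1

for status, n in counts.items():
    print(f"{status}: {n}/4 = {n/4:.0%}")
# affected (aa): 1/4 = 25%
# carrier (Aa): 2/4 = 50%
# unaffected (AA): 1/4 = 25%
```

The 25% figure is exact as a probability per conception; which particular embryo draws the “aa” combination is still the part we can only diagnose, not predict.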

Weather prediction has gotten so much better, just in the last 40 years, that it almost qualifies. The weather guys have predicted the last couple of (nasty) heat-waves here, five days ahead of time.

They can also talk about specific causes – “high pressure ridges” – in ways that sound deterministic.

Computer models can become very good, but they are never perfectly predictive.

Fermat’s Last Theorem?

It used to be that there was some sport in searching for counter-examples to the theorem. With more and more searching, the probability that the theorem was right was felt to improve. Since the theorem is now proven, it is deterministically known that the search is futile.

The four colour map theorem must rate an honourable mention, being proven by a computer.

The search was only for a method of proof. The validity of the theorem was never in doubt. If Fermat had been in error, the Pythagorean theorem would have applied to solid geometry.

Crane

The Pythagorean theorem does apply to solid geometry, and whether it applied or not would have nothing to do with any Diophantine equation.

EDIT: And back to the OP, we can in fact deterministically model the throw of a die. Of course, you need a lot more detail about how it’s thrown, and what material it’s made out of, and the surface it’s landing on, and so on, but you can still model it.
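To make that concrete, here is a toy sketch, emphatically not real physics: a “die” whose face is a fixed, highly input-sensitive function of its launch conditions. Run it twice with the same inputs and you get the same face (deterministic); average over unknown inputs and the familiar 1-in-6 model reappears.

```python
import math
import random

def toy_die(height, spin):
    """Toy stand-in for a full physics simulation: the face shown is a
    fixed, highly input-sensitive function of the launch conditions."""
    x = math.sin(1000.0 * height + 777.0 * spin) * 1e6
    return int(x % 6) + 1  # deterministic face in 1..6

# Deterministic: identical launch conditions give the identical face.
assert toy_die(1.2345, 9.87) == toy_die(1.2345, 9.87)

# Probabilistic view: treat the unmeasured launch conditions as random,
# and the 1-in-6 model falls out.
rng = random.Random(0)
counts = [0] * 6
for _ in range(60_000):
    face = toy_die(rng.uniform(0.5, 2.0), rng.uniform(0.0, 50.0))
    counts[face - 1] += 1
print(counts)  # roughly 10,000 per face
```

The point of the toy: the probability isn’t in the die, it’s in our ignorance of the inputs, which is exactly what the OP suggested.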

Chronos:

So the volume of a cube constructed upon the hypotenuse of a right triangle is equal to the sum of the volumes of the cubes constructed upon the other two sides? a^3+b^3=c^3?

Crane

No, you’re making the wrong generalization. But the square constructed upon the diagonal of a rectangular prism is equal to the sum of the squares constructed on three orthogonal edges.

In any event, that’s still completely irrelevant to Fermat’s Last Theorem. Nothing in Fermat’s theorem says anything at all about triangles or any other geometric figure.

Chronos,

Yep, as long as you stay planar it works. To work with solids requires exponents greater than 2 - just what is forbidden by Fermat’s conjecture.

Crane

??

|(a,b,c)|^2 = a^2 + b^2 + c^2

The thing about an unproven conjecture like Fermat’s Theorem (as opposed to something independent of the standard axioms, like the Continuum Hypothesis) is that it is (by the excluded middle) either true or false, so there is no probability involved. That does not mean there may not be heuristic reasons for being confident it is true (or false), despite not knowing a proof either way. E.g., the Riemann hypothesis.

Agreed - the Pythagorean heuristic does lend confidence.

Crane

Rectangular prisms are planar?

And even if you are interested in the equation a^3 + b^3 = c^3 for some reason, it’s easy to find solutions to that equation. For instance, a = 2, b = 3, c = the cube root of 35.

Meanwhile back on topic - the question is whether computer models are becoming more deterministic.

I believe the models are becoming far better at anticipating outcomes. Since they are working with stochastic events, the programmers may be using some stochastic techniques to avoid getting stuck in local minima. In that case the program would be more accurate but less deterministic.

Perhaps some of our participants who are active programmers can give us some insight.

Crane
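To illustrate the kind of stochastic technique Crane is describing, here is a minimal simulated-annealing sketch on a made-up bumpy objective. One wrinkle worth noting: with a fixed random seed the run is stochastic in method yet exactly reproducible, so “stochastic” and “deterministic” are not quite opposites here.

```python
import math
import random

def bumpy(x):
    """Made-up objective with many local minima."""
    return x * x + 10.0 * math.sin(3.0 * x) + 10.0

def anneal(f, x0, seed, steps=20_000, temp0=10.0):
    rng = random.Random(seed)      # fixed seed => exactly reproducible run
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        temp = temp0 * (1.0 - i / steps) + 1e-9
        cand = x + rng.gauss(0.0, 0.5)
        fc = f(cand)
        # Accept downhill moves always; accept uphill moves with a
        # probability that shrinks as the temperature falls -- this is
        # what lets the search climb back out of a local minimum.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

print(anneal(bumpy, x0=8.0, seed=42))
print(anneal(bumpy, x0=8.0, seed=42))  # identical output: same seed, same run
```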

Chronos,

I stand corrected. Fermat referred only to integer solutions.

Crane

This probably isn’t quite what you want, but:

Encryption algorithms ideally produce output that is indistinguishable from randomness. No matter how much output you look at, the sequence of bits is indistinguishable from flipping a (fair) coin.

However, some encryption algorithms are weaker than others. They have a fixed amount of internal state, and this state can “leak” in some cases. Look at enough data, and the internal state can be determined, and the encryption broken. The apparent randomness is no longer random; it’s an entirely deterministic and reversible scrambling of the original data.
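A toy demonstration of that kind of leak, using keystream reuse rather than any real cipher: if the same keystream encrypts two messages, XORing the two ciphertexts cancels the keystream entirely and exposes the XOR of the plaintexts.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Pretend this is stream-cipher output, mistakenly used for two messages.
keystream = os.urandom(24)
p1 = b"attack at dawn on friday"
p2 = b"retreat quietly at dusk!"

c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

# The keystream cancels out of the XOR of the two ciphertexts, so an
# attacker holding only c1 and c2 learns p1 XOR p2 without any key.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)
```

From there, guessing a likely phrase in one message reveals the matching bytes of the other.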

Pseudo-random number generators work in a similar way. The output is intended to look random, but they have a finite amount of internal state driving the scrambling, and with poor implementations that state can be determined.
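A minimal sketch of the worst case, using a toy linear congruential generator whose entire internal state is its last output: observe one value and every future value is predictable.

```python
# Toy linear congruential generator; the constants are the well-known
# Numerical Recipes values. Its entire internal state is simply the
# last value it emitted.
M, A, C = 2**32, 1664525, 1013904223

def lcg(state):
    while True:
        state = (A * state + C) % M
        yield state

victim = lcg(123456789)      # attacker never sees this seed...
observed = next(victim)      # ...but observes a single output value

clone = lcg(observed)        # the output IS the state: clone it
assert [next(victim) for _ in range(5)] == [next(clone) for _ in range(5)]
print("all future output predicted from a single observation")
```

Cryptographic generators avoid exactly this by emitting far fewer bits than they keep as hidden state.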

Of course, we already knew that the data was deterministic. And even secure implementations are deterministic; we just don’t know how to reverse-engineer the internal state from the data in these cases.

That’s nonsense and has nothing to do with Fermat’s theorem. I once knew the proof for exponent 3, which Fermat must have known too, and I do recall the proof for 4.

An easier way to look at it is to simply say there’s no such thing as determinism. Everything is mere probabilities. The electronic circuits in the computer you are reading this on aren’t deterministic - they have been constructed to have a very, very, very high probability of performing the boolean logic operations at the very bottom layer of the computational circuitry but it’s not guaranteed. There’s a tiny chance at any moment a circuit will make an error. Sometimes those glitches are the cause of crashes, most of the time they don’t make any noticeable difference. More reliable circuitry, like the kind they use in space probes, has triple redundant logic and the probability of failure becomes even tinier, but it’s still not zero. Still not deterministic.

Suppose you wire up a simple logic circuit and play with it. Suppose it’s an AND operation. Flip one switch on, and the output light stays off. Flip the other switch on as well, and the output light turns on. Both switches on = output on, all other cases = output off. Well, the chance that this predictable behavior happens every time you flip the switches isn’t 100%. A switch might wear out. A transistor or vacuum-tube circuit might fail. A lightning strike outside might cause it to give the wrong output for an instant. It’s not deterministic; it’s just close enough to deterministic that you can build a computer out of it.
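A back-of-the-envelope simulation of that point, with a made-up per-evaluation glitch probability: a single unreliable AND gate versus three of them behind a majority vote, as in triple-redundant hardware.

```python
import random

P_ERR = 1e-3  # made-up chance that a single gate evaluation glitches

def noisy_and(a, b, rng):
    out = a and b
    if rng.random() < P_ERR:     # rare glitch flips the output
        out = not out
    return out

def voted_and(a, b, rng):
    # Triple modular redundancy: evaluate three gates, take the majority.
    votes = sum(noisy_and(a, b, rng) for _ in range(3))
    return votes >= 2

rng = random.Random(1)
trials = 1_000_000
single = sum(noisy_and(True, True, rng) != True for _ in range(trials))
triple = sum(voted_and(True, True, rng) != True for _ in range(trials))
print(f"single-gate errors: {single}/{trials}")  # about P_ERR
print(f"voted-gate errors:  {triple}/{trials}")  # about 3 * P_ERR**2
```

Real redundant hardware also has to worry about the voter itself failing, which is why the probability shrinks dramatically but never reaches exactly zero.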

Indeed, Fermat himself came up with the proofs for the n=3 and n=4 cases. It’s likely that that’s what he was referring to in his infamous margin note: He found the proof for those two cases, and thought that it would generalize to all of them. Of course, once he actually wrote out the proofs, he realized that it didn’t, but the original note was just a margin-scribbling, so there was no point in going back and scribbling in an erratum.