Now to have a stab at HMHW’s most recent offering.
Well that’s about as bad faith an argument as you can really get.
My position has been, and continues to be, that scientific models are verified by testing their predictions and inferences.
It’s been a long thread, and at some earlier points it was sufficient to talk about just prediction.
However, the point being put to you, four times, was about inferences. I specifically talk about inferences because you seem to really want to frame scientific models as purely extrapolations, and that is wrong. Scientific models can and have been used to infer completely novel empirical data. And there’s no reason why this should work if the cause and effect that underlies these models was incorrect.
Again, I think you’re mixing up different meanings of “random” (and indeed, choosing the less common one as your main meaning). Let’s say Tuesday night is the lottery. We could say that whether or not the lottery actually happens is not determined and so “random” or we could talk about the randomness of which lottery balls will fall out. I mean the latter in terms of randomness, you seem to be alluding to the former.
There is no inconsistency with a certain event having a random outcome. It’s certain that there’s an electron, but its position and momentum are fuzzy.
Lol. I say it’s a silly misconception and you claim “There! You’ve defined it!”
On reading back my last post it seems I got a bit snarky again, so let me just reaffirm that I actually want to get somewhere in this debate.
To clarify on the last “lol” statement for example, when I said that what many people implicitly have in mind for free will is actually an “undetermined determination” it was a deliberate oxymoron. It was intended to illustrate that the concept makes less sense the more you think about it.
Let’s take two identical universes in every observable way. In both universes, it’s the case that B is observed if and only if A was observed. Let’s call this the AB Law. In both universes, almost all people say A causes B and might refer to the AB Law.
In the first universe, the AB Law works because, indeed, causation exists and A caused B.
In the second universe, A doesn’t cause B, they merely happen together because of the existence of the AB Law. (Why does the AB Law exist? Because we observe it to exist. Why do we observe it? Because of the AB Law).
How would an outside observer be able to distinguish these universes? It seems to me they can’t.
I think one disagreement arises because of the everyday understanding people have of causation. In order to refute pseudoscience or determine whether a drug works beyond placebo, we have to demonstrate a ‘causal link’ instead of a ‘mere’ correlation.
I propose the following: since we can’t know whether we are in the first or second universe, all ‘causal links’ cannot be proven to be such in a fundamentally metaphysical way. In fact, a ‘causal link’ is just a stronger form of correlation, a (more) empirically and scientifically supported one.
This does not mean that ‘causation’ as an informal semantic term doesn’t denote something of value. What it denotes is what I would call a metaphysical correlation (this is basically the same thing as Hume’s constant conjunction). Ice cream and sunscreen sales are positively correlated, but this is not a metaphysical correlation. Sunny skies and warm weather would be closer to a metaphysical correlation with ice cream and sunscreen sales, whilst a true metaphysical correlation would be whatever fundamental reason(s) underlie the sale of these things, i.e. natural laws, things like E = mc².
Informally I can use ‘cause’ to describe things that are metaphysically correlated without assuming metaphysical causation to actually exist. Why does causation have to exist? It doesn’t add anything useful to the observed phenomena. It might exist or it might not. When B always (and only) follows A we can informally say A causes B but that doesn’t metaphysically prove A causes B. This might all be a simulation with us being brains in vats.
Those were your literal words. Sorry if you feel that holding you to what you’ve said is in ‘bad faith’, but it’s all I have to go on. I can’t divine when you decide that, suddenly, you don’t want to talk about prediction anymore.
And it’s actually somewhat stunning in its brazenness to accuse me of arguing in bad faith for failing to address the part about inferences when the post you’re quoting actually does so:
Besides the fact that you explicitly talked about the necessity of causality for making successful predictions, I have also focused on that notion because I simply don’t understand what you mean by that. What inferences that aren’t predictions are used to ‘verify’ a scientific model? (Not to mention that, of course, no scientific model is ever ‘verified’: after all, any scientific model can turn out to be ultimately wrong, and generally should be expected to do so.)
In any case, really, the whole discussion is moot. There are no inferences you can draw in a causal universe you can’t also draw in one without any causal influence at all. This has been demonstrated by explicit example many times over. All the inferences that can be drawn from thermodynamics can still be drawn after understanding it as a consequence of pure statistics of large numbers of particles.
But this sort of thing is exactly what one typically means by scientific models making predictions (with the caveat that I’m not sure what you would count as ‘completely novel’ empirical data). It’s certainly what I’ve been talking about by talking about ‘making predictions’. Again: all that is needed for such predictions is that the regularities observed so far hold. It’s happened time and again that the ‘cause and effect that underlies these models’ has proven incorrect, yet, one can still derive ‘novel empirical data’. Such data can be derived based on the phlogiston hypothesis, yet, no phlogiston exists. There are only statistical effects tending towards more likely configurations of particles.
You have previously ignored it, but what about the case of the prediction of novel spectroscopic lines of the hydrogen atom from pure data alone, without assuming any sort of ‘causal’ model? Nobody, at the time, had any idea of Bohr’s model of the atom (which, of course, is itself, strictly speaking, incorrect). Or, as you’ve also previously ignored, reconsider the example of the test pilots’ performance declining after being praised for an exceptional showing. One might produce a causal story, on which being praised causes one to become more lackluster in one’s effort, and thus predict that sort of effect; and that prediction would turn out to be right. But the real reason this tends to happen is a purely statistical effect of regression to the mean. So, the prediction bears out, but not because the underlying causal model is right—indeed, it’s completely independent of that.
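(To make the regression effect concrete, here’s a minimal Python sketch with invented numbers; nothing here comes from the actual pilot study. Each showing is a fixed skill plus independent luck, and the pilots selected for exceptional first showings score worse on average the second time, praise or no praise.)

import random

random.seed(1)
skill = [random.gauss(0, 1) for _ in range(10_000)]   # each pilot's stable ability
first = [s + random.gauss(0, 1) for s in skill]       # showing = skill + luck
second = [s + random.gauss(0, 1) for s in skill]      # fresh, independent luck next time

# The pilots praised for an exceptional first showing: the top 5%.
cutoff = sorted(first)[int(0.95 * len(first))]
praised = [i for i, f in enumerate(first) if f >= cutoff]

avg = lambda xs: sum(xs) / len(xs)
print(avg([first[i] for i in praised]))    # exceptional first scores
print(avg([second[i] for i in praised]))   # markedly lower, by statistics alone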
And as pointed out, to yet more silence on your part, our entire universe may be like that. If Nelsonian stochastic mechanics is right, then there is no causality at the fundamental level of the universe. The theory is built on a Wiener process in the space of configurations of the system. This process can be thought of as the continuous limit of a random walk (drunkard’s walk), which, at every time-step, completely randomly takes a step in any of the possible directions. This leads to a diffusion process on configuration space. From this, the Nelsonian programme derived the Madelung equations, an alternative formulation of the Schrödinger equation, which governs the dynamics of quantum systems.
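(For the curious, here’s a minimal Python sketch of that continuum limit, with arbitrary parameters: a symmetric random walk whose step size is sqrt(dt) approximates a Wiener process ever better as dt shrinks.)

import random

def random_walk(n_steps, dt=1e-4):
    # Symmetric random walk with steps of ±sqrt(dt); as dt -> 0 this
    # converges in distribution to a Wiener process W(t).
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += random.choice((-1.0, 1.0)) * dt ** 0.5
        path.append(w)
    return path

print(random_walk(10_000)[-1])   # a sample of W(1): mean 0, variance ~1 over runs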
Now, the Schrödinger equation is deterministic—the same initial data will yield the same outcomes. Only upon measurement does the (possible) indeterminism of quantum mechanics enter into the picture. But from the Schrödinger equation, by Ehrenfest’s theorem, we obtain the approximate, Newtonian laws of motion of macroscopic objects.
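(For reference, this is the standard textbook statement of Ehrenfest’s theorem, not anything specific to Nelson’s programme: the expectation values of position and momentum obey

\[
\frac{\mathrm{d}}{\mathrm{d}t}\langle x \rangle = \frac{\langle p \rangle}{m},
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\langle p \rangle = -\left\langle \frac{\partial V}{\partial x} \right\rangle,
\]

which, for wavepackets narrow enough that \( \langle \partial V/\partial x \rangle \approx \partial V(\langle x \rangle)/\partial x \), is just Newton’s second law for the expectation values.)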
So take any macroscopic mechanical system, such as a collection of billiard balls. You may model it as a Newtonian system, assuming cause and effect when balls collide. You may derive predictions and inferences, whatever that may mean, from this description. But the above sketch of a derivation shows that there is an equivalent picture where the fundamental process is a wholly statistical one, where there is no causal fact of the matter whether, in any particular time-step, the Wiener process diffuses this way or that way (whether the drunkard takes a step to the left or the right). Yet, all the same inferences and predictions will be valid. The systems are equivalent in the macroscopic limit.
So, as demonstrated, by now, many times over, it is possible to ‘infer completely novel empirical data’ even if the cause and effect underlying a model is incorrect.
What I mean by randomness—the only sensible meaning in this context—is an event happening without sufficient reason determining its occurrence, i.e. genuine, irreducible randomness, not e.g. pseudorandom chaos or effective uncertainty. So any case where whether A or B happens at t+1 is not fixed by the state of the universe at time t. It’s in this sense that something like the Wiener process underlying Nelsonian mechanics is random.
I assumed, charitably, that you had an argument for your position, deriving that free will leads to an ‘undetermined determination’. If you don’t, and you just sort of threw that out there randomly, I apologize. But under the assumption that you’re not just typing what your fingers happen to land on, if you have deduced that free will leads to such an apparent contradiction, then that must mean you have some starting point—some understanding of free will. Consider if I tell you to take an umbrella when going out, because it’s raining, and you’ll get wet otherwise. This implies that I have some notion of what rain is; otherwise, I couldn’t come to the conclusion that being out in the rain makes you get wet. I can’t both say ‘being out in the rain makes you get wet’ and ‘I have no idea what rain is’; both can’t be simultaneously true. So you can either claim that free will leads to a contradiction, or that you don’t know what it is; but not both.
And yet, you still simply ignore the arguments proposed in this debate for the past 300 years. I’m not quoting Searle (quoting Hume) at you because I want to settle this by appeal to authority, but because I believe it’s an exceptionally lucid explanation of the topic and the sort of arguments that make your position unsupported, and I think that it would benefit you—if you indeed are interested in getting somewhere—to engage with these ideas. I still find this rather curious. I think you’d typically grant that, by and large, expert opinion on a topic isn’t to be ignored. If expert consensus on vaccines is that they do work, and don’t cause autism, then even if you don’t understand all the relevant science yourself, I think you’d be inclined to grant that there’s probably some reason why the experts think so. Likewise with climate change, or the fact that the Earth is round, or evolution. So why this complete refusal to engage in this case? Why aren’t you at least curious why these experts believe what they do believe?
Sure, if that were the beginning and end of it, then you’re right.
But in reality, it is more like D following A. And then we use a scientific model to say “If A causes D then we might also expect to see B and C data along the way”. When we subsequently measure those things, it gives us reason to have confidence in our scientific model (which incidentally rested on Causality).
Science is not just about passively noticing correlations.
A direct question was put to you four times.
I find it pathetic that you would go back earlier in the thread, find a different question, and insist on trying to answer that question instead.
That previous question is not contradictory to what we’re talking about now – I stand by what I said about prediction – but it’s irrelevant to the question that is being put to you, and I feel sorry for you that you feel you need to deflect in this way.
I had my doubts previously but at this point I am certain you are not interested in honest debate so I am done responding to you.
You might not like that answer, but to claim, contrary to the actual facts in evidence in this very thread, that I haven’t provided one, and to use this as a pretext to call my honesty into question and huffily declare your grand exit, is just laughable.
P → Q implies that some observed properties of Q result from some observed properties of P. At best this is a conjecture because all P and all Q cannot be known. A hurricane spaghetti chart is not predictive even when it contains the ‘correct’ answer.
~P → ~Q
Some Q is the necessary consequent of some P and sometimes you get what you want. Most models fail, good ones produce a normal distribution of conjectures. Like the spaghetti plot.
The observation that the state of the universe is the direct cause of the whole history of the universe has some serious temporal problems. Most of the previous states of the historical universe are ‘don’t care’. Extrapolation of future states would require knowledge of which ‘don’t care’ states will change to ‘care’ and become dominant in the future. They will be unanticipated and possibly be labeled ‘random’. There is a causal connection between P and Q only from the time that P became ‘care’ and Q is observed. Within that period P and Q will experience unanticipated events.
A semiconductor wafer will have flaws in its crystal structure. The number of flaws can be anticipated within limits; their position cannot be anticipated. The position of the flaws is determined by proximal events during the time of manufacture. It is anticipated that a known number of ‘don’t care’ states change to ‘care’. Random distribution is a term assigned to such events, but in reality their number is known and their position is simply unanticipated.
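(A toy Python illustration of that distinction, with an invented mean defect count: the number of flaws is anticipated within limits, here Poisson-distributed about a known mean and sampled via Knuth’s method, while their positions are simply unanticipated.)

import math
import random

def poisson(lam):
    # Knuth's method: count of events in one interval at rate lam.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

MEAN_FLAWS = 40.0   # known, anticipated mean (invented for illustration)
count = poisson(MEAN_FLAWS)                                             # anticipated within limits
positions = [(random.random(), random.random()) for _ in range(count)]  # unanticipated
print(count, positions[:3])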
Leibniz’s ink spray is random, and so is made up of all the components of some limited segment of the spectrum. A modern illustration of this is the Wien bridge oscillator. The only input to the circuit is random thermal noise. The low-Q circuit selects a single frequency out of random noise to amplify and produces a sine wave. Like a pattern that can be found in ink spray.
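(For reference, a standard result, not specific to any particular circuit: with equal resistances R and capacitances C in its two arms, the Wien bridge’s feedback network has zero phase shift only at

\[ f_0 = \frac{1}{2\pi RC}, \]

so of all the thermal-noise components, only those near \( f_0 \) are regeneratively amplified into the sine wave.)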
So, we observe a temporal causality, but it’s not valid to extend that to some kind of universal absolute. And, causality is only a term assigned to observed mechanics. No intent or divine intervention.
I’ll be honest @Crane and say I didn’t understand your argument at all. Can you have another stab at it please?
I’ll try to respond to the conclusion though:
The point is that assuming cause and effect in our models has allowed us to make many novel and useful inferences and predictions.
We do not need to make the claim that cause and effect holds everywhere to do this. Although any phenomenon where cause and effect does not hold may be difficult for humans to ever form an understanding of.
In terms of your comment about intent and divine intervention…yeah, sure. I’m an atheist so if anyone was going to head in that direction in this thread, it wouldn’t be me.
I believe those statements are not correct. Inference engines can generalize from complex data, but that is not cause and effect. Computer models are analytical tools, not predictors of “completely novel empirical data”.
So let’s make this a little more concrete. You flip the switch (A), and a lamp comes on (B). You formulate a model according to which there are these things called ‘electrons’, which react to an ‘electric field’ in such a way as to move across a potential difference, and which, moving through matter, encounter resistance, which leads to energy loss in terms of heat, which is what makes the filament in the lamp glow. Based on that, you surmise that (C) the filament should give off a certain amount of heat, and (D) there should be a certain potential difference across the lamp (a ‘voltage’).
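(To put illustrative numbers on (C) and (D), here’s a minimal Python sketch using the textbook relations V = IR and P = I²R; the supply voltage and resistances are assumed values, not anything from the example itself.)

SUPPLY_V = 230.0   # assumed mains voltage
R_WIRING = 0.5     # assumed wiring resistance, ohms
R_LAMP = 529.0     # assumed hot-filament resistance, ohms

current = SUPPLY_V / (R_WIRING + R_LAMP)   # Ohm's law around the loop
v_lamp = current * R_LAMP                  # (D): predicted potential difference
p_heat = current ** 2 * R_LAMP             # (C): predicted heat output

print(f"predicted voltage across the lamp: {v_lamp:.1f} V")
print(f"predicted heat dissipated:         {p_heat:.1f} W")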
You take out your thermometer and measure the heat; you take out a voltmeter and measure the potential difference. Your model bears out. So you surmise, well, that’s a causal story: the potential difference causes the motion of the electrons, which cause the heating of the filament. Consequently, I have just demonstrated causation!
But this simply isn’t right. First, the argument is just formally invalid: you surmise that causation C implies phenomena P, observe phenomena P, and infer causation C. This is logically fallacious: it’s affirming the consequent. Rain gets you wet, but from the fact that you’re wet, I can’t conclude you were out in the rain; you could’ve just taken a shower.
Fine, so suppose that we hold that only causation C can yield the phenomena P. Then, observing P would yield license to infer C. But this is clearly false: in a universe consisting of an infinite succession of entirely random arrangements of matter, P would occur an infinite amount of times.
So suppose we hold that C were necessary to go from a set of observed P to some explanatory hypothesis H, making C a prerequisite of hypothesis formulation (in brief, P ∧ C → H). But this is self-defeating: then, we can never appeal to any observed phenomena as justifying the hypothesis that there is causation, because the act of hypothesis-justification presupposes C, and any such argument would be circular (this is the problem of induction).
Moreover, one can give explicit counterexamples to the idea that coming up with ‘inferences’ (C) and (D) substantiates the notion of causality. I’ve pointed above to Nelsonian stochastic mechanics: if the program can be carried through, it will yield a universe observationally indistinguishable from ours, up to the limits of feasible measurements; hence, it will allow us to tell just the same story as above, and have it come out right, with virtual certainty. Nevertheless, it is conceptually possible to get different results: the story above is the ‘equilibrium’ case, and arbitrarily large fluctuations away from equilibrium are possible. So, the lamp might not light, or the electrons not start to move, once the switch is thrown.
And even if Nelson’s mechanics doesn’t work out, the conceptual point, of course, still stands. Moreover, we know that it’s possible to replace causal stories with stochastic ones, because it’s already happened. The transfer of heat from a hotter to a colder body was thought to be something like the movement of electrons across a potential difference. Great scientific advances were made on this basis; Carnot formulated the principle of the motive power of heat on the basis of caloric ‘falling’ from the hotter to the colder body, and basically, every consequence of the theory of thermodynamics can be derived from there, with this causal process at its heart. But of course, we know that the functioning of steam engines etc. does not thus prove the causation inherent in the model: that causation doesn’t exist. Heat doesn’t ‘fall’ from hotter to colder bodies; it’s just a statistical effect. It’s vastly more likely that, a hotter and a colder body being in contact, the hotter body cools and the colder one heats up—but the reverse is entirely possible.
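(A toy Python sketch of that statistical picture, with made-up particle counts and energies: random pairwise repartitions of energy equalize the two bodies’ averages, and while the reverse flow is possible at every single step, it is overwhelmingly unlikely in aggregate.)

import random

random.seed(2)
hot = [10.0] * 500    # particle energies of the hotter body (arbitrary units)
cold = [1.0] * 500    # particle energies of the colder body

for _ in range(100_000):
    # Randomly repartition the energy of one particle from each body:
    # a crude stand-in for a collision at the interface.
    i, j = random.randrange(len(hot)), random.randrange(len(cold))
    total = hot[i] + cold[j]
    share = random.random()
    hot[i], cold[j] = share * total, (1.0 - share) * total

print(sum(hot) / len(hot), sum(cold) / len(cold))   # both drift towards ~5.5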
To repeat: a model with an explicit causation at its heart yields true inferences—encompassing the entirety of thermodynamic phenomena—yet that causation is wrong. It just isn’t there. A difference in temperature does not cause heat to flow from hotter to colder bodies; yet, models assuming this are stunningly successful. The inferences drawn from these models, such as the impossibility of a perpetuum mobile, the Carnot cycle, or the ideal gas law, remain perfectly valid; consequently, that we find those inferences to be true doesn’t substantiate that the causal mediation they were based on is present in nature. That’s science’s strength: that one model can be supplanted by another, more broadly applicable one, without thus losing the successes of the earlier model. If doing science depended on the necessity of each model’s details, then science would be greatly diminished.
But moreover, any attempt to derive inferences from a model, obviously, just applies further models. These inferences aren’t a priori judgments, and hence, are consequent upon experience. So if you say that, for instance, the electrons traversing the filaments should produce heat, you are calling upon an earlier generalization from observations, namely, that friction causes heat; but that is itself just a generalization from some finite set of experiments, in which friction was accompanied by heating, to stipulate that this ought to always be the case. So the causal story in your current model is only apt if the causal story of that earlier experiment holds; but this just never bottoms out. All we have is generalization from observed regularities.
So, by simple logic, by explicit example, by experience, and by the strength of science: no, your successful inferences based on a model including causation do not establish the existence of causation.
Yes, a simulation in which the rules of the sim dictate that B follows A. We can observe the universe as far back as 13 billion years ago, and we have evidence that B has followed A for all of those 13 billion years. It doesn’t matter whether that evidence is simulated or not - the evidence demonstrates the existence of a rule, which the simulation obeys. That rule is exactly the sort of causation that allows us to make predictions - at least from within the confines of the simulation.
The existence of observable rules in the universe, whether simulated or actual, demonstrates the opposite of an absence of causation - it shows a very definite and well-defined form of causality, where the universe is constrained to behave in a particular fashion.
Even if all these events were simply dreams in the mind of a God or demon, it appears that the dream is following the rules of causality, and has been avoiding acausal events for billions of years (unlike, for instance, a typical human dream, which often includes such events).
Sim1:
prevValue = "A"
While TRUE
if prevValue = "A"
print "B"
prevValue = "B"
else
print "A"
prevValue = "A"
Sim2:
prevValue = "A"
i = 1
for i < 10^(61)
if prevValue = "A"
print "B"
prevValue = "B"
else
print "A"
prevValue = "A"
i = i + 1
if prevValue = "B"
print "A"
prevValue = "A"
While TRUE
if prevValue = "A"
print "C"
prevValue = "C"
else
print "A"
prevValue = "A"
It’s true, in both, that we have evidence that B has followed A for 13 billion years. In the first simulation, that will continue to be the case; in the second, it won’t. Eventually, A will be followed by C, instead. There’s no way to decide, observing only the simulation’s output before that switch has been made, whether B will continue to follow A—whether we live in Sim1 or Sim2. So all of that observation does nothing to establish whether A causes B—that is, whether A’s occurrence necessitates B’s. For if A did cause B, then it would be impossible for A to occur, with B failing to occur. But this always remains possible, no matter how long we’ve observed B following A.
You might hold that it’s not likely for a rule to be set up in the above overcomplicated way. You might hold that, by Ockham’s razor, we shouldn’t expect the rule to be like that. All of that is true. But the point is, to claim that you know that A causes B is to claim that you know that A necessitates B—but that knowledge is unattainable as long as A occurring, while B fails to occur, is possible at all. And in a simulation, it always is.
Of course, the issue of causality in a simulation is a far more thorny one. One probably wouldn’t want to say that there’s a causal relationship between A and B even in the case of Sim1; rather, if there is such a relation at all, it’s between the underlying computational substrate and the simulation it gives rise to. After all, it’s not A that somehow makes B happen, it’s that substrate that makes either happen. And even that, most people would call a relation of supervenience, not one of causation. The tiny little dots of light making up the picture on the screen you’re looking at don’t cause that picture; the picture supervenes on them. In a similar way, events in a simulation are really just a particular, ‘chunked’ way of looking at the events in the underlying computational substrate. So, one could rather point at those events as instantiating causal relations. But then, of course, we’re just begging the question: no finite amount of observation will allow us to conclude that there’s causal mediation there…
I really don’t see the difference. We don’t directly observe the world; rather we gather data from our senses and from scientific instruments, such as radio telescopes and LIGO systems. These data are the result of phenomena we can never directly observe, but whatever these phenomena consist of, they obey certain rules which are exactly consistent with causes-and-effects.
My current working model in my head of the universe is a kind of plum pudding. This has a pudding-like matrix made up of clockwork causes-and-effects, interspersed with plums which represent truly random quantum events such as radioactive decay. Such a universe would not be predictable in detail because of the randomising effects of the plums, but would still display a mechanical regularity otherwise.
There’s no need for the ghostly influence of a disembodied will for a thinking entity to make choices - a choice is a process made possible by the evolution of complex organic processing in living beings, and it would be impossible to exactly predict the outcome of such a process without replicating the entire universe in full while somehow ensuring all the random events occur in exactly the same way.
Exactly is impossible perhaps, but is it impossible to predict choices made by humans accurately? It seems to me there is a very simple linear relation between accuracy of predicting human behavior and evidence that humans lack free will. The more accurately you can predict human behavior, specifically the outcomes of choices made by humans, the more evidence you have against free will.
Psychohistory is a myth, and individual humans are fickle; I can’t imagine any way to predict them accurately, even with a computer the size of a galaxy.
It doesn’t have to be real-time. People can make decisions under laboratory conditions - in an MRI chamber or something. You could collect the necessary data, let them make their choice, and simply withhold the outcome from the prediction team until they finish their calculations.
There’s no observable difference, sure; but the point is that causal relations, should they exist, aren’t observable. And of course, being consistent with something isn’t the same as having reason to believe in something. You having wet hair is consistent with you having been out in the rain; it’s also consistent with you having taken a shower, or having had a bucket of water dumped on your head, or having stood on the shore during a surprise freak wave, and so on. Similarly, observing B following A for 13.8 billion years is consistent with us living in Sim1, and consistent with us living in Sim2—thus, doesn’t tell us whether A causes B. (Not to mention the fact that we’re really only observing the state of the world as it is right now, and need additional hypotheses to reconstruct its state in the past.)
Regarding causality in simulations, take this one:
Sim3:
i = 1
while True:             # B appears whenever the counter is even;
    if i % 2 == 1:      # whether A occurred is irrelevant to B
        print("A")
    else:
        print("B")
    i += 1
This will yield the same output as Sim1. Would you say, here, that A causes B? There’s nothing about A that makes B occur; whether A occurs is completely irrelevant to B’s occurrence, which happens whenever the counter is even. But regarding our observations, the output will be the same.
Then take this simulation:
Sim4:
i = 1
while True:
    if i % 2 == 1:
        print("A")
    elif i < 10**61:    # for the first ~10^61 steps, B fills the even slots
        print("B")
    else:               # ...and afterwards, C does
        print("C")
    i += 1
This will yield the same output as Sim2: for the first ~17 billion years, if one cycle is one Planck time, A will always be followed by B, but then, it will instead be followed by C. Nothing has changed about the way the algorithm handles A, however. A doesn’t ‘work’ any differently than in Sim3. Yet, eventually, A is followed by C, instead.
But then, there’s simply no relation between A and B (or A and C), much less a causal one. A isn’t the reason for B, it doesn’t lead to, much less necessitate, B’s occurrence. Yet observationally, we have the same data that some might say would lead us to conclude that A causes B. But then, that conclusion is just an error.
But impossibility of prediction doesn’t entail any sort of choice. Any complex system is impossible to predict without explicit simulation, where ‘complex’ here means anything with three or more moving parts, basically (the 3-body problem being the simplest one without an analytic solution in the general case). So if you’re saying that’s sufficient for choice, then if some constellation of three stars eventually ejects one of them, that was a choice made by that system. But of course, it was unavoidable from the initial conditions of the system. The same unavoidability holds for choices made by biological systems (and if it doesn’t, and it’s just some random quantum events that are responsible, then there’s no choice in that).
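(A crude Python sketch of that point, with invented initial conditions and a naive, softened Euler integrator: two copies of a three-body system, differing by one part in a billion, drift apart, so that prediction in practice means explicit simulation.)

import copy
import math

def euler_step(bodies, dt=1e-3, G=1.0):
    # One naive Euler step of planar Newtonian gravity.
    # Each body is a list [x, y, vx, vy, mass].
    accels = []
    for i, bi in enumerate(bodies):
        ax = ay = 0.0
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx, dy = bj[0] - bi[0], bj[1] - bi[1]
            r3 = (dx * dx + dy * dy + 1e-6) ** 1.5   # softened to tame close passes
            ax += G * bj[4] * dx / r3
            ay += G * bj[4] * dy / r3
        accels.append((ax, ay))
    for b, (ax, ay) in zip(bodies, accels):
        b[0] += b[2] * dt
        b[1] += b[3] * dt
        b[2] += ax * dt
        b[3] += ay * dt

sys1 = [[0.0, 0.0, 0.0, -0.5, 1.0],
        [1.0, 0.0, 0.0, 0.5, 1.0],
        [0.0, 1.0, 0.5, 0.0, 1.0]]
sys2 = copy.deepcopy(sys1)
sys2[0][0] += 1e-9               # a one-part-in-a-billion nudge

for _ in range(20_000):
    euler_step(sys1)
    euler_step(sys2)

# How far apart has body 0 drifted between the two otherwise identical runs?
print(math.dist(sys1[0][:2], sys2[0][:2]))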
To say that you eating peanut butter on toast for breakfast was due to your choice is the same as saying in the simulation that B no. 2,768,342 was due to A no. 2,768,341. But A no. 2,768,341 doesn’t have any real sway in whether or not B no. 2,768,342 occurs. It’s just set from the beginning. If, in Sim1, I had initially specified prevValue = B, then event no. 2,768,342 would have been A instead of B. So, this distinction matters. But once that choice is set, everything is completely fixed. What happens on the road, so to speak, doesn’t matter; it’s the only road that can be taken. It’s like going down a rollercoaster: once you’re underway, everything is fixed. Calling some stretch of track a ‘choice-making process’ is just window-dressing.
Now, of course, that’s perfectly fine. If that’s how the world is, it’s how the world is. Rollercoasters are fun, despite the ride being fixed from the start; it’s the experience of the ride that makes them worthwhile, not whether we get to influence its direction. Crime novels, TV serials: lots of things we enjoy are perfectly fixed in advance. Why shouldn’t life be like that?
But for those who want choice, who think it’s not sufficient for life to be like that, I don’t think meaningful choice can be found in just singling out some particular whirring gears and coiling springs and call that process ‘choice’. If that’s the case, then a stone tumbling down a hill is exercising its power of ‘choice’ in which way it hops and bops. I don’t see how that sort of choice makes any difference.
That said, of course being predictable isn’t in and of itself inimical to being free in one’s choices. If I’m free at all, I’m free to choose the same thing over and over again. If I just don’t like peanut butter, I won’t choose it as long as something else is on offer. That doesn’t mean it’s not the case that I could’ve chosen differently—I just didn’t. And I won’t. So, you could be perfectly accurate in predicting my behavior in that case. And you could spin a story according to which peanut butter ‘causes’ me to avoid it. But that doesn’t mean I’m not just avoiding it out of my free choice. That just shows how little our observation actually constrains the possibility for how things really happen.
The digits of Pi have highly predictable normalcy. They are therefore random. It is predictable that, regardless of Carl Sagan’s conjecture, King Lear will never appear as linear text in the digits of Pi. But all of the digit codes will appear with the uniform frequencies that normalcy implies.
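(A quick sketch of that predictability, assuming the mpmath library is available: tallying the first ten thousand or so decimal digits of Pi shows each digit code appearing with nearly equal frequency, even though no single digit is predictable short of computing Pi itself.)

from collections import Counter
from mpmath import mp

mp.dps = 10_001           # work to about 10,000 decimal places
digits = str(mp.pi)[2:]   # drop the leading "3."
counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d])   # each appears roughly 1,000 times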
Measurements of natural events yield normal distributions. This is useful in process control. Failures that have a normal distribution are natural. Failures that do not have a normal distribution are not random and therefore have a cause.
This raises !P’s issue of ‘cause’ semantics. Both failures have some cause. Naturals can be explained by known processes that occur naturally. Non-random failures are introduced by something unanticipated being injected into the process. In that case a willful cause produces a non-normal result. It is an example of free will unless our definition of free will requires mystical intervention or a homunculus…
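(A minimal Python sketch of that distinction, with invented numbers: natural Gaussian variation almost always stays inside the 3-sigma band, while a willfully injected disturbance stands out as non-normal.)

import random

random.seed(3)
data = [random.gauss(100.0, 2.0) for _ in range(500)]   # natural process variation
data[250] += 15.0                                       # something willfully injected

mean = sum(data) / len(data)
sd = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
flagged = [i for i, x in enumerate(data) if abs(x - mean) > 3 * sd]
print(flagged)   # index 250 stands out (plus, rarely, a natural 3-sigma excursion)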
Not as long as it just hops and bops in the ‘normal’ way. Very many years ago, a couple of guys at IBM Advanced Systems made an RC bowling ball. When bowled, the ball would hop and bop normally, then suddenly take an unanticipated turn. The ball exhibited free will because it did not follow the normal path for a balanced ball.
Anticipating objections:
The ball didn’t think - not required; the ball acted in a willful manner
The guy with the RC transmitter did it - no, he created the necessary conditions, but the physics of the ball did it