Free Will with a Little Bit of Science! (TM)(C)(R)

As some of you know, I’ve recently been working in a medical lab doing computational medicine/neuroscience work. Specifically, I’m working on:

  • A study on optimal vaccine distribution
  • A study on brain modelling <---- this is the one of interest here
  • A study on machine learning in medical education

I’ve been working on that middle study for about a year, and I recently got a couple of interesting results. One of them is a model for predicting (or inferring) brain activity states from a sequence of prior brain activity states. Being a model, this is, of course, an abstraction. I am not simulating all of the hundreds of billions of neurons in the brain, and all of their connections. Without getting too much into the technical details, I’m using graphs that have self-contained modification rules.

Our initial experiment was to use deterministic rules only. This was not very successful.

So we added stochastic rules (rules with an associated probability). This means that it is only possible to predict either the most probable next brain state, or some set of possible brain states with their associated probabilities of occurring.

When we changed to stochastic rules, the accuracy went up considerably. Currently, it is about 60% accurate at predicting the next most likely brain state.
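To make that concrete, here’s a toy sketch in Python (not our actual lab code; the state labels and sequence are made up) of the basic idea: learn stochastic transition rules from an observed sequence of abstracted brain states, then predict the most probable next state.

```python
from collections import Counter, defaultdict

# Hypothetical toy version of the idea: learn stochastic transition
# "rules" from an observed state sequence, then predict the most
# probable successor of a given state.
def learn_transitions(sequence):
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    # Normalize the counts into per-state probability distributions.
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

def predict(rules, state):
    # Return the most probable successor state, or None if unseen.
    dist = rules.get(state)
    return max(dist, key=dist.get) if dist else None

# Toy sequence: 'A' usually leads to 'B', 'B' always back to 'A'.
seq = ['A', 'B', 'A', 'B', 'A', 'C', 'A', 'B', 'A', 'B']
rules = learn_transitions(seq)
print(predict(rules, 'A'))  # the most probable successor of 'A' is 'B'
```

Measuring what fraction of the time the top prediction matches the actual next state is the accuracy figure I quoted above.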

Now, the intent of this research has nothing to do with free will. Rather the next steps are to further improve accuracy, and see if there are transition patterns that would allow us to recognize a neurological injury, disease, or the state (at rest, overloaded, etc.). Wonderful.

But it did get me thinking about free will. Any time you have a stochastic model, the obvious question is whether the underlying process being modeled is, in fact, stochastic, or is there some information that is missing that would allow for a deterministic model to be produced.

  1. Suppose we find a fully deterministic model with 100% accuracy. Would you take this as evidence against free will? It seems to me like a deterministic model with 100% accuracy has to imply that there is no free will. … Except there’s always that little bit of wiggle room coming from the models having some layers of abstraction.

  2. Consider now the stochastic model. Suppose the accuracy is improved such that the next most probable brain state matches the actual brain state with some percentage x. (Let’s say x=95%, but you can pick whatever you wish). And furthermore, suppose that in the remaining cases, the brain is always within the top N predicted states (again, you can pick N to be whatever you like). The point is suppose that the model has, with these conditions, 100% accuracy. Would you consider this to be evidence for or against free will?

  3. Consider now our existing stochastic model with an accuracy of about 60% at predicting the next brain state with a certain probability. Do you consider this to be evidence for or against free will?

  4. Let’s just generalize: what sort of scientific evidence would you consider definitive, or at least extremely compelling, with respect to the existence or non-existence of free will? The non-existence of free will has the misfortune of being on the “trying to prove a negative” side of things, so we’ll only hold it to the standard of extremely compelling.
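For anyone who wants question 2’s criterion pinned down, here is a minimal sketch (with hypothetical distributions, not our model’s output) of the “actual state falls within the top N predicted states” accuracy measure:

```python
# Sketch of the top-N hit-rate criterion from question 2: a prediction
# counts as a hit if the actual next state appears among the model's
# N most probable candidates. Distributions below are made up.
def top_n_accuracy(predictions, actuals, n):
    """predictions: list of {state: probability}; actuals: observed states."""
    hits = 0
    for dist, actual in zip(predictions, actuals):
        top_n = sorted(dist, key=dist.get, reverse=True)[:n]
        hits += actual in top_n
    return hits / len(actuals)

preds = [{'A': 0.7, 'B': 0.2, 'C': 0.1},
         {'A': 0.1, 'B': 0.5, 'C': 0.4},
         {'A': 0.2, 'B': 0.35, 'C': 0.45}]
actual = ['A', 'C', 'B']
print(top_n_accuracy(preds, actual, 1))  # only the first case is a top-1 hit
print(top_n_accuracy(preds, actual, 2))  # all three fall within the top 2
```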

FYI, I believe in free will. I have no real evidence for this other than my own experiences and observations of the universe. When I see the universe, I do not see something that is driven purely by the inevitableness from an initial physical state. I believe that we have a soul/spark/whatever that allows us some control over the mechanisms of our thinking. I’m not necessarily trying to (or going to try to) convince anyone I’m correct, I’m just stating what I think to provide context to the post.

Note, to any mods, I fully expect that this thread will descend into a more general free will debate. I’m ok with that. There’s no need (from my point of view) to call any such discussion as a hijack.

How did you even define deterministic rules? Accurate rules would require information about sensory inputs and internal brain states that would be difficult for you to get. Stochastic rules might work better because the operation of the brain is truly stochastic, or because they model deterministic rules applied with inadequate information.

First of all, I think your computational modeling sounds very interesting and I would like to hear more about it. For your stochastic model, are you using some kind of Bayesian approach with a prior estimate that is updated with a posterior probability, and if so, does that give any insight as to how ‘random’ the observed behavior actually is? I’d like to read more. FWIW, your observations are consistent with current work in computational neuroscience, as I’m sure you are aware.

With regard to the question of ‘free will’, I’ve long held that any sufficiently complex system that results in something with the qualia of subjective experience is indistinguishable from free will. Even a system with purely deterministic physical mechanics will have a degree of random variability due to perturbations (e.g. error, noise, et cetera) if it is really complex, and of course, the brain is not a reversible state machine because it is constantly self-modifying as it builds new connections (memories, skills) even if the overall architecture is relatively predetermined.

At some point, the argument over whether such a state is or is not ‘free will’ becomes a semantic distinction or a philosophical argument, neither of which are theses that can be resolved by experiment or observation. That being said, for whatever degree of ‘free will’ humans or other animals have, much of our actual behavior is autonomic, and the play of ‘free will’ comes in the rationalization of why you made a decision. When you decide what to have for lunch, it may seem like you went through a rational process of consideration and elimination until you downselected to your French dip with au jus, but really there are layers of ‘decision-making’ well below any conscious process that prompted your gustatory cortex to recall the memories of previous meals and indicated that this is the thing that would most satisfy your hippocampus, with your forebrain perhaps invoking other desires such as adhering to a preplanned diet or the foresight of knowing that this selection will result in heartburn. This is why trying to adhere to a restrictive diet is so challenging; you can make the commitment to eat healthy and nutritious food, but in the moment of selecting a meal from an assortment, various other areas of the brain are prompting different choices that have nothing to do with your previous rational intent, and the ‘discipline’ of adhering to a diet is more about conditioning the brain to reject those impulses than exerting conscious self-control.

So, in short, I don’t think there will ever be a scientifically-definitive determination of the existence or extent of ‘free will’. I think we can observe via behavior the extent to which deliberate, pre-determined choices overtake autonomic impulses, but I don’t think you can actually separate those on a fundamental level because so many unconscious processes underlie our perception of consciousness. I think we probably have less ‘free will’ than we believe, but enough to at least cultivate the ability to self-modify behavior in desired ways. I think we also underestimate the degree of ‘free will’ many other creatures have versus ‘instinctive’ behavior, but again, I doubt we can quantify that in a meaningful way beyond qualitatively describing the ability to self-modify behavioral patterns.

Stranger

You should not be surprised by these results. People are that predictable, and I am sure you can do much better than 60%. Psychologically, if people want free will, they really have to work at it, ironically including using techniques to reprogram themselves, for example, to stop craving cigarettes.

Not sure that the deterministic vs stochastic distinction is always relevant at a certain level. If a dice roll influenced some process, is it more “free” than merely being influenced by a chaotic complex system? [Also note that chatbots, neural-network text manglers, etc all have stochastic elements but no “will” at all.]

There is a lot of interesting background philosophy concerning the problem of consciousness itself, including whether it even exists, but that is too much to summarize even were I an expert and is IMO not relevant to your case.

I think the key to your problem is indeed the layers of scaling and abstraction. Perhaps a good analogy would be predicting the weather. You really need that abstraction, as you are not modelling individual gas molecules, nor would it improve much if you could. I would not draw major philosophical conclusions either way. If you want to examine free will you probably do need to go back to some philosophy for a definition you can test for in your computational models, or at least something in that direction you can begin working with, like ergodic criteria. Maybe you should work backwards: figure out, to 90% accuracy, how people choose between the orange and the banana with and without the presence of the apple, and reverse-engineer the workings of their minds to a good extent.

If I have understood correctly from the o.p., they are actually building and applying a computational model using some kind of active neural imaging like functional magnetic resonance imaging (fMRI) that is intended to replicate activity in defined neural circuits. Behavioral experiments have the obvious limitation that the observer can only infer internal states or functional activity from behavioral responses, and there is such a wide array of both external influences and prior conditioning that the ability to run a controlled, repeatable study is limited to only the most distinct inputs and responses (not to mention the logistical difficulty of running such experiments with a statistically significant and homogeneous population), hence the recent uproar over the lack of successful replication studies in psychology and behavioral neuroscience.

Stranger

I’m not quite sure what exactly you are asking so if this doesn’t answer your question then feel free to ask it again.

We model the brain as a series of graphs with nodes indicating activity in some defined region of neurons (obviously we cannot use graphs with 100 billion nodes, so a node represents some collection of neurons). So, we end up with G1 => G2 => G3 => … => Gn. We hypothesize that => is some set of rules that produce Gi from Gi-1 (actually, from Gi-j through Gi-1). I use machine learning to find the rules that comprise =>. A rule is deterministic if, say, N5 in Gi always → N6 in Gi+1. A rule is stochastic if N5 in Gi → N6 in Gi+1 with p=50% and → N7 in Gi+1 with p=50%.

It is not Bayesian. It is purely functional. In other words, there is some function f(G) where G is some graph such that f(G) = G’ where G’ is the graph of the next brain state.
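If it helps, here is roughly how I would sketch that formalism in Python (a simplification for illustration, not the actual implementation, and ignoring the Gi-j history window): a state G is a set of active nodes, each rule maps a node to weighted successor nodes, and f either samples the rules or takes their most probable outcome.

```python
import random

# Sketch (illustrative only) of the rule formalism described above:
# a brain state G is a set of active nodes, and each rule maps an
# active node to possible successor nodes with probabilities,
# exactly like the N5 -> N6 / N7 example.
rules = {
    'N5': [('N6', 0.5), ('N7', 0.5)],   # stochastic rule
    'N6': [('N5', 1.0)],                # deterministic rule
}

def f(G, rng=random):
    """The transition function f(G) = G': sample one successor per node."""
    G_next = set()
    for node in G:
        outcomes, probs = zip(*rules.get(node, [(node, 1.0)]))
        G_next.add(rng.choices(outcomes, weights=probs)[0])
    return frozenset(G_next)

def most_probable_next(G):
    """Deterministic prediction: take each node's highest-probability outcome."""
    return frozenset(max(rules.get(n, [(n, 1.0)]), key=lambda o: o[1])[0]
                     for n in G)

print(most_probable_next({'N6'}))  # frozenset({'N5'})
```

With only deterministic rules, f and most_probable_next always agree; the stochastic rules are where they come apart, and where the 60% figure comes from.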

First, I appreciate the rest of your post. I don’t really have anything to say about it, but I did read it and enjoyed it.

This is what I find interesting about this work. Suppose that I could in fact build a graph with 100 billion nodes, 1 to 1 with the number of neurons in a brain. It is still a model, an abstraction, because it would not be neural tissue. And even if I could model each node as neural tissue, it would not be THAT neural tissue with all of its specific states all the way down to the quantum level.

But suppose that such a model has 100% accuracy at inferring the next state in a deterministic way. Wouldn’t this have to be a nail in the coffin of free will (and make me quite sad)?

And then, if such a model is non-deterministic, it doesn’t prove anything, because of the extreme difficulty of proving a negative; i.e., some would say, “Oh well, it is stochastic because of the differences in noise between your graph and the actual brain it is modeling! No free will!” But it seems to me that there comes a point where, yes, there can almost always be more reasons why it isn’t free will, but it shrinks that window. Or at least it does to me; however, I recognize that I have a bias towards believing in free will.

Would such a model be convincing to you in any way, while not being proof?

See Stranger On A Train’s reply. He has it exactly right. Right down to some of the problems we’re having, soooooo I’m kind of wondering if he’s in our lab. :slight_smile: :laughing:

I think that’s the tricky bit. Suppose there is found to be some extremely accurate stochastic model. The question is what is the cause of the stochasticity? Is it a die roll? Some quantum effect? Or is it the expression of free will?

As in the OP, I don’t think it is possible (or it is at least extremely difficult) to prove free will, because saying “here it is” can always be countered with “Ahh, but …” It is like the invisible pink elephant in the backyard. Why can you not touch it? Oh well, it is very quick. How come I cannot hear it move? It had ninja training. How come …?

While absurd, to me it highlights that the explanations have to get more absurd as the window closes around the truth: that there is no invisible pink elephant at all.

If you’ve followed any of my other posts on my research then you’ll know my previous work was on inferring the algorithms of natural processes from observational data captured from the process over time.

I cannot help but wonder if maybe at some point, it might be possible to do algorithm inference on the brain and be able to say “There is something ‘executing’ here, that seems to control decision making but is itself non-deterministic.” In other words, to be able to recognize the “free will” function in the brain, while recognizing again that there can be alternative explanations.

My curiosity here is, at what point does it become compelling evidence, even if not proof.

The model not being an actual brain is a distinction that many people don’t grasp (and is also the reason that I doubt we will ever be able to ‘upload’ consciousness into a simulacrum of a brain running on top of a digital substrate) but it is kind of a fundamental problem, because while you could tune your model to any degree of fidelity necessary to replicate an observed functional state, it still won’t behave in the way the brain does in modifying itself in use. You would have something that functionally looks like a brain, but wouldn’t do the things the brain does, or to borrow from Korzybski, “The map is not the territory.”

An actual model of the brain that functioned sufficiently well to produce comparable results is probably going to be essentially as complex as the brain itself, and thus practically just as difficult to get useful comprehensive measurements from (although if you could dispense with the difficulty and nuisance of fMRI, it would at least make the work easier). So even if you had a model with “100% accuracy”, I don’t know that you’d be able to make measurements detailed enough, and yet still interpretable, at a level of granularity sufficient to dismiss ‘free will’.

As an example from my field, you can have a computational fluid dynamics model that will represent the airflow around a body with a sufficient degree of accuracy to predict aerodynamic response (at least, within certain regimes) but it isn’t as if the model is actually simulating the flow of every individual gas molecule; it is just applying the Navier-Stokes equations to a discretized volume around the rigid boundary of the body, and then using a bunch of averaging rules, turbulence models, and compressibility factors to get the thermodynamics and conservation laws to come out right. The result looks ‘right’ compared to a wind tunnel test (at least, we hope it does) but it doesn’t predict the path of any individual particle, much less the entire mass of them. That doesn’t make the model useless–in fact, CFD can be incredibly useful in modeling behavior and give the ability to put ‘sensors’ in areas we couldn’t achieve in ground testing–but it is always just an approximation.

BTW, if you are interested in the neuroscience field in general, the Brain Science Podcast is actually an excellent ongoing survey of the field with interviews with working neuroscientists and neurophysiologists. The podcaster is not a neuroscientist but an MD who is interested in neuroscience, and while she’s a bit dry she manages to keep the discourse at a level that is well above pop-sci journalism but doesn’t require you to have read through Principles of Neural Science several times as she does a good job of summarizing key points and providing show notes and references. The podcast doesn’t really go into computational neuroscience, but it does tend to straddle between the hard science and philosophy of consciousness.

Stranger

Free will? It seems incompatible with stuff like General Relativity, would it not?

Funny thing about CFD is that even though a sufficiently complex system is chaotic, and sufficiently complex is not all that complex, that does not mean that the system is non-deterministic. People like the concept of free will and a sense of self and moral culpability but that doesn’t mean the mind isn’t deterministic.

Sadly, I agree with everything in your post. I’m not much creating “Great Debates” I guess. :slight_smile:

Thanks, I’ll check this out.

I have no idea.

While I, too, would like to hear more about your research, as you anticipated this would turn into a general discussion on free will, I would just like to mention a few points related to this. In philosophy, this is generally known as the consequence argument, whose modern formulation is essentially due to Peter van Inwagen.

Its main assumptions are that the past is the past, and the laws are the laws—that is, there’s no way to change the past, and the physical laws uniquely determine the future given the past. These seem, to many, unassailable, but of course, for any obvious truth, there’s a philosopher who’s made a career out of questioning it.

One particular way of questioning the ‘law is law’ part is to appeal to a philosophical stance known as Humeanism—as might be recalled, Hume challenged the idea that we can know anything of causality; all we can know is certain things habitually occurring together. So on a Humean view, the world is just a sort of patchwork of local effects (this particular reading of Hume is mostly associated with David Lewis). A world looks like it is governed by law if it has a highly compressed description (you are probably familiar with algorithmic information theory, so I won’t dwell on it, but the gist is that a law is anything that takes a set of data and outputs a shorter description from which the data can be regenerated—like ‘compressing’ the complete set of points a baseball traverses after having been thrown to its initial conditions and the law of gravity).
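(A toy illustration of that compression picture, with made-up numbers: the full list of points a thrown ball traverses can be regenerated from just the initial conditions plus the law, which is the sense in which a law ‘compresses’ the data.)

```python
# Toy illustration of 'law as compression': 1000 observed data points
# versus a three-number description (initial conditions + a law of
# motion) from which they can all be regenerated. Numbers are made up.
G = 9.81  # gravitational acceleration, m/s^2

def law(v0, dt, steps):
    """Regenerate the ball's height at each time step from initial data."""
    return [round(v0 * k * dt - 0.5 * G * (k * dt) ** 2, 6)
            for k in range(steps)]

# The 'world': 1000 observed data points.
data = law(v0=20.0, dt=0.01, steps=1000)

# The 'law + initial conditions' description is just three numbers...
short_description = (20.0, 0.01, 1000)
# ...yet it regenerates the whole data set exactly.
print(law(*short_description) == data)  # True
```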

But then, laws really don’t have any sort of compelling power—that you had peanut butter rather than jelly on your toast may be derivable from the initial conditions of the universe and the physical laws, but that doesn’t mean that you couldn’t have had jelly: it’s just that in that case, the laws would’ve been different.

Another way to challenge the consequence argument is the following. Suppose you have an agent capable of miracles. They are able to, at their whim, violate the laws of physics, and spread jelly rather than peanut butter, even though only the latter is compatible with physical law. Suppose now that they elect not to do so: they go along with the physical laws, because they really do want peanut butter on their toast. It would be weird not to, in some way, consider this agent still free—after all, they could have violated the laws of physics, they just chose not to (this is related to so-called Frankfurt cases). But then, it’s logically possible to act in accordance with all the laws of physics, and still be free—hence, pointing out that the future is uniquely determined given the laws of physics does not logically exclude the possibility of freedom (a universe in which everything always happens according to the laws of physics and in which there is nevertheless at least one genuinely free agent is possible).

Finally, one may question the definiteness of the past. At least macroscopically—and on some plausible assumptions even quantum mechanically—many possible (microscopic) pasts are compatible with our observations, and a decision for one option rather than another may then just determine one (subset of) the possible microhistories rather than another. (This argument has been recently championed by physicist Carlo Rovelli (link to video lecture), but I actually think it’s more fully developed by Barry Loewer in this article.)

Hence, regarding your question, I think that the determinism of the world isn’t necessarily a complete rebuttal of the idea of freedom. But I also think that stochasticity doesn’t necessarily help—a random event isn’t really subject to will, in any sense. What we want isn’t really merely to be free from coercion (whether by external agents or the laws of physics), which a fairly rolled dice might have some claim to be, but rather, we want to have meaningful choice—and there are arguments against that that I think are significantly stronger than the consequence argument, such as the argument from regress: in order to make a choice, you must meaningfully make up your mind—that is, you have to have some reasons. But why do you have these, rather than other, reasons? They could be accidental (many of our reasons probably are), but then, you’re not really ultimately free—your choice is ultimately determined by these accidental reasons. Hence, you ought to have reached those reasons deliberately. But what reasons did you have to arrive at these reasons? And so on, ad infinitum. (This argument is most closely associated with Galen Strawson in recent years.)

Getting around this is, I’m afraid, rather more tricky. I think it can be done, but it requires some degree of metaphysical radicalness, and isn’t really going to fit into this post. :slight_smile:

Even in physical theories that lack randomness, dependence on initial conditions is often something you pretty much have to put into the theory by hand. In Newtonian mechanics you can contrive a situation like Norton’s dome, in which for a given set of initial conditions there are infinitely many ways a system can evolve. Even worse, given a set of initial conditions in general relativity, there is ALWAYS an infinite number of ways a spacetime can evolve. In both cases you can add conditions on how a system may evolve so that it evolves uniquely. However, these conditions really serve no other purpose but to guarantee uniqueness and may be too restrictive; for example, in general relativity they also place restrictions on the initial conditions themselves.

I suppose my point is that multiplicity of outcomes is not something that MUST be associated with stochasticity, though the laws of probability are needed in order to maintain the ability to predict.

The literal reality of what actually happened is irrelevant to us, though. The past is only significant in terms of what we perceive it to be. Which is to say that the past may seem to us to be carved in stone, but then we may, and often do, disagree on what actually happened. What we perceive or think happened is what affects our behavior (and I think this applies either to some extent, or entirely, to any given system from a human to a hadron), and sometimes our understanding of prior reality undergoes revision.

Which is to say the past, upon careful inspection, appears to be as dynamic and unknowable as what is to come. Really, even the boundary layer of now is not discrete, like a smooth plane, but comparatively nebulous and shifting: my this-exact-moment is very meaningful to me, but you cannot really make meaningful sense of it, since you are over there and I am way over here.

Hence, determinism fails by definition. It demands a strict Cartesian model of time/spacetime, which does not readily map to reality. Yet, at the same time, to insert ‘free will’ in its place seems premature, at least until we understand why it needs to be there. For all I can tell, it looks like a vestigial appendage to our understanding of stuff.

Deterministic and calculable are two separate issues. With determinism and an eternal universe the passage of time itself might not be anything more than a perception.

Reality doesn’t depend upon perception or measurement. That seems like a philosophical tangent in a purely physical universe.

Deterministic means that the past completely determines the future. This means that Newtonian mechanics (unless forces are restricted to be Lipschitz continuous) and general relativity (except when restricted to globally hyperbolic spacetimes) are not deterministic. In both cases you can find examples of time evolutions that don’t completely depend on initial conditions, or in other words, where the past doesn’t determine the future.

In both cases we know these theories are not 100% accurate, whether or not you place restrictions on them to make them deterministic, but I’m not trying to make any points about physical reality. I’m trying to make a point about physical theories/models, i.e. it is not necessarily correct to think a model is either deterministic or stochastic. As I also alluded to though predictability and hence usefulness is a huge issue for a non-stochastic, non-deterministic model.

The behavior of an entity/system has some small or large effect on reality. That behavior is based on that entity’s understanding of the environment, which is predicated on perception of past events. Living things that have a measurable capacity for adaptation behave according to their understanding of the configuration of the present, as predicated upon how past events have affected this moment.

For deep-reasoning beings, such as humans seem to be, the understanding of past=>future vectors governs many behavior patterns, which means that our perception of what the past was affects how we behave. Every individual has a slightly different understanding of what actually happened, and in some cases that perception can be subject to revision.

In realistic terms, the past is as turbulent and uncertain as the future. Some things definitely did happen and some things absolutely will happen, but as you move outward from the core certainties, the capability for sharp focus drops off in both time directions.