Yep! That’s how probability and apparent randomity would occur in a purely deterministic system. Pseudorandomity is actually a well-known thing to us computer programmers, because computers by and large just don’t do actual randomity. So-called “random number generators” actually function by taking a determined input (typically the current time on the system clock) and running it through gnarled-up math to produce a series of outputs that look random, but which actually aren’t. This is pretty much where “randomity” comes from inside a computer.
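To make that concrete, here’s a toy sketch of the idea in Python - a linear congruential generator, one of the simplest classic designs (real libraries use more elaborate math, but the principle is identical):

```python
import time

def make_prng(seed):
    """Linear congruential generator: a deterministic formula that
    produces a random-looking stream from a fixed starting seed."""
    state = seed
    def next_value():
        nonlocal state
        # Classic LCG constants; nothing random happens here,
        # it's just multiplication and addition modulo 2**32.
        state = (1664525 * state + 1013904223) % 2**32
        return state
    return next_value

# Seed with the system clock, exactly as described above.
rng = make_prng(int(time.time()))
print([rng() % 6 + 1 for _ in range(10)])  # looks like ten dice rolls

# Re-seed with the same number and you get the identical "random" sequence.
rng2 = make_prng(12345)
rng3 = make_prng(12345)
assert [rng2() for _ in range(100)] == [rng3() for _ in range(100)]
```

Same seed in, same “random” sequence out, every single time.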
This sort of thing is as close as you could get to randomity in a deterministic universe - and it could look really random in practice, as the coin flip example does a great job of demonstrating.
Note that the universe might not be deterministic - there could also be actual randomity present. However, I argue that if you consider how decision-making actually works, adding randomity to it isn’t an addition of will, free or otherwise; it’s a reduction of will. The more random you are, the less will you are exercising. Something that is completely random is acting on no will at all.
If libertarian free will is just “deterministic free will with some random elements preventing it from being perfectly predictable”, then “libertarian free will” is “freedom from will”.
There’s no paradox here (though there is one potential problem). Once you add knowledge of your current brain state to that brain state, the resulting brain state is necessarily different from what it was before you added the awareness. The new brain state may or may not decide things differently than the previous brain state would have.
Note that something like this can actually happen in real life - you can say, “Well, I know that normally I’d choose the strawberry pie here, but I’m feeling contrary today for some reason (perhaps I’m trying to prove to somebody that I have free will), so I’ll choose the shit sandwich instead.” Of course you didn’t actually do a full simulation of your normal brain state; you just approximated it, due to that potential problem I mentioned earlier:
The potential problem is actually one of storage. Consider a brain that holds 100 bits of data. If this brain tries to store an image of its own state inside itself, it only has those same 100 bits to store the image in. There’s no room for anything else but the image - the image becomes the new brain state, with no room left over for thinking about the fact that it is the brain state.
“But compression!” you might exclaim, and you might have a point. However, proofs exist showing that regardless of what form of compression you use, the compression must be either:
- Lossy (like the approximation I mentioned earlier)
- or sometimes counterproductive. Any non-lossy scheme will have some data arrangements that don’t compress at all, and some that actually get larger when compressed. This is unavoidable for counting reasons: there are more possible inputs of a given length than there are shorter outputs to map them onto, so some inputs must come out at least as big as they went in (see the sketch below). And it does put the kibosh on reliably being able to simulate your entire brain using only a part of your brain.
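You can watch the “counterproductive” case happen with any off-the-shelf lossless compressor. A throwaway sketch (Python’s built-in zlib here; the exact sizes are just illustrative):

```python
import os
import zlib

# Patterned data compresses nicely.
repetitive = b"strawberry pie " * 1000
print(len(repetitive), "->", len(zlib.compress(repetitive)))      # shrinks a lot

# Already-random data has no patterns to exploit, so the "compressed"
# version typically comes back slightly LARGER due to format overhead.
random_bytes = os.urandom(15000)
print(len(random_bytes), "->", len(zlib.compress(random_bytes)))  # slightly bigger
```

The exact numbers will vary, but the random input never gets smaller - there’s simply nothing there to squeeze out.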
There’s also the extreme likelihood that your brain has less storage capacity than it has moving parts. It’s the same reason that emulating a computer requires noticeably more storage than the computer being emulated had: the emulator has to represent the target machine’s entire state on top of its own workings. Plus, of course, human brains don’t actually know their own physical makeup (it took hundreds of years to figure out!), so people would never emulate their own thoughts to that precision anyway. But they can, and do, think about and approximate their thought patterns in general terms. Far from being a paradox, that’s pretty normal.
People don’t really do random shit. Actual random shit would probably not result in coherent action - if I “randomly” decide to punch you, that’s me identifying you as a target, locating you in physical space, orienting my body, and carrying out a rather complicated set of muscular adjustments to cause my fist to clench and move rapidly towards you. Actual random impulses occurring in my brain would probably look more like a spasm.
And of course murdering your family is a scenario I invented, because I don’t think I’ve yet heard a libertarianist come up with any scenario that is supposed to demonstrate libertarian free will that is not obviously just as easily explainable with compatibilist free will. Libertarian free will adds nothing.
Especially since libertarianists tend to insist that it’s not randomity either, which really screws things up. (And not just because it’s literally impossible by definition.) There is literally nothing they could possibly point to as an example of libertarian free will that is not more easily explained by determinism with a sub-microscopic dash of randomity. (Or no randomity at all.)
The state of the universe at time T+1 occurs because the state of the universe included a decision made at time T, which was instigated by the state of the agent’s mind at time T-1. If not for the physical machinations that occurred while the mind considered its options and selected one of them preferentially, the physical universe would not have ended up in the state it was in after the decision was made.
Seriously, I don’t understand what non-compatibilists imagine is going on here. At time T-1 Fred is looking at an apple and an orange. At time T+1 Fred is picking up the apple and biting into it. What do they think happened in the meantime? Nothing? That the molecules of the universe moved randomly, in defiance of the laws of physics, to get from where they were at time T-1 to where they were at time T+1 without stopping anywhere in between? Do they think that everything happening in the brain was meaningless, unaware static, and that Fred doesn’t even know he’s reaching for the apple? What?
You’re arguing that Fred didn’t make a choice. Why is he suddenly eating the apple and not the orange? Because a freak wind blew the apple into his mouth or something?
“Wouldn’t be predictable nor entirely random” describes a universe with random elements. This universe wouldn’t be a deterministic one, at least not technically. However, it could be mostly deterministic. (In fact, if an agent is to continue existing for more than one unit of Planck time, it pretty much has to be.)
The thing is, though, randomity doesn’t add anything good to the decision-making process. Randomity is a reduction in will - in fact, it’s an abrogation of free will. Because randomity could be described as an external force (the randomity, which isn’t caused by anything about you) intruding into and interfering with your will. Which is no better than Zeus reaching into you and flipping your switches around.
Hold on a second - if your plan is to wait for somebody to shout “fall” before you drop the marble, and you’re planning to drop it as soon as they do, then they’re causing you to drop the marble. Sure, the chain of causation is rather esoteric, but it’s certainly there. You just argued a deterministic example, where the system that encompasses the marble, the human, and you chose that the marble would fall.
The non-deterministic example would be if you set a timer to drop the marble after ten seconds, and then your volunteers shouted “fall” or “hover” whenever they wanted. But what are the odds that that will work out? The vast majority of your volunteers will figure out they’re not causing anything, especially if you let them do the test multiple times. (Read: let them do some science on it.)
“Neither fully determined nor random” just means “deterministic with random elements” or “nondeterministic”. Which certainly could be occurring, but it wouldn’t be libertarian free will, unless libertarian free will is way lamer than libertarianists claim it is.
Also worth noting: just because the universe is nondeterministic doesn’t mean cognition necessarily is. It’s entirely possible for a physical system to account for and correct minor perturbations in its state, and so to operate deterministically regardless of those perturbations. Computers do this all the time with regard to minor fluctuations in the electrical current, and if they didn’t they wouldn’t work at all. I’m firmly convinced that the human brain operates with similar protections against most or all randomity within itself, because we’re not spasming all the time.
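For a sense of how that noise-rejection works, here’s a toy model (in Python, with made-up voltage numbers - real logic families define their own thresholds, but the idea is the same): the wire carries a noisy analog voltage, yet the decoded bit comes out the same on every read.

```python
import random

def read_bit(nominal_voltage):
    """Model a digital input: small random jitter rides on the wire,
    but anything below the threshold reads as 0 and anything above
    reads as 1, so the noise never changes the result."""
    noise = random.gauss(0, 0.2)            # random perturbation on the wire
    measured = nominal_voltage + noise
    return 1 if measured > 1.65 else 0      # midpoint threshold for 3.3 V logic

# The noise is different on every read, yet the decoded bits come out
# identical every time: deterministic behaviour built on top of (and in
# spite of) randomness.
message = [0, 1, 1, 0, 1, 0, 0, 1]
for _ in range(5):
    decoded = [read_bit(3.3 if bit else 0.0) for bit in message]
    assert decoded == message
print("noise present on every read, output identical on every run")
```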
You could tell me what you think libertarian free will is, how it works, and how it adds value to cognition beyond what a completely deterministic cognition can offer.
I could get you to understand what compatibilist free will is, how it works, and why I think it is the only true form of free will, which adding randomity only detracts from. (I don’t expect you to come to agree with me, but it would be nice if you understood where I’m coming from.)
I would LOVE if somebody could give me an example of libertarian free will in action that isn’t obviously just deterministic free will doing what deterministic free will does.