Do humans have free will?

**Do humans have free will?**

Of course. There is no proof to the contrary. As far as your argument about men committing more crimes, etc., keep in mind that, although we claim we are “civilized”, we are still apes. The males are the ones who hunt and fight for dominance. We haven’t gotten past that part yet.

Sigh. Dude, please don’t argue for the FW side.

There is plenty of proof for Determinism, scientifically. Do a little reading, please.

Heck, Christianity has pretzeled itself into knots because it can’t reconcile “Omnipotent God” with “Predestination” and the ability to choose to accept Jesus.

My point is: even if we live in a Hard or Soft Deterministic World - which I am open to accepting - we still behave and feel as if we are existing in a Free Will world. I would rather focus on what I am experiencing and how I can use that to be a better person (by my definition).

I am absolutely not arguing dishonestly and am insulted by your accusation. I really do honestly believe that the mind does not work in mysterious ways.

I’d like to start by pointing out that at no point, ever, have I claimed that determinism means that there is a generic way to use data prior to the choice to derive what the outcome of the choice will be. I have said that to discover what the choice will be, you have to set the universe in motion and watch it happen.

This is exactly analogous to having to make a measurement and read off the outcome.

I think the kicker here is that undecidability and irreducibility do not make things magical. They just mean that you can’t skip steps. You can’t create a simple generic program to analyze the questions and solve them in a shorter amount of time; you have to actually run the processes out in full to find the answers.

This makes a huge difference in the place I usually hear about undecidability - when you want to know if a computer program will ever terminate. The reason it matters there is that running the program and waiting isn’t a viable test: if the program runs forever you’ll wait forever, and you can’t safely give up early, because it might have been planning to end just a few minutes after you stopped watching. So it would be awesome and super useful to be able to know at a glance whether you should be wasting that time, and thus it was super-disappointing when it was proven that no generic “at a glance” solution was possible.
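Here’s a minimal sketch of the classic reason why, assuming a hypothetical halts() oracle - the function names are mine, purely for illustration, since the whole point is that no real halts() can exist:

```python
# Suppose, hypothetically, that a generic predictor existed:
def halts(program, data):
    """Returns True iff program(data) eventually halts.
    (Hypothetical - Turing proved no such total function can exist.)"""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source:
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    else:
        return           # predicted to loop -> halt immediately

# Now ask: does paradox(paradox) halt? Either answer from
# halts(paradox, paradox) is contradicted by paradox's actual
# behavior, so no generic "at a glance" test can exist.
```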

But that’s dealing with processes that might never end. When processes will end, and you know it, decidability is no big deal - you can always just run the processes to their completion and see what happens. You can always just take the measurement.

Human decisions don’t take forever to make, and even if they wanted to, no individual human’s cognition runs forever. And there’s also never an infinite amount of time between the big bang and any given person’s decision. So in a material and functional sense there’s no reason to care if human cognition (or physics in general) is irreducible. You can always just wait for the result - which in a deterministic universe will always be the same no matter when you started waiting, since no random variations changed things before the decision was made, based entirely on the person’s knowledge, beliefs, mood, and preferences at the time the decision eventually was made.

So. You said “Then, it is true that the decision was completely determined by this preference, and that the decision was not logically fixed in advance, i.e. for instance at the big bang.” I don’t believe this is true, because, well, it’s not. For one, “completely determined” is pretty much synonymous with “logically fixed in advance”, and for another, there is not and has never been anything about undecidability and irreducibility that states or implies that the result is not logically fixed in advance. Those terms only mean that it’s not possible to use some generic shorthand method to predict the result in advance.

That’s what undecidability and irreducibility are about - it being impossible to use a generic solution to answer a specific question about a situation or process without actually carrying it out and examining the outcome. They are not about the solution being Schrödinger’s cat, its final state magically in flux simply because you haven’t looked yet.

I fully agree that we should hash this out before moving on to other aspects of the discussion.

Well, except maybe this. I would just like to state for the record that, in my personal and educated opinion, this is absurd. There is absolutely nothing about determinism that states or implies that human brains don’t work in the way they obviously do, which is to say driving your actions via calculations based on your knowledge, beliefs, mood, and preferences at the time the decisions are made.

Sorry, but that’s so blatantly wrong I couldn’t let it stand. Determinism isn’t proven (or provable) to be correct, but it’s also not so transparently absurd that nobody would ever have considered it for more than a second.

What’s wrong with compatibilism? Sure, it means nailing down the definition of “free will” in such a way that it makes syntactic and logical sense, but after that everything works together fine.

“Epistemic Arrogance” - also known as Hubris. Believing that Human brains are capable of grasping more than a fraction of our reality. Good luck with that.

My luck is awesome!

Also, I’m not claiming to understand the entirety of our reality. I’m not even claiming to understand the precise mechanisms by which our brains function! (Though I’m pretty sure they’re squishy.) So I think I’m currently up to claiming to understand approximately 0.000000000000000000000000000000000000000000000000000000000000000000001% of reality, or possibly significantly less. Hubris indeed!

However, the tiny bit of reality that I do claim to understand includes the fact that humans have knowledge and preferences and ain’t afraid to use them - and also what incompleteness, irreducibility, and Gödel’s Incompleteness Theorem actually are.

(For a fun but not light read, try Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter. And to better understand incompleteness and irreducibility, er, college-level computer science classes?)

Yep. Read it, thanks. And I was a CompSci major in college.

Excellent! Good to see another compsci guy.

You seem to dislike compatibilism. Do you have a specific problem with it, or are you just burned out on the awful mishmash of meanings people try to apply to the term “free will”? (Or option c: none of the above, I suppose.)

I dislike long noodlings about Free Will vs Determinism. Read the thread I linked to, and while you’re at it, take another look at this thread and how freakin’ long your posts are. People pondering this question burn more oxygen than a wildfire.

When all is said and done, pondering the science of Determinism is fascinating as an intellectual exercise, but doesn’t help one deal with the everyday and quickly devolves to masturbation.

I’d rather spend my Philosophy Units reading Montaigne, Lao Tsu, Epictetus or the Skeptics. They are trying to apply Big Question thinking to everyday life.

My posts? I make long posts because I like it! I like talking! I like explaining! I like thinking! I like precision! I like covering all cases! I like exclamation points! I like cake! Even if it is a lie!

Ahem. I’ll concede that text walls are scary to the average reader and that I’ve probably personally driven several hundred people from this thread, several dozen from the board, and I’ve probably turned a couple of them off of reading entirely. But you’re not going to convince me that the discussion and debate and the usage of oxygen is itself a bad thing. If nothing else I’ve probably starved and put out a few wildfires.

Also, it just occurred to me that a guy who calls himself WordMan is saying I use too many words. Man, that’s odd.

Recognizing the validity of compatibilism actually pulls the teeth of people trying to use it as an excuse for nihilism or lawlessness or an abrogation of responsibility or whatever. Following the argument deeper restores a recognition that consequences matter, that punishments both work and are justified, and basically that the entire decision to ‘pretend’ that free will is real is actually justified in reality, regardless of whether determinism is true.

Well, when you keep repeating the same falsehoods despite having been shown an explicit model of how they’re false, one starts to wonder.

Depends on what you mean by that. There’s certainly plenty that’s mysterious about the mind. But I don’t believe in any magic fairy dust to make things go, either.

Well, you can’t have it both ways. Either, every outcome is logically determined by that prior data—even if there may be no easy way to get at it. And then, there is no responsibility: the full reason for a given outcome is that data, plus the laws of physics. When talking about the reason for a given outcome occurring, you never once have to talk about agents.

Or, that data does not logically imply which way a given choice is made. Then, there may be responsibility: it’s only the data, plus a given choice, that implies a certain outcome. And then, you believe in logical independence.

It’s not: in my example, the outcome is not decided by prior data.

No, they don’t make anything magical. But they mean more than just that you can’t skip steps (that’s irreducibility on its own): they mean that there is no logical implication from prior data to an outcome, too.

That’s not quite it. In fact, the problem of deciding whether there’s a spectral gap in many-body systems is actually equivalent to the halting problem: there’s a certain program that halts if, and only if, there is a gap. (Although in fact, the proof is a little more indirect than that, and involves first a mapping of the system in question to what’s known as a Wang tiling, which is however equivalent to the halting problem.)

So that’s why you don’t have to wait forever here, yet there is still undecidability.

I’m not saying that you can’t just wait for the result; I’m saying that you have to, in a sense different from even when the decision problem is merely irreducible. Even if all you can do in order to find out which way a given decision will go is, say, explicit simulation, it’s still the case that given the initial data, that decision is logically necessary, it’s fixed; but with independence, that’s not the case: the initial data does not suffice.

While it’s intuitive that ‘determined’ and ‘logically fixed’ are the same, both are defined differently, and it turns out that those definitions actually don’t match up. That’s counterintuitive, but our intuition does not rule what sorts of things can happen: the world may be counterintuitive; it’s not limited to what we consider reasonable.

Again, the analogy to Gödel helps: whether the Gödel sentence is true may be ‘completely determined’, as the axioms either are consistent, or not; yet still, there’s no way to logically derive this truth from the axioms, so it’s not logically fixed, in this sense.

As for there never having been something about undecidability that implies that a result is not logically fixed in advance, well, that’s just what I’ve been showing you these past few posts: undecidability exactly means that something is, well, logically undecidable, i.e. not fixed by the laws of logic.

Well, it’s a very famous argument (commonly known as the evolutionary argument against naturalism), ultimately reaching back to Darwin himself; I think the clearest modern formulation is due to Alvin Plantinga (fun, but not too light read). But it’s not really something I want to debate right now; if you’re interested, however, this essay collection contains some interesting arguments.

But again I’m sure all of those debating this argument will be relieved to hear how it’s just ‘blatantly wrong’.

All that you’ve shown me, with all due respect, is that you don’t understand what incompleteness, irreducibility, and Gödel’s Incompleteness Theorem actually are. Or more accurately, you’re trying to stretch them way, way, way beyond what they actually state or imply.

I’m reminded of how people like to stick the infinity symbol ∞ in arbitrary places in equations and try to use it as if infinity is actually a number. Which it’s not.

False. Again. For the dozenth time, god dammit, you can’t predict jack shit without carrying things forward enough to instantiate the agent, and then, only then, within the mechanisms of the mind, do the calculations operate which produce the result.

Plus it’s completely incoherent to claim you can predict the results of agent A’s decision about choice X without acknowledging the existence of agent A or choice X. No matter what simplified, reduced, generic method you imagine could be used to perform this type of calculation on the state data of the newly-formed universe, to even formulate the query you’d have to enter the identity of agent A, the details of choice X, and the time of the decision, minimally, as parameters - otherwise you couldn’t even explain to your predictive algorithm what decision you’re curious about! To allege that you could make such predictions without recognizing the context they’re taking place in demonstrates conclusively that you literally haven’t bothered to think about the mechanics of the claim you’re making.

And regarding that ‘responsibility’ thing, I’ll answer that right now.

When you throw a ball at a wall, and the ball stops moving forward when it hits the wall, what was responsible for the ball’s forward motion stopping?

You can facilely answer ‘physics’ and wave your hands dismissively in the air, but the second you attempt to answer in anything close to descriptive detail, explaining why it stopped moving there, your description is going to include some sort of description of the wall, and what the wall’s properties did to the ball. There is, quite literally, no avoiding it. Because the wall was responsible for stopping the ball, even if it used physics to do it.

Just like there is, quite literally, no way to describe the process of an agent making a decision without at some level describing the relevant aspects of the agent. Because the agent was responsible for making the decision, even if it used physics to do it.

What, out of curiosity, do you imagine is making that choice? Because it sure as hell couldn’t be the agent; he’s mentally deterministic and doesn’t act in ways independent of his preferences, knowledge, beliefs, and mood. Logically or otherwise.

What’s it decided by then? Randomity, or pixies? (I will accept either as an answer.)

Bull. Shit. I know you don’t like dictionaries but these words have meanings. They do not just stretch and morph to function in any way you like.

I was literally talking about the halting problem. That exact thing. :mad:

I’m still waiting for you to explain where this magical “logical independence” comes from. Though this stuff you said here is instructive, because you seem to be stating that even in an exact simulation of our universe, which is mechanically equivalent to the real universe in every way, it won’t be present. Which of course means it doesn’t happen in our universe either, barring external interference which isn’t present in the simulation. So I guess it MUST be fairies.

You’re seriously making a ‘the world works in mysterious ways’ argument. Seriously.

I know Gödel’s Incompleteness theorem. You are exactly wrong about it. Gödel’s argument showed that in any sufficiently powerful formal axiomatic system there are syntactically correct and logically derivable statements that cannot have an accurate truth value calculated for them, and thus no such system can be complete. That is ALL it showed. You cannot use it like a magic word to hide preposterous claims behind; I know better.

Do us both a favor and never refer to it again in the context of this argument.

Not that wikipedia is the best source, but:

And of course the argument in question is that it’s ‘super improbable’ that sentience could evolve without God’s guiding hand, thus atheists should accept that they can’t possibly have functioning minds.

Looks like it REALLY IS fairies.

You keep claiming expertise on subjects where it’s clear your understanding is at best marginal. On Gödel’s theorem, when you say:

All of this is pretty much completely wrong; if that’s your level of understanding, then I’d suggest you re-read GEB. Gödel showed that there are well-formed formulae that are precisely not logically derivable from the axioms. The theorem, as pointed out to you previously, also has nothing to do with truth, since the notion of ‘truth’ for theorems of a formal system F cannot even be formulated within F (Tarski); hence, there is no formula of F such that it asserts ‘formula x is true’. There is, however, a formula that asserts ‘formula x is logically derivable’, and it was this that Gödel worked with. So try and update your understanding there—I’ll be glad to help—and we will see if the rest starts making more sense to you.

That depends on your number system. It is a number in the projectively extended reals, where for instance an equation like a/0 = ∞ (for nonzero a) is perfectly valid. This is once again an instance of the same general phenomenon: you lack thorough knowledge of a subject without recognizing that, and thus, make false claims you state as incontrovertible truths.

Again, this misunderstands the claim I’m making. To be as simple as possible, let U(t) be the state of the universe at time t, and let t[sub]B[/sub] be the time of the big bang (or near enough). Let L be the laws of the universe, and let there be some choice between options A and B. Then, however you actually realize it, the logical implication {U(t[sub]B[/sub]), L} –> A, on your determinism, is valid; on my model, it’s false. That’s the difference; it’s very simple and concrete.

I might have to talk about the agent in order to state the question, but that doesn’t mean they have to figure in the answer, and that’s what’s important. Plus, I can always talk about the state of the universe tomorrow morning, or of some subset thereof. If the logical implication {U(t[sub]B[/sub]), L} –> A is valid, then this means that the set {U(t[sub]B[/sub]), L} is sufficient reason for A occurring. On my model, it’s not valid, and you need to introduce the actual choice, the actual, mechanical process of how the agent makes her decision, as an additional piece of data.

In other words, only the implication {U(t[sub]B[/sub]), L} –> A v B holds on my model, and it is necessary to introduce the choice X in order to get {U(t[sub]B[/sub]), L,X} –> A; hence, X is part of the sufficient (logical) reason for A occurring.

You can answer ‘the wall’, of course; but you can equally well answer the set of conditions that led to the erection of the wall, such as ‘Trump wanting to keep the Mexicans out of ’Murica’, because if this prior condition were false, then the ball would not have been stopped, there being no wall. And you can continue this right up to the big bang, if your model of determinism holds true. You can’t do that on my model, because there, it would be true that whether the wall stops the ball is not logically entailed by the boundary conditions of the universe; hence, that must figure into the explanation of what’s stopping the ball, too. So, the difference between you and me is that on your model, you can, but need not, appeal to the wall; on my model, you must.

It’s nothing more complicated than a causal chain: in place of each cause, I can substitute its causes; since if those were not to occur, it wouldn’t, either. So Trump’s decision to build the wall is necessary in order to stop the ball; and so on, right down to the dawn of time. And under your brand of determinism, ignoring logical independence, everything just follows from there.

Again, that’s the point of the example system I introduced. The choice of what the measurement outcome will be is determined by whether there is, in fact, a gap; it’s just that this existence of a gap does not logically follow from prior data. It’s an independent fact.

I know. And you were alleging that it only applies in cases in which you’d have to wait for an open-ended period of time. So I explained to you how that’s wrong.

I have done so many times with the example from many-body physics. The logical independence comes from the fact that you can encode a Wang tiling problem in the state of the physical system, such that there is a spectral gap if, and only if, the tiles do actually tile the plane, and hence, if, and only if, a certain program halts; but since whether that program halts is undecidable, so is the Wang problem, and consequently, whether there is a spectral gap.

I’m not, though. In an exact mechanical replica universe, everything would occur the same way as in ours. The measurement outcome would be the same; and just as much, it would not be logically determined from prior data.

Seriously not. I’m giving explicit and detailed models of how these things work.

So, because it argues for a conclusion you don’t like, it must be false? Well then, that’s sound logic!

And of course, if you’d bothered to study the argument, you’d have noticed that it essentially has two parts: first, it argues that our cognitive faculties are highly unlikely to be reliable if both evolution and naturalism are true, and second, that assuming a god eliminates this issue. I’m trying to find an alternative, naturalistic response to the dilemma posed by the first part.

If you feel you must tar me with the continuing references to magic and the like, fine, knock yourself out; but you really shouldn’t hold your possible readership in so much contempt as to believe they won’t figure out this cheap guilt-by-association ploy.

One absent-mindedly steps into the road in front of a car … one’s body reacts to save itself from the danger before one is consciously aware of one’s situation. What “will” caused one to step in front of the car in the first place? Maybe that of the person one had a dispute with the other day?

This is why I would say that will is not absolute. There are certainly breaches and interruptions of will, and when we’re not paying any attention to the world, we’re certainly not exercising our will. That’s when things like stepping into traffic happen.

There are also things (some horrible) that happen contrary to our will, such as when we lose our temper and shout at a loved one, or can’t manage to stop smoking cigarettes, no matter how much we try.

I believe human volition exists…but, wow, it is certainly flawed.

I’ve seen it several times.

And now you have seen it at least once, too.

No.

That something is ethical does not mean it is non-psychological. These aren’t mutually exclusive, not even close. If ethics existed in a manner completely outside of our psychological perception of it, then nobody would care about it.

No.

I. Do. Not.

I say that because when I point out the psychological congruence of these beliefs, you counter immediately with non-psychological points based entirely in logical differences. It’s not that I am personally discounting psychological evidence – which is good, since I’ve cited a good chunk of it myself – it is that you are personally switching between psychological argument and logical argument in a way that I can’t understand. When I make a psychological point about the relationship between “free will” and beliefs about punishment, you immediately shift to the logical differences.

From my perspective, there doesn’t seem to be any rhyme or reason to that shift in the argument. I’m not saying that you’re being inconsistent, just that I can’t personally perceive any underlying structure. That’s not unusual, though, for any complex argument. I don’t think your argument is entirely clear when you do this shift, but it’s a tough argument and I’m not blaming you for that difficulty.

I didn’t “essentially” say anything of the sort. I didn’t say anything at all about what I WANT to believe. I said the exact opposite. What I believe and what I WANT to believe are not the same thing, and I have already given you one example of the difference between the two.

For the record, I do want to believe in a (sensible) notion of “free will”. I’m not exactly sure why, maybe because it’d be a philosophically interesting position to hold, maybe because there are people I know and respect and admire who believe in it. But I don’t. I don’t believe in “free will”, regardless of what I want, and although I do make many “choices” (internal “decisions” that I would happily classify as choices, even though I believe they are determined), “belief” is not one of those things. I choose what I want for breakfast, but I don’t “choose” what to believe. There is no internal consideration of which belief I’d WANT to have, and which not.

I believe something yesterday. Then I see strong evidence against it today. And that’s it. The old belief is gone. Even if I for whatever reason wanted to believe that the Brouwer fixed-point theorem was wrong, I wouldn’t be able to manage it.

If you “want” to put words in my mouth and then say I “want” to believe things that I never said, you’re of course free to do so. But then it’s strange for you to put words in my mouth that I didn’t say, and then accuse me of communicating poorly when I don’t write what you said I wrote, not “essentially” nor in any other way.

Now, communication is a two-way street.

I don’t understand the shifting you’re doing between psychological and logical argument. When I make a psychological point, you seem to shift to logic. I don’t personally see the consistency there. I think your posts are not communicating what you think they are communicating. Does that mean that you are “communicating yourself poorly”? I could certainly blame you for that, if I were the sort of person inclined to those accusations.

Or I could accept that you’re trying to express complex ideas to someone who doesn’t share your views, and that this is a difficult task, and that if I don’t understand, it’s not necessarily that you “communicate yourself poorly” but that there will be more effort necessary than just a single post. But I have never accused you of being “wrong” or “deceitful” for this weird shifting from the psychological to the logical (or about anything else for that matter). What I have said is that I’m faced with a “perplexity”. That means I don’t understand what you’re trying to communicate. I then pointed out my issue, the part that I don’t understand. And then you accused me of “devaluing psychological evidence”, which wasn’t even close to what I said. Although, again, I could’ve been unclear about the nature of my perplexity.

Communication is a two-way street.

I’m trying to rephrase anything you misinterpret, and you can do likewise. Meaning is negotiated, it isn’t beamed telepathically from one mind to another. Our communication might be better if you took your share of the responsibility for making it work, rather than placing blame elsewhere when it seems convenient.

Fair enough.

Since Riemann has no apparent objections, I’ll accept that.

I “alleged” no such thing.

Again, these words you land on for describing my posts do not match what I say.

What I said, quite explicitly, is that I don’t trust you to have the lofty ideal of truth. (I don’t trust myself to have a lofty ideal of truth either, although this is another thing I want to believe about myself.) That is NOT an affirmative allegation or accusation that you certainly are deluded/dishonest/etc. It’s simply that I don’t trust your self-evaluation. It could go either way, based on your future behavior, in this “trial” (to borrow, again, the metaphor that you started with). And I also made clear that this wasn’t an issue with your posts in particular, wasn’t based on “any indication” in this thread, but that it was a general rule that I relied upon. You just don’t happen to be an exception to that general rule. You’re not special in this way.

Now, if you keep putting words in my mouth, I’ll keep correcting you. If I misinterpret you and put words in your mouth because I misunderstand something you said, I have no doubt that you’ll be happy to correct me. And that’s fine, as long as we both realize that communication is difficult and we both have parts to play. We both have responsibilities for making it work.

Defend yourself from what exactly? How are your ideas being attacked? Is “not being convinced” an assault upon your person?

In the next post are my extended reasons for not being convinced. I don’t particularly expect that you’ll find my reasons convincing. But if you responded simply that you just weren’t convinced, then that would not be an attack against which I had to defend myself. I’d just shrug my shoulders and move on. I’d probably ask you your reasons, just as you (fully justifiably!) have asked mine, and I would not be particularly surprised if your reasons took some time to type up, even up to several days. But I don’t see how not being convinced is an “attack”.

This is more of that you-putting-words-in-my-mouth stuff that you should probably stop doing.

I didn’t come remotely close to “extolling my virtues”. What I actually did was: 1) point out that I don’t particularly trust your self-evaluation (which is NOT the same as “alleging” that you are definitely wrong and deluded), 2) in the next post, make reference to the psychological research that people are more rationalizing creatures than rational creatures, that we deceive ourselves about our motives all the time, then 3) state that I don’t generally trust people’s self-described motives for this reason, not just of you but of everyone. This step in particular just seems like plain common sense to me, good logical hygiene and nothing highfalutin or “virtuous” about it. And then later, 4) I pointed out my own poor experience in the past with people who claim that their motives are high and noble.

If you’re going to kneejerk from that to claiming that I was “extolling my virtues”, then I must again point out that despite the flaws of my writing style – which I have no doubt are copious – your interpretations of my posts are not always landing in the same universe as what I wrote.

This is completely fair, and this is exactly the kind of evidence that would be relevant.

I haven’t gotten around to reading those links yet.

Mine also.

There’s been a sort of family emergency; I’ve had to go out of town for the extended weekend and won’t have more time for writing until mid to late next week, after the next post.

Every single thing in the universe that we seek to understand can have literally an infinite number of explanations.

There are infinite stories that could play out in just such a way that would ultimately result in what we see and hear and feel around us. The question, then, becomes one of how we reduce that infinity of possibilities into the stories that we find more plausible versus less plausible. For that, we use a razor. We cut away the ridiculously complex in order to focus on that which seems simple. Not too simple, of course. The story must be sufficiently complex that it can explain what we see, but ideally, not any more complex than that.

This isn’t anything new. Nothing that I say in a thread like this is new or original. I just turn around the words in my mind until I feel I have a handle on what they mean.

Then what’s next? The entire question of what’s reasonable to believe, versus what is not reasonable to believe, focuses then on the nature of the razor that we use. Which razor do we use? Because that’s the real issue. That’s the core matter at issue. People disagree about which explanation is simpler, and which is more complex. We have different notions of simplicity. There isn’t just one razor that people are using collectively, because if there were, there would be a lot more agreement in the world. When I say that the many-worlds interpretation seems, to me, to be the simplest interpretation of quantum mechanics (based not on direct study, but rather on reading ideas from the physicists who made the most compelling arguments), other people often look at me like I’m insane. Why? They’re using a different razor. They hear about the MWI and they don’t see “simple” but rather “ridiculously complex”. If we shared the same razor, this would not be an issue. The MWI actually is simple… according to a particular and precise definition of simplicity. This is the definition I rely on. And it is, in fact, the only one that makes any sense to me.

The entire question is: How do we define complexity? That’s the core of it. And there are basically two answers to this question.

The first answer is common sense intuition. Gut feeling of understanding. This is the razor that the overwhelming majority of people use. This is how we get stories like god. This is how we get intuitive notions of morality and justice, like “guilt merits punishment”. The vast majority of people who believe in free will also believe in an objective morality, that there are strict moral facts that really exist, that these strict moral facts are just as fundamentally a part of the universe as the laws of physics as we understand them.

These two issues, of justice and of “free will”, are not actually two separate questions. They stem from the same question, because people use the same razor to try to understand them. They use their gut instinct. They use their intuition.

We have a lot of very basic instincts, for example a propensity to assign “agency” even for inanimate objects, and even for the world as a whole. As a child, I once tripped on a crooked sidewalk, and I got angry when I fell and skinned my knee. So what did I do? I got up and tried to kick the sidewalk that had injured me. I really tried to do that. This was not an especially sensible thing for me to do, but my brain had immediately and unconsciously assigned agency to the event. I was angry at that concrete sidewalk, and I wanted to punish it for its transgression. This is the result that comes from overactive intuition.

I don’t trust intuition.

Suffice it to say that I do not believe common sense intuition is a suitable razor. I read people who rely on it, some of them extremely intelligent, many of them better human beings than I am. But its failings seem plainly “obvious”. It’s useful in everyday contexts, and in a pinch when we have nothing else, but it gets too many things wrong. Our intuition developed in one particular environment, and it does not extrapolate well into areas beyond that environment. The nature of reality is one such area where it doesn’t seem to do all that well on its own.

So what’s the alternative razor? If we don’t use common sense, what do we use?

A formal logical structure. Of course.

I say “of course” here, because it’s my intuition to do so. So I could be said to be relying on intuition, up to the point where my intuition says it’s completely fucking stupid to rely any further on intuition. Is that circular? I’m not sure that’s a fair criticism. We have conflicting intuitions all the time. Look at optical illusions, where our eyes are telling us one thing but our “understanding of the world” is telling us something else. (“A” and “B” are exactly the same shade.) Look at trolley problems from moral philosophy. If you ask people what they think the right thing to do is, they will give you a supposed answer. If you take that answer and apply it to a different case, they will immediately reject the ostensible principle they just gave you in order to answer in a different way. The vast majority of people do not have any moral principles built up from basic axioms. They have moral intuitions, and then they make up fake principles in an arbitrary post hoc manner when they’re asked to explain their intuitions. There are very few people who can answer questions about moral principles in a consistent fashion, because most of us don’t rely on principles for our internal notion of justice. People rely on their gut. This is the razor that they use.

There is, I believe, a better option than relying purely on intuition for these sorts of questions. There is something external to us, objective instead of subjective, a method outside of us that can be used as a clear marker in the landscape in order to settle decisively the notion of what is simple and what is complex. When we’re faced with conflicting intuitions, we can use logical tools in order to settle that difference.

And it just so happens that 20th century mathematical advances gave us exactly that tool.

When we try to formalize the definition of “complexity” using modern thought, we’re always pushed toward the same answer. There are lots of totally uncontroversial examples of this directly from physics, such as the cosmological horizon. If we were to travel fast enough away from the earth, we would eventually reach a point-of-no-return. If we went past that point, we could not return to earth, no matter how close to the speed of light we could travel. Space would fill up with more space faster than we could traverse the distance back. We’d never get back. That’s the standard view, and it’s totally uncontroversial.

But a different view – a different story that could explain the same thing from our perspective on earth – is that the universe simply ends past the cosmological horizon. Nothing exists out there. The universe doesn’t bother with “computing” any of that stuff beyond the horizon. Is that not a “simpler” explanation? Is it not “simpler” to say, hey, the universe is what we see and only what we see? And beyond that, there is no more universe? That would be a smaller, simpler universe, right?

No. No, it’s not. It’s actually a much more complex universe, even if it is smaller.

It’s much simpler to say that the laws of physics remain the rules that they are, playing out in the way that they do, even out beyond the point-of-no-return where we cannot confirm that is the case standing on earth. This is the Information Theory notion of simplicity. Physics is a very tough subject, not everybody can hack it, but the laws of physics – when examined from a formal definition of “complexity” – are actually not especially complex in the grand scheme of things. Intuitively, people have a huuuuge common sense problem with this. But it happens literally all the time in formal systems. The relatively simple laws of physics can describe a universe of extraordinary variety. Using this formal definition of complexity, we immediately reject ideas like that the cosmological horizon is the end of the universe, or that time in our universe started last Thursday, with all of our memories of before Thursday also beginning last Thursday. While it’s true that there would be less time in the universe if it had existed only a week, rather than 13.7 billion years, “less time” is not the proper notion of complexity. The universe would have to be exquisitely complex, information-wise, for it to be described in a way that makes it merely a week old. Similarly, for the cosmological horizon to be the limit of existence requires a much more complex universe, in the same clearly and coherently defined way. The universe would be “smaller” if everything beyond the horizon doesn’t exist, but the price of that smaller universe is much, much, much, MUCH more complicated laws of physics, as measured by the length of the description necessary to specify those laws.
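If you want to see this description-length notion of simplicity in action, here’s a crude little sketch of my own, using ordinary compression as a computable stand-in for the formal measure (which it only roughly approximates):

```python
# A crude illustration of description-length simplicity: a string
# produced by a simple rule compresses down to almost nothing, while
# patternless data needs a description about as long as itself.
# (Compression is only a rough proxy for algorithmic complexity.)
import os
import zlib

simple = b"ab" * 50_000        # 100,000 bytes from a two-character rule
messy = os.urandom(100_000)    # 100,000 bytes with no shorter story

print(len(zlib.compress(simple)))  # a few hundred bytes
print(len(zlib.compress(messy)))   # ~100,000 bytes: incompressible
```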

This is yet another place where human intuition goes wrong, another optical illusion that exists inside our minds. People have a psychological tendency to think that complex phenomena require complex explanations. That is not remotely true.

This conflict between our different intuitions about complexity can be seen very easily. Look at something like the Mandelbrot set. It’s a typical practice coding task to create a program that visualizes that set, and the reason it makes good practice is that it stretches skills while not being too terribly difficult. It’s just not that hard. Yet people look at the complexities of the Mandelbrot set as visualized on their screen, and they’re astounded by how simple the foundation is. This is something weird that people get wrong time and time again. This is another optical illusion. People see something complex, like the Mandelbrot set, and they intuitively want a complex explanation. But that’s not how it works. Complexity arises from simplicity. Complex outputs do not require a complex input, unlike what our brains tend to expect. I’m not immune to this illusion. When I look at the definition of the Mandelbrot set, next to a visualization of it, there’s something in my brain – in my intuition – that fractures. I cannot instinctively make sense of how something so complex can come from something so simple.

Yet it does. The Mandelbrot set is not actually complex. It is simple. It is simple according to a formal, logical, objective notion of what “complexity” actually means.
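To make the point concrete, here’s roughly the whole “foundation” - a bare-bones ASCII sketch of my own, so take the details as illustrative rather than canonical:

```python
# The entire definition of the Mandelbrot set is the little loop below:
# iterate z -> z*z + c starting from z = 0; the point c belongs to the
# set iff |z| stays bounded. Everything intricate in the picture falls
# out of just this.

def in_mandelbrot(c, max_iter=50):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c is not in the set
            return False
    return True              # still bounded after max_iter steps

for im in range(12, -13, -2):      # coarse grid over the complex plane
    print("".join("#" if in_mandelbrot(complex(re / 20, im / 10)) else " "
                  for re in range(-40, 21)))
```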

Our brains are physical things. Our perceptions rely on the physical substance of our brains. We know this. People who believe in “free will” don’t deny it. If the brain is physically altered, our perceptions are similarly altered. If the brain is damaged, our actions change. We know that our perceptions are dependent on the physical part of our brain. And yet what the “free will” people are trying to argue is that what we perceive depends not only on the brain, but also on “something more” beyond the brain. What they are arguing is that the physical part is not sufficiently complex a story to explain the complex experiences that they directly perceive.

They might as well be looking at the definition of the Mandelbrot set and insisting that the very same simple definition cannot possibly explain the complexity of the output.

Complexity arises from simplicity. Despite my own brain’s being flabbergasted by the fact that such a simple definition can result in such complexity, it is nevertheless true. I don’t need to posit the definition, plus “something more” in order to explain it. I can just focus on the simple definition by itself. That simple description, formally described, is my razor. We already know that the mind is dependent on our physical body. I don’t need to posit “something more”. That’s exactly the kind of extraneous stuff that a razor is supposed to cut away.

I cannot describe how the human brain works in detail. I will never be able to describe how the human brain works in detail. It’s too complex. I can use a Windows 10 computer to run a virtual machine of Windows 95, but I can’t use a Windows 10 computer to run a virtual machine of Windows 10, or even Windows 8. But if there were a Big Brain out there, a million times more powerful than any human brain such that it could encompass the entirety of the human neural system inside of its own imagination, then it would have no problem understanding our instincts, our intuitions, even our “qualia”. It would understand what we experience when we see “green”, even if it had no direct perception of the visible color spectrum itself.

People’s intuitions twitch in horror at that idea. I appreciate that. But it is, actually, a very simple idea. Complexity arises from simplicity. If you can understand the simple rules, and compute them fast enough, then that’s all that you need. There is no purpose in stating that “something more” is required. That’s just the optical illusion of our instincts at work again, the same thing that leads to the feeling of shock when we look at the simple definition of the Mandelbrot set.

Similarly, we can approach the notion that the wave function just “collapses” and all of the other information disappears in a “random” puff of smoke, in some discontinuous fashion that isn’t even properly defined. That is extraordinarily, hideously complex. There are much simpler explanations, and that’s true even if, intuitively, those other explanations don’t seem simpler to human common sense. None of this means that the simplest solution – properly defined in a rigorous mathematical way – must necessarily be the correct solution. It might not be. Most of the time, it isn’t. We should be prepared to accept more complexity, whenever appropriate. It’s just that the plurality of plausibility should be centered on the simplest explanation – properly defined in a rigorous mathematical way – which could explain what we see around us.

If we accept a formal mathematical notion of complexity, then all of these questions immediately answer themselves. There is no mystery remaining.

So if you come up to me and say that the laws of physics work, but only up to a point, and then suddenly a thing happens that is not, in fact, a direct result of those simple rules playing themselves out in complex ways, but rather something that is entirely independent of those previous rules… then you have just unambiguously added complexity in a formal fashion. You have made the story more complicated.

My natural reaction is to treat that idea exactly the same way I would treat the cosmological horizon as if it were the edge of existence, or that the universe is a week old. In order to describe a reality like that, I would need not only the basic laws of physics, which are relatively simple as these things go. I would also need to add a little marker of sorts in order to describe each “choice” that happened which suddenly appeared independently of those previous physical laws.

That is a complexity that is just… mind-boggling. Last Thursdayism is probably simpler.

And so that’s it. The discussion is over, from this perspective. I appreciate that your idea is an answer to my typical question: what is the physics of “free will”? No one has ever tried to answer that question for me before, and I appreciate that you actually have one. I just don’t see the point, at all, of these discontinuous aberrations from simple rules. I could ask a billion questions about how that is supposed to work (in order to describe it), and every answer to those billion questions would require a more and more and more complex description of how the universe works. There’s no point in going through that. I can simply say that a description of the universe in which things suddenly happen, which were not determined or described by the previous simple rules, is such a complex universe that it’s not worth any more consideration. My particular razor dispenses with that immediately.

In contrast, all of these issues immediately dissolve when we approach the issue from a formal definition of complexity. The universe is what it is, and it follows these rules. There is no question anymore. How do our brains work? Our brains follow the rules of the universe, just like everything else. That’s the way it seems like to me, even though it doesn’t seem like that to other people. When brains are damaged, they work differently, because brains follow physical processes. I don’t see why brains have to obey physical processes plus “something more”. It’s that extra stuff that a razor is supposed to get rid of in the first place.

I don’t see what the problem with that explanation is. It’s simple, and it can explain what we see.

And so what am I left with?

I’m talking to people who think their internal experience of “free will” says something about the laws of physics that would require a description of the universe to be much more complex than it would otherwise have to be, for psychological reasons that I cannot relate to and do not understand. And that’s the kicker here. “Free will” sounds like a neat-o thing to have, but I personally do not have the psychological impulse that other people seem to have that demands that we re-think how the laws of this universe work in order to accommodate the idea. I lack that intuition entirely.

So I’m trying to tell people, who intuitively seem to feel things that I do not intuitively feel, that they should stop listening to one part of their intuition and start listening to a different part of their intuition. I am sensitive to the strangeness of this. Truly, I am. But I’m guessing other people have seen optical illusions before. Other people have felt conflicting intuitions before, where one part of their mind says one thing and another part of their mind says something else. Other people have read about trolley problems before, and appreciate that our moral instincts can pull in different directions from the principles we think (often falsely) that we believe in. We can’t ignore psychologically compelling ideas (that’s why they are so compelling) but what we can do is look very carefully at those cases where our minds seem to be pulling in more than one direction at the same time. If we have a proper intuition about the ways that intuition can pull us in the wrong direction, then we can look for alternatives. If we can understand that our brains are shocked by the simplicity of the Mandelbrot set, despite seeing it first hand, then we can appreciate how our brains want to have complex explanations when, in fact, a simple explanation is all that is required.

When our vision is bad, we can invent a tool that helps us. The same principle applies. When our intuition about complexity is bad, we can invent a tool that helps us coherently and formally define complexity, and then rely on that tool. And thankfully, we do have this tool. We have the modern, formal notion of informational complexity.

“Something more” is not actually required. Just find the laws of physics. Then you’re done, up until the point that you see something contrary to them. That is what this particular razor dictates. I don’t see the purpose of any other razor.

With all that said, I don’t expect this to convince anyone. People cling to their intuitive notions of simplicity just as hard as, if not harder than, they cling to their internal notions of “guilt merits punishment”.

You think people should give up the latter. You think people should “overcome” their intuitions about justice. But by and large, they won’t, for the same reason they won’t “overcome” their notions of “free will”. Their razor is their own intuition, used purely on its own and untempered by any external objective tool.

I personally think people should use information theory as their razor, and reject the optical illusion of thinking that complexity must necessarily be explained by complexity. This is why I think people should give up both ideas: “guilt merits punishment” and also “free will”. This is why I would expect that a fracturing of one idea is very likely (though not always) to be met with a fracturing of the other. But honestly, it’s highly unlikely for either idea to fracture for a general person. People love their intuitive notions. As we must. What else, ultimately, do we have? People are not always comfortable looking at optical illusions, not comfortable thinking about trolley problems, not comfortable having one set of intuitions challenged by other deeper intuitions. People have busy lives, work to do, bills to pay. It’s a privilege to be able to live a life of ideas and play intuitions off one another like playing a game.

And, ultimately, what makes the matter much easier for me than for many people is that I do not in fact have any intuitions about “free will”. On this matter, I simply do not have any subjective notion that my internal experiences are anything else than one more gear in the machinery of the universe. I honestly can’t remember when I first encountered the notion that the universe is a deterministic process, but it’s easy for me to believe that I heard that idea and went… “Yeah okay, that feels right.”

But for other people, obviously, that does not feel right for reasons that I cannot relate to and do not understand. If you say your internal perception of “free will” is strong enough that it demands complexity in order to explain the sensations that you feel inside of you, then I’m not going to argue with you. I’m not going to tell you that your instincts are absolutely wrong, that my razor is absolutely right, or anything along those lines. I mean, obviously my intuition is telling me that it’s completely fucking preposterous and not worth a second thought. But apparently, other people have different intuitions.

I don’t want to trust my intuition that the idea of “free will” is completely fucking preposterous. It’s another optical illusion. Plenty of other people, some of them smarter than me, plenty of them better human beings than me, believe in it. I should not say that my intuition definitively decides the matter, and their intuition does not. Of course, it’s difficult for me to overcome the notion of its absolute fucking preposterousness. My natural instinct is just to treat the idea with complete contempt. But I know what it’s like to have competing intuitions. In order for me to convince other people, I need to take the conflict of intuitions seriously. This is what leads me to my own razor, the objective tools at my disposal in order to decide between them. The formalization of “complexity” is in my view one of the finest achievements of the human mind.

And I think that a world in which more people took this tool seriously would be a better world than this one.

And I do hold it to your credit.

Ethics concerns what we ought to do, how we ought to behave; psychology concerns our beliefs, feelings, desires, etc. So, we might believe we ought to do something—a fact of psychology—but this doesn’t imply that we actually ought to do that. Belief doesn’t make it so.

I’m honestly not sure what you’re trying to argue here. I readily acknowledge that many people believe that guilt merits punishment; I just think there’s good reason why they’re wrong about it. The facts of the matter are independent of people’s belief.

Could you give an example here? Because in the end, my stance is simply that human psychology is just a quagmire of heuristics that has no great claim to truth, but merely, to working well enough; and thus, in so far as we’re concerned with truth, we have to try and disregard psychological factors, such as biases and fallacies.

That’s not what I’m doing. Rather, when you point out that people do connect free will and punishment, I point out that there is no real reason for this connection; that it’s something that’s perhaps adopted because of expediency, but that can and should be questioned regarding its accuracy.

Yes, well, then that’s what I’m not getting: I’m supplying a notion of free will that at least to me seems sensible, and, rather than either acknowledging that or giving reasons why it isn’t as sensible as it seems to me, you reject it without argument, giving an analogy to Last Thursdayism: something that can’t be disproven, but which one is better off not believing. Thus, you’re simultaneously saying that you want to believe in free will, but think that one is better off not believing in free will. I can’t see a way to consistently combine the two.

See, this is what I mean when I say that I have trouble taking your admonishment regarding patting one’s own shoulder at face value: in my experience (which, as we already have seen, is fallible), no human psyche works that way. I came to believe in free will kicking and screaming; indeed, every change of mind feels like a sort of betrayal of my former self, who’s often fought quite hard for his positions. I don’t think it’s terribly plausible that anybody just dispassionately gathers the evidence and changes their mind. That’s part of why no discussion is ever purely factual—emotional reasons do have their role to play; that’s simply a fact of human nature.

I make inferences based on the behavior I observe. Of course, these are likely biased, and colored by my expectations. But I could either leave them unstated and view things through this lens, or express them and give you the chance to set me straight.

Yes, quite possibly, and if you ask, I will be happy to elaborate. The difference seems completely clear to me; but we do not all start from the same preconditions, so what’s clear to one might be opaque to another.

Again—and I only point this out because I think you’re honestly not noticing it—this is you extolling your virtues, in an effort to gain moral superiority, while at other places expressing skepticism of just this sort of self-serving behavior.

What you have done is not give my arguments regarding free will any sort of discussion, but rather embark on a side discussion of how our beliefs are shaped by our desires. So, what am I to conclude from this? If there is something wrong with my position that you have noticed, I would expect you to make an argument to that effect. If there is nothing wrong, and, as you claim for yourself, you just change your mind if you’re presented with sufficient evidence, I would have expected you to agree with me.

Neither of these occurred; rather, you felt the necessity to lecture me on how you don’t trust anybody’s reasoning that doesn’t directly derive from the four f’s. So really, what would you have me think there?

But then, why don’t you engage with my ideas—like, at all? There’s a topic of debate here, which is the existence of free will. I put forward a model that, I believe, enables human beings to have free will in a meaningful way. You disagree; but nowhere do you actually try and find a flaw with my reasoning—rather, you engage in diatribes on how human beings (not me, maybe, but human beings in general, you know) hold opinions only because they’re predisposed to do so.

So, again, if you want to engage in this debate, if you feel there are flaws with my view, point them out; if you can’t find any, and actually hold to the lofty ideal of changing your mind upon coming upon contradicting evidence, join me; but if you can’t point to any fault, and still don’t want to consider my model possible, then don’t be surprised if I question your motives.

No; I was merely using ‘defend myself’ as shorthand for ‘defend my ideas’. Because what you’re doing is essentially claiming that I’m wrong, without pointing out any errors I’ve made, which seems somewhat unfair to me. And this simply doesn’t serve to further debate. If I’m wrong, then maybe I can learn something if you point out my errors; but simply saying, ‘you’re wrong’ without pointing out any mistakes or indeed, making any arguments regarding the substance of the debate—i.e. free will—does not help any of us.

It’s not. But claiming to not be convinced, and then pointing out how we can sometimes come to believe erroneous things because of our unstated beliefs and wishes, at the very least subverts the idea of having a reasoned discourse.

It’s clear you view skepticism as a virtue; and you’ve spent a lot of inches of forum space pointing out just how skeptical you are, even towards yourself, which we probably all should take as an example.

Yes, to you, it all just seems like good common sense; I imagine you can hardly even understand how the rest of us can’t just follow you in these practices, it’s just all so obvious to you.

I’m sorry to hear that; I hope everything works out OK!

But there’s also an issue of differences in understanding here. Sure, if all you hear about the MWI is that it’s ‘just the linear evolution according to the Schrödinger equation’, it might seem simple—but that’s not the whole truth. For instance, there is the preferred-basis problem: any given decomposition of a quantum state into a basis gives, essentially, a set of ‘worlds’; and the MWI doesn’t have any native tools for deciding between these different descriptions. Thus, it can’t easily fix the number of worlds that exist; in order to do so, it has to rely on further data—complicating the interpretation.

(By the way, decoherence doesn’t help: it necessitates a decomposition of the world into system and environment, at the very least; but that decomposition is itself basis-dependent and hence, such an argument is circular.)
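To make the basis-dependence concrete, here’s a standard textbook illustration (my example, not anything you said): take the two-qubit state

(1/√2)(|0>|0> + |1>|1>) = (1/√2)(|+>|+> + |->|->), where |±> = (1/√2)(|0> ± |1>).

Read in the first basis, there are two ‘worlds’: both-0 and both-1. Read in the second, there are two different ‘worlds’: both-plus and both-minus. It’s the same state either way—nothing in the bare formalism picks out which decomposition yields the worlds.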

It sounds like, in what follows, you’re essentially groping towards the idea of algorithmic complexity. If you go back to my initial post on my ideas towards free will, you’ll note that I refer to Chaitin’s constant—which is a central notion of algorithmic information theory (Chaitin, of course, being one of its originators). These notions are exactly the foundation upon which my model is built.
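(For reference, since it’s central to what follows: for a prefix-free universal machine U, Chaitin’s constant is the halting probability

Ω = Σ 2[sup]-|p|[/sup], summed over all programs p on which U halts.

Its digits encode the halting problem—knowing the first n bits of Ω would let you decide halting for every program up to n bits long—which is why Ω, though perfectly well-defined, is uncomputable.)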

You can, actually—I’ve done it. Indeed, you can even run operating systems with greater capacity in a virtual machine: right now, I’ve got a couple of virtual machines running Windows Server 2016 Datacenter Edition on a completely ordinary Win 10 home installation. The reason for this is computational universality: at worst, you incur a certain slowdown; but anything the Win 10 machine can compute can be computed by a Win 8 machine, or even one running Windows 95. Or Mac OS, for that matter!
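The same point in miniature—a toy sketch of my own, not anything from the thread: below is an interpreter for a made-up two-instruction counter machine (a Minsky-style machine, which is already Turing-complete). Any host that can run this loop can compute whatever the toy machine can, just more slowly; the host’s ‘capacity’ never enters into it.

[code]
# A minimal sketch of computational universality: an interpreter for a
# toy two-instruction counter machine. Instruction set (illustrative):
#   ("inc", r, j)       increment register r, then jump to line j
#   ("decjz", r, j, k)  if register r is 0 jump to k; else decrement, jump to j
# The machine halts when the program counter runs off the program.

def run(program, registers):
    pc = 0
    while 0 <= pc < len(program):
        op = program[pc]
        if op[0] == "inc":
            _, r, j = op
            registers[r] += 1
            pc = j
        else:  # "decjz"
            _, r, j, k = op
            if registers[r] == 0:
                pc = k
            else:
                registers[r] -= 1
                pc = j
    return registers

# Example: move the contents of register 1 into register 0 (addition).
prog = [
    ("decjz", 1, 1, 2),  # line 0: if r1 == 0 halt, else r1 -= 1
    ("inc", 0, 0),       # line 1: r0 += 1, loop back to line 0
]
print(run(prog, {0: 3, 1: 4}))  # {0: 7, 1: 0}
[/code]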

That depends on a lot of very non-trivial assumptions. It’s somewhat tangential to the thread, but even law-like connectedness doesn’t necessarily imply logical derivability; indeed, that’s where the idea of philosophical zombies, or more to the point, Mary’s room, comes from: an early formulation of this knowledge argument is that of an archangel with perfect reasoning capacities who is capable of completely understanding the molecular structure of ammonia, and a human olfactory system’s response to it, without knowing anything at all about what ammonia smells like to us.

This is something of a crass overreaching, I’m afraid. Algorithmic complexity is a useful idea; but ultimately, it only applies to models of the world—to apply it uncritically to the world itself is a problematic case of mistaking the map for the territory.

This is not what I’m saying at all. In fact, it’s just a recognition of the complexity of the real world: the laws of physics work the way they always do; but they are not sufficient to logically imply every outcome. Again, this is not speculation: there are systems where this behavior is known. This is data.

There are no such aberrations; all that there is, is a recognition that simple rules may lead to enormously complex (indeed, in a formal sense, maximally complex) behavior. Take, for instance, the fact that Conway’s Game of Life, an exceedingly simple system, is capable of universal computation; and that consequently, there are undecidable questions about it. This is not an addition of complexity: this is merely the recognition of complexity that is already present.
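Just to underline how simple those rules are, here’s the complete GoL update in a few lines (my own illustration; `live` is the set of coordinates of the live cells):

[code]
# The complete Game of Life rule, for live cells on an unbounded grid.
from itertools import product

def step(live):
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # A cell lives next tick iff it has exactly 3 live neighbours,
    # or has exactly 2 and is already alive. That's the whole law.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after four steps it reappears, shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
[/code]

Yet because patterns in this system can implement a universal computer, general questions like ‘does this configuration ever die out?’ are undecidable—not because anything was added to the rules, but because that complexity was in them from the start.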

And yet, according to the laws of physics as they are presently known, it’s the universe we live in. It simply seems to pay no mind to your predilections.

And I’m not claiming they do. Our brains may work exactly like the GoL: the same issues would obtain.

If that’s supposed to be me, then I would suggest you read my proposal again. I’m not claiming that my internal experience of free will tells me anything about the laws of physics; I’m saying that if I want my cognitive content to reliably correlate to the outside world, I need that content to be causally effective within the world, as otherwise, it would only be so associated by chance. Furthermore, I believe the universe works according to a simple set of laws, that, however, are capable of universal computation. This is perfectly sufficient for the points about logical independence and computational irreducibility to apply.

Well, again I can only applaud your detached rationality, but I don’t base my argument on a psychological impulse, and I don’t claim that one needs to rethink the laws of the universe.

And while I’m thankful for this attempt to educate me, the problem is still that you misunderstand both my motivation and my argumentation.

Once more, I’m starting with the laws of physics—as currently understood. They imply the existence of logical independence, and computational irreducibility, merely by allowing for universal computation. That’s all that I’m pointing out.

People who aren’t you, and haven’t seen beyond this psychological impulse, you mean.

This is again a terrible confusion of ideas. Moral axioms, or intuitions, or attitudes, are not reducible to the facts about the world—is does not dictate ought—complex or not. Free will, however, is a metaphysical question—a question about how the world can be, about what sorts of things there can be, and how they interact. Not clearly delineating these things just gets you into an awful muddle.

A steady progression towards questioning our intuitions. A couple of thousand years ago, the ideas that the wind is caused by spirits and that thunder and lightning are magical couldn’t even have been questioned. Today, their explanations are common knowledge. We didn’t just shrug our shoulders and give up on trying to figure out lightning, claiming that people’s intuitions would always assign it to Thor’s wrath. Why should we do so now?

Well, congrats! But not everybody is as blessed; some of us have to engage with their intuitions, probe them, see if they hold up, refine them, and even, repudiate them. I’m happy for you that you don’t have to wade through this quagmire, though.

Oh, so you do have intuitions about free will after all!

Again, I’m not sure where you get that idea from. It’s simply that there’s an argument that seems compelling to me: if naturalism is true, beliefs have no causal warrant, and evolution works by selecting behavior, then there is no reason to believe that our beliefs track the real world. But I think there exists a naturalistic response to this argument, involving causally responsible beliefs and choices, since the alternative is untenable to me (for various reasons, most of all the problem of how different substances are supposed to interact).

I think you’re really gonna have to decide here…

Let time d be four seconds after the decision has been made.

Does {U(t[sub]B[/sub]), L} -> U(d)?

If it does, then the universe is deterministic, and A can be inferred from U(d), by simply eyeballing U(d) and seeing what happened.

If it does not, then the universe is not deterministic. Logically, the only possible ways the universe can be nondeterministic are randomness and/or outside interference. The laws of the universe (L) are not abstract math; they don’t divide by zero or instantiate any paradoxical statements. They just move particles around. There’s room for randomness - results not dictated by any source. And there’s room for external interference - results dictated by things other than U(t-1). But there’s no room for execution errors in the mechanism of L. And there’s certainly no room for L to break down every single time anyone makes a decision.
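To make the deterministic branch concrete - a toy sketch of my own, with a made-up update function standing in for L: run the same initial state forward under the same laws and U(d), decision included, comes out the same every single time. Reading the decision off U(d) really is just eyeballing the later state.

[code]
# Toy illustration: {U(t_B), L} -> U(d) in a deterministic system.
# L here is an arbitrary fixed pure function standing in for the laws.

def L(state):
    x, decision = state
    x = (x * 1103515245 + 12345) % 2**31  # deterministic update step
    if decision is None and x % 1000 == 0:
        # The "decision" is fixed entirely by the prior state.
        decision = "A" if (x // 1000) % 2 else "B"
    return (x, decision)

def U(initial_state, steps):
    state = initial_state
    for _ in range(steps):
        state = L(state)
    return state

# Same U(t_B), same L => same U(d), however often you rerun it.
print(U((42, None), 10_000))
print(U((42, None), 10_000))  # identical output, always
[/code]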

Regarding whether determinism negates the notion that guilt merits punishment, it’s instructive to remember that whether or not people are deterministic, they’re still making decisions as they go based on their own preferences and knowledge. At the time they were made, their decisions may have been inevitable - but they were inevitable based on the internal state of the head of the perpetrator. If society doesn’t like such actions, it can do things to the head of the perpetrator to discourage it from making such decisions in the future, like reeducating it, deterring it, incarcerating it, or removing it. All of these actions make as much sense in a deterministic world as in one with libertarian free will, because in both cases the mental processes that led to the undesired actions took place in a specific head, and that head and its decision-making processes can completely validly be held responsible for the results.