Actually, many-worlds-like models are one of the few ways to guarantee an absolutely deterministic universe: whenever the question comes up whether A or B occurs, the answer is always ‘yes’.
And yet, it’s the theory of computation, deterministic as all get-out, that allows for something that comes as close to ‘free will’ as I think anything can.
But let me elaborate a little. Free will, the way I think about it, needs three components: independence, irreducibility, and intention. By independence, I mean logical independence: for a given choice, the universe must be compatible with each outcome—that is, knowing all the facts about the universe, I can’t deduce using logical manipulations whether outcome A or B occurs; both lead to a logically consistent universe.
This seems somewhat odd at first blush, but it’s in fact very common: a famous expression of this phenomenon is given by Gödel’s theorems, which show that every (sufficiently powerful, consistent) logical system contains propositions that can neither be proved nor disproved using the axioms; consequently, for such a formal system F and a Gödel sentence G, both F+G and F+~G—the extensions of F with G and with its negation ~G—are themselves consistent logical systems. Other examples include the halting problem and the digits of Chaitin’s constant, which in some sense encodes the answers to instances of the halting problem.
In fact, for the latter, the phenomenon takes on an instructive form: for any formal system F, and a given Chaitin constant, after a certain index, none of the digits making up the binary representation of the constant can be derived from the axioms anymore—they are logically independent. But they are nevertheless clearly not random (in the sense of being arbitrary; they are random only in the mathematical sense of having no pattern): since each Turing machine either halts or fails to, each Chaitin constant has a definite binary expansion, which, however, cannot be derived from the ‘set of facts’ F.
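To make the shape of this definition concrete: Chaitin’s constant is the halting probability of a prefix-free universal machine—the sum of 2^−|p| over all halting programs p. The sketch below is only a toy: its ‘halting rule’ is an invented, fully decidable placeholder, so it captures the form of the sum but none of its undecidability. (For the real constant, halting is undecidable, and the value can only ever be approximated from below.)

```python
from fractions import Fraction

# Toy stand-in for Chaitin's construction: the real Omega sums 2^-|p|
# over all halting programs p of a prefix-free universal machine.
# The 'halting rule' below is an invented, decidable placeholder.

def toy_halts(program: str) -> bool:
    # Purely illustrative semantics: a toy program 'halts' iff its
    # bitstring ends in '0'.
    return program.endswith("0")

def omega_lower_bound(max_len: int) -> Fraction:
    """Sum 2^-|p| over halting programs p up to length max_len,
    restricted to a prefix-free set: p only counts if no proper
    prefix of p already halts."""
    total = Fraction(0)
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            p = format(i, "b").zfill(n)
            prefix_halts = any(toy_halts(p[:k]) for k in range(1, n))
            if toy_halts(p) and not prefix_halts:
                total += Fraction(1, 2 ** n)
    return total

print(omega_lower_bound(3))  # 7/8: the programs '0', '10', '110'
```

Swapping toy_halts for a genuine step-capped universal interpreter would turn this into a real—but forever incomplete—lower-bound approximation.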
In the same sense, I suggest that there may be events that occur, but that cannot be derived from all the facts about the universe at a given time.
This, of course, doesn’t quite give us freedom; but its opposite—that for each choice, the outcome is predictable from the facts of the universe at a given point in time—would certainly deny it.
In order to make headway towards freedom, we turn to the notion of irreducibility, or more accurately, computational irreducibility. Roughly, what I mean by this is that the behavior of certain sufficiently complex systems can only be understood by taking it as a whole; there are no shortcuts, so to speak. Take a ball thrown in Earth’s gravitational field: its trajectory has a simple closed-form solution, and its position at any one point in time suffices to derive its position at every other point in time. This is thus a reducible system: we need not know what the system does ‘in between’, so to speak.
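A minimal sketch of what ‘reducible’ means here, in Python, with invented parameters (v0 and g for a vertical throw): the closed-form formula jumps straight to any time t, and a step-by-step simulation only reproduces what the shortcut already gives us.

```python
def height(t: float, v0: float = 20.0, g: float = 9.81) -> float:
    # Closed-form shortcut: the position at any time t follows
    # directly, with no need to trace the motion in between.
    return v0 * t - 0.5 * g * t * t

def height_simulated(t: float, v0: float = 20.0, g: float = 9.81,
                     dt: float = 1e-4) -> float:
    # Step-by-step (Euler) integration arrives at approximately the
    # same value -- the shortcut renders the simulation redundant.
    h, v = 0.0, v0
    for _ in range(int(round(t / dt))):
        h += v * dt
        v -= g * dt
    return h

print(round(height(2.0), 2))            # 20.38
print(round(height_simulated(2.0), 2))  # 20.38
```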
But not all systems are of this kind. For systems of sufficient complexity—suggestively, just the level of complexity at which independent propositions about the system appear, namely the threshold of universal computation, on which more when we get to the third requirement—any explanation of their behavior at a given point depends on the entire history of that system, in the sense that we can only ‘predict’ the system’s behavior by explicit, step-by-step simulation. But that is effectively the same as taking the system, or perhaps a copy of it, and watching what it does—that is, what the system does is an indispensable element of any account of the system’s behavior.
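A standard example of such a system, sketched in Python: Rule 110, an elementary cellular automaton that has been proved computationally universal, and for which no general shortcut is known—to learn the state after n steps, one runs the n steps.

```python
# Rule 110: each cell's next state is a fixed function of its
# (left, self, right) neighborhood. Despite the triviality of this
# local rule, the automaton is computationally universal.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells: list) -> list:
    # One update of the whole row, with wrap-around boundaries.
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(cells: list, steps: int) -> list:
    # No known closed form: we simulate, step by explicit step.
    for _ in range(steps):
        cells = step(cells)
    return cells

print(run([0, 0, 0, 1, 0, 0, 0], 3))  # [1, 1, 0, 1, 0, 0, 0]
```

Contrast this with the thrown ball: there, a formula replaces the simulation; here, as far as anyone knows, nothing does.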
Now suppose that such a system meets a point where it has two options, A or B, which are both logically compatible with all the facts about the universe at a given point in time. Then, when it chooses A, due to its irreducibility, any account of its choosing A will include the system making that choice—that is, there is nothing else but the system making that choice that accounts for the outcome A. This, I believe, is the truest notion of freedom that makes sense.
Now, I suppose someone will bring up the idea of ‘could have done otherwise’. Could the system have done otherwise, i.e. chosen not A, but B? This depends on what you mean: it certainly could have, in that both A and B are logically compatible with the state of the universe. However, whenever you ‘rewind’ the universe to a point before the outcome is clear, and ‘restart’ it, the system will always make the same choice.
This is not, I think, in conflict with the idea of freedom. The reason for this is that the state of the universe thus wound back includes the system choosing, and its making that choice in exactly the same way—and one should not be astonished that a system making the same choice in the same way will choose the same outcome.
If this is clear up to this point, I think we have a solid account of what it means for a system to be free. But that’s not something in any way connected to human beings, or other agents—quite simple systems can be ‘free’ in that sense. As an example, it’s known that in certain many-body quantum systems, the question of whether there is a finite gap in energy between the ground state and the first excited state is undecidable, or logically independent in the sense used above. But the quantity is in principle measurable; consequently, such a many-body system plus the measuring apparatus that outputs a certain value for this spectral gap (either finite or zero) would be a ‘free system’ according to the discussion above.
However, one benefit of the above is that we can immediately say when an agent—ordinarily a system with free will—is, in a particular situation, not free: for instance, when I fall within the gravitational field of the Earth, my trajectory is just as determined as that of the ball above. This fits very well with intuition!
It’s here that intention comes in. Roughly, intention is my proxy for will: only systems that have intentions, goals, desires, and the like can be said to have a will, and thus, if they fit the criteria above, to have free will. I will keep this somewhat brief, since I expect less controversy here. Basically, what I think is needed for intention is a certain kind of forecasting ability—imagination, if you will: the capacity to model the world as being in a certain state, in order to assess the desirability of that state and take actions towards bringing it about. And what’s needed for that is a certain kind of computational ability—a capacity to model anything that might happen within the world by means of some symbolic system—which essentially boils down to universal computation in some symbolic language, for instance mathematics or ordinary natural language.
Universal computation, here, means the ability to carry out any computation that can be carried out by any computational system at all. Many systems have this power, but among them the one that interests us is the human mind—even if it has that power only in the limit of unbounded time and resources, such as pen and paper.
So, going backwards: we have the human mind as a universal computational system, which by computational irreducibility can never be ‘shortcut’—every account of its behavior is essentially equivalent to an observation of that behavior, perhaps in copy—and about whose behavior there are undecidable statements: all the ingredients needed for a free agent. An option is chosen in a way that is not reducible to anything but that very process of choosing, is chosen deliberately, and is not dictated by the instantaneous state of the universe; we’re not going to get any more free.