I’m getting the feeling that you’re just trying to wind me up. But, assuming that’s not the case:
Merely something that happens, as in ‘cause and effect’. Some action, something that occurs, an event, whatever. The configuration of matter and energy at a point in spacetime (and corresponding mystical or spiritual forces, if one is inclined to believe in such). Eating a slice of rye bread is an effect; eating a slice of white is another. If neither of those is causally determined, and both are possible, both are equally underdetermined (though one might still be more probable than the other).
If the will were to play the role of deciding agent between a set of equally underdetermined, equally probable effects, then there still wouldn’t be any freedom within it if it itself were something in need of determination – i.e. something for which there exist numerous equally probable options. Because, if it then were causally determined, it would not be free, and neither would the eventual effect (like eating a piece of rye bread); if, however, it were underdetermined, then there would exist several distinct options between which there would not be any way to choose. (Because if one again proposed the will as deciding agent there, one would just loop the problem back onto itself.)
I’ve posted the wikipedia link earlier, but here it is again; if you reject wiki links for any reason, here’s a brief non-wiki discussion.
I’m just trying to get any concrete propositions amenable to discussion out of your posts: are you, or are you not saying that for me to be conscious of something, that something needs to be subject to volition? If not, then what do you think it takes to be conscious of something? And where does the will, or the freedom thereof, enter into the picture?
Interestingly, I think I have come closest to your ideas about being one with the universe, and non-local consciousness and whatever else under the influence of certain mushrooms, but that’s neither here nor there in the argument at hand.
I would express some sympathy with this point of view, but I think my formulation of it is somewhat stronger – I do not depend on equating uncausedness with randomness; rather, I point out that nothing uncaused (or in some general sense acausal) can be wilful, since this would imply an impossible teleology – the directedness of a process (that of arriving at some effect, picked out by the will) despite its independence of prior states (without which it couldn’t be free).
And I’m in turn not sure what this passage is supposed to mean – I argue that it isn’t possible that anything occurs absent sufficient causal determination; further, that believing in free will is as meaningless as believing that colorless green ideas sleep furiously – something that makes sense only syntactically, but not semantically. If it (some effect, including the will itself) is free, it can’t be willed; if it is willed, it can’t be free.
But we’ve been through that over and over again: cause and effect, while a useful and utilitarian model for a lot of purposes, is NOT intrinsically “how things are”, not unless you can clearly distinguish between SET A the independent variable(s) and SET B the dependent variable(s) such that if Element X is in SET A it is not also in SET B. Because otherwise you are, at least in part, saying “Phenomenon J is caused by Stimulus-Precursor Z” when actually Phenomenon J is part of Stimulus-Precursor Z to begin with and not a separate thing at all.
With me so far?
SO, onward to human personal decision-making: I am saying that cause and effect is not a viable model and can’t play a meaningful role in any discussion thereof because the “I” who am making decisions has already, as well as currently and concurrently, played a deterministic role in establishing all of those stimuli, all of those external causes. Your dependent variable is already all stirred up with your independent variable.
Philosophically, rhetorically, argumentatively you don’t GET TO START WITH cause and effect as an already-established mechanism by which things occur, not in this context. In saying free will exists (in a meaningful way) I am saying the reason things happen can be understood in a way OTHER THAN and NOT REDUCIBLE TO cause and effect.
If you could start off with “well of course EVERYTHING that happens happens as a consequence of PRIOR CAUSATION” we would not be having this conversation. That’s just a rephrasing of determinism as opposed to free will.
The will determines them; they are not “equally undetermined” or “equally probable”. One set is chosen. By the chooser. By that which is exercising the free will, the consciousness who does the choosing. There is not a magic moment in time 7 minutes 13 and a half seconds prior to the decision being made at which they hang in the balance or anything like that, or at least there need not be.
Lost me here once again, but I think the lostness has something to do with a model that you have in your head which I do not share. And it’s probably again tied in with the notion of prior causation, the notion that in order for me to make a decision there has to be a reason behind my decision, and that the reason is not “of me” or “in me” but somehow is an external factor, a precipitator of the decision? If I’m on the wrong track, I’m afraid you’re still not communicating on this part of it…
The two articles you linked to are less clear overall than you are. I haven’t the faintest glimmering of what the hell they’re talking about or what it has to do with consciousness, decision-making, causality, etc.
I am saying that for you to be conscious (period; never mind “of something”), YOU need to possess volition. I don’t know what you mean by “subject to volition” as used in this sentence.
This makes perfect sense. Maybe where we diverge is on the boundaries. To be “of free will” means that in some meaningful sense I am my own determinant, that my identity cannot be entirely extracted from the elements that give rise to my decisions (thoughts, conclusions, feelings, etc – the content of my consciousness). It is still determined, but when I say I reject determinism I mean the classic garden-variety EXTERIOR determination model whereby something ELSE that is NOT ME is the cause of everything that I do, think, feel, etc.
When I say (as I do in the beginning of this very post) that cause and effect are NOT simply “how things are”, I’m referring to the classical or conventional model whereby cause is one thing and effect is something else, and the latter is attributed to the former in a linear causal process. (Or a convoluted chain for that matter, but one which, if unravelled, would still take on the appearance of a linear chain of PRIOR situations and events giving rise to those that FOLLOW).
Actually, nothing I said precludes a thing from being a cause of itself (though I would, from other considerations, argue that nothing can be sufficient cause of itself), and conversely, some thing being a cause of itself does not eliminate the requirement of every thing being causally determined.
Basically, what it seems you’re saying, then, is that since both the river determines the path of the riverbed, and the riverbed determines the path of the river, the path of the river is free; well, I disagree – causal interrelatedness does not provide grounds for freedom (as I hope the river example makes clear immediately).
But I do get to start with consistency, without which any position is self-defeating – and something both requiring freedom and directedness (as is necessary for will), which are synonyms for ‘not being determined’ and ‘being determined’, is simply inconsistent.
Show a way in which anything can be understood that doesn’t rely on cause and effect.
The reason I don’t start off this way is that rebutting the idea of free will doesn’t require anything as strict as determinism; it merely needs the notion that logical deductions are possible, i.e. that there exists a formal system whose deducible theorems represent true propositions about reality. This is, I believe, the weakest sense in which one is able to come to any meaningful conclusions about the world; reject this, and I have trouble seeing how you could ever assert anything with confidence (i.e. how you’d know that any of your assertions apply to reality by anything but pure chance).
If the choice of the consciousness is a state thereof, that implies the necessity of a choice of this state. If it isn’t, the state is undetermined.
I haven’t said that the reason for your decision can’t be part of your state at one point or another; I’m not sure if it can be wholly determined by that state (for if there were an action that is wholly determined by your own internal state, it would seem that it is possible that there is an action that (sufficiently) causes itself to occur, which causes itself to occur, etc. – but there, one eventually runs afoul of simple thermodynamics), but it can certainly be influenced by it. But I have said that your own state – however fuzzily you want to define ‘you’ – is in itself at best either free or wilful, but never both.
Eh, it’s not all that important, really. More or less something like a time-travel paradox, just without the troublesome time-travel. As a matter of interest, if a time-traveller appeared to you, told you what you’d do tomorrow, and you’d find yourself doing exactly those things absent any manifest feelings of coercion, would you then consider your actions to be free?
And I’m saying that I’ve got no idea what ‘for [me] to be conscious, * need to possess volition’ could conceivably mean in such a way that it would actually follow. (And to a lesser extent, also what to be conscious means in the absence of being conscious of something.)
Never mind exterior, interior, and whatever flavours of reductionism to holism you’d like to prefer – given the state of the universe at some point (seen from some inertial frame in spacetime, if we want to attempt some objectivity), could you or couldn’t you decide differently from the way you end up deciding?
So is a case like the aforementioned river, which carves its own bed, yet is directed by that very bed, a case of an exercise of freedom?
I want to take time to go back and try to muddle my non-physics-trained, non-math-adept mind through those linked articles about the pedestrian and the invaders from Andromeda.
Meanwhile… “free” is usually associated with “cognizant” and rivers generally are not held to be aware of a whole hell of a lot, so I’m not entirely sure that “free” is the right word for it, but if as you say the river modifies the bank even as the bank constrains the river, it is at least true that one cannot say that the course of the river is determined by its bank, or at least not without oversimplifying and omitting some critical information from the description. I accept the analogy as a good one.
I don’t think he deals with your argument quite as stated, but it’s buried in there. His central thesis is that biology (specifically, evolution) is the scientific theory that describes the rules of free will, and that free will has nothing to do with physics.
For your argument specifically, it seems clear to me that there is intention in the world, and that it did come about gradually–both on long time scales through biological evolution and on short time scales as children grow into adults. So, if we’re on the same page so far, we need to show how it can be free.
For this, we come back to the statement I made earlier. Not only does causation not imply determinism, but determinism doesn’t imply causation! That is, just because something is determined by previous events doesn’t mean that it was caused by those previous events. Causation and determination are extremely different things. In a nutshell:
[ul][li]In determination, we’re taking one situation and playing it forward, to see what happens in the future.[/li][li]In causation, we’re taking an event, and finding the set of histories that lead up to the event.[/li][/ul]
If you like math analogies, determination is solving a differential equation, causation is solving an inverse mapping problem.
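If a toy example helps (my own, purely illustrative – nothing from Dennett), here’s the distinction in Python, with a deliberately many-to-one update rule so that a single event can have several distinct histories behind it:

[code]
# Determination: play one state forward. Causation: search backwards
# for every history that ends in a given event. Toy update rule only.

def step(state):
    # Many-to-one update on states 0..15, so distinct pasts can
    # converge onto the same present.
    return (state * state) % 16

def determine(state, n):
    """Forward problem: the unique n-step future of one state."""
    for _ in range(n):
        state = step(state)
    return state

def causes(event, n):
    """Inverse problem: all initial states whose n-step future is `event`."""
    return [s for s in range(16) if determine(s, n) == event]

print(determine(5, 3))  # one state, one future
print(causes(1, 3))     # one event, many possible pasts
[/code]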
Once causation and determination are unlinked, then why is it necessary that the process not be “fixed in its beginnings”? What matters is that its beginnings don’t cause the will. The will is built up, literally ex nihilo, as it interacts with the world around it, creating its own intentions.
Is this meant to be a strongly emergentist position, then (which would surprise me, coming from Dennett)? Since otherwise, I would argue that biology ultimately is described by physics (and in some sense, determined by it – specifically, biological processes can’t break the laws of physics), even if it may never be feasible to actually ‘work with’ the physical description of, say, a kangaroo.
Well, there’s some ambiguity in the meaning of the term ‘intention’. Evolution, for instance, acts in a way to maximize the fitness function for a given genotype, by means of a phenotype; but would you say that this – optimization of reproductive fitness – is, in any way, evolution’s ‘intention’? And when it comes right down to it, are there any actually teleologically goal-directed processes, rather than processes that achieve goals through a mere incidence of their occurring?
You’ll have to elaborate on this one, I’m afraid. If I have a set of events whose occurring (perhaps in some patterned fashion – in which case this pattern would be an ‘event’ itself, in a causative sense) immediately implies another event’s occurring, how is that different from a causative relationship? Could you perhaps give an example?
If strict determinism holds true, then every time we ‘play forward’ a given configuration of the universe (as the sum total of all interacting systems), we arrive at the same future; in indeterminism, this future is in some way not always the same one, but subject to some arbitrary determination (‘determination’ here merely meaning making a choice between different possible futures – the necessity of making this choice, ultimately, is what I believe makes free will impossible, since either this choice does depend on prior causation – in which case, it wouldn’t be free – or, it doesn’t – in which case, it wouldn’t be wilful).
I’ll grant, for the purpose of discussion, that some present may be arrived at from ‘playing forward’ different pasts – though I’m not all that sure that is actually the case in any other than grossly simplified systems (one could, for instance, imagine something that just goes in circles, and visits each state again and again; the number of cycles this system has gone through would be irrelevant to its present state, then).
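(A minimal formal version of that cyclic example, if it helps: a counter that updates as s → (s + 1) mod k visits each of its k states over and over; its present state tells you the step number only modulo k, so any number of completed cycles is compatible with what you observe now.)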
Your ‘unlinking’ then amounts to noting that there are presents that do not uniquely specify their past – but this is essentially just the difference between necessary and sufficient causation: if A is a necessary cause of B, then B’s presence necessarily implies A’s presence; if A is only a sufficient cause of B, then A’s presence does entail B’s, but B may also be caused by some other event C. You’re saying that not all causation is necessary, which is certainly fine by me; however, I don’t believe that furthers the case for free will in any way, since it is just as incompatible with causation that is merely sufficient.
What the postulate of free will now entails is that the future is indeterminate, yet that there still exists goal-directedness in the evolution of the present into this future. This is the sticking point of the discussion, not necessary vs. sufficient causation. If there is no teleological goal direction, there is no sense to calling it will; if there is no indeterminateness to the future, there is no sense to calling it free. But putting both together leads to a contradiction – a future that does not depend on its past, yet whose past still contains some notion of this future.
Claiming that there exists free will is like claiming that there can be events in the absence of sufficient causation for these events, and that nevertheless, these events are determined in some way (by the will); attributing freedom to will then is nothing but a category mistake.
I’m not sure I understand this paragraph. Don’t you then just ‘will’ what would’ve happened anyway?
It seems to me that you, like AHunter3 apparently does, take something like the river example, and claim that the riverbank not determining the flow of the river as it itself is modified by the river constitutes actual freedom – yet the process in total is one of strict determinism (or at least, totally compatible with that position), with both riverbank and river mutually determining each other; that there is no unidirectional flow of determination does not imply indeterminateness.
So, while I can easily picture a will evolving in an analogous manner, shaping itself according to circumstance, and shaping circumstance according to itself, I wouldn’t call such a will any more free than I would the river.
I am reading the two above-linked articles. I don’t understand the logic behind what is being asserted there. Joe and Sue are on 1st Avenue. Sue is walking in the direction of Andromeda and Joe is not. I’m under the vague impression that we’re supposed to ignore the rotation of the earth, the rotation of the earth around the sun, the rotation of the sun around the galactic core, and whatever equivalent rotational and revolutionary motion is happening on the hypothetical planet circling its own star over in the Andromeda galaxy(?) So event X occurs on the planet in Andromeda. Light leaves that event. Math happens. Joe’s Monday is Sue’s Tuesday, but only with regard to Andromeda. I don’t know if Sue has to keep walking at her leisurely 2 miles per hour for the next 2.5 million years (at which point she’ll not be on 1st avenue or even earthbound?) or if she only needs to walk for a day, or just take 11 or 15 steps while the narrator is talking. Anyway, it is simultaneously true (but not FOR anyone? not true for Joe and not true for Sue but only true simultaneously if we concatenate what is true for Joe and what is true for Sue into one truth?) that it is Monday on the Andromeda planet and that it is Tuesday on the Andromeda planet.
But…there IS no event taking place “on a planet orbiting Andromeda” “right now” “to a person here on earth”. Simultaneity is itself a violation of the rules governing space-time. Andromeda is not merely “that far away” to someone on earth it is also “that long ago”. The appearance of Monday and Tuesday on Andromedaplanet being simultaneous for two different people on earth is illusory — only the Andromedaplanet’s Monday or Tuesday of some 2.5 million years ago are real events right now for Joe or Sue. And while I’m no math major I’m pretty sure that walking southbound on 1st avenue won’t create multiple contradictory simultaneities, although I’m willing to entertain arguments to the contrary. (Maybe I could use it to get to work: the UN is in session and the damn buses aren’t running and you can’t even walk to the damn subways. Can I make it be yesterday long enough to walk past the cops and their barricades and get into the Lex Av 6?)
Oh wait…is this Andromeda space alien invasion thingie somehow affiliated with the idea that the future already exists and that what will be is just as inevitable as what has already been is unchangeable? (the events yet to unfold in Joe’s Andromedaplanet Monday must be causally determined insofar as Sue’s Tuesday is simultaneously happening)?
it is the exercise of free will on the part of those who possess it that causes selections to take place. Choice, by definition, does not depend on prior causation, it is self-caused as an intentional action and it is entirely willful. If it depended on prior causation the outcome could not vary and you would indeed get the same outcome every time you ‘play forward’. The will is doing the causing; it is the locus of volition. It is not meaningless, as it would be if it executed “for no reason”, but it is not attributable either. In fact the “reason” of it, the explanation for it, is infinitely regressive, referring back to the choice-maker in relation to the factors taken into account, whatever they may be.
No, simultaneity is merely relative, as a direct consequence of the speed of light being equal in all frames of reference; I’m not talking about the events Joe or Sue are seeing with some hypothetical ultra-powerful telescope, I am talking about events occurring concurrently to them walking past each other. Simultaneity has the same meaning it usually has – all events that are separate from you only in space, but not in time; that occur right now. It’s just that, thanks to relativity, this now depends on relative motion, and events that occur now to one observer may lie in the future of events occurring now to another observer.
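To put a rough number on that, by the way (a back-of-the-envelope figure of mine, not something from the linked articles): for relative motion at speed v, the plane of simultaneity at distance d tilts by roughly Δt = vd/c[sup]2[/sup]. With a walking speed of about 1.5 m/s and d ≈ 2.5 million light years ≈ 2.4 × 10[sup]22[/sup] m, that comes out to Δt ≈ 3.6 × 10[sup]22[/sup]/9 × 10[sup]16[/sup] ≈ 4 × 10[sup]5[/sup] seconds – several days. So Sue’s leisurely stroll really does shift which Andromedan day counts as ‘now’ for her, relative to Joe, by days, not by some immeasurably small amount.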
That would be the obvious conclusion to draw from the thought experiment, it seems to me (though to say that the future ‘already’ exists is a bit awkward, to say the least – the events in your causal future exist much the same way things to your left exist). I’m not sure that it’s the only one, and one might argue that it’s irrelevant for all practical purposes, since there’s no way for Sue to tell Joe that the Andromedans have started such that he could in turn inform the still-debating Andromedans what their decision will turn out to be without possessing some form of faster-than-light information transfer mechanism, but it’s nevertheless a sound argument – what exists in Sue’s present may genuinely lie in the future of events in Joe’s present. It may not be possible for Sue to know these events, but can her ignorance really be grounds for rejecting their occurrence?
But that’s just what’s under discussion here, and the question I keep posing again and again: how can choice not depend on prior causation? What, then, determines its value?
Again, let’s look at rye vs. white bread. Both choices are causally underdetermined – there does not exist sufficient causation for choosing either one. You assert, then, that an act of will suffices to choose between one and the other. That is not so much an explanation as merely a black-box proposal, claiming that there is something that can solve the dilemma, that that something is will, and that it is free, without explicating either the terms used, or how this is supposed to work.
The fact of the matter is that, to choose between rye and white, the will itself must have some definite, determinate value – let’s say either ‘choose white’ or ‘choose rye’, for simplicity’s sake, though it’s of course highly unlikely that such high-level concepts are the atoms of any cognitive process. Now, once more, if there exists sufficient causation for the will to be in this state, to have this value, then there exists sufficient causation for making the choice of white over rye, for example, and there is no freedom involved. So, there must not exist sufficient causation for the will to have this particular value. But then, what chooses between its possible and equally causally underdetermined values?
It would be tremendously helpful if you would simply state what in the preceding argumentation you specifically don’t agree with (other than ‘the conclusion’). To me, it seems that the only thing one could even in principle quibble about is the idea of the will having some kind of value, but the key here is in realizing that I’m merely using a very general analogy that is largely independent of the actual workings of free will, of its actual implementation, so to speak: if the will were not in some state to determine one choice (if it didn’t have one determinate value), how would the choice-making actually occur? For then it would be in some state to permit multiple choices, and, since only one can be realized at any given time, we’d need yet another deciding factor.
Note that this is perfectly compatible with the will depending on itself (or at least, in a recursive manner, on its prior states), with the will shaping circumstance as much as circumstance shapes the will, with an ‘evolving’ will, or with any other flavour of tangled river/riverbed-like causation; all of these are perfectly causality preserving processes, and while lots of talk about them may obfuscate the underdetermination problem at the core, it can’t solve it, since ultimately, all these causal tangles, hierarchies, webs and loops are decomposable into events being either necessarily or sufficiently caused by other events.
This is not even a reductionist proposal, since it works just as well, in a completely general way independent from any ‘substrate’ one may have in mind when talking of events (like particle interactions or neurons firing on a microscopic level, or planets circling stars and galaxies forming superclusters on a slightly bigger scale), when applied to any hypothetical (strongly) emergent properties that supposedly are not present in reductionistic accounts.
It is, or would appear to me, at least, in principle nothing but the statement that something can either be indeterminate, or determinate, but not both at once, which is precisely what a ‘free will’ would entail.
Not if there is some random element of indeterminism in the mix – i.e. the decay of a radioactive atom (which causes the future to contain either a dead or a living cat, without the ‘starting conditions’ being in any way different).
And what if, by the way, the outcome did not vary? What if strict determinism were true? You have repeatedly asserted, if I understand your position in any way, that in this case, there could be no consciousness. Call me dense, but I still don’t quite see why that would be the case; if you just tried again to explain to me why you think that this is so, I would be very grateful.
But such an infinite regression can’t possibly be executed in a finite universe, at least not in finite time and with finite resources. And the process would have to complete all of its infinitely many steps before the will ever actually became definite – before any action could be executed!
And if, by the way, you actually wish to assert that the human mind is able to perform such a supertask, that it can execute infinitely many steps in a finite amount of time, then just let me express my dissatisfaction with the fact that, apparently, free will is all we get out of this – capable of such hypercomputation, we ought to be able to solve each algorithmically solvable problem instantaneously, decide undecidable propositions, find all prime numbers, compute pi to the last digit, prove Goldbach’s conjecture by brute force, in fact, solve any open mathematical problem in finite time; and all we get is being blamable for our slip-ups and goofs. If we were actually capable of this, we shouldn’t have to have this discussion; we ought to already know the answer, without fail.
Reads like: In order for you to choose rye over white, you have to have a REASON to choose rye over white, in which case the REASON is the reason you chose rye over white and therefore you didn’t do it of free will, the REASON made you do it.
Sure does, if reason is just another word for cause. You don’t arrive at reasons (or even REASONS) ex nihilo, but through a process of causal determination (‘reasoning’), and they themselves may act as causes of other events.
Slightly embarrassing admission time: I’ve never really understood what exactly strong emergence is saying. Dennett proposes that there are questions that simply cannot be answered, that cannot even be formulated when trying to work with the physical description of that kangaroo.
I wouldn’t say that evolution has any “intention,” but why does that matter? It’s the intention of humans that’s the important point here.
Dennett has a thought experiment that goes something like this:
Suppose we have two chess playing computer programs, A and B, that are connected to play against each other. This system is, of course, completely deterministic, since we can make the same game be played over and over, with the same computations by both players, by setting the random number seeds to the same value. Suppose that we let them play a game, and A wins. What was the cause of A’s victory?
Let’s even suppose that this system falls into the “harder” case above, that is, different pasts lead to unique futures – the game can be run deterministically both forwards and backwards. You would say that the initial state of the game – the code of A and B, the seed values, plus the rules of chess and so on – is the sufficient cause of A’s win, correct? And because there is a sufficient cause, neither A nor B can be free?
But, by the time-reversibility of the system, saying that the initial state caused A to win is exactly equivalent to saying that the state of the system after the first turn caused A to win. After all, no information is gained and no information is lost in transitioning from the beginning of the game to the completion of the first move. And it’s exactly equivalent to saying that the state of the system after the second turn caused A to win. And it’s exactly equivalent to saying that the state of the system at the end of play caused A to win. So the fact that A won (in a particular way) caused A to win. I’m not going to argue that this statement isn’t true, but it leaves a little something to be desired.
A reasonable answer might be that “A is just a better player than B”, or, “B didn’t capitalize on A’s mistake in move 13.” These aren’t causes we can come up with by watching the same game, no matter how closely we examine it. Maybe A is just better, and would win in the majority of circumstances? Perhaps B would have made A choke on its mistake in other games, or maybe B wouldn’t have even recognized that mistake in the best of circumstances. We can’t settle these disputes unless we watch many different games, and notice patterns in the course of their play. Aren’t these explanations much more satisfying, not to mention much more useful, than the near-tautology in the previous paragraph?
Notice, in particular, that this is true regardless of determinism or indeterminism. If we replaced the pseudo-random number generator with a true random number generator, we would still need to observe games with a variety of circumstances before we could come up with a convincing explanation for A’s win.
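Here’s a minimal sketch of the replay point (hypothetical Python, with coin flips standing in for real chess engines):

[code]
import random

def play_game(seed):
    """Stand-in for a full match: both 'players' draw their move
    choices from a pseudo-random number generator."""
    rng = random.Random(seed)
    moves = [rng.choice(['e4', 'd4', 'Nf3', 'c4']) for _ in range(10)]
    winner = 'A' if rng.random() < 0.5 else 'B'
    return moves, winner

# With the seed fixed, every replay is move-for-move identical:
assert play_game(42) == play_game(42)

# Swap in random.SystemRandom() (a true-ish randomness source) and
# replays diverge -- yet in either case, judging which player is
# better still takes observing many games, not one.
[/code]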
The model above is, of course, a toy model, and I’m not going to claim that either A or B has free will in any important sense. But look at what happened. We have a clear goal direction, since both programs try to force the other into checkmate. The future is determined by the past, but the past doesn’t provide a good explanation for the future. It’s not until we bring the goal-directedness into the discussion, and move above the exact circumstances of the past, that we can come to any reasonable causes. Does this satisfy that “there can be events in the absence of sufficient causation for these events, and that nevertheless, these events are determined in some way”?
What does “would’ve happened anyway” mean? Would have happened if there was no will? I’m quite sure the world would be different if there were no will-possessing creatures inhabiting it.
A river doesn’t self-interact in the right ways to give it freedom. If you ask why the river curves at a certain place, the answer is “because that’s where the land eroded from the movement of the water.” There’s no “real” reason, it’s just the way the world happened to be laid out. Contrast this to the question of why the eye responds to photons within a certain spectrum. A convincing answer must contain two pieces of information: (1) it roughly coincides with the peak output of the sun, and (2) the purpose of the eye is to gather photons so as to infer information about the creature’s environment. That is, eyes exist precisely because they provide this ability. Rivers don’t have a raison d’être, they just exist as a consequence of physics.
Well, the way I see it, there are two possible ways in which one can say that there are questions about a kangaroo its physical description can’t answer. One is that it’s simply impossible to compute the answers to these questions – after all, even three interacting bodies already isn’t analytically solvable anymore, and things don’t get any easier (though there are ‘high-level’ physical descriptions of systems whose microscopic description is simply too unwieldy to yield any results – thermodynamics and statistical mechanics would be the prime examples here; even classical mechanics can be said to be a high-level description of quantum mechanics). Nevertheless, though we can’t compute it, the dynamics of the high-level system are still determined by the usual physical laws applying to its components. (This is one sense in which I have seen the term ‘weak emergence’ being used, though I think it is more commonly applied to properties not inherent in any of the system’s components, but nevertheless being a consequence of their functioning – take a couple of transistors, for example. Any given random configuration of them will generally not be able to do much at all, but find the right one, and you suddenly get the ability to argue with strangers across great distances over philosophical minutiae. Surely, this ability does not inhere in the transistors, but it is nevertheless completely explained by their functioning – it’s a weakly emergent property.)
The other way is essentially to claim that even if we could compute the high-level description from the complete microscopic account (the emphasis here is on complete – in reality, we simply don’t have such a description yet), there would be some phenomena we wouldn’t catch that way; some qualities of the system truly irreducible to its component parts. This might not seem all that unreasonable at first glance, but it does have some consequences that I think are rather counter-intuitive – take, for instance, a system that is simple enough that its microscopic description is exactly known and can be computed arbitrarily far along its evolution. Strong emergence would then say that there is a point in this evolution where suddenly, the description ceases to apply, and new laws of an unpredictable character take over, causing the system to suddenly turn pink, recite existentialist poetry, and then vanish in a puff of logic – out of nothing!
In some flavours of the Copenhagen interpretation of quantum mechanics, classicality is a strongly emergent property, being attributable to macroscopic objects (especially the measuring apparatus), but not to microscopic quantum systems. This led Erwin Schrödinger to come up with the whole cat thing – which is truly paradoxical precisely if classicality is treated in this irreducible way. I believe that for all flavours of strong emergence, one can find a Schrödinger’s cat-like paradox: through interaction, one can entangle quantum-mechanically describable systems with quantum-mechanically non-describable ones (those that exhibit the strongly emergent property), and thus attain a quantum-mechanical description of a quantum-mechanically non-describable system (or at least, something that purports to be one).
Anyway, this was a bit of a digression, so let’s return to the topic at hand.
Because it opens up the question if any apparent goal-directedness isn’t, in the end, just as intention-less as evolution while nevertheless somewhat reliably attaining its goal; as intention-less as a dropped ball that reliably finds local minima of the gravitational potential to settle into.
I’m not sure I see that. Every move depends on every prior move, and so does, in particular, A’s first move depend on the initial conditions. If the game had been set up with both AIs identical, but the first move already made, the game might have turned out a completely different way – the whole system depends on its own history, and thus, a description, say, from the 37th move onwards is different from a description starting, well, at the start. You do lose information when you omit this history, and thus, saying that the initial state caused A to win is different from saying that the state of the system after the first turn caused A to win.
You might counter by saying that, by state of the system, you mean the complete state of the system, including its history – not just how the board is set up, but all the values of all the variables in the computers’ memories, and if it really comes down to that, all the states of all the electrons and atoms that make up the whole thing physically. But is the description of such a state ever attainable without critically depending on its own past having actually happened in some way? I don’t believe it is; the apparent simplicity of the chess example merely serves to appeal to our intuition that it should, in principle, be a simple thing, when the reality – in particular, the full quantum mechanical description, which places critical boundaries on how exactly we can know the state of the system – is far more complex. In fact, it seems to me that this thought experiment is a bit of an intuition pump…
In some sense, you’re talking about ‘modelling’ a system. Let’s say we want to send a rocket to the moon. Now, the trajectory of such a rocket depends on the laws of gravity and the initial conditions (including, for instance, the distribution of gravitating masses, etc.), this data absolutely determines the model. So, knowing all of this, then, means we know how to get a rocket up there, right? The calculation does not generate any new knowledge, as its result is fully determined by the physical laws and the initial conditions. So why bother actually performing this calculation, when we already possess all the knowledge we could gain, if perhaps in some implicit way? Well, because otherwise, we have no clue where the rocket’s gonna end up!
In a similar way, the initial conditions of the chess match, plus the deterministic behaviour of the computer programs, determine the outcome of the chess match. Yet, just from this knowledge, I couldn’t tell you which one’s gonna win (without perhaps performing some sort of simulated run of the match). However, after the match is played, answering that question is trivial. So there is, in fact, some knowledge that has been gained, or rather, some implicit knowledge that has become manifest.
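To make that ‘manifestation’ point concrete, here’s a minimal toy sketch of my own (Python, uniform gravity only – obviously not a real rocket model):

[code]
# The initial conditions plus the force law fully determine the
# trajectory, yet you still have to run the computation to find out
# where the thing actually comes down.

def trajectory(x, y, vx, vy, g=9.81, dt=0.01):
    points = [(x, y)]
    while y >= 0.0:
        x, y = x + vx * dt, y + vy * dt  # step position forward
        vy -= g * dt                     # gravity slows the ascent
        points.append((x, y))
    return points

path = trajectory(0.0, 0.0, vx=100.0, vy=100.0)
print(f"comes down near x = {path[-1][0]:.0f} m after {len(path)} steps")
[/code]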
I’m not sure that they actually say anything different; they just provide a higher-level, ‘chunked’ description of aggregate judgements about the computers’ play, a sort of heuristic depending on many instances of implicit knowledge made manifest that allows us to make reasonable guesses about the implicit knowledge in a novel case. But are they really causes of the outcome? Does A being a better player cause A to win? It wouldn’t appear so, since A can very well be a better player and lose.
The statement ‘A is a better player than B’ means nothing other than ‘A wins statistically more often against B than it loses’, a statistic which, of course, depends on each win, which in turn depends on the games played. So, if ‘A won because of the initial conditions of the game’ is ultimately tautological, then so is ‘A is a better player than B’.
That this ‘explanation’ is more useful is merely a consequence of the categories we think and converse in – which are just such high-level, chunked descriptions of the world. The patterns in the play of the computers are determined by their ‘micro-play’, and don’t contain any more information (or different information) than their micro-play itself, in much the same way that the pattern of a cloud is determined by the movement of small water droplets through the air, which in turn is determined by molecular interactions etc.; that we typically find the description at the ‘cloud-level’ more useful doesn’t mean that it isn’t fully determined by the water droplets (otherwise, we would have a case of strong emergence), and it doesn’t bring anything conceptually new to the table.
Again, how ‘good’ the explanation is is simply dependent on perspective – a micro-being, observing the world at a molecular level, may be perfectly satisfied with clouds (or anything) described as particle interactions; this may mean that it might not be aware of the concept of clouds, or could only achieve that knowledge through great difficulties, but, in a similar way, we may not be aware of levels of description much more macroscopic (or microscopic!) than we ourselves are – and indeed, the discoveries of the last century showed just how true that was for the greatest part of our existence, and how true it still might be. This doesn’t make any explanation on any of these levels inherently ‘better’ than any other, it merely makes them more convenient to handle for our way of thinking.
It doesn’t seem that way to me, no. The different levels are just different ways to look at the causes, different ‘chunkings’. Causation still needs to be sufficient for anything to happen at all.
That’s not a bad way of framing this – could we concoct a world of free will-impaired creatures (let’s call them w-zombies) that would be completely indistinguishable from ours? Of creatures that would, through compelling forces, proclaim themselves to have free will, act as if they believed they had free will, perhaps even, in private, believe themselves to have free will (I don’t doubt that things might be different if those creatures didn’t have free will, and didn’t at least act and talk like they had, or probably even believed that they did – after all, actions based on this belief may exert causal influence)? I think we could, because I think we live in that world; however, if you think that this world would manifestly differ from ours in any way, I think demonstrating these differences would be a good direction in which to continue this discussion.
That’s why I asked the question about evolutionary intention earlier – there is no inherent purpose in the eye’s being sensitive to the sun’s peak emission spectrum, there was no teleological goal-directedness in its development; it’s just that, some versions of it turned out to work better than others, giving a greater boost to reproductive fitness, thus outcompeting others. It’s just the way the world happens to be laid out.
Though, having rambled on like that, it seems to me, from the summary at wikipedia, that Dennett finds his concept of freedom in the good, old compatibilist tradition of just defining it in some appropriate way – as, for instance, the freedom from duress, or the freedom from coercion, while at the same time acknowledging that everything is in principle causally determinate. If that’s an accurate summary, then I don’t have a problem with this, per se, and I think that especially these concepts are valuable tools in describing human behaviour (describing what makes human behaviour human behaviour), but I’ve been arguing against a different, libertarian, conception of free will, which proclaims a wider definition of freedom – where there is no (sufficient) causal determination to (at least some of) our actions. This is the sense, I would argue, in which most people intuitively use the term, and I’ve always been slightly ill at ease with compatibilist arguments that use another conception of freedom to show that we have ‘free will’.
There have been people arguing that, since libertarian free will makes no sense (like I’m asserting), it’s not worth talking about (or even that we’re then not able to meaningfully talk about it), and all that we’re left with is then some sort of compatibilism with a redefined meaning of freedom; but I think that since the concept of this kind of free will is nevertheless something out there – used, for instance, commonly in Christian apologetics to absolve god from the problem of evil: god is really good, whenever it seems otherwise, that’s just our fault through our free decisions – it merits talking about, and calling compatibilist free will ‘free will’, where freedom has any other definition than the total, libertarian one, is really a bit of a misnomer that all too easily leads to equivocation.
Determinism can only work on closed systems, so by the “state of the system” we must mean: the current position of all pieces on the board, the programming of the two AIs, and the values of all variables stored in the computer’s memory (in fact, all three of these categories are “actually” values stored in the computer’s memory). How much of the “history” is included in the “state of the system” is determined entirely by the programming of the AIs, and nothing else. If the AIs are programmed in such a way that they only remember the previous 5 turns, then that is what the “state of the system” includes. If they are programmed so that their strategies are not stored from one move to the next, then the “state of the system” doesn’t include the strategies from the previous turns.
Your rocket example actually perfectly illustrates what I’m going for here. If you know the initial conditions and the laws of gravity, you get the trajectory of the rocket. However, if you know the final conditions and the laws of gravity, you get the exact same trajectory! If you know the conditions half-way through and the laws of gravity, you get the exact same trajectory! There is absolutely nothing that you could tell me from solving one of these problems, that you couldn’t tell me from solving any of the others. They are exactly equivalent. When you make a claim about sufficient causation, all I’m doing is rephrasing what you said, in a way that preserves the meaning precisely.
I find your statement “Does A being a better player cause A to win? It wouldn’t appear so, since A can very well be a better player and lose.” quite odd, since it directly contradicts what you said before, that “I don’t think that causation even implies any deterministic necessity.” Surely we can have a probabilistic concept of causation? But this is just an aside, really, not the main point.
If you’re unhappy with “A is a better player than B,” then try “A’s algorithm for discarding possible strategies is more efficient than B’s”, or more precisely, “A solves problem X in O(n) time, while B takes O(n[sup]2[/sup]) time.” Neither of these statements can even be formulated in terms of the state of the system–they both come out of abstract mathematical analysis of the algorithms that each uses. Which leads directly into your next quote…
Yes, exactly. With the addendum that some questions, and some answers, cannot even be formulated in these different perspectives. When we’re asking a high-level question like “why did A win?”, we might be able to find a low-level answer, but we might need those higher level concepts. From the low-level view, we ask “why did A win?” and the answer appears to be “Well, we run the program and, uh, it just kinda happens that way,” which tells us absolutely nothing.
My intuition is that we couldn’t concoct such a world where this is true using Dennett’s version of compatibilism. I will elaborate on his version in the next post.
You’ve betrayed your cause already. Define “work better”, “reproductive fitness” and “outcompeting” purely in terms of physical units. Or, tell me how something can “work better” when it has no teleological goal-directedness–if it has nothing that it is “working towards”?
It’s been some time since I read the book, but I would definitely say that the wiki description of Dennett’s concept of freedom is wrong.
The overwhelming majority of people are not compatibilists. They have an intuitive idea of what determinism means, and from this intuitive idea, they conclude that freedom and determinism are in conflict (some make the leap of saying that one, and only one, of them must be true – but any thought shows this to be wrong). In the beginning of the book, he spends a good amount of time trying to refute these intuitive notions. Off the top of my head, they are things like “if determinism is true, then the future is inevitable”, and its cousin, “if determinism is true, then we cannot change the future.” There’s also a discussion about what it means to say that someone “could have done otherwise,” and a discussion of causality. Moving along in the book, he tries to give an account of how we can be free, even when our constituent parts aren’t free (which is independent of whether determinism holds or not), and how we can be morally responsible for our actions. The point is not that we’re free because we aren’t coerced by other agents, but that we’re free because the only reasonable explanations of our actions are given in terms of our characters. As I see it, this is libertarian free will minus the silly assertion that our actions ought to have nothing to do with our thoughts, beliefs or desires.
Well, so there may exist very specialized conditions under which the thought experiment might apply the way you want it to; but that’s just why I called it an intuition pump, since it’s designed to illustrate something general about causality and determination, which it may fail to do if you disregard enough real-world complexity (even if it seems intuitively obvious that you can leave out these things without too much of an error) – since then, it may not be applicable to the real world at all (or at least not in general).
In general, there doesn’t exist any meaningfully complex system for which you can either compute its future evolution exactly from starting positions, or back-calculate from whatever state it’s in in any given moment; this knowledge can only be attained by observing the system’s evolution as it plays out. (However, even in those cases, barring strong emergence, the evolution of the system is determinate, it may just not be determinable through calculations.)
You’ve missed the point I was trying to make. You’re interested in the trajectory, and yes, you can conceivably calculate it at least numerically to good accuracy from each point on the trajectory; the point being, though, that you still need to carry out the calculation. Knowledge of the initial (in-between, final, etc.) conditions and the laws of gravity is not enough to know where the whole thing ends up, despite there not being any additional information at any given step of the calculation (just as there is no additional information at any turn of the chess game, yet you need to have the computers play the game – or equivalently, simulate it – to know how it actually plays out).
The playing of the game and the calculation of the rocket trajectory make manifest implicit knowledge contained in the initial conditions of each problem, and the rules that are followed – and that knowledge is all you’re ever going to get about either, no matter from what higher-level viewpoint you survey the whole thing. That there appear to be somehow more salient higher-level descriptions of the system is merely an artefact of not taking into account the manifestation of this implicit knowledge, or rather, another, equivalent, way to talk about it that takes itself to be something more fundamental.
Yes, I should have said ‘sufficiently cause’, but I got tired of typing that out. As it is, while ‘A is a better player’ is a causal statement about A’s winning, it’s not a sufficient statement; pointing to the initial conditions and the AIs’ programming is, so if anything, saying that ‘A is a better player’ contains less information, just framed in a way we are more easily able to relate to, making it seem more relevant.
I’m not unhappy with that statement, I’m unhappy with the claim that it constitutes more fundamental knowledge about the reason for A’s winning than knowledge of the initial conditions etc.; particularly since it is contained, at least implicitly, in that latter knowledge, just not made manifest (like the rocket’s trajectory).
Well, some questions and answers can’t be formulated in every given framework; that goes just as well when we approach things on our level. Asking about either the molecular composition and chemical reactions going on in the system or the speed of its rotation around the sun and its distance from the galactic center doesn’t really make sense on our everyday scale. Claiming that the facts on any level are more salient than those on any other is merely succumbing to anthropocentrism – we have an easier time dealing with the concepts on this level, therefore they seem more meaningful to us.
The answer that it ‘kinda happens that way’ if we run the program is not meaningless in any way – think about the rocket again: Why do you get the trajectory you end up with? Because the calculations kinda happen that way. Yet, what it tells you – how to get to the moon – is anything but meaningless. That there appears to be no great knowledge gain in observing that A won is just because in chess, winning is already the final goal; in the rocket example, getting to the moon is, so it’s easier to see that the trajectory (= how the chess game played out) is indeed a worthwhile piece of knowledge to make manifest.
‘Works better’ is merely colloquial for ‘maximizes the fitness function’, which in turn just means ‘enhances reproduction rates’. ‘Outcompeting’ simply means that one replicator replicates itself faster and more efficiently than some other replicator; generally, replicators with higher reproductive fitness outcompete those with lower reproductive fitness, which is what this whole evolution business really is all about.
And lest you claim that ‘higher reproductive fitness’ is therefore the teleological goal evolution strives towards, think about how the whole thing got started: you have a bunch of replicators, each replicating themselves at some certain rate. Those replicative processes aren’t perfect – mistakes are made, most often causing the ‘copy’ that carries the mistake to be unable to, in turn, copy itself. But sometimes, a copy arises that has a mistake that causes it to be able to copy itself at a higher rate – thus, the relative number of descendants of the copy within the environment starts to rise; the copy has higher reproductive fitness compared to its competitors. Then, some other replicator makes another beneficial mistake, causing it to outcompete yet other replicators, and you wait a couple of billion years, and here we are – not because our ancestors had a greater drive or intention to replicate themselves, but simply because they happened to do so. No teleology; just the way the world happens to be laid out.
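If it helps, here’s a deliberately crude toy simulation of my own (not a model of real biology – just blind copying with mistakes) in which the mean replication rate climbs without a goal being written in anywhere:

[code]
import random

def generation(pop, capacity=200, mut=0.1):
    """Blind copying: each replicator leaves copies in proportion to
    its rate, copies occasionally carry 'mistakes' (rate changes),
    and a finite environment culls the excess at random."""
    offspring = []
    for rate in pop:
        # Fractional rates handled stochastically, e.g. 1.3 -> 1 or 2 copies.
        n = int(rate) + (1 if random.random() < rate - int(rate) else 0)
        for _ in range(n):
            r = rate + (random.uniform(-0.2, 0.2) if random.random() < mut else 0.0)
            offspring.append(max(r, 0.0))
    random.shuffle(offspring)
    return offspring[:capacity]

pop = [1.0] * 200
for _ in range(100):
    pop = generation(pop) or pop  # guard against total die-off
print(f"mean replication rate after 100 generations: {sum(pop)/len(pop):.2f}")
[/code]

No replicator in there intends anything; faster copiers simply leave more copies, and that is the entire trick.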
Well, the reviews linked in the article are, at best, ambiguous on that front, and I haven’t seen any claims that would actually be incompatible with a compatibilist ‘reduced’ free will.
Well, yes – free will may also not exist if the world isn’t deterministic, but showing that it can exist within a deterministic world seems, to me, not quite so trivial as you make it out to be here.
How is the future not inevitable if determinism is true? I can see how you might, according to your preferences, say that since we are causally involved in ‘creating’ the future, we can indeed ‘change’ the future, but if our causal involvement is in turn fully determined by the past, then there is no more agency in this ‘creation’ of the future than there is in a meteor crashing to earth and exterminating the dinosaurs.
Just as an aside, I have always thought the assertion that to be morally culpable for your actions, you have to have free will, is a silly one. Morality is a societal construct, evolved to enable social stability; if you thus transgress these moral rules, you are liable for punishment. It’s really not conceptually different from erecting fences to ward off avalanches in the mountains – there is no agency in avalanches destroying a village, but that doesn’t mean that we have to let 'em do it.
If those ‘reasonable explanations’ are on the same terms as ‘A is a better player’ being a reasonable explanation for A winning, then I have a hard time seeing how this would constitute truly libertarian freedom. Just because we choose as the explanatory level one which is most easily accessible to us, doesn’t mean this level isn’t fully determined by lower, more fundamental levels (and to claim otherwise would be a claim for strong emergence), which the higher-level descriptions supervene on.
To return again to the example, the statement ‘A is a better player’ is fully contained in the initial state of the system and the rules of play the computers follow; however, it is not manifest, only implicit, and requires the game to be played, repeatedly, to emerge.
That it is more convenient, to us, to frame explanations about other people’s actions in terms of their character, does not imply that 1) those explanations are in some objective sense better than any others 2) those explanations are not determined by lower-level causality. (Nor does it even imply that those explanations are true – take ‘A is a better player’, again. Presumably, one can come to this conclusion after observing finitely many chess games being played between A and B. However, since this explanation leaves open a chance factor in determining the actual winner of each game, it might be that B just had a stroke of bad luck – that if one were to observe yet more games between the two being played, actually B would emerge as the – seemingly – better player. The microscopic explanation viewing each game as a causally determinate sequence of moves, dictated by the AIs’ programming, while unwieldy, does not suffer from this ambiguity. The same thing goes for ‘character-based’ explanation of people’s behaviour.)
This is far from just an aside. It may be the most important thing you’ve written in this thread. The entire reason that we ever had a concept of free will is for explaining how moral responsibility is justified. If you reject this, then what is the purpose of having the discussion?
I don’t know whether I’m communicating poorly, whether I don’t understand your argument, or whether I just disagree. You talk about “mak[ing] manifest implicit knowledge contained in the initial conditions of each problem,” but who is the knowledge being made manifest to? Us? Surely that is completely irrelevant. I’m only saying that if we have a time-reversible deterministic system, and if A and B are complete descriptions of the system at times s and t, then the statements “A (at time s)” and “B (at time t)” are exactly equivalent. Whether or not you or I, or anyone else, is clever enough or has enough computing power to actually verify that they are equivalent is irrelevant.
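If a toy example makes the claim clearer, here’s what I mean – the particular map below is my own invention, picked only because it’s bijective and therefore runs just as well backwards:

[code]
M = 2**16
A_COEF, C = 4097, 12345        # gcd(4097, 2**16) = 1, so each step is invertible
A_INV = pow(A_COEF, -1, M)     # modular inverse gives the reverse dynamics

def step_forward(state):
    return (A_COEF * state + C) % M

def step_backward(state):
    return (A_INV * (state - C)) % M

state_at_s = 31337             # 'A': the complete description at time s
state_at_t = state_at_s
for _ in range(1000):
    state_at_t = step_forward(state_at_t)    # 'B': the description at time t

recovered = state_at_t
for _ in range(1000):
    recovered = step_backward(recovered)

print(state_at_s == recovered)  # True: A fixes B, and B fixes A
[/code]

Neither description contains anything the other lacks; running the loop in either direction is mere bookkeeping, whoever (if anyone) happens to be watching.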
What exactly does it mean to be “fully contained in the initial state of the system and the rules for playing that the computers follow”? Unless it means “fully contained in the rules for playing that the computers follow, combined with the set of all initial conditions,” I disagree emphatically. You cannot determine that “A is a better player” by only watching one game, regardless of how many times you watch it, or what you look at – unless you’re bringing in information from outside the system, for example your knowledge gained from watching and playing other games of chess.
I agree with the first paragraph. I’m not claiming that higher-level concepts are more salient for all applications. I am claiming that they are more salient for applications that involve asking questions like “Why did John Hinckley shoot Ronald Reagan?” Further, I am claiming that you can’t even formulate questions like these using only the language of quantum mechanics (or physics in general). Answering this question is important for the purposes of running our society – we ought to respond differently if he was hallucinating and believed that Reagan was a rabid dog about to attack him, vs. if he was trying to get Jodie Foster’s attention, vs. if he was a secret agent sent over from the Soviet Union. This line of thought continues at the end of the post.
To your second paragraph: the information made manifest in the rocket example is meaningful only if humans project their own purposes onto it. Surely this moon-reaching trajectory is no more useful to the universe as a whole than a nearly-moon-reaching trajectory, or a trajectory that misses the moon completely.
Let me be clear: evolution has no intentions, no goal-directedness, no teleology; the products of evolution do have (some combination of) intentions, goals, purposes and teleology. What you’ve done in your first paragraph is to move the question back one step. Returning to the eye example, what is it about this structure that enhances reproductive fitness, as compared to hundreds of similar structures? In particular, a normal eye and the eye of someone blind at birth will be quite similar structures. How can we describe why the normal eye would give a reproductive advantage over the blind eye without asserting that the normal eye “works”, or that the normal eye “does something” that the blind eye doesn’t, or that the eye has a “function” or “purpose”? Honestly, if you only answer one question from this post, I’d like to see an answer to this.
Skipping a lot of discussion in the book, the conclusions he reaches are:
[ul][li]The phrase “change the future” can only mean “change the anticipated future.” Trying to force it to mean anything else produces absurdities.[/li][li]In order to make any sense of biological evolution, we need the notion of “avoiding” something, like “avoiding predators”. Claiming that the “future is inevitable” removes this.[/li][/ul]
Well, it’s not “truly” libertarian freedom, since libertarian freedom assumes at the outset that it is incompatible with determinism. But it is much more similar to libertarian free will than other compatibilist accounts are, and similar in the ways that seem to really matter.
The purpose of introducing the multiple viewpoints is not to assert that we can explain some event in multiple different ways, but to illustrate that sometimes we cannot explain some event in a certain viewpoint. How do you formulate the John Hinckley question in the language of quantum mechanics? It’s not so much that the human-level explanation in terms of morally relevant behavior is better than the quantum mechanical explanation, it’s that quantum mechanics doesn’t provide an explanation at all. The rocket example is just the opposite. There is no explanation of the rocket’s motion in the language of moral actions; we’re forced to take up the physical view. The eye example illustrates something else. Physics alone is unable to explain why that one particular structure is associated with increased reproduction, we are forced to adopt the view that the eye serves some purpose. This explanation is neither physical nor moral, but based on the idea of design. None of this contradicts that everything is determined by the lowest levels. It does contradict that everything is caused by the lowest levels.
Oh, I don’t know. Seems to me the main reason we have a concept of free will is that we appear to possess free will; that we are thus to blame for our actions is merely a consequence of that. But is this really the reason behind law and punishment? Fundamentally, when we lock up a murderer, we try to protect society. And that’s really enough justification for ruling anything to be morally wrong – that it’s damaging to society, or rather, that it transgresses rules that evolved within society to protect itself from damage. (In a way, though that may be stretching the metaphor somewhat, morality is a determining factor in the survival fitness of a society; societies that don’t effectively guard themselves against such damaging behaviour simply don’t last – though that does somewhat beg the question of what, exactly, ‘a society’ is; the US today, for instance, is surely a vastly different society from the US of the fifties, but where exactly does one draw the line?)
In any case, an environment with such moral rules (including those about responsibility) in place provides a different set of causative factors with respect to an individual’s behaviour than an environment lacking such rules does; thus, without any reference to free will, the concept of moral responsibility plays a role in determining individual behaviour, in such a way that damaging behaviour is made less likely.
I think this is a far steadier foundation to build one’s morality on than reliance on the concept of free will, or alternatively any form of moral absolutism.
I fully accept that point. What I’m arguing against is the idea that, because statements like ‘A won because the game played out that way’ and its ilk are less informative to you than statements like ‘A is a better player’, the latter is an inherently better description of the matter.
This is why I bring the rocket’s (or the chess game’s) trajectory into the discussion – it is a piece of knowledge that seems inherently more contentful than the description of the system and the rules it follows at any given point in time, yet is completely determined – implied – by that description. The trajectory seems an eminently better description of the rocket problem – after all, it actually tells us how to reach the moon, which can’t be gleaned just from gravity + initial conditions – but it doesn’t actually tell us anything different. And that, I claim, is essentially what higher-level descriptions are like as well. I didn’t make my point too well, though, I must admit, and managed to confuse myself several times in different places, so I’m sorry for the unnecessary obfuscation.
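In case the rocket point is clearer in code (all numbers below are arbitrary toy values): the printed trajectory is a pure function of the initial conditions plus the update rule, so computing it unpacks information without adding any:

[code]
def trajectory(pos, vel, gravity=-9.81, dt=0.01, steps=500):
    # the 'rules': uniform gravity, integrated step by step
    path = []
    for _ in range(steps):
        vel += gravity * dt
        pos += vel * dt
        path.append(pos)
    return path

# everything printed here was already implied by (100.0, 20.0) and the rule
print(trajectory(100.0, 20.0)[:5])
[/code]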
Nevertheless, that A is a better player than B surely is a direct result of A’s programming, right? So in that sense, this knowledge is just as implied by the ‘micro-state’ of the system as the rocket’s trajectory is. (Never mind the fact that, as I pointed out in my last post, it’s not strictly speaking true that one can conclude with certainty that A is a better player from any finite number of chess games observed. Also, strictly speaking, A being a better player is actually a fact about a system different from the one chess game we’ve been looking at – it’s a fact about a system containing several different chess games – but I’m not sure that matters all that much.)
Well, the question would take a different form, talking about different concepts; but there is some isomorphic set of facts to be analysed on the low level (otherwise, this would again imply strong emergence – I think): Let’s for the moment pretend that we had a nice, simple, Newtonian billiard-ball universe to work with. The micro-level, then, is a really, really large amount of interacting billiard balls – that make up Hinckley, Reagan, their thoughts, memories, personality, the gun, the bullet, the air between them, everything else. The question ‘Why did John Hinckley shoot Reagan?’ then becomes a question about the movement of these billiard balls – a question whose micro-level answer determines the macro-level motives of Hinckley. That it’s usually useless to talk about such questions on the micro-level is simply a matter of practicality, nothing else.
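To shrink that supervenience point down to something checkable – everything below is invented, and far cruder than billiard balls, but it has the same shape: the macro-level statement is true purely in virtue of the micro-state, with nothing further added:

[code]
micro_state = [(-1.2, 0.4), (0.9, -0.3), (2.1, 0.0), (-0.5, 1.1)]  # (position, velocity) pairs

def macro_description(state):
    # a coarse-grained fact, computed from (and supervening on) the micro-state
    mean_velocity = sum(v for _, v in state) / len(state)
    return "drifting right" if mean_velocity > 0 else "drifting left or still"

print(macro_description(micro_state))
[/code]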
Well, yes, that’s the point of re-framing the example: the greater salience of explanations on a human-accessible level is merely a consequence of the fact that it’s on a human-accessible level, where we can project our own prejudices onto the matter.
I think there may be a level of meaning here that I’m not getting; otherwise, the answer would simply be that seeing people are less likely to wander off cliffs, and thus have higher chances of reproducing. ‘Function’ and ‘purpose’ are not the same thing – the eye was not constructed with the purpose of enabling sight; nevertheless, it fulfils this function. Similarly, a stone being dropped was not dropped with the purpose of finding the lowest gravitational potential; nevertheless, that’s the function it’ll fulfil. The intentions, goals, purposes, and teleology of evolution’s products are simply descriptive shortcuts; ultimately, all of them can be explained in terms needing no more purpose than the falling stone has. It’s just rarely convenient to do so.
If what you wanted to get out of me is a concrete physical description of the eye’s benefit, well, such a thing would probably exhaust the maximum post length (not that I believe I could give one even if given unlimited space; I tend to overestimate my capabilities, but not quite by that much). But a simple toy model can easily be given, in the form of any number of robots using some form of visual system to avoid obstacles (like cliffs). I trust that you’ll grant that everything going on in such an apparatus – from light falling onto some CCD chip, which creates a bit pattern in the form of high and low voltages, to those bit patterns being acted upon by various logic gates, ultimately causing motors to activate in a particular fashion, thus altering the robot’s course – can be perfectly well described physically. Now, if robots reproduced, we’d have at our hands, aside from the likely extermination of all meatbags at the claws of our metal overlords, a good model for the evolutionary benefit of a working eye – because those robots whose CCD chips don’t work properly will tend to drop off cliffs far more often than those with working chips, therefore being less likely to successfully reproduce.
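For what it’s worth, here’s a crude executable version of that toy model – every parameter is made up, a single ‘sensor quality’ number stands in for the whole CCD-and-logic-gates apparatus, and cliffs simply cull:

[code]
import random

random.seed(3)

robots = [random.random() * 0.3 for _ in range(100)]   # start with poor sensors

for step in range(200):
    # robots with better sensors are less likely to drop off a cliff this step
    survivors = [q for q in robots if random.random() < 0.5 + 0.5 * q]
    robots = []
    for q in survivors:
        robots.append(q)
        # survivors reproduce; copies inherit sensor quality with some noise
        robots.append(min(1.0, max(0.0, q + random.gauss(0, 0.05))))
    robots = random.sample(robots, min(100, len(robots)))  # finite habitat

print(f"mean sensor quality: {sum(robots) / len(robots):.2f}")
[/code]

Every line of that is straightforwardly physical (well, computational), and yet ‘working sensors get selected for’ drops right out of it.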
The first point is, again, a bit of redefinition that I’m not exactly comfortable with, but can let stand as is; in particular, the insinuation that ‘the anticipated future’ has anything to do with the future at all appears a bit strange. The second point is more interesting – why does one need the notion of avoidance to make sense of evolution? Organisms either avoid or they don’t, in which case they’re eaten. There’s nothing anticipatory in the notion: a random mutation leading to an organism not really caring about big things with sharp teeth tends to circle down the gene pool’s drain fairly quickly, while anything that hides in the bushes has much greater chances. There doesn’t seem to be any conflict with an inevitable future here.
Yes, but the question doesn’t matter on the quantum level – that’s the thing. There’s conceivably a complete quantum description of everything leading up to the assassination attempt that’s just as satisfying as a higher-level assessment; however, it concerns itself with quantum objects rather than John Hinckleys, Reagans, motives and things like that. But there’s nothing missing from it, and the high-level description doesn’t add anything. It’s just re-phrased in a different conceptual framework, is all. It’s not that the question about Hinckley’s motive doesn’t have any answer on the quantum level; it’s that it’s transformed into a question of quantum interactions, the answer to which is exactly as contentful and complete as him wanting to impress Jodie Foster is.
[QUOTE]
It’s not so much that the human-level explanation in terms of morally relevant behavior is better than the quantum mechanical explanation, it’s that quantum mechanics doesn’t provide an explanation at all.
[/QUOTE]
It provides a different, but equivalent answer to a different, but equivalent question (that generally no human being could ever fully formulate or appreciate, but still).
Well, I’d say that there is an explanation (or at least, an equivalent explanation) of morals in the language of physics, so that both views are, again, only separated by convenience.
I think that physics is perfectly able to explain why a functioning eye is beneficial to reproductive fitness, but again, there may be a level of meaning here that I’m not getting.
But it does not contradict – as I would put it – that the higher-level causal description is at best equivalent to the causal description on the lower level; and in fact, in general, you can only lose information moving upwards.