Why does calculation need sentience? You’re right that I consider it an utterly arbitrary distinction (and I usually prefer the term “red herring” to “arbitrary distinction”), and so I don’t see how you can say that making it is universal.
And how is a random output “free”? Where is the “free will” in a dice roll, or weather pattern?
Just as a chess computer is programmed not to make illegal moves. It still outputs a decision based on some blind, soulless process. I argue that we do the same.
I say not that the rainbow does not exist. I say that it is an optical illusion. I say not that the experience of ‘will’, ‘choice’ or ‘volition’ does not exist. I say that it is a neuropsychological illusion. In this instance, I feel it useful to take a step from physicalism towards outright eliminativism.
What did you actually do?
If by body you encompass brain, then yes.
Yes, I believe it is: if “that” had consequences which you didn’t want, you must provide inputs which affect future calculations.
If I may ask a direct question of all here who believe in “free will”: which of these entities do you, personally, consider to possess “will”: electron, thermostat, amoeba, dice, pocket calculator, weather, plant, chess computer, insect, supercomputer, lizard, advanced neural network, chimp, human? If you say “none” or “all”, then we essentially agree (dependent on the precise definition of ‘will’).
You posted that “if ‘calculation’ is all that is necessary for ‘will’, then surely other ‘calculators’ have it too: the thermostat ‘chooses’ the room temperature, the amoeba ‘chooses’ the warmer region, the computer ‘chooses’ the chess move, and so on.”
I replied that no one who speaks of free will as existing believes that “‘calculation’ is all that is necessary for ‘will.’” Rather, calculation plus sentience, at least, is required.
In short, it’s not calculation that needs sentience to be calculation, it’s free will that needs sentience to be free will.
Or am I misunderstanding your post; were you asking: “why does calculation need sentience to be ‘free will’?” If so: because that’s how the term “free will” is used. There’s no problem with arguing that the will is an illusion, but it would be a pretty silly argument if you didn’t take sentience into account. Or, rather, it would be a lonely argument: everything else would be talking about that thing experienced and/or performed by sentient beings.
Yes, and I argue that such use is equivalent to that of, say, ‘Hell’. If ‘sentience’ is a necessary element of ‘will’, then the question is merely removed to that of whether sentience is illusory or, at least, what specific element of this epiphenomenon called sentience is necessary to the ‘will’. Is it, say, senses, or memory? In that case we might incorporate some simple sensory-memory elements therein: think of a speed camera. Information from a sense is continually fed into a control module, which makes the decision to trigger the flash based on sense inputs and memory inputs, upon which an image is committed to memory. I argue that the human uses free will to act on its decisions only insofar as the speed camera uses free will to remember the image.
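The speed-camera loop described above can be put as a minimal sense-memory-decision cycle. A hedged sketch, purely illustrative: all class and variable names here are invented for this post, not any real camera API.

```python
# Minimal sketch of the speed-camera example: a control loop whose
# "decision" to capture an image follows mechanically from sense
# inputs and a stored threshold. All names are hypothetical.

SPEED_LIMIT = 60  # km/h; an assumed threshold for the sketch

class SpeedCamera:
    def __init__(self, limit=SPEED_LIMIT):
        self.limit = limit
        self.memory = []  # captured "images"

    def decide(self, reading):
        """The 'decision': trigger the flash iff the input exceeds the limit."""
        return reading > self.limit

    def sense(self, reading):
        """Feed one sensor reading (km/h) into the control module."""
        if self.decide(reading):
            self.memory.append(f"image@{reading}km/h")  # commit to memory

camera = SpeedCamera()
for speed in [55, 72, 60, 91]:
    camera.sense(speed)

print(camera.memory)  # → ['image@72km/h', 'image@91km/h']
```

The point of the sketch is that nothing in the loop changes if we relabel `decide` as “choose”: the output is fully fixed by input plus stored state.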
All arguments start out lonely. I am attempting to change the use of the word ‘will’ to make it less misleading and refer more clearly to the outputs which we as biological computers cannot help but make.
A gedankenexperiment: Someone else uses the fingers on both of my hands as an abacus in order to calculate, say, 11+12 in base five. I’m not quite sure what he’s doing, but at the end he tells me to say how many fingers I’m holding up. “Seven” I say.
I am a sentient entity who has made a calculation and output the answer.
Did I choose that answer with my free will?
This example, like the Chinese Room, is more a discussion of what is meant by “consciousness”, “sentience”, “self-awareness” or the like, rather than “free will”, and is thus rather a red herring here, I feel. Another, just as arbitrary, difference is that I am alive where the weather and chess computer are not. Let us examine the entity itself rather than stipulating others with which it must associate.
Ok, that’s an argument that human cognition and decision making are not qualitatively different than that which is performed by a computer. It’s an argument that I understand, and it’s not the kind of thing I was objecting to in post #35. It was a tangential point, anyway.
Hmm. We may be saying the same thing but using our words differently.
I’ll grant that we’re in a deterministic universe and that, from the perspective of a timeless and/or omniscient being, our actions are completely pre-ordained, and therefore “will” is illusory.
What I’ve been trying to say is that perspective matters. What is true from one frame of reference does not necessarily hold from another. From our frame of reference, “will” is a very real thing. You need not give preference to your own perspective when forming definitions, but I don’t see why you shouldn’t.
Or indeed, from the perspective of the universe, of which we are a part. I’m not sure whether everyone in this thread is singing from our mutual choirbook.
Just because something seems real doesn’t mean that it is. Does the sun really orbit the Earth? Is the rainbow really there? Indeed, did reality begin when I was born? I would suggest that the very pursuit of physics is all about ignoring the human perspective which, after all, wasn’t there for 13 billion years, and concentrating instead on the universe.
I have a lot more questions than answers regarding “free will”, so if you don’t mind, I’ll just occasionally make noises from the sidelines during this debate.
For clarity and simplicity, let me premise this with the most no-frills example of “free will” I can think of:
At a party last night I was leaning against a wall, sipping a glass of cheap Shiraz while various groups were chatting all around me and a band was playing across the room. I could seemingly freely choose to focus my attention on the conversation next to me, or on the band, or with a little effort, on the traffic noises outside.
My question: Why is it, as a biological computer, that I “cannot help but make” these little switches in the focus of my attention when they have absolutely no effect on the world external to me? From an evolutionary standpoint, why do I have this capability?
Hey, I’ll bite: Just what is this intention? And why does it seem to us that we have it when it also seems to us that other complex, natural systems don’t?
Hello again, friend. As I told Gyan, I would always advise a little caution in answering “why” questions by employing evolution. Evolution selects from conditions or situations which have already arisen, but sometimes the consequences of a condition regarding that selection are simply no big deal: evolution selects neither for nor against it, and so “why” might be an irrelevant question to ask of it.
Nonetheless, I feel it does have relevance here. Watch a small garden bird as its attention flits this way and that: it is merely seeking information from a new region of its sensory field after the last one provided no input worth ‘bothering’ with. One could conceive of a security camera in a casino which automatically tracked and scanned the upper portion of any moving ‘blob’ and used face-recognition software on the input, thus flagging up known card-counters to its human master. Shifting ‘focus’ from conversation to band to traffic is just what the bird does, or the camera.
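The bird's flitting gaze, mechanised, might be sketched like this: a scanner dwells on a region of its sensory field only while that region keeps yielding information worth ‘bothering’ with, then shifts focus. The data and function names are made up for illustration, not taken from any real attention model.

```python
# Sketch of attention-shifting: dwell on a region while it yields
# information above a "boredom" threshold, then move to the next one.
# All names and numbers are hypothetical illustrations.

def attend(regions, boredom_threshold=0):
    """Scan regions in turn, logging input until a region goes quiet."""
    focus_log = []
    for name, signals in regions:
        for strength in signals:
            if strength <= boredom_threshold:
                break  # nothing worth 'bothering' with: shift focus
            focus_log.append((name, strength))
    return focus_log

# information arriving from three parts of the sensory field at the party
field = [
    ("conversation", [3, 2, 0, 5]),  # goes quiet, so attention moves on
    ("band",         [4, 1, 0]),
    ("traffic",      [2]),
]
print(attend(field))
# → [('conversation', 3), ('conversation', 2), ('band', 4), ('band', 1), ('traffic', 2)]
```

Note that the “choice” of where to attend next falls out of the loop structure; there is no separate chooser inside the function.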
I know it really really feels like “you” control it. But that automatic camera is you. As in Libet’s experiments, the ‘choice’ is made subconsciously (ie. “by your brain”), and “you” are the last one to find out about it.
Sentient,
You’re right, “why” questions can be misleading and inappropriate when discussing evolution; I apologize for the sloppy wording.
Hmmm… in birds, attention seems to be limited to being triggered or driven by, for example, seeds (as a precursor to eating), or my cat (as a precursor to fleeing). But while all sorts of environmental stimuli can trigger or drive my focus of attention, there seems to be a lot more to it than that. Generally speaking, I can (seemingly) hold my attention to whatever I choose for as long as I choose. I can choose not to eat even when I’m hungry and there’s food available. I can deliberately endanger myself by bungee jumping off a cliff, or override my amygdala completely by suicidally jumping off a cliff without bungees. If my genes want to replicate, I can tell them to bugger off by getting a vasectomy.
In short, I can muck around with some very strong evolutionary imperatives… but how did this ability (or programming) survive, let alone arise in the first place?
As Dr. Love pointed out, Dennett has an alternate interpretation of Libet’s experiment and its results, and Dennett’s not alone: I’ve read a half-dozen plausible interpretations of Libet’s work, and I just can’t accept Libet’s experiments as evidence of anything until more experiments are performed, especially since Libet himself believes in free will (More accurately, Libet believes in free won’t: He thinks that brain processes initiate motor output, but our (non-material) volition can veto the processes. The brain says “Go”, but we can say “No”).
I think everything we do, including changing our focus and our train of thought, is motivated by “drives”. i.e. we’re seeking or avoiding things in order to seek or avoid little chemical signals in our brains. (something like that) So we just do it. Even worrying about whether we should be changing our focus would be motivated by the same drives. Lots of authors have identified various drives (which I haven’t read)… many would say that we seek novelty or newness to some degree… this would be a drive which evolved because it would help our survival - it would cause people to explore things and become more intelligent by learning more about how the world works, so that they can solve problems. I think we have a need for “connectedness” as well - which involves a sense of belonging and order. This would stop our novelty-seeking cravings getting too out of control. And of course there would be many others, and at any particular point in time some cravings would be stronger than others, and based on our current circumstances we’d work out what we want to do.
Anyway, to respond to the first part of what I quoted, you seem to just be talking about our selective attention when it comes to hearing. This is also true of sight, etc. I think the reason we do this is because I think our working memory, where I think our decisions are carried out, can only handle maybe 10,000 little bits of information - so our sensory information can’t just be handled “raw”.
e.g.
Check this out. Note that this is NOT an animation. I think this demonstrates a flaw in the shortcuts our brain uses in order to compress sensory information: http://mathworld.wolfram.com/ScintillatingGridIllusion.html
An unbelievable optical illusion: http://en.wikipedia.org/wiki/Image:Optical.greysquares.arp.600pix.jpg
Being able to focus on (extract) a particular sound from the rest of the sound is important because it means that the focused-on sound can be at a higher quality when it is brought to the working memory (IMO, since the working memory has a very limited capacity, and I think the reason is so that associations are easier to form). This means that in noisy situations, our ancestors would still be able to recognize sounds such as wild animals or footsteps quite well. BTW, our stereo hearing is probably partly responsible for us being able to separate “sound channels”. Another way would be pitch, and another would be by the sound’s type (“timbre” - the way in which sounds such as musical instruments sound different even when their pitch is the same) - and maybe also loudness. (so you could concentrate on a loud or a soft conversation)
I would correct you slightly: you hold your attention for as long as you do. A bird might similarly do so, but “evolution” comes in if it happens to spend so much time focussing on an uninteresting region of its sensory field, when a cursory glance was just as useful cost-benefit-wise, that it was outcompeted for food by the cursory-glancers.
As I have said throughout the thread, the answer to a question like “why do I not eat even when I’m hungry?” may well be unanswerable in that the causes of that action are not knowable (even theoretically if, as Penrose suggests, the ‘calculation’ is deterministic but non-computable, rather like a triple billiard ball collision). But the decision could still have been output based on prior inputs even if some of them were effectively random.
Suicide is the ultimate decision. Just as a computer could output a ‘wrong’ decision based on environment, memories (ie. programming), feedback and chemical equilibria (or inequilibria), so could a brain. Perhaps that ability to output any decision, not just those which are somehow ‘evolutionarily intuitive’, is itself an evolutionary advantage overall to humans as a species.
Agreed - it is an attempted demonstration of the very simplest elements of this thing we call ‘volition’, and hardly applicable to ‘real life’ thinking or decisions. However, like those examples of volition disorders I gave before, I consider that the experiments strongly suggest that each decision becomes “conscious” from an unconscious or subconscious origin. If they don’t suggest the same to you, or him, so be it. I usually have little regard for the eliminativist position, but in this instance I think it clears some unnecessary fog.
I think the ability of humans to commit suicide has only been around since we have been able to think very abstractly (about the far future, the death of ourselves, etc) and also since we have been able to use language. Also, about 0.02% per year of the population commits suicide and often this would happen after they’ve reproduced. To eliminate a trait (assuming suicidal tendencies are genetic) virtually none of those with it should reproduce.
That just means it is driven by concrete goals and stimuli rather than abstract ones.
I think the only way you can do this is by making it seem more attractive for you to resist eating rather than to eat. i.e. maybe not eating in that instance would instead trigger off some attractive novelty-seeking and power-seeking drives. (assuming we seek power - in that case you might feel powerful because you explicitly believed that you’re in control)
In that case, suicide would seem like the most attractive course of action. For our ancestors though, they wouldn’t even have been able to conceive of suicide. I mean I don’t think a chimp is able to think about their suicide or their inevitable death or related things like a possible afterlife, etc.
I think those kinds of decision-making processes are self-learnt (Piaget-style). i.e. I don’t think our genes have any say in abstract things like that.
I think that’s true, at least in some cases…
e.g. http://www.nasa.gov/centers/ames/news/releases/2004/subvocal/subvocal.html
This talks about NASA’s device which can detect the voice we use when we’re reading or talking to ourselves. This voice would be inhibited since it is silent - yet it still exists in some form in the throat.
No, that’s not the argument I was making. It’s more than just a debate about whether the universe is deterministic at the quantum level; we’re also talking about how the brain functions on a macro level.
It’s not a matter of religious philosophy, it’s a matter of empirical evidence. Sentient has already covered it pretty thoroughly, so I can only defer to his posts. What we’re talking about is the difference between the traditional notion of consciousness, where there is an “ego” which “decides” to do things, and the alternative picture. If I’m understanding Sentient’s point, he’s saying that the action happens first and then the ego becomes aware of it. But then it’s essentially stored backwards in memory, so it appears to us that the ego made the decision.
If you make no such distinction, and call all such impulses the “will”, I think it’s inconsistent with the traditional notion of what “will” is. If you’re just saying that everything the brain does is “will”, would that mean that you will your heart to beat?
You’re right, completely random output isn’t free. I was using that as an example of how free will and the question of whether someone could have done other than what they did are unrelated. That is, free will does not require that you could have done otherwise.
A different question for anyone who’s willing to answer: where in the brain are you, the thing that experiences consciousness? Is it possible that your placement in the brain might affect your judgement of simultaneity? How about some examples of how that might happen:
Option C seems to be what most here are suggesting, where you sit awaiting input from the other centers of the brain, and answering with a simple yes or no. Does this seem to be a reasonable example of how consciousness works? What makes it any more likely than the other two?
Maybe there’s an alternative. Perhaps, instead of ‘you’ existing in some specific part of the brain, you exist throughout the brain, or more correctly, the different parts of the brain are different parts of you. If this is true, then there is no single, indivisible you–you are a collection of different entities. “You are not out of the loop; you are the loop,” as Dennett puts it. Under this view, conscious actions are spread out over both time and space: consciousness doesn’t do anything at time t, it only acts over some span of time Δt (if that doesn’t show up right, it’s supposed to be delta-t). As such, the time t that Libet was looking for, the time when we become aware of conscious decisions, doesn’t exist. And when you try to make the data fit something that doesn’t exist, strange things seem to happen (remember the aether that light was supposed to travel through?).
But that is not how we use ‘will’, either. As I indicated, it seems to me that we most commonly use will when the consequences of our actions are in line with our intentions. It is not an explanation of the events. It is an expression of alignment between interest and effect.
Nonsense. Again: is it any more clear to ask you to hand me that stick with straw attached by twine than it is to ask you to hand me that broom? I might say that you would probably perform the same action in either case. Which shows that a description can replace a term that denotes an empirical subject (as we would expect). But it is my claim that ‘the will’ is not such a subject. There is nothing to dissect.
Only of the human can we say it with any measure of confidence, because only with the human are we used to deducing or outright asking about intentions. Without intentions, there is no will. The will’s freedom is only linked to normal use of the concept of compulsion or coercion, not some metaphysical independence of causality. Maybe that makes me a compatibilist. :shrug:
Well, we are now full square in the debate of consciousness rather than of this thing called “free will”, but I’ll go along with it so long as we keep the OP in mind. Again, I would have to say that both “you” and “consciousness” have illusory aspects to them IMO. “I” am a unique string of memories being added to continuously by my senses, and that process of sorting sensory input into memory while all the time reactivating various memories in order to compare, communicate and cross-file them efficiently might well explain an awful lot about this thing we call the “experience of consciousness”. And so we could even indulge in some of Dennett’s eliminativism and state that there is no “you”, “consciousness” or “free will” - there is only his “loop”, just as one might say there is no rainbow, only light and water droplets.
Of course it doesn’t, if consciousness itself does not really exist. And in Kornhuber’s original experiments, the “gradual onset” lasted over a second (compared to less than a fifth of a second when the act was involuntary). What we are looking at in that case is simply “how long does a computer take to output a decision?”, which I think is rather a bifurcation from the OP. We all agree that both humans and chess computers make decisions. I argue that when, or indeed whether, the entity is conscious of its mechanical, deterministic, pre-destined decision is a red herring in a debate about “free will”.
That we are incredible computing devices whose outputs are fully causally determined by prior inputs. If that is “free will” then so be it, but it certainly doesn’t sound like it to me.
I argue that intention is every bit as much a predetermined ‘output’ as the action itself: consider the chess computer attached to a robot arm which moves the piece. The move is decided, and a series of motor control commands calculated to “realise” the move. After the decision, but before the action, does the chess computer intend the action?
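The chess-computer-with-arm example can be laid out as a sketch, assuming a toy engine and a toy arm (none of these functions belong to any real chess or robotics library). The interesting moment is between steps 1 and 3: the machine holds a fully formed, not-yet-executed plan, which is the analogue of an intention.

```python
# Sketch of the chess-computer-with-robot-arm example: the move is
# decided first, then translated into motor commands, and only then
# executed. Every name here is hypothetical, for illustration only.

def decide_move(position):
    """Stand-in for the engine's search; pretend 'e2e4' scored highest."""
    return "e2e4"

def plan_motion(move):
    """Translate the decided move into a sequence of arm commands."""
    src, dst = move[:2], move[2:]
    return [f"grip {src}", "lift", f"move-to {dst}", "release"]

def execute(commands):
    """Carry out the planned commands."""
    return [f"done: {c}" for c in commands]

move = decide_move(position=None)   # 1. the decision
plan = plan_motion(move)            # 2. decided but not yet acted: 'intention'?
acts = execute(plan)                # 3. the action

print(plan)  # → ['grip e2', 'lift', 'move-to e4', 'release']
```

On the argument above, whatever we call the state held in `plan`, it is just as much a predetermined output of prior inputs as the action that follows it.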
Well, I say it is. I am attempting to change our use of the word.
That’s like saying that I’m not free because I live in a society of laws. I realize that some people do consider the will as “some kind” of subject that is “free” in a sense that is not what we usually use. But even that will would hardly be interesting if it did not, er, operate on inputs. As to its freedom… well.
No. But that is not to say that ‘intention’ is some non-computational activity or result thereof. It is simply to say that we do not normally suggest that computers intend results. Intention, a complicated subject in itself, can boil down, in many cases, to satisfying a motivation. For instance, I am motivated by thirst, and so I go to the faucet with the intention of satisfying that thirst. We could say, with some straining of the language, that it was my will that my thirst be quenched, so long as my thirst was actually quenched. The “underlying cause” of the motivation was not in question, and the statement of motivation is not an answer to that kind of question. We tend to use motivation as a cause of some action, which is the action we intend, and the action we intend is that which we believe will satisfy our motivations. When such actions are successful, we could say (but usually do not put it that way) that we willed the particular result. “Willing” is often like “knowing”, grammatically, in that its expression suggests the conclusion. If, for example, I say that it was my will that the bathroom be clean, then we could safely conclude that the bathroom is in fact clean. So for example, one says, “I will it, but it is not so,” as often as one says, “I know it, but it is not so.” Which is, normally, not at all. (But that is not to say we couldn’t imagine applications for those sentences.) (Also reference the previous “account” of willing the matchbook; i.e. “willing” as “concentrating”, another possible use of the word.)
In that sense, which is common, will is like a deflationary theory of action. It does not suggest causes. Its purpose is simply to cast aside such causes. Why did I do it? --It does not matter why. It was my will that it be done. But of course we can offer some additional support. For example, we can say that it was what I intended to do, and it was what I did. Or we can say that I had a certain motivation, and so I knew what I had to do, and I did it.
There are of course some uses of “will” that do permit results to be otherwise than what was willed. But these cases are generally those where “will” is used as a substitute for “a strong desire or wish.” And of course there is no requirement of desires or wishes, however strong, that they be fulfilled. It is quite possible that some people think of their relative “freedom” to desire and wish and conflate it with the use of the word “will” in other circumstances. Now, the use of one word in many contexts is all well and good. But to suggest that there must be some underlying sense to the word that links all these cases together is to beg for problems.
Note that this account of will is not meant to create an explanation of events. It is therefore my continued position that no amount of explanation will replace it. Accounts of motivation, intention, and will usually collapse explanation; they do not replace it. That we might receive such an account when we press for an explanation is not evidence that it is an explanation, any more than dismissing a question with a wave of a hand is evidence that such a wave is “somehow” an answer.
As to the freedom angle, I believe still that the matter lies simply in how we normally contrast freedom with an opposite. For instance, I am at work, and my boss takes me off one project and puts me on another. In such a case, was I free, or forced? Well, the choice was not mine… even if I had other choices available (say, quitting). All other choices being equal, I was forced to work on one project and not another. Now, how are we to analyze my freedom in this case? Do not think that I look for a causal mechanism as an answer. Perhaps I was compelled by physical laws beyond my ken to write “4” after “2+2=”, but that is not what you meant when you asked me “What is two plus two? Write the answer here.” And perhaps you can create something that sounds like a causal explanation but that is not an answer to my question. If you feel it is, I do not feel we are using the word “freedom” in the same way.
Some offhanded remarks: willing is not an action, in that I cannot will willing. Willing is not an action in that I cannot try to will, I simply “will”; though it is worth remarking that what I cannot try to do I cannot fail to do, and I don’t mean that in a generic sense for the opposite is true, as well: where I cannot fail, I cannot try (the expression of “trying” requires the possibility of failure). Willing is not an object, even though it often plays the role of one in sentences like, “It was my will that it be done.” It expresses a relationship between intention and the consequences of an action. Compare “my mind’s eye”, “my last chance”, “my first true love”, and so on. Many objects in sentences do not have physical referents, though they play the role of physical referents. Similarly with “my intention”, even if we can demonstrate beyond any doubt that an intention is some specific neural pattern, no object or collection was ever meant by it in the first place so such a correlation is irrelevant to its use in language.
I think the reason we don’t normally suggest that computers intend results is due to their current simplicity compared to the human brain/body (I include body because I think the entire integrated package is important to our calculations).
I think that as we learn more about mimicking human thought processes (e.g. neural networks instead of traditional algorithms), and build layer upon layer of abstraction into the calculations, we will indeed end up with computers where we speak about what they “intended”.
You’ll have to actually explain the particular things you mean in order for us to consider whether it is problematic. At first blush, I don’t see how it is. The argument is first of all a lot more complex than that (since it includes explanations of WHY it might be useful to identify with your own will, at least evolutionarily), and second of all, if thoughts have effects, then the thought that determinism is true probably has some effects as well: though likely increasing our freedom rather than restricting it (since knowledge is power).
But the same MUST be said of any ultimate explanation of the functioning of all observed phenomena. I agree that determinism as a model is uninteresting. But that’s only because no one has ever described a coherent alternative to it: indeed in part because determinism is basically a statement about the logical coherency of the world.
The problem with your first two perhapses is that I can think of explicable, describable alternative situations to them.
I can’t do so with determinism.
Again, I agree in the limited, broad sense, but what about the specific instances? What about those cases in which we clearly feel we are responsible for choosing some action, and are definitely wrong about it? Doesn’t that tell us something about how our minds function? And, with enough technology, doesn’t that give us insight into the different alternative ways our minds MIGHT be able to function instead?
This is a little hard to follow, but I will try to point out that even simple robots can be logically right or wrong: the correctness of a view is a standard independent of why we came to hold or defend that view in the first place. And even if we don’t “choose” to choose what we choose, it’s still “us” doing the choosing: whatever it is that we experience as ourselves is the motivator: though perhaps until now we’ve had a far too limited concept of what our “self” is, which leads us to make errors and mistakes when trying to figure out how to change or account for our own behavior. In that sense, that’s why “illusion” is the wrong word. Misunderstanding or misinterpretation might be nearer to the mark.