the chemistry of free will?

Spiritus, you are a great guy, but you are ultimately frustrating to debate with. There is no way in hell that I am going to develop a complete case for my passive existence. That you would even request such a thing is over-the-top. I can certainly point you to the readings that led me to this path, but for goodness’ sake, people write entire books on the problem of universals and come to no conclusion. I certainly do not have the capacity to espouse a complete theory of passive existence here.

Understand that the point of my postings above was to start from an intuitive and decidedly unspecific definition of “free will” (actually, of will in particular) and try to find out just what will does. I have no qualms if you choose to feel that will is arbitrary; I would then level the same accusation of disusing the word as you have levelled at me. In fact, that is my whole point in saying that will, as it is commonly understood, does not exist.

There are many places the systems come from. Some come from our biological makeup. Others are imposed on us as we age. Still others are synthesized from existing systems upon the discovery (not active discovery, of course, but the mere “happening upon”) of other systems. Many of us on this message board hold logic as one such (complicated) system and use it as a tool when we encounter new systems. There are certainly atomic systems, but they are arbitrary and are a matter of environment: literally (our surroundings), semi-figuratively (our social environment), and completely figuratively (sudden “revelations” as they may or may not come).

Which systems should be used where is a matter of association that is not truly a part of the system at all but is also simply forced upon us in a manner similar to that outlined above. von Neumann’s game theory is one such example; all “creativity” is in fact the use of systems in manners which are not normally associated, but those associations are, as I’ve said, arbitrary. Thankfully so, too, or analogies would be completely lost on us.

The specific problem I have with compatibilists isn’t that they try to merge free will with determinism, but simply where the hell they put the will. Frankly, in any discussion of free will, even if one were to completely deny causality, I wonder where they put the will. Reflecting on my decision making process demonstrated to me that, in fact, I never really intend to do anything that I do. I feel uncompelled, and for good reason, too, since all the data I have to go off of is encapsulated in my idea of self. If I’m not doing it, then who is? Of course, I am doing things, I just don’t will them to happen. I am who I am. I am the result of a whole slew of arbitrariness. I observe what is going on around me. I am doing these things! I simply am not willing them to happen: there is no room for will, IMO. It falls apart in my hands like a sand castle, and has as much substance.

The central readings that have made the case for me (literally, if I am correct ;)) are taken in part from, and also synthesized from, the following books: Gödel, Escher, Bach; The Emperor’s New Mind; and Consciousness Explained, by Hofstadter, Penrose, and Dennett respectively. GEB deals with two things of specific interest to me: recursion (and the corresponding halting problem) and reductionism. Most important, I think, is the notion that ants don’t mean to make anthills, but they always do (that is, you can’t point to an ant and say, “He knows how to make an anthill”). From Penrose’s work I took some more ideas about basic computability, and specifically some notions about what consciousness and will must do with regard to scientific studies (he ends on a note that consciousness might have something to do with accumulated quantum effects, but that just seems a lot like reaching to me). From Dennett’s work I took the case he develops for non-centralized consciousness, and the idea that the brain handles tasks in a manner that is not immediately available for reflective observation, due largely to problems inherent in doubting perception.

I have no doubt that I am conscious; I am not certain what I mean by that. I have no doubt I am the one doing the actions; I am not certain that I intend to do any actions (will). I have no doubt that causality, in some form, operates on the universe at large and my constituent parts; I am fairly certain that whatever form of causality exists leaves me little quarter to have a will that is not, itself, a result of causality. So yes, in effect, I am stating that consciousness is really just an epiphenomenon, and were it not for the notion of self it would be even more apparent.

For some specific responses:

Simply ridiculous :smiley: The answer is, in effect, no. You cannot go back in time; or rather, until you can go back in time, relive the past exactly as you did then, and remake a decision, you won’t have a choice. You ate turkey for Thanksgiving. You have no choice in that. You cannot undo that. It is now a matter of fact. To say you could have avoided eating turkey is the subtle mapping of a thought experiment onto remembered parts of your past, and saying that you had a choice is false; to say you would have a choice if you were in such a situation again is true, I won’t disagree. But what is done is done; there are no choices about it. You might even tell me, forcefully, “Hey, I remember having the choice to eat or to not eat turkey.” That’s fine. Be my guest.

Your first post to this thread mentions that you find neither an unbreakable chain leaving no quarter for free will nor a point where you can certainly find it. Your second post then offers some alternatives for me to tackle (that you don’t necessarily hold true) and your most recent then requests that I address them directly.

No argument. There never was, IMO. I am not sure what you are trying to read into my statements. I would address this later by querying what, exactly, “wishes” was meant to imply. I have not said that it mattered whether or not there were other choices available to the agent in question. So long as there was one choice (which is actually impossible, as the evaluation is always “to or not to”), it was possible, at the point we are discussing, to exercise will.

Perhaps you feel I am, at my core here, a compatibilist in that the wishes represent the weighted systems. I would not have a specific quarrel with that. I simply feel any definition of “free will” which relies on arbitrariness, with no source of motivation apart from the arbitrariness, to be contrary to what is commonly understood by the term “Free Will”. Free will is commonly understood to be a conscious agent meaning to make, and then making, choices; I reject that, and so I reject the notion of free will. I have no beef with compatibilism otherwise, except that it certainly leaves the impression of activity. Hell, some compatibilists might even demand activity. I merely argue that it seems like we are active agents only because all our data, and the systems themselves, are understood to be part of the self. This is not a huge quandary; it is simply, IMO, misleading to assume that because our weighing is internal it is somehow consciously active. If I am effectively agreeing 100% with compatibilism then so be it.

Another statement of free will you made was as follows: Free will, as I understand it in this context, is the proposition that operations of consciousness can play a determining role in future actions. I thought I addressed this previously, but I can do so again. I don’t think there is any doubt whatsoever that anyone who uses the term “free will,” including myself, has ever for a moment thought otherwise. What you offer is indeed a very intuitive definition of free will. It is also a fairly empty one, which is what led me down this path in the first place to even ramble on about the topic. Hence the following comment:
[li]But don’t think that there must be free will. I’ve never seen the determinism/free will debate be anything other than a series of assertions.[/li]
I did intend to develop my point; I did not intend to be able to do so without making assertions. I certainly could not do so without stating something more clearly than “Free will is something that does stuff.”

You simply cannot be satisfied, LOL. If I define a term explicitly you get upset because that is not the common use; if I redefine terms throughout the development of a point as “new information” comes to light you also get perturbed. Allow me, then, to simply make one big assertion/definition. You may reread the thread at your leisure to determine where you are misusing the definitions, and where I am misusing them (on purpose).

There is no such thing as free will. There are only interactive systems whose purpose is to place relative weights on data gathered through passive observance. At any particular decision point, our consciousness is aware of a certain number of possible options (it may be that we are aware of all of them; it may be that we are aware of some which aren’t viable). These options of which we are aware are choices. The choices are weighed according to the systems. Action is carried out based on the weighings, or actions are not carried out based on indeterminate weighings (or conflicting systems, or non-halting decisions, and so on). It seems as if we have a will as it is commonly understood because we are also aware of at least some of the systems. All systems are arbitrarily obtained, since there is no initial “willing” agent to deliberately obtain them. Their sources include, but are not limited to, our parents, our teachers, and systems applied to themselves (analyzing religious beliefs with logical foundations). No system is required to weigh any particular data: this association is loose and as arbitrarily imposed as the systems themselves.
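For what it’s worth, the model above is mechanical enough to sketch in code. This is a purely illustrative toy of my own invention (the names, weightings, and tie-handling are all assumptions for the example, not anything from the thread or the books cited): systems are just value-assigning functions, every system weighs every choice, and an indeterminate weighing (a tie) yields no action.

```python
# Toy sketch of the "weighing systems" model. Everything here is
# illustrative; the names and numbers are invented for the example.

def decide(choices, systems):
    """Sum each system's weight for every choice; act on the unique
    best total, or abstain when the weighings are indeterminate."""
    totals = {c: sum(weigh(c) for weigh in systems) for c in choices}
    best = max(totals.values())
    winners = [c for c, total in totals.items() if total == best]
    # Conflicting systems can produce a tie: no action is carried out.
    return winners[0] if len(winners) == 1 else None

# Arbitrarily acquired systems: one biological, one social.
hunger = lambda choice: 2 if choice == "eat turkey" else 0
etiquette = lambda choice: 1 if choice == "decline politely" else 0

result = decide(["eat turkey", "decline politely"], [hunger, etiquette])
print(result)  # here the "hunger" system outweighs "etiquette"
```

Note that nothing in the sketch has a slot for “will”: the systems and their (arbitrary) associations do all the work, which is exactly the point of the definition above.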

More comments:

There is no one to design such a system other than evolutionary habit. Modes of thought which are infinitely recursive are especially detrimental to life and would have died with those who held them. Apart from that, the appeal to a first mover is unbecoming (hey, if you can say I abuse logic, I can say you appeal to a first mover :p). Unless you are willing, of course, to say that someone must have designed DNA. I mean, how does it know when to stop replicating? :wink:

I’m not sure how to make this any more clear. I started with unfettered will, which is not required of any system whatsoever as far as I know. I stated that “unfettered” is obviously wrong. I strongly implied physical limitations on action. Since the topic of the entire thread is “the chemistry of free will” I thought it was at least partially obvious that causality was part of “physical limitations.” If we disregard unfettered and just leave uncompelled will, I’m not sure what else is left other than compatibilism: the idea that conscious will is possible even in the scope of physical reality, namely, chemistry and its corresponding causality, predictability, and entirely deterministic outlook. Do you still have a problem with this?

I have repeatedly addressed it. The systems weigh the choices of which we are passively aware, however many that is. I do not see a problem with evaluation systems being able to handle more than one choice at a time. Why do you?

This problem is almost the central theme of Dennett’s “Consciousness Explained.” He goes into great detail about neurological research and what it implies for consciousness. I cannot repeat such an endeavor on a message board. If I could I would, believe me. This topic fascinates me to no end. The author does not attempt to mechanize consciousness and will as much as I have, though; he simply makes the case that it is not in any way centralized, that it deliberately misleads itself, and he makes very pointed suggestions that it is a very vivid illusion but never quite comes out and says that it is, in fact, a vivid illusion.

Of course you aren’t. I was simply addressing your direct question. You asked, “Do you feel that my ‘system model’ is somehow false?” and “do you feel that I haven’t adequately supported my conclusion?”. I answered. Now, you may find direct responses to direct questions “ultimately frustrating”, but I consider them necessary to polite debate.

Yet at no point did you actually perform this exercise. You simply postulated a decision making model in which “will” had no part and then “concluded” (or begged the question) that will had no part to play in that system. This is really the point that I have been trying to illustrate with my last couple of responses to you.
[li]OP: Can free will exist given the chemical nature of neurotransmitters?[/li][li]ERL: No. There is no such thing as will. Also “free” can’t be used to describe anything with a physical manifestation.[/li][li]SM: Well, that begs the question, since you are just postulating that will does not exist. Also, it makes “free” a pretty useless word.[/li]
I’m not saying that you are wrong. I’m saying that your position adds nothing to the discussion of free will other than the assertion that will does not exist. You have not “concluded” this, you have presumed it in your model. Again, this is a valid response to the OP, but not one which encourages further investigation. It would be like responding to a question about Christ’s human and divine natures by saying, “There is no duality; Jesus was just a normal guy.” It might be true. It does answer the OP, but it does not participate in the debate the OP was seeking. And it does beg the question.

Now, since we seem to have driven everybody else out of the thread anyway (why does that happen so frequently?) I suppose we could always change the topic of discussion to, “does will exist?” But that wasn’t the question I was trying to investigate originally.

a few details

Apparently what we have here is a failure to communicate. Specifically, a failure on your part to understand tense constructions in English. The sentence, “Did I have a choice?” is phrased in the simple past. It enquires about a previous state of existence. Specifically, it asks whether at the time of the event I had a choice. It does not ask whether I now have a choice about the outcome of a past event. That sentence would be in the present tense: “Do I have a choice . . .”

Yes – and I have been trying to get you to understand the distinctions between the compatibilist and incompatibilist views on free will.
Compatibilist: 1 option (strict determinism) does not impact free will. Only compulsion matters.
Incompatibilist: More than one option must be available for will to be free. Compulsion is not relevant.

You keep referencing what seems to me a bastardized version of compatibilism while dismissing incompatibilism entirely with the observation that “free” cannot apply to anything which has physical existence. Of course, you also obviate all such discussions by postulating a model in which will does not exist.

Absolutely not. I feel that you misunderstand (or at least misrepresent) compatibilism, not that you espouse it. Compatibilism is absolutely at odds with your stated view that will does not exist.

Yet this definition is generally considered to directly contradict compatibilism. This definition strongly implies that free exercise of will cannot be subject to determinism.

I cannot see this emptiness. Can you elaborate?

Yes, and the “something” and the “stuff” are rather important. I am not sure why you felt the definition that I offered was unclear in those areas.

Of course. You are the one who used the word “design”. I simply noted that the implications of “design” did not resolve the issue.

Then we dismiss centuries of incompatibilist thought on the matter of free will. I cannot think of any new way to address the idea that your reading of “unfettered” is hardly sufficient grounds for such dismissal.

Incompatibilism. It has had a few proponents over the years.

I don’t. What I have a problem with is getting you to address the issue of whether such a choice can be “free” (undetermined).

I liked Dennett’s book, too. Though it has been quite a few years since I read it, I do not recall anything in his pandemonium model which would negate the existence of “will”. The illusion he argues for is cohesiveness/central authority, not active participation.

I am really not sure I did that. I certainly did attempt to explain the operations of consciousness without requiring a will. But I feel I also attempted to leave room for it along the way so that, should I run into difficulty, I could simply cry, “Thar she blows.”

Consider that at each step there was room for will. In the beginning there was complete room for will. I then added things that seemed pertinent and somewhat necessary: physical and mental limitations (your definition still holds). As you correctly noted, there was room for will to operate on all the choices remaining (your definition still holds). I then presented the idea of using value systems in order for the will to determine which choices to utilize. This pushed will back a bit, to be the operator which decided whether or not to follow the systems (but, as you noted, your definition still holds), and it could also be responsible for the adoption of systems. The question then became: why would there be a difference between evaluating sensory data and evaluating systems? I answered that there was fundamentally no difference, and here entered the halting problem, but your definition still holds, with will as the agent which decides which systems to use, even if it doesn’t actively make the evaluations.

This brings us to the point where will is either not a system, is a system, or is a privileged system. If it is not a system, we are left to wonder: in what way does it decide, if not through an evaluation process? Here I suggested it must be arbitrary. If it is a system, then it is no different from any of the other systems and just as automatic, which (I believe) clearly defeats the intuitive definition of will as uncompelled. If it is a privileged system, then where does that leave us? I would think this acts against Occam’s razor, personally, not to mention providing for the existence of something which can only be influenced by itself.

Before I continue, can we stick to these topics? Long posts are particularly gruelling to me :wink:

On a side note, would you consider will to be an ontological issue exclusively?

Yes, you explicitly asserted a model without will. That is what I have been saying. The fact that you allowed “wiggle room” to abandon your assertion if it proved untenable does not change that.

Yes, this is step one in eliminating will. Something makes a choice. You assert that “systems” are the active agent.

Well, unless systems are sensory data the answer would seem obvious. Why is telling time different from measuring temperature? Since you have not defined systems, the answer is more open to interpretation/assertion.

And here is step two: you assert that will does not exist. There is nothing behind this conclusion other than your assertion. You have neither defined the systems which make decisions nor specified the properties evaluated nor characterized the “will system” in any manner. You have simply asserted that it is fundamentally the same as the other systems which you have asserted are not will.

I am not saying that you are wrong. I am simply observing that your conclusion is nothing more than an assertion of your model.

This question illustrates nicely the broad nature of your assertions. Only “systems” evaluate, in your model. Thus will cannot evaluate unless it is a system.

Again, only your assertion makes it “no different” from any other system. This is also the first time that you have included the requirement “automatic” for a system. Again, this simply illustrates the fact that your model has been specifically designed to eliminate will.

Against Occam’s razor? Rather, against your implicit assumptions. Ser Willem said nothing about eliminating possibilities because they are not the simplest possible model. If a privileged system is required to evaluate other systems, then Occam hardly demands its rejection. Since you have defined neither system nor privileged in this context, it is a bit difficult to argue one way or another.

Why you demand a privileged system be influenced only by itself I cannot imagine, but the answer to that might illuminate why you reject the possibility.

I generally try to answer any points that are addressed to me. If you choose not to that is your prerogative.

Didn’t you just ask me to stick to only the topic of your model? :slight_smile:

Well, it seems to me that the ontological questions, depending upon the answers found, feed into issues of phenomenology, metaphysics, and epistemology.

That came out snootier than intended. I do try to respond to any points addressed to me, and I shall continue to do so. I will not think ill of you if you choose to limit the debate at this point, though I reserve the right to bring up past statements if they seem pertinent.

Well, in the end I do, but in the pseudo-development I do not. I merely posit that the systems are what ascribe values. Until we get to the next question the will can be just as active, can it not?

me: “I answered that there was fundamentally no difference [between evaluating systems or evaluating sensory data].”
You: And here is step two: you assert that will does not exist. There is nothing behind this conclusion other than your assertion. You have neither defined the systems which make decisions nor specified the properties evaluated nor characterized the “will system” in any manner.
Response: Fair enough. Perhaps (heh) I should attempt to explain my line of thought on this particular aspect more.

I was (very implicitly) relying on poorly-worded phenomenological aspects of consciousness. It is my opinion that there is nothing any more certain about being aware of a pain in my hand than being aware that I am aware of a pain in my hand. Both are open to skepticism of the highest degree allowable by our philosophies. I am not certain which philosopher I could draw on, exactly, for such a statement, because honestly the idea of phenomenology is fairly new to me, having just started getting into Hume. He does draw a sharp line, it seems, between our private world and the phenomenal world. I am of the understanding that Kant did a number on Hume’s idea of instantaneous perception of the phenomenal world by stepping it back a bit, but I am not sure I agree with Kant (mainly because I haven’t read him yet! He’s next on the list, though). But it seems that Hume’s skepticism of perception was never carried through to its logical “will-based/conscious agent” conclusion, which (my understanding of) Sartre did comment on: when we try to perceive our consciousness we never find it. That perception cannot yield information about things in themselves, so to speak (I always hated that phrasing, but it seems appropriate here). It doesn’t matter what we are perceiving; we could be examining ideas, ideas-about-ideas, sensory data, etc., and the whole time we are only grasping impressions but are never revealed the “true form” or essence or what have you.

Personally, my skepticism goes a bit further. What the hell leads us to believe there is an essence there in the first place? Well, I have my own ideas about that, and they are wrapped up in identity, so perhaps I will touch upon it in the other thread (Fenn’s deliciousness and all ;)), but for now we can say that believing in the existence of essence is at least a priori knowledge.

Where the hell am I going with this? Ah, yes, the private skepticism.

Indeed, I have not defined systems apart from “things which ascribe value,” and I certainly have not described where that value came from, or where the systems come from, until some later point. But the idea of private skepticism here is why there is no difference between measuring temperature and analyzing sensory data (never mind that we would need sensory data from the thermometer to measure temperature; I think that isn’t quite what you meant). The act of perceiving systems, or the values they ascribe, is just as open to phenomenological interpretation as touching a tree trunk and being disgusted with the raw essence of it (;)).

We are only receiving impressions of ourselves!

The sticking point, in my mind, is the “refutation of solipsism” we’ve chatted over before. We cannot pierce the phenomenological barrier, but that barrier is everywhere; it is less a window we look through than something we drown in. At this point the question of “Is there a will?” becomes “Are the internal impressions I experience the result of a thing called will? Is it a part of my consciousness? Is it independent of phenomenology?”

Well, here I can see the charge of “defining will away” clearly enough. Yes, systems are defined as being that-which-evaluates. This would make will a system, possibly, except that it could be a special kind of system, a more powerful system, or what have you. That remained to be seen…

Well, it doesn’t remain to be seen very long, then :slight_smile: If a privileged system were able to be influenced by other systems, it would be indistinguishable from those systems. That’s part of the point. All “normal” systems can influence each other and take each other as arguments (perceiving a thought is not distinguishable, perception-wise, from perceiving sense data) (also consider that I may evaluate the temperature in this room as both “hot” and “bad” and both are equally correct). I can allow for will to exist as a ghost-in-the-machine, as a privileged system that only affects and is not affected. If it can be affected then it is indistinguishable from other systems as far as perception goes. If it cannot be affected then it cannot be perceived. It can possibly be deduced. That being the case, I would appreciate reading a deductive proof of will. :slight_smile: Until such time I will continue to assert, rather boldly and cocksure, that will doesn’t exist.

This doesn’t leave will out of the picture. It could be a system which can affect and be affected. But positing or denying its existence, then, can never be the result of philosophical investigation, only philosophical assertion. Understand that I am not trying to develop the case for understanding of no-will; I do not feel it is possible. I simply do not feel we should “pass over [it] in silence,” either. :wink:

So yeah, there is always room for will, even in my above explanations. But if I can understand and explain the operations of consciousness mechanically to (at least) my satisfaction then I see no need to add things to it that are not open for philosophical examination! This was Occam’s point, I thought.

I am rather used to your concise posting. It wasn’t perceived as snooty. And isn’t that what matters? :slight_smile:

Certainly, but as the above demonstrates I tend to expound on topics quite a bit, and you tend to respond to every little thing, and this leads to conversations between us which are very difficult to follow (IMO) because I tend to forget which step of the different explanations we are at. So, every once in a while, I think I need to pull the reins in. S’all.

That’s why I called this step 1.

If by “essence” you mean an objective referent for our subjective phenomenological experience, then nothing[sup]1[/sup] leads us there. We either remain trapped in solipsism or we take this as a first step.

I agree with your phenomenological interpretation, but that wasn’t the question I asked (nor do I think it leads to the conclusion you require). The original question was, “why would there be a difference between evaluating sensory data and evaluating systems?” This is a very different thing (under your model) from “is there a difference between perceiving sensory data and perceiving the value product of a system?”

Unless you now posit that perception also occurs “in the systems”, it is irrelevant how the evaluation process is represented in our phenomenology. So, my question is really, "why do you presume that a system which evaluates systems is undifferentiated from a system which evaluates phenomenological input?"

I agree.

I disagree. Influenced by does not mean indistinguishable from. Perhaps I am simply unclear on how you mean “privileged” to be read. Can you define it for me in the context of your systems?

Why can you not allow it as a piece of the machine which both affects and is affected?

Yes, without cause, IMO. As near as I can tell you simply dismiss the idea that will fits your definition of system without requiring a “ghostly privilege”. I still see no reason for this exclusion other than your a priori intent to create a model without will.
[li]You define systems as that which evaluates.[/li][li]You include evaluation as a necessary component of will (as do I).[/li][li]You argue that will cannot be a “privileged system” (a category on which I remain unclear).[/li][li]You exclude the possibility of will as a non-privileged system (for no reason that I can discern).[/li][li]Thus will must not exist (if we ignore the two issues directly above this conclusion).[/li]
You recognize the fourth point, but in addressing it:

This is your model. You have already posited systems as agents of evaluation. It requires no additional assertion to accept the possibility that one (or more) of those systems is “will”.

I have not asked that you pass over it in silence. I have simply observed that you have asserted the property rather than developing it. Thus, as an answer to “do we have free will” it is a begging of the question.

You will have understood a strictly deterministic universe (assuming the mechanisms were deterministic). You would thus be forced (quite literally) to conclude that we do not have free will. Asserting that systems are the agents of evaluation, though, is a far cry from understanding the operations of consciousness mechanically.

And, if the mechanisms you understand are not deterministic, you will not be forced to conclude anything about “free will” other than “if it exists it must function within your understood mechanisms” – if Occam is always right, of course.

Yes, my question about your application of Occam was specific to the denial of a privileged system. I trust I have covered the more general case(s) above.

Sure–exalt your personal phenomenology above all else. Solipsist. :eek:
[sup]1[/sup][sub]“nothing” in terms of a logical deduction or chain of inference. In terms of why I/you/anyone make that step, well, the questions of free will and determinism are intertwined with that, n’est-ce pas?[/sub]

Ah, here is one stumbling block:

Heh, not to me, per se. Understanding that we are speaking of conscious will, that is, decision processes of which we must be aware, then systems only take sense data, including our perceptions of the systems themselves.

Yeah. A privileged system is one in which some part of its “input” is not the result of a phenomenological experience. It has direct access to something, whether that be piercing the veil of the “external” world or having direct access to the inner world. Remember that my systems are mere value-assigning entities. The associations they have—what they evaluate—are not a part of them (and are just as arbitrary). Privileged systems, having this direct (privileged) access, have a non-arbitrary (and thus also privileged) association.

Now, I must admit at this point that if we said will was a completely privileged special system which could only have a direct line to other systems in an attempt to guide the existent, I am not certain that I can banish it from there. Even though it has direct access to these systems, the data these systems work with are still the result of a phenomenon, and so if the being tried to will itself to examine its own will, it couldn’t do so without at least one step into the phenomenological barrier. As such, any attempt to examine Will will seem just like examining any other system. Does that make sense now?

Now, I had also said, “If a privileged system was able to be influenced by other systems, it would be indistinguishable from those systems.” This is sort of a corollary of the above explanation.

I can, but such a system is not open for examination. I am making no errors in postulating its nonexistence, then, or so it seems to me.

Conscious? Where did that come from? You have earlier stipulated that some system, at least, must be used to evaluate other systems (to choose, for instance, which systems to apply to a given question). Is your requirement for a system now that it be conscious? That seems very different from what you initially asserted.

I really think this is another example of you getting tripped up by your intent to develop a consciousness without will. The requirement for consciousness has never before appeared in your discussion of systems, yet you appeal to it now because you are locked into the conclusion. We are not speaking of will, conscious or unconscious, in this question. We are speaking of the characteristics of a system which evaluates other systems. It may be that will is such a system, but that isn’t the question at hand.

As a capstone, I will simply note that even when discussing a conscious will your equivalence does not hold. The fact (stipulated) that we perceive the result of will’s evaluation does not mean we perceive all of the elements evaluated.

No, it isn’t. You said, ‘A privileged system is one in which some part of its “input” is not the result of a phenomenological experience.’ The statement above would be a corollary iff all input to the privileged system came from “beyond the set of outputs from ‘normal’ systems”.

I rephrase “phenomenology” because unless you are predicating that each individual system is a conscious entity, it is an abuse of terminology to speak of the inputs or outputs of the system as phenomenological events. Two separate systems, for example, regulate autonomic breathing in human beings. Neither the oxygen nor the carbon dioxide content in my blood, however, is a phenomenological event for those systems (or me, for that matter).

That holds for “will” iff it holds for all systems.
If no systems are open for examination, then your model, and all conclusions drawn from it, are meaningless.

Ah, no… rather, if we had a will it would be a conscious system. Thus, to speak about the systems that are a part of will is to speak of consciousness. If not, then will is an unconscious event, something that I think goes against what is commonly understood to be free will.

No, it was the definition of “choice” that required awareness, and hence consciousness. It is the act of choosing which requires, then, conscious will. Conscious will is what I am attempting to banish.

Then we’ve strayed. The core things that systems evaluate are choices and systems, BUT I did, very honestly, neglect to mention that all things evaluated by mental systems of which we are aware are, indeed, also the product of awareness (with the possible exception of will, blah blah blah). There can always be unconscious systems running on who-knows-what. But if they are unconscious then they are, I would think, beyond the realm of empirical investigation internally. And if they are unconscious then it is a very hard “will” to get along with, wouldn’t you say?

Now, I think, though you may have similar objections to the ones I am not addressing here, they cannot be the same objections exactly, so I’d rather let you reformulate them than bumble over them myself. :slight_smile:

Why? What is the requirement that the elements which make up a conscious entity must be conscious? This seems to lead to truly absurd conclusions. This quark is conscious because it is part of an atom which is part of a molecule which is part of a neuron which is part of a brain . . .

Yes, but the question at hand is whether your conclusion (“why would there be a difference between evaluating sensory data and evaluating systems? I answered that there was fundamentally no difference”) is valid (or even reasonable) under the model you have proposed.

I repeat: unless both the input and output of every evaluative system is sensory data, your conclusion is unsound. That holds whether the system for evaluating systems is conscious and we call it “will” or is non-conscious and we call it “Fred”.

We haven’t strayed. The argumentative structure you are using is “proof by exhaustion”. Thus you propose to examine every case for “systems which evaluate systems” and demonstrate that “will” cannot be found (or is at least superfluous) in all of them. BUT in defining the domain of possibilities you are using the “no fundamental difference” conclusion to exclude any number of evaluative systems. I keep trying to point this out, but you keep focusing on your intended conclusion and overlooking the structural necessities of your chosen argument.

I cannot make sense of this. Are you restricting your systems explicitly to phenomenological agents? Perhaps you could illustrate how you mean this to apply to a couple of simple evaluative processes: “should I breathe in” and “should I ponder the epistemological consequences of a consciousness which is necessarily passive”?

If you are explicitly restricting your evaluative systems to the domain of phenomenology, then I think it is likely to be another flaw in your proof by exclusion. I can’t think of any reason to assume that a system which evaluates phenomenological data necessarily receives all of its input from phenomenological sources. Autonomic functions, among other things, would seem to argue against it. We are certainly aware of the product of those evaluations, but by definition we are not aware of the inputs.

Also, this would seem to explicitly violate your definition of a privileged system as one in which “some part of its ‘input’ is not the result of a phenomenological experience.”

Nonsense. They, like any element which cannot be observed directly, are subject to empirical examination through relationships with other agents. For instance, one might manipulate blood gas levels to empirically investigate autonomic breathing. (Caveat: I don’t really understand how you mean “internally” to modify “empirical investigation”. I responded as if it were not present.)

I didn’t say “will” could be unconscious. I said that it seemed unfounded to exclude unconscious elements from the “inputs” to a system which might be called will (or Fred).

Are you asking me to rephrase the parts of my post to which you did not respond? I don’t really see the need.

Your corollary still does not follow from your definition of privileged system, though we now have the additional possibility that your definition of privileged system directly contradicts your definition of system.

Your argument that a “piece of the machine which both affects and is affected” cannot be subject to examination still holds equally well for “will” and “Fred” (and any other evaluative system which has a material basis and yields material results.)

So, are we done here? Has your system determined that you must watch passively as this thread plummets into obscurity?