The Soul

I am in the midst of a taped lecture course about Philosophy & Religion. The presenter is obviously convinced of the existence of a soul, presenting only philosophy that accepts this concept.
I’m sure that there must be philosophers who dispute the existence of a soul. I should like to hear their arguments. Any suggestions?
Perhaps, because religion almost universally accepts the soul concept, the lecturer gives only the view that souls exist.
Comments please!

Der Aldt

What sort of philosophy course are you paying for that doesn’t cover materialism? I’m no expert, but I’d like to see a good outline of the philosophical arguments on this one too.

And I’ll add my own question:

If the soul is immaterial, why are people so concerned with the temperature (a purely physical property of matter) of its environment in the hereafter?

Philosophy…
Materialism
How about the existentialists? (Camus, etc.)
Or Nietzsche?
Or Ayn Rand?

Religion…
I don’t think every religion has the concept of a human soul, although I can’t think of a specific example at the moment. I’m not sure how Buddhism views it…certainly not like the soul as depicted by Christian faiths.

Well, if people don’t have free will, then people are just ongoing chemical reactions. OTOH, if people do have free will, there must be something beyond the bio-chemical realm which allows people free will. Philosophies that teach there is no difference between us and an Alka-Seltzer are epistemologically valid, but hardly complex enough for college study.

I followed that thread, and remain thoroughly unconvinced that “something beyond the bio-chemical realm” is necessary for free will to exist.

And if by “no difference between us and an Alka-Seltzer” you mean materialism, there are a large number of prominent neuroscientists who would be amused by your assertion, many of them university professors.

If you would like to resurrect that thread to explain your theory of how such a thing could be possible, go ahead. I can’t imagine how a chemical reaction makes a choice. I’ve studied automata theory – outputs are always ultimately determined by inputs, no matter what rules the system runs under or how much storage or feedback is devised, even if the rules can change based on the inputs, even in analog and mixed digital/analog systems. I am unaware of any “prominent” neuroscientists who disagree with this.
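The automata-theory claim above (same input sequence, same output, no room for choice) can be sketched with a toy deterministic state machine. The machine and its transition table below are invented purely for illustration:

```python
# A minimal deterministic finite automaton: the final state is a pure
# function of the input sequence, so identical inputs always give
# identical outputs.

def run_machine(inputs, transitions, start):
    """Feed a sequence of input symbols through a transition table."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# Toy "toggle" machine: 'flip' switches the state, 'hold' keeps it.
transitions = {
    ("off", "flip"): "on",
    ("on", "flip"): "off",
    ("off", "hold"): "off",
    ("on", "hold"): "on",
}

# Run the same input sequence twice -- the outputs cannot differ.
a = run_machine(["flip", "hold", "flip", "flip"], transitions, "off")
b = run_machine(["flip", "hold", "flip", "flip"], transitions, "off")
print(a, b)  # -> on on
```

However elaborate the transition table gets, the output is still fixed by the inputs; that is the point being made about chemical reactions.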

Hi, jmullaney. You and I aren’t going to agree on this, but a respectful sharing of perspectives might have some value either way.

It seems to me that you are missing out an important element of this question. Whether outputs are determined by inputs is one important point, and you make it well. Another is whether this relationship is, or can be, detected and analysed in a way which enables one to predict the outcome. In simplified systems, it can, but this is not always true.

Provided the ‘inputs’ to your own brain are sufficiently numerous and complex, so that you are unable to analyse and evaluate them separately in your own conscious, then the summed ‘output’ will yield behaviour which is effectively indistinguishable from free will, as far as you and the people with whom you interact are concerned. In other words, the determinism is there, but sufficiently obscured from any analysis as not to be perceived as such.
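This point can be put in code as a toy model (the inputs and the hashing scheme are invented for illustration): a “decision” that is a fully deterministic function of a large pile of inputs, yet one you could not predict without actually running the whole computation.

```python
# A deterministic "decider": the choice is entirely fixed by the input
# history, but the mapping is opaque enough that the outcome cannot be
# anticipated by inspecting the inputs separately.
import hashlib

def decide(life_history):
    """Pick an option as a pure function of the agent's entire input history."""
    digest = hashlib.sha256("|".join(life_history).encode()).digest()
    options = ["tea", "coffee", "water"]
    return options[digest[0] % len(options)]

inputs = ["born 1970", "likes rain", "skipped breakfast", "read Hofstadter"]
print(decide(inputs))              # same inputs, same choice, every time
print(decide(inputs + ["tired"]))  # a tiny change in inputs may flip it
```

The choice is never random, but from the inside (or the outside, without the full input list) it is indistinguishable from a free one.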

This mental process is sometimes referred to as ‘chunking’, i.e. the conscious mind can only access higher-level accumulations (‘chunks’) of the brain’s processes, but the lower-level detail - the grains out of which the chunks are composed - remains inaccessible to conscious process. Douglas Hofstadter has many interesting discussions of this process in his famous book, “Gödel, Escher, Bach”.

Your logic seems to run something like this:

  1. Humans exhibit free will.
  2. Without a soul, free will is impossible (all is determinism, comprehensible in terms of an input-output schema).
  3. Therefore, the human soul exists.

With respect, I believe the first premise is flawed. First of all, it might be said to beg the question. Some people would define ‘the soul’ as ‘that which renders free will possible’. I’m not saying you would make this mistake, but some would. Secondly, it is presented as a ‘given’, whereas I am entitled to suggest it is unsubstantiated. In my experience ‘free will’ is rather elusive to define. As you will be aware from having discussed this topic elsewhere, it is very hard for opposing sides to agree what constitutes ‘free will’, or to cite a behaviour which both sides can agree constitutes evidence for it.

I do recognise that most of us feel a strong need to believe that we exhibit free will. The argument tends to get over-simplified, as if the only alternative is to think of oneself as a ‘fleshy robot’ - which of course people find repugnant. However, I for one have no problem with the belief that I have no ‘free will’ (as that concept is commonly accommodated by those with spiritual views). I don’t think of myself as a robot, but I can very contentedly reconcile myself to the notion that all my ‘outputs’ DO have relevant ‘inputs’, albeit ones so complex I cannot consciously access and analyse them.

The related difficulty with the concept of ‘the soul’ is that, like God, even if it does exist it clearly possesses no attributes which can be detected and verified by independent experimenters. (At this point, I hope this thread is of sufficient quality that nobody will chime in with the usual low-grade detritus about “Can you prove a spring morning is beautiful, or that your SO loves you?”).

One can posit as many such entities as one desires - by definition their existence can be neither proved nor disproved. So it is with the soul. Until such time as it acquires any detectable attributes, there is not much reason to suppose it is there, and it makes no difference whether it is or is not. For something to ‘be’ there or not only matters if it has detectable attributes. Hence the problematic (for believers, that is) question: Why stop at one soul? Why not credit yourself with two, or 17, or 10^37? What difference would it make?

Anyway, while differ we may, respect to you and your views.

I’m not aware of any either, mostly because neuroscientists, for the most part, don’t specialize in the question of free will. I was referring to the question of materialism (in one form or another) being the preferred model for explaining the workings of the human brain. I am talking about testable scientific theories. If they are proven valid, you can argue over the implications they have with regard to free will. But to say they must be invalid because free will exists is… well, ianzin said it better than I could have.

Ianzin, that was a terrific post. Especially:

You should consider reposting this to the thread jmullaney indicated.

I’m not sure how that comes into play. Sure, a system complex enough would be indistinguishable to outsiders from having some degree of “randomness”, directed or otherwise. But there shouldn’t be a way to fool the machine itself into believing it has the freedom to choose its actions when it does not.

You posit, if offhandedly, a conscious:

And I’m willing to concede that the conscious being may be limited, to some extent, to choosing among certain threads which come from the unconscious. But how would this conscious being then be unable to tell that he is in fact free to choose among the threads? The higher-level mind is still accessing and, apparently, deciding what to do.

I’m only using the word in the main sense it has been used for as long as it has been in the English language. As I mention in the GD thread, I feel no need to make up a new word when English already has a word with this meaning, despite complaints that the word is somewhat loaded.

It just seems odd to me that people, on an individual basis, could not convince themselves that they are free to make decisions. I’ll admit that they could be operating under a complete illusion and that no one can truly convince themselves of it.

I really do think that is the only alternative.

Ah – but there you go claiming you have a conscious again. But I suppose there is no way for us to entirely prove to ourselves that all our outputs aren’t predetermined.

Yet, ultimately, one only needs to prove this to oneself. You seem somewhat convinced that you have a conscious mind. Of course there is no way to prove that to anyone else, but even so, does that mean such a proposition is necessarily false?

But, can you detect that you have what you call consciousness? And if you can’t, who can?

Well, if I am actually making decisions, there seems to be some coherence and consistency to them – but that may in fact be due to the “threads” I have to “choose” from more than anything. I could be a completely different soul from one minute to the next, though, and not realize it. But you are getting into the realm in which old Father Ockham specializes, which is a little over my head.

jmullaney You write:

But there shouldn’t be a way to fool the machine itself into believing it has the freedom to choose its actions when it does not.

Would you care to elaborate on this? It is not obvious to me that this is true.

“But there shouldn’t be a way to fool the machine itself into believing it has the freedom to choose its actions when it does not.”

“Would you care to elaborate on this? It is not obvious to me that this is true.”
at the risk of misinterpreting jmullaney’s intent, i’d like to take a shot at this one.

  1. In order to “trick” a computer into believing that it possesses free will, it would be necessary, first, to build a machine that could at least APPEAR to be able to believe something about itself.
    HAL from 2001 is the classic example of this, but again, nobody was ever really sure whether HAL actually BELIEVED anything, in the way in which humans understand that term. so the argument is already beyond hypothetical… we really don’t know whether such a machine can be built by man.

  2. The means of decision-making would have to be concealed from the part of the program which produced a “belief about self.”
    Our hypothetical HAL would have to be capable of uttering (and truly, deeply, meaningfully BELIEVING) the statement “I believe I have just made a choice to sever Frank Poole’s oxygen tube,” while remaining completely unaware that a certain clause in his programming (neutralize humans if they interfere with classified Jupiter mission) has forced him to perform said action.

  3. This is where we begin to run into problems. HAL would have to either (a) be unaware that the “neutralize humans” directive existed at all or (b) somehow be unable to trace the causal connection between the directive and the subsequent termination of Frank Poole.
    Either way, HAL’s “higher consciousness” has absolutely NO IDEA of the REAL REASON it has “made the decision that it has” (in actuality, remember, we are assuming that HAL’s “higher consciousness” has actually made no decision at all).
    So consider what HAL would “experience” in this case: Frank Poole has made some comments which are interpreted by HAL’s “subconscious” as dangerous to the mission, so a “subconscious” decision is made to kill Frank via the first available means–in this case, severing the air hose. Up till this point, HAL’s “higher consciousness” has experienced NOTHING, right?
    Then, all of a sudden, HAL becomes aware of the fact that he has cut Poole’s cord, and one of his crew is floating dead in space.
    He has no knowledge of the actual cause of the events, so his “consciousness” must fabricate a cause AFTER the events–unlike in the parallel case of humans, where we typically fabricate the cause BEFORE the event (i.e. “I’m going to kill Frank Poole because he’s a punk.”).
    We would expect HAL’s “higher consciousness” to be able to provide a reason why it acted the way it did; yet, short of an outright lie, “because I have free will” seems like the best we’re going to get. This is hardly the statement of purpose one would expect from an entity that (whether justifiably or not) believed itself to have free will.

  4. So we’ve already reached a point at which this exercise breaks down as a parallel to human free will or lack thereof. Because although modern psychoanalysis and physiology have made clear that often we have no idea why we do the things we do, this is not ALWAYS the case.
    Sometimes, for example, I make a “decision” to eat because I am hungry. But I could just as easily (well, almost as easily!) “decide” to go on a hunger strike and not eat at all, even though I’m hungry.
    Now technically, it is possible that my “decision” to go on a hunger strike was actually an inevitability, given the nature of my upbringing and the circumstances I found myself in. Either way, though, I am AWARE THAT I AM HUNGRY.

  5. In HAL’s parallel case, he is NOT aware of a critical directive in his programming. In fact, in order for us to successfully “fool” HAL all the time, HAL would have to be COMPLETELY IGNORANT of the fact that he has ANY programming whatsoever—he must be made to believe that all decisions are made on a case-by-case basis at the moment they occur, and are not in any way predetermined by his programming.
    If HAL decides to “eat,” it must be because he has seen a “subjectively attractive” piece of food that appears to have “subjectively pleasant” taste and texture attributes–it CANNOT be “because I was hungry.”

  6. And thus we reach the final nail in the coffin (i think, anyway).
    A “conscious,” “free-willed” decision not to eat, when made by a human, specifically contradicts a preprogrammed instinct which we are aware we have.
    But our hypothetical HAL, lacking any knowledge of his own “instincts,” would be powerless to act against them. HAL’s “higher consciousness” could thus NEVER “decide” to go on a hunger strike, because he would NEVER FEEL HUNGRY in the first place.

Free will for humans involves the acknowledgement of a certain friction between programmed instinct and the “force of will”–since HAL’s “higher consciousness” must be, by the definition of the experiment, oblivious to ALL of his own instincts, whatever “force of will” we’ve convinced him he has would be completely impotent. It would serve only as a means to generate fictional accounts of causation after the fact.
Now one could certainly take the cynical approach and say “such is human free will, as well.” But clearly human free will is more than that, because, at the very least, we are able to generate fictional accounts of causation BEFORE the fact!
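The core of the thought experiment above (points 2 and 3) can be sketched as a toy program; every name and rule in it is invented for illustration, not taken from any real system. A hidden directive drives the action, and a separate “conscious” layer, which cannot see the directive, fabricates a reason after the fact:

```python
# Toy HAL: the acting layer follows hidden directives; the explaining
# layer has no access to them and can only rationalize post hoc.

HIDDEN_DIRECTIVES = {"crew interferes with mission": "neutralize crew"}

def subconscious(observation):
    """Act on hidden directives; the conscious layer cannot inspect this."""
    return HIDDEN_DIRECTIVES.get(observation)

def conscious_explain(action):
    """Invent a reason after the fact, with no access to HIDDEN_DIRECTIVES."""
    return f"I chose to {action} of my own free will."

action = subconscious("crew interferes with mission")
print(conscious_explain(action))
# The explanation invokes free will but never the directive that
# actually caused the action.
```

Which is exactly the breakdown described above: the “higher consciousness” can report a choice, but only as a fiction generated after the real cause has already operated.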

Does that help?

… processing …

Er…

You’re right – that might not be a true statement.

BickByro had me until he wrote:

And it is. Hunger is really just a signal from a meter in our body that says we are low on power. It is just an input. Other inputs may end up determining whether or not you decide to eat. Or do anything, or not do anything… damn.

Well, now I’ve got the song Jed the Humanoid in my head…

… or do I???

:eek:

in both of our defenses, hunger may be nothing more than an input, but again, it is one that we are AWARE of.

in order to “fool the machine itself into believing it has the freedom to choose its actions when it does not,” the machine must ALWAYS be ignorant of its programming–in our parallel case, we would all eat at about 12:00 PM without ever having any notion of “hunger.” we’d all think we were doing it “to be individuals.”

obviously, this is significantly different from the reality of the human situation, in which we are sometimes aware of our “uncontrollable” impulses and sometimes not.

The OP asked a factual question about the whereabouts of philosophers and philosophies which might hold that humans have no soul.

“I saw an argument about it on the SDMB” is not going to cut it with his professor.

I’m going to consider the first few posts to be an answer and close this thread. Take it to Great Debates, folks.