About the illusion of free will

The separation between Accountability and Responsibility is not important for criminal Justice, but it is for Ethics. The two conditions have different consequences in various ethical systems.

You’ve said this several times. Cite, please, for compatibilists using the “as if” formulation. I’m pretty well read on the subject and am not aware of any. That they use a different concept of free will from libertarians I’ll grant. As mentioned in my prior thread, I think it’s more accurately described as responsible agency. But “as if” suggests dissemblance, which isn’t accurate. Rather, as you quoted from Hallet, they argue that "[a] person’s brain is clearly fully responsible, and always responsible, for the person’s behavior."

BTW, I reject the notion that philosophy owns this issue. In the main, it’s one of science, public policy and social interaction. Which is a good thing, as I can’t think of a single interesting or important question to which philosophy (as modernly defined) has supplied a useful answer.

Compatibilists believe in Determinism- that all behaviour is determined by non-mental processes or that mental processes themselves are determined. They deny Free Will as a cause in the same way that Hard Determinists do. What they claim is that it is possible to speak of Free Will even though it has no effect on changes in the world, which are entirely determined.

A Compatibilist who believes that Will is causative is in fact a Libertarian.

So when Compatibilists speak of Free Will being an object in the world, they do not mean that Free Will causes behaviour, merely that one can speak AS IF that were the case.

Adding to the above, what we have here is a separation between Ontology (what there is in the world and how it operates) and Ethics (how should one act).

Compatibilists accept that Free Will is absent from the world in its ontological sense- it is not a material part of the sequence of events that cause change in the world (as SEP has it- “the facts of the past, in conjunction with the laws of nature, entail every truth about the future.”)

What they say is that when discussing Ethics it is possible to talk of Moral Responsibility, even though effectively the only basis for such responsibility is “the facts of the past, in conjunction with the laws of nature, entail every truth about the future.” That is to say that they speak AS IF Free Will was acceptable as part of the explanation of human behaviour.

I straddle the Philosophy/Neurology divide. The best quote I have come across is from a Neurologist who says that Philosophy has produced no correct answers but is very good at asking the correct questions, whereas Neurology is good at providing answers but has problems deciding on what question to ask!

Pjen-

It is specifically against the rules to inform another poster that you are putting them on your ignore list. I’m giving you a warning for this. Please don’t do it again.

These definitions are remarkable.

“In ethics and governance, accountability is answerability, blameworthiness, liability, and the expectation of account-giving.”

Okay.

“Moral responsibility is the status of morally deserving praise, blame, reward, or punishment for an act or omission, in accordance with one’s moral obligations.[1][2] Deciding what if anything is morally obligatory is a principal concern of ethics.”

Okay.

So: moral responsibility is about deserving blame in accordance with obligations as per ethics, and accountability is about – blameworthiness in ethics?

“In leadership roles,[2] accountability is the acknowledgment and assumption of responsibility”

That’s beautiful, defining one in terms of the other. (It then goes on to talk of encompassing obligation, because of course it does.) See also: thesaurus.

You have added nothing of any value to this discussion. You have admitted previously that this is a topic you aren’t interested in. Your entire contribution to this discussion seems to be “I don’t claim anything, ::pretend surprise:: you have done that yourself, I’m just saying everything is the same either way [insert sardonic comment], I don’t know why you assume I care.”

People have a will, but it is constrained and limited and so not free. In the same way that a person trapped in a prison cell and limited by its rules is not a ‘free person’, this limited will is not ‘free will’.

“Oh”, but you will say, “I’m not interested in whether it’s free will, merely noting that by your own definition it resembles free will either way. I don’t have any opinion of my own, nor any interest in this topic, I have just been in this discussion for page upon page, purely to muddy the waters of what is being discussed. Insofar as muddying the waters is my only concern, you needn’t expect me to make any cogent argument, what makes you think that is important to me?!”

Or, at least, this is what you’d say if you were being straightforward, about 25% of that is merely implied.

In the same way that a person who can go anywhere except Iran is not a ‘free person’. Limited, so not ‘free’. Or, as I’d say: that actually still seems pretty free. (How about a person who can go anywhere except one city in Iran? Anywhere except the moon?)

Muddy? Lack of interest? I’m decidedly interested in clarifying whether it’s a distinction without a difference as per the proffered definition. That the definitions given for ‘accountability’ and ‘responsibility’ so resemble each other (a) fascinates me, and (b) deserves mention.

I flatly stated the above rather than implying it. List a bunch of items one by one and I’ll expressly tell you which ones I’ll straightforwardly endorse.

Or even five minutes ago!

Suppose someone wants to predict whether or not I’m going to lose my temper two hours from now. First, they’ll have to have a perfect model of the local bus system, to predict which bus will be late. Then they’ll have to have a perfect model of the local store clerks, to know which clerk will make an error in filling out a purchase order. Then they’ll have to have a perfect model of every bird in flight, to know which pigeon is about to poop on my lapel…

To predict an individual’s behavior, you’d have to have a perfect model of every detail of that person’s entire environment. You can’t predict me without predicting my entire cosmos!

That’s a good pragmatic viewpoint. Even if it is all an illusion…we’re stuck with it. We must continue to act as if our decisions are real. What else? Just lie down on the floor and wait for events to happen? Just “hope for the best?”

If we make decisions, take actions, and engage in our world as if we had causal agency, we will find ourselves happier, safer, wealthier, healthier, and saner than if we surrender to the notion of determinism.

I’ve been staying away from the “criminal justice” aspect of the thread, largely for the reasons given in my last post: even if the criminal has absolutely no personal choice in what he’s done…we, as a society, must continue to act as if he did.

(What are we supposed to do? Oh, it was all predetermined; let everybody out of all the jails?)

(I agree with everything you said here, by the way.)

FWIW, my opinion is that the Chinese Room does “know” and “comprehend” the Chinese language, and can read and write it with full understanding of the words’ meanings.

The thought-experiment was intended to belittle AI…but I take it as an affirmation of AI instead. The Chinese Room speaks Chinese, just as the AI robot is aware and conscious.

Nope.

You have (once again) completely failed to state my beliefs for me. I’d recommend you stop trying.

Well, what is your position based on if not faith? Not religious faith, just belief without empirical support.

Interesting. There are few who now oppose Searle’s argument.

How do you empirically define consciousness?

I am aware that I am conscious (have qualia) by immediate awareness; I make the rational assumption that other adult humans with similar world histories share the same experience. I also reason, by a process of comparison, that many animals have qualia. I do not believe that plants do, nor non-biological objects, on the grounds that the process seems to rely on complexity of information exchange together with some feature that allows qualia. Computers have the facility to exchange information, but no computer to date has ever turned around and made a statement about experiencing qualia. There is more to consciousness than merely having information; what is needed is information plus an ability to connect that information to a construction of the real world, which seems to require some form of qualia which are absent from machines- at least they have not taken the trouble to communicate that information to us in the way that every newborn child does over the years of its development.

I suspect that consciousness came about in humans not solely because of information processing ability but because this ability was built on a substrate of non-conscious reactivity developed through an evolutionary process- the ability to control the internal and external environment by homeostasis and action- to be an animal with basic senses and drives in an environment which advantaged those animals with the ability to react the best. Computers and the Chinese Room lack these developmental processes (both of the individual and of the species) that have produced entities that not only react to information, but are also able to experience qualia and to add meaning to information.

I too used to believe in strong AI until disabused, in much the same way that my adolescent acceptance of God became empirically indefensible.

I would say, personal experience, as I make decisions regarding my future on a daily basis. I would say that anyone who claims, “This is only an illusion” is the one making a declaration on the basis of faith.

Empirically? Can’t be done. There is no way to prove that another person is conscious. It is only an assumption that there are other minds in the world beside one’s own.

That “rational assumption” is a leap of faith.

AI cannot be disproven, exactly, although most certainly the technical requirements are vastly beyond our capabilities today. Given a 100-times increase in computer capabilities – not entirely beyond the reach of reason, given the increases seen in the last half century – what could prevent simulative AI modeling of intelligent minds?

Your own views seem to be faith-based as much as you think mine are.

I find Searle’s argument to be not particularly meaningful. Assume that we had developed sophisticated enough software that you simply could not discern it from a truly intelligent being – the device could claim convincingly that it was self-aware, and we would have no way to gainsay its claim. There is no incontrovertible test for self-awareness.

In fact, I know what my qualia are like to me, but other than discourse, I have no real way to compare them to yours. I assume you are self-aware in the same way that I am, but I have no way to be absolutely certain. It is like a man trying to understand a woman’s sexual experience, or vice versa: we can only guess, based on what we know from in here.

I understand, for instance, that some people are suicidal, but the idea of that is not something I can wrap my head around.

Personally, I do not think that a computer can ever truly be self-aware in the manner that we are, because biology is a crucial component of that. But I do still think that the part of us that deals with reason is pretty mechanistic – after all, we designed computer hardware and software based in part on how we know our own minds to work. Our intelligence is little more than really good dynamic logic circuitry, and when you combine that with biochemistry (our feelings), you get a fairly complete map of human behavior, with no room left for this vaporous “free will” notion.

If that is deterministic, down to the level of lepton collisions, so be it. Arguing otherwise will not make determinism go away, nor will determinism make us feel like our choices are any less autonomous than they seem to be.

Grin! In the longstanding tradition of Great Debates, I have singled out the only thing in your post that I don’t agree with – but at very least I thought I ought to mention I strongly agree with everything else you said!

(My opinion is that we can emulate every possible biological structure and process, and that the emulation would have the same “awareness” that we do.)

Definite agreement on the ultimate failure of any objective test for self-awareness. This is the joy – and tragedy – of the Turing Test. We can already produce systems that can “converse” well enough to fool young children, at least. Just as with chess-playing machines, this kind of brute-force fakery will only get better and better. It doesn’t prove anything in itself, either way.

This is one of the problems with the argument, “Would you believe in God if you actually met him?” The sad truth is that there are plenty of possible “powerful entities” that could persuade me that they are God. Why wouldn’t Loki or Hermes be able to trick me into thinking they were the Judeo-Christian-Islamic God?

Eventually, as AI tech gets better, it’ll be the “false positives” we have to worry about. (Like the movie “Her” about the guy who falls in love with his personal computer’s pseudo-personalized operating system.)

My point is a narrow one, but important. “As if” is your characterization of the compatibilist position, not how they themselves argue it. Anyone interested can confirm this by reading the rather long and technical Stanford Encyclopedia of Philosophy article on the subject (written, as is entirely appropriate IMHO, by a hard determinist). As noted in the article, rather than “as if,” the position is grounded in the argument that responsibility is compatible with determinism. I don’t find the philosophical argument persuasive, but reach a similar conclusion on other grounds. In any event, ascribing to compatibilists a position they don’t actually hold is a bit disingenuous. Be accurate and call it a criticism.

You miss my point about the difference between Ontology and Ethics. Ontologically, Responsibility in its hardest form is impossible. To be responsible is to be the cause of an event; this requires that one’s mental state be part of the causal chain- that desires and drives are prime causes. If this were to be the case, one would be verging on libertarianism. To be accountable is to be subject to a social expectation- that one should reasonably give an account of why one acted in such a way- to describe one’s motivations and drives as experienced by the person.

I see there being two broad arguments for Compatibilism:

1/ Brain events cause behaviour, and so we can call to account the person owning that brain (where ‘person’ is a social construct).

2/ Drives and Motivations are higher levels of description of brain events, contributory to but not causative of the actual behaviour.

I think both can be fairly described by the phrase “As If” when compared to Libertarianism. I subscribe to the latter argument as I have detailed above.