I guess I should say neo-determinism, since I, like most determinists, am actually pretty happy to include “truly” random elements, if any exist, in the model, since they really change nothing in terms of questions of experience and volition. And again, I don’t think neo-determinism IS very interesting, because it’s basically just the pretty much unfounded yet unavoidable-for-any-discussion-at-all assumption that the world is explicable.
Well, erl, if you’ll permit me to continue to be the hyper-Calvin to your Hobbes, I would have to badger you a little about what we need to do to our chess computer to achieve all the necessary elements of “will”. ‘Motivation’? How about simple maximisation of a buffered pleasure-pain score? If the underlying cause of the motivation were arbitrary, then cannot any old thing be programmed in, and we can pretend that such a motivation evolved in a similar manner to our “thirst”?
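Just to make the toy concrete, here is the barest sketch of what I mean; every name and number in it (the buffer length, the candidate moves, the scores) is something I am inventing purely for illustration, not a claim about how any real chess engine works:

```
from collections import deque

# A toy "motivated" chooser: "motivation" is nothing but keeping a buffered
# pleasure-pain score as high as possible.
class ToyChessMotivation:
    def __init__(self, buffer_size=10):
        # Recent pleasure/pain outcomes (values in [-1, 1]).
        self.buffer = deque(maxlen=buffer_size)

    def choose(self, candidates):
        # candidates: list of (move_name, predicted_pleasure_pain) pairs.
        move, score = max(candidates, key=lambda pair: sum(self.buffer) + pair[1])
        self.buffer.append(score)
        return move

agent = ToyChessMotivation()
print(agent.choose([("advance pawn", 0.1), ("hang queen", -0.9), ("fork rooks", 0.7)]))
# -> fork rooks
```

Obviously nothing in there “wants” anything in any deep sense; whether that matters is rather the question I am badgering you with.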
Of course, but as ever we ask “are more explanatory entities necessary?”, do we not? Are they to you?
I can’t answer the question the way you are asking it, but I can attempt to give an answer to what I think the question should be.
As things are, we are slowly coming to attribute “mental properties” to computers, and especially to software. Now I mean this in a strictly grammatical sense: we are becoming accustomed to saying things about programs, and computers, in terms that were previously only used for other humans, or more broadly pets, and possibly animals in general. I mean things like “wanting”, that “This function wants a reference to an integer,” or, “This cell needs a value between 0 and 50,” or “I am so sick of Word being helpful.” Does this mean that we are attributing mental events to computers and the software that runs on them? Well, I think the answer depends on what you are counting as a “mental event”. “Doing a calculation in one’s head,” for example, is not an explanation of neural phenomena, so it is not a “mental event” in that way; but, inasmuch as we implicitly correlate social responses to brain activity, we could say that calculating in the head is “a mental event.” We might suggest, as a first pass, that a mental event is the pause between two public responses. I think we just need to be careful with “mental events”, as with “the will”, since such terms are not explanatory in a natural science sort of way. I believe we do ourselves wrong when we approach them so.
So, it will simply take, as RaftPeople mentioned, increasing levels of complexity before we more or less fully give over our turns of speech to computers having mental events, or intending, or willing, or any of the more complicated social phenomena that we already have a place laid out for in language we have until recently reserved only for certain living beings.
There will not be a definite cutoff point, a measure of complexity at which we can definitively say, “There, now computers are intending their actions.” But that is because, as I’ve said, such terms did not select empirical objects in the first place, so no amount of explanation, qualification, or reduction will replace them.
Well, I do not use will as an explanation, so there is nothing to posit in my account. 
If I read that post and no others, erl, I would swear that you were an eliminativist!
BTW, I think more than complexity is needed… I think for computers to be more like humans they need to actively learn and explore like babies do and so develop their own beliefs and goals. It would probably take years for it to learn to be intelligent like a human adult, which is also the case with humans. I mean I think things like language should be learnt by the artificial brain rather than being programmed in. It would learn for itself what the words actually mean by experience. I mean there are some programs where you can kind of have typed conversations with computers, like “ELIZA”, but it isn’t understanding what you’re typing. And I think a far more complex version of the program still wouldn’t understand what’s going on. It is its programmer who understands it all. On the other hand, if the artificial brain learns from its own experiences (like a human infant) then it truly would understand what it’s talking about and understand and appreciate its goals, etc. (Well, to reflect on its goals/“will” in a philosophical way it would need to have the experiences and self-motivating learning equivalent to maybe a kid or teen.)
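To show how shallow that kind of program is, here is a bare-bones sketch in the style of ELIZA. The patterns and replies are just ones I’ve made up (the real ELIZA script is much richer), but the principle is the same: canned pattern in, canned reply out.

```
import re

# Canned pattern -> canned reply; nothing here "understands" the input.
rules = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r".*"), "Please go on."),
]

def reply(text):
    for pattern, template in rules:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())

print(reply("I am worried about my goals"))  # Why do you say you are worried about my goals?
print(reply("The sky is blue"))              # Please go on.
```

Notice it even parrots back “my goals” instead of “your goals”; it is just shuffling strings around, with no experience of anything behind them.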
I don’t know, John, I think you expect far too much from will and intentions. I believe we will ascribe these concepts to computers long before they act like babies/kids/teens. We don’t interact with computers like we interact with people. If I say, “Damn autocorrection!” and my boss says, “Word is just trying to help,” is this the same meaning of “try” as when he tries to help me?
Something to think about.
erislover:
Anthropomorphism isn’t proof of human-level mental states… people can even talk about plants or thermostats in similar ways…
But then talking about other people having “mental states” isn’t proof of them having them, either. I’m suggesting that when a word jumps into everyday language in one spot from another spot we are accustomed to, one can see why the word would jump like that, so long as one doesn’t read too much into its meaning in the first place. Marriage. Will. Mental states.
Speaking of “will” and “minds”, think of the saying, “It’s like it had a will of its own,” or, “I’m telling you the thing had a mind of its own.” Are you familiar with this expression about objects? And what do you suppose the person was thinking of or meaning when they said it? That “Surely there was some neurochemical reaction there!”? And why would you think so? – are you thinking about brain chemistry when you ask someone what’s on their mind?
There would be lots of reasons to believe that other people besides you have mental states… e.g. when they appear to be screaming in genuine pain, this implies that internally they are experiencing the kind of unpleasant sensations you would be experiencing if you were acting the same way. One reason to believe they experience things in a similar way to you (feel pain, etc) is that we’re all just humans. So the idea that other people have mental states isn’t just an assertion with no evidence.
It depends on whether the person is serious when they say “I’m telling you the thing had a mind of its own”. But there is a slight chance the person believes that everything, including rocks, possesses consciousness. And a statement about the presence of a mind says nothing about the presence of neurochemical reactions. I mean, an artificial brain could just rely on electronics.
“What’s on your mind?” is a question which is asking about the information or data in your brain. The mechanisms behind it all are irrelevant. A similar thing is how information can be stored in computers using optical discs or magnetism, etc, but if you’re only dealing with the software, the physics of the hardware is irrelevant… It is possible to deal with the mind (psychology/cognitive science/etc) while ignoring the underlying hardware (neuroscience)…
I would agree with this in the sense that we can say that a person is in pain when that person gives us certain public signals. The exact characteristic of these signals is not in question: if I have learned the word ‘pain’, then I know when to use it. In normal circumstances, its use is clear and I apply it without reductively examining the situation, without appealing to biology or neuroscience: he is holding his arm and moaning, and I say he is in pain, and no one would normally question that I know that, and certainly no one would normally question what it really is that I know.
I’m not accustomed to it being a joke. Perhaps you are thinking of it as hyperbole. “Leaves blowing in the wind don’t have a mind of their own, not like I do.” But all that means is, “I do not speak of leaves like I speak of people, except perhaps in a few outlying cases.” It is not a remark conditioned on the natural sciences. But you do not normally question what someone means when they suggest that it was like the lawnmower had a mind of its own. We might interpret that phrase to suggest that, in English, we say an object has a mind of its own if we are unable to force it to behave in a way it ordinarily can behave. A lawnmower can simply be pushed in a straight line. As can a shopping cart. But sometimes they seem to just go in whatever direction pleases them.
That is not a mistake. Nor, I think, is it anthropomorphizing them. It is using the word correctly for its context. Does it mean the same thing? Well… is it used the same way? When do we, outside philosophizing, remark about someone’s mind or will or intentions? Sometimes, lawnmowers or shopping carts do act like people when they act in unexpected ways. And that is why the word is easy to transfer. But other times they do not act like people. I have no idea how to suggest that a shopping cart has a mental state, because I wouldn’t know in what circumstances I would use that phrase. So, personally, it is safe to say that shopping carts don’t have mental states. But this is not an explanation of what mental states are.
Computers, on the other hand, can exhibit behavior independent of the operator’s intentions. Often, we say physical machines are “not working right” when that happens. But in some circumstances, we will use the expressions that are often used for other people for machines. In the cases where those expressions are suggestive of “mental states”, then it is fair to say that computers have mental states. It is just a remark about the use of a word, not a proposition of science.
In what sense is it IN my brain? Do I open it up and look with my eyes to see what is inside? When you ask what’s on my mind, are you frustrated that you cannot open my skull to see? Of course not. But that is not supposed to invalidate the use of “in” in your sentence. You used the word correctly in the sense that, if you asked, I would give the response you expected. But would it be correct to say that I had to go find out what was on my mind? Is it something I have to investigate? Do I somehow ask myself, “What information is being processed in my brain?”, find the answer, and then pass it along to you?
None of these questions would occur to us… except when we are philosophizing.
Exactly. And so that is not possibly what we could be “referring to” when we use the words. So then: can computers have mental states?
You talked about shopping carts going in “whatever direction ‘pleases’ them”. “Pleases” implies a mental state. But of course, you wouldn’t be talking scientifically or philosophically there… I thought this thread was about actual wills and minds, etc, rather than metaphorical, etc, ones.
Well I am concerned with actual mental states of computers (if they exist) rather than ones we talk about in a metaphorical (or whatever) way…
The recalled memories could be sent to the visual areas of the brain so in a sense you’d “see” visual memories. (just guessing - I haven’t studied neuroscience)
The data in the brain would be encoded in the neurons - like how information on a computer can be encoded as magnetic patterns on a computer’s hard-drive. If I physically open up a hard-drive I obviously can’t see videos or games, etc - just some metal, etc. To decode the information you’d need some extra hardware… e.g. other computer bits.
What’s “on my mind” would involve what’s in my short-term memory so it is readily accessible… it would take the brain some effort to communicate that information to others though (through spoken words). “What information is being processed in my brain?” seems to imply a more detailed response and so more detailed introspection, since not everything being processed by your brain is “on your mind” (you’re not always conscious of it) - e.g. your brain processes your movements while you’re walking - but the mechanics of keeping your balance and placing your feet isn’t usually something that is “on your mind”.
Well that’s what I think human-level self-awareness is about - this capability. And the artificial brains I was talking about would have the same capability.
Rather than rewrite what I’ve written in another thread I’ll post a link to it…
http://www.iidb.org/vbb/showthread.php?t=106682
In my “hierarchy of intelligent systems”, most computers would be classified as:
“1. Processing Systems [or Programmed Systems]
…receive [or detect], process and respond to input.”
So they wouldn’t be classified as an “aware system”. Since “mental” is a word which relates to the mind, and processing systems aren’t aware, they wouldn’t have actual minds and so they have no mental aspect including “mental states”. They could have similar things though, like priorities and goals.
Was I being metaphorical? Since the inception of the word “mind” or “will” we’ve not had the grasp of neuroscience necessary to provide a physical account of these words, so how could that possibly be what they mean?
But that is precisely what we have to work with. 1) How we talk about 2) how things behave. The English language has been used by people for centuries. Do you suppose we’ve just been clever enough to have placeholders for concepts-yet-to-come? You want to talk about actual mental states. I thought I was.
Neither had Shakespeare. Do you suppose he was using “see” improperly for it? Was he using it in anticipation of understanding that would arrive later? Or is it possible that “seeing” does not mean “processing photons striking receptors in the eyeball”? (And never has, for that matter.)
What information is there to decode? I ask you how you’re doing, and you say, “I’ve been a little depressed lately. The holidays always do that for me.” Is “depression” a “mental state”? Do you have it? Must you be versed in psychology to use the word? Maybe you should have said “malaise” instead. Or “melancholic”…
That is a strange picture of events. That we have this information, and we know what it is, we just only have to find a way to express it. That, for example, I know my own feelings, I recognize them immediately, but the expression of those feelings–that takes work. But we aren’t asking that question. We’re asking about how we recognize a will or, for that matter, a mind. To me, we are asking a question about the context of a word that we’ve used for years now without any apparent problem or significant misunderstandings.
I have never been asked, “What information is being processed in your brain?” If I were… well I suppose I would remark, “Do you mean, what am I thinking about?” If so–it is a strange way to say it.
Hmm. How would you know to attribute this ability to them? By analyzing the transistors, or by interacting with it and asking it questions?
What counts as “awareness”? We might say, “The ADT security system is capable of detecting the presence of warm bodies inside the building.” We might also say, “The ADT security system can tell whether there are intruders in the building or not.” We might also say, “The ADT security system knows when someone is in the building.” Personally, I have heard expressions like this countless times, and it never struck me as improper, and I don’t recall thinking, “Well, he’s really speaking metaphorically.” Perhaps you would do otherwise.
erislover:
…You want to talk about actual mental states. I thought I was. …
You were talking about a shopping cart that seems to have “a mind of its own”…
see (notice the “mainly humorous” part)
http://www.freesearch.co.uk/dictionary/have+a+mind+of+its+own
Let’s say the person about to use the cliche believes that only animals such as insects, fish, birds, reptiles, mammals, etc, have minds. If they then talk about a shopping cart having a mind they wouldn’t be saying that the shopping cart has an intelligence or awareness at least equal to an insect…
BTW, the presence of a mind is related to our moral treatment of the object, I think… e.g. if a foetus or brain-dead adult human isn’t considered to have a functioning mind then its life often has less value than a normal human’s. And animal-rights activists are normally more concerned about mammals compared to insects - perhaps due to the perceived differences in the “minds” of those animals (and whether they think it is capable of feeling pain). “Sentient” is a word that can be used to describe things with minds… if that applies to shopping trolleys it implies that it could be cruel to do things like beat one up with a sledgehammer… Even without modern science, people still have beliefs about which things are sentient. e.g. in some religions, due to reincarnation, even worms and insects are considered sentient and very important.
…Or is it possible that “seeing” does not mean “processing photons striking receptors in the eyball”? (And never has, for that matter.)
I think so… check out
http://mathworld.wolfram.com/ScintillatingGridIllusion.html
http://en.wikipedia.org/wiki/Image:Optical.greysquares.arp.600pix.jpg
Those additional sight sensations would be due to the processing of the receptor data… we don’t just “see” things in an unprocessed way… And we have the sensation of seeing things that aren’t in our current external environment - during visual hallucinations… (and there’s dreams, etc)
What information is there to decode? I ask you how you’re doing, and you say, "I’ve been a little depressed lately. The holidays always do that for me."
Well the speech that comes out of you isn’t just stored in a little sound-storage container waiting to be released… there is information stored in your brain… e.g. the English words that can be used to describe your emotional state, information about English grammar, etc. The configurations of the neurons, etc, are decoded and processed and this leads to you talking. (a clumsy explanation) This is similar to how 0’s and 1’s can be stored magnetically, and this can be decoded as audio, video, programs, etc.
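Here is a tiny, made-up illustration of what I mean by “decoded”: the very same four bytes read as text, as one number, or as four separate values, depending on how you choose to interpret them.

```
import struct

raw = bytes([72, 105, 33, 63])       # four arbitrary byte values

print(raw.decode("ascii"))           # decoded as text: Hi!?
print(struct.unpack("<I", raw)[0])   # decoded as one little-endian 32-bit integer
print(list(raw))                     # decoded as four separate numbers: [72, 105, 33, 63]
```

The physical storage (magnetic, optical, neurons) doesn’t appear anywhere in that; only the decoding does.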
Is “depression” a “mental state”? Do you have it? Must you be versed in psychology to use the word? Maybe you should have said “malaise” instead. Or “melancholic”…
Psychologists can diagnose “clinical depression” but it is common for people to say they’re feeling “depressed”… “malaise” and “melancholic” aren’t widely used by the general public AFAIK.
How would you know to attribute this ability [philosophising] to them [artificial brains]? By analyzing the transistors, or by interacting with it and asking it questions?
I’d see if they question the world in a philosophical way without being asked to or programmed to… I mean they’d start life like infants and I’d see if they’d eventually be like toddlers and spontaneously ask questions like “why is the sky blue?” and “Are you going to die? Am I going to die? What happened to Grandma?”, etc.
What counts as “awareness”?
See what I wrote here:
http://www.iidb.org/vbb/showthread.php?t=106682
(it is a little too long to repost)
…The ADT security system…
That could be classified as
“1. Processing Systems [or Programmed Systems]
…receive [or detect], process and respond to input.”
under my ‘hierarchy of intelligent systems’… but I don’t think it is “aware” since it doesn’t really act according to self-learnt beliefs, etc.
Which returns us to the previous comment: we do not normally speak of shopping carts like we speak of, say, humans, except perhaps in some outlying cases. It is fair to say that this is all very mundane stuff.
Quite so. Now, must they be natural scientists to speak of mammals having a will or intentions or motivations?
Right. Because science is not the mechanism which grounds language, in general.
More “processing”, less “seeing”. Point something out to someone, only instead of using “see that over there?” rephrase it in this “processing” way… however that is to be done. I wonder what looks you’ll get. In what cases, generally, is a reductive account of phenomena able to replace the expression of that phenomena?
When you ask someone how they feel, is that what you’re thinking about? Binary?
Oh, well, strange for you to be bringing the general public in at this point, when we’ve been dodging what people normally say all this time, to try and get to the bottom of "will"s and "mind"s where, apparently, how people normally use these words is not at all indicative of what you’re looking for. To me, if how people normally use “will” is not indicative of what you’re looking for, then quite plainly you are not looking for “will”.
But this could be created at this very moment. For every set of actions, we may create a set of rules that will determine them. Does this mean “there is no will, only ‘processing’” (whatever that means), or does this mean “We are free to use ‘will’ in its normal way now”?
I think this can be accomplished with computers in the following manner, which in the end becomes what we would call the “programming”:
Goals, drives, motivations etc.
Learning
Interacting with/sensing the environment
(and I’m sure much more)
I can imagine that abstract analysis about one’s existence could be a by-product of the above listed items.
It is merely another thought path to be investigated to see if it will result in achieving one of the embedded goals.
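Very roughly, and only as a sketch (every name, option and number below is a placeholder I’ve invented, not a real design), the loop I have in mind looks something like this:

```
import random

class SketchAgent:
    def __init__(self):
        self.goals = {"stay_charged": 1.0, "explore": 0.3}   # embedded goals; weights made up
        self.memory = {}                                     # learned (situation, action) -> reward

    def sense(self, environment):
        # A hashable snapshot of the current situation.
        return tuple(sorted(environment.items()))

    def choose_action(self, situation):
        # Pick whichever action has worked out best in this situation so far;
        # unseen situations get a random guess, which is where exploration comes from.
        options = ["recharge", "wander"]
        return max(options, key=lambda a: self.memory.get((situation, a), random.random()))

    def learn(self, situation, action, reward):
        # Crude update: remember how well this action served the embedded goals here.
        # (A fuller sketch would derive the reward from the goal weights.)
        self.memory[(situation, action)] = reward

agent = SketchAgent()
situation = agent.sense({"battery": "low"})
action = agent.choose_action(situation)
agent.learn(situation, action, reward=1.0 if action == "recharge" else -0.5)
```

Nothing special happens at any single line; the “drives” are just numbers and the “learning” is just a table being filled in, which is exactly why I think the abstract analysis would have to emerge as a by-product rather than be written in directly.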
No… as humans learn about socialising and empathy, they learn how to imagine what goals, etc, someone else would have. e.g. when kids get to an old enough age, when they are told a children’s story, they can be asked “how would that character feel after that happened?” and the kid would give a pretty good answer. I think this skill is also used in manipulation and maybe also deception - the kid basically predicts the other person’s knowledge and desires, etc, and takes advantage of them. In a similar way, ordinary people can observe animal behaviour and interact with the animals and arrive at a belief about whether that animal has desires, goals, etc.
I was talking about 1’s and 0’s because it is an easy to understand example where information (audio, videos, etc, based on 1’s and 0’s) is decoded from a physical source (a magnetic harddrive). BTW, within software things can also be encoded and decoded… e.g. in the case of video there are “codecs” which involve raw video being compressed and encoded in a format. Then later it can be decoded (decompressed back to raw video).
I never said the brain worked in binary. I think there are degrees of emotions… e.g. you can feel a little surprised, or slightly more surprised, or a little lonely, or a little less lonely, etc.
People have a pretty good idea if they are “feeling” depressed. BTW, in clinical depression AFAIK, the patient would normally be aware that they feel depressed - since often it would be them who decided to go to a doctor about it in the first place… so our judgements on our depression are fairly relevant when it comes to clinical depression. But the word “mind” in “the shopping cart had a mind of its own” isn’t very helpful for people who want to learn more about minds. I guess it shows that minds cause somewhat unpredictable behaviours - and so would the alleged “mind” of a shopping cart. I guess the “goal” of a shopping cart could be to frustrate you by refusing to steer properly… but there isn’t an intelligent mechanism behind this. i.e. I don’t think it has senses that take in information about your steering ability and then decides what to do in order to make your steering harder…
Well there are many sci-fi stories about this… e.g. I, Robot… if it was possible surely AI people, who often are sci-fi fans, would try and do it… after all they are doing lots of other AI projects.
If the actions are explicitly programmed in, then the programmer is directly responsible for the behaviour… it isn’t due to genuine insights of the AI.
BTW, this is ALICE:
http://alice.pandorabots.com/
Its purpose is for conversations but its major limitation is that it doesn’t seem to have any memory.
But anyway, the kinds of “why” questions I’m talking about involve the toddler subconsciously checking their “knowledge base” to see if their understanding of things is complete or at least consistent. This involves a lot of computational power since the toddler could have many thousands of different experiences and there could be thousands of relationships between elements of those experiences. e.g. they could have learnt that birds and planes can fly, that planes are at airports or in the sky, that they haven’t seen a plane in real life, that planes can crash, as well as the words that can be used to describe all of that.
In order for a computer to ask insightful “why” questions it has to already know a lot… and it also has to learn from your answers to those why questions so that it no longer displays the naivety and incompetence that the usual human-imitating AI displays. Basically this can’t be made yet.
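Here is a toy version of the “check the knowledge base for gaps” idea, just to show the shape of it; the facts and the triple format are ones I’ve invented for illustration. Any fact with no stored explanation turns into a “why” question.

```
# Facts stored as (subject, relation, object) triples; None means "no explanation learnt yet".
facts = {
    ("planes", "can", "fly"): None,
    ("planes", "can", "crash"): None,
    ("birds", "can", "fly"): "they have wings",
    ("planes", "are at", "airports"): "that is where they take off and land",
}

def why_questions(knowledge_base):
    for (subject, relation, obj), explanation in knowledge_base.items():
        if explanation is None:
            yield f"Why {relation} {subject} {obj}?"

for question in why_questions(facts):
    print(question)
# Why can planes fly?
# Why can planes crash?
```

The hard part, which this toy skips entirely, is learning those facts and relationships from experience in the first place, and then folding the answers it gets back into the knowledge base.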
I’d say the “will” is basically a series of goals, which are a result of processing your priorities and your current situation. (something like that) i.e. I think the “will” exists, although I don’t really like the word - it seems old-fashioned or something. I prefer “goals” or “desires” or something.
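If that picture is roughly right, a deliberately crude sketch of it (the priorities and weights are all invented) is just a function from priorities plus the current situation to a ranked list of goals:

```
def current_will(priorities, situation):
    # Keep only the priorities the situation makes relevant, ranked by their weight.
    relevant = {goal: weight for goal, weight in priorities.items()
                if goal in situation["relevant"]}
    return sorted(relevant, key=relevant.get, reverse=True)

priorities = {"eat": 0.9, "finish post": 0.6, "sleep": 0.4}
situation = {"relevant": ["eat", "sleep"]}
print(current_will(priorities, situation))   # ['eat', 'sleep']
```

That is all “will” would amount to on this view: not an extra explanatory entity, just the output of that kind of processing.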