Artificial Intelligence and evolution

That’s actually not a meaningful question. The interesting thing is that one can entirely bypass the quantum-mechanics quagmire about whether the universe is really deterministic or not, because it totally doesn’t matter. Consciousness – by which I presume you mean “free will” – has absolutely nothing to do with it. Whether quantum events are completely deterministic but subject to hidden variables, or completely random, or just appear that way – whatever, it doesn’t matter. We are part and parcel of the quantum universe. If I exercise my conscious free will to go to a movie tomorrow, or change my mind and decide to stay home, why would I care if there is some sense in which it was actually all deterministic? I have free will because my conscious mind thinks it’s making free decisions. Any other interpretation is a silly, unknowable, philosophical nitpick.

Some people feel they have real choice. It’s interesting to contemplate.

We do have a real choice. Nothing I said contradicts that. The idea that the universe may be deterministic is a counter-intuitive concept that really derives from quantum theory and is well beyond the level of everyday experience. The conflict between determinism and free will is primarily in the idea that if all your actions tomorrow are already knowable, then you have no real choice in them. Except that they’re not knowable. Intrinsically and fundamentally not knowable. Apparent quantum randomness, Heisenberg uncertainty, and computational resource limits all conspire to make a deterministic future unknowable. Some argue with equal vigor that determinism doesn’t exist at all. What’s the difference? Would you prefer your future to be predetermined but unknowable, or would you prefer it to be batted around by a cosmic random number generator like a bouncing ball in a pinball machine? Does one give you choices in the humanly experiential sense that are more “real” than the other?

Disagree.

Any given physiological or behavioral characteristic has to have a first instance. It might be tough to get agreement on which benchmark to set, but that’s a different matter than claiming there are no distinctions between any two generations.

But it is predetermined that they feel that way.
I think it is actually undecidable, since even if it were true that your decisions are a deterministic function of your external and internal environment, you could never compute that choice fast enough to predict what it would be before you make it, and your internal state would be unavailable to anyone or anything making that calculation in any case. So even if we don’t really have free will, it seems that we do to all intents and purposes.

In many interpretations of quantum theory there isn’t really a deterministic outcome… e.g. in MWI all of the possible outcomes become real but the universe “we” end up in is random. Or there is only one universe and the outcome is random. This would especially be the case when decisions are on the threshold and quantum fluctuations decide the outcome.

I suppose it’s the difference between just following highly sophisticated programming logic to mimic behavior and having an actual sense of self. A chess computer doesn’t know it’s playing chess.

But, again – and I keep harping on this point, but to me it’s the point – imagine that you and I are playing chess while having a warm and witty conversation that (a) is shot through with literary allusions and slang expressions and a lively debate about religion and the occasional sly jab at each other’s politics; and that (b) includes knowing references to how you and I are, y’know, playing chess.

Can you tell that I know I’m playing chess?

I mean, I said so – but that’s what I’d do if mimicking human behavior like a chatbot with “highly sophisticated programming logic”. As far as you can tell, I seem to have “an actual sense of self” – I certainly act like I do – either because I actually do, or because that’s what mimicry looks like.

Say you’re having an online chat with someone who declares “I think, therefore I am”: is that the truth, or is it mimicry of someone who was telling the truth?

I think our concept of consciousness comes from our egos. A chess-playing computer is aware that it’s playing chess; it’s just not aware of anything else. Or to put it another way, awareness in that sense requires the ability to compare to a different state. I don’t think consciousness is tightly tied to intelligence either; it’s just that it should be achievable by anything we call intelligence. On the other hand, there can be plenty of consciousness without intelligence. The computer I’m typing on is aware of a lot of things right now, just like the insects crawling on the ground outside, but does not have what we call intelligence.

It’s a distinction without a difference. As for “not knowing” that it’s playing chess, how do you know what it “knows”? What does that even mean? We tend to want to ascribe irrelevant human attributes to machine intelligence and then become dismissive when it doesn’t exhibit some irrelevant attribute like maybe getting mad when it loses. One of the reasons IBM’s Watson did so well at Jeopardy is that, as IBM correctly stated, it “knows what it knows, and knows what it doesn’t know” and strategically placed its bets based on its confidence in its answers.
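(To make that concrete, here’s a minimal sketch of what “knowing what it knows” can mean operationally. The scoring, the threshold, and the wager rule are all made up for illustration; this is not IBM’s actual method, just confidence-gated answering in miniature.)

```python
# Toy sketch (not IBM's actual code): "knowing what you know" operationalized
# as a confidence score attached to each candidate answer.

def decide_response(candidates, buzz_threshold=0.5):
    """Pick the best-scoring answer, but only buzz in if confidence clears a threshold.

    `candidates` is a list of (answer, confidence) pairs, confidence in [0, 1].
    """
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    if confidence < buzz_threshold:
        return None, confidence   # stay silent: the system "knows it doesn't know"
    return answer, confidence

def daily_double_wager(confidence, current_score, minimum=5):
    """Scale the wager with confidence: bet big only when the answer looks solid."""
    return max(minimum, int(current_score * confidence))

# Example: one weak candidate and one strong one
print(decide_response([("What is Toronto?", 0.31), ("What is Chicago?", 0.87)]))
print(daily_double_wager(0.87, 12_000))
```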

You really should read the Chinese Room argument that I linked before, which deals with exactly this question. I linked to a description at the Standford Encyclopedia of Philosophy because it’s one of the more comprehensive treatments, but unfortunately they try so hard to present a “balanced” view that it obscures the fact that the refutations are much stronger than the supporting arguments.

In a nutshell, the philosopher John Searle imagined himself in a locked room into which someone from the outside slips in sheets of paper with Chinese symbols. Searle knows nothing of Chinese, but he has access to a computer or perhaps a large set of rule books which he can use to construct reasonable Chinese responses, and he slips these back out under the door. To the outsiders, it appears that Searle is reading Chinese and responding to it, but he actually has no idea what he’s saying.
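(A toy sketch of the room’s mechanics, if it helps: pure symbol-in, symbol-out lookup, with nothing anywhere that represents what the symbols mean. The “rules” here are invented for illustration; Searle’s rule books would be unimaginably larger.)

```python
# Toy sketch of the Chinese Room: match incoming symbols against a rule book
# and emit the prescribed reply. The rules below are made up for illustration.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # if these squiggles come under the door, send those back
    "你叫什么名字": "我叫王先生",
}

def room(symbols_in: str) -> str:
    """Searle-in-the-room: look up the incoming string, return the prescribed one."""
    return RULE_BOOK.get(symbols_in, "对不起，我不明白")

print(room("你好吗"))   # looks like understanding from outside the door
```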

Superficially this seems like a good analogy with AI and an argument against its authenticity, but there are strong rebuttals against it which IMO expose it as fundamentally misguided. These are not simple arguments but I’ll give you the one-sentence summaries of two of them. One of the strongest refutations is the “systems reply”, which argues that in this role Searle is just a cog in the machine, part of a larger system, and that while Searle may not understand Chinese, from a functionalist perspective the room – the “system” – clearly does.

A related refutation that takes this further is the “other minds” reply. If this argument persuades us that the Chinese Room has no actual understanding, how do we know that a person does, if the behavior is the same? What we’re really doing here is slowing down and dissecting a cognitive process, imagining its individual microscopic steps unfolding one by one. If we did that with the brain, would we not reach the same conclusions? What if someone spoke to you and we watched the individual neurons firing, memories being fetched, and rules being executed as your brain formulated a response? How is this different than the Chinese Room except in speed and complexity?

The “Standford” Encyclopedia of Philosophy? Freaking stupid typos like this come from fast typing where the fingers start hitting common letter combinations before the brain catches up. Someone should do a cognitive science study on it. :stuck_out_tongue:

It’s that 2nd brain, Wolfpup, like a triceratops… (I think that idea has been discredited)
Anyways
I’ve always thought, along these lines, that language is the key driver of self, or self-consciousness. The “internal dialogue” with which one thinks. Take that away, and one is pretty close to an animal’s way of being… IMO. Not using or thinking with language is difficult, too, not something which is easy to do or practice. (see meditation)

Without language or some sort of “symbol” processing, animals, I believe, and perhaps AI, are conscious… but not self-conscious, except in a survival-response sort of way. But I don’t really know.

I work with about 50 parrots daily, and man, they are pretty darn smart. :eek: Little clever robots with flying systems and defense mechanisms!!:smiley:

…but aware of themselves, as in “I think, therefore I am”? I don’t know.

Anyways, great discussions here, as always.

::shakes head sadly::

What you want to do is, get the parrots to mimic you.

<SQUAWK> <WHISTLE> “I think therefore I am” <CAW> “I think therefore I am”

I always took the Chinese Room metaphor as an almost exact literal description of the biological truth of human intelligence. No one neuron understands Chinese, but the sum of all the neurons does.

No one “NAND” gate in my computer’s CPU understands my organizational Excel spreadsheet, either. It seems really silly of Searle to have come up with a metaphor that describes how intelligence actually does work, in an attempt to explain how it couldn’t!
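(The compositional point is easy to make concrete. A minimal sketch: no single NAND gate below “understands” arithmetic, yet wiring a handful of them together adds two bits.)

```python
# Everything here is built from NAND alone; only the composition "adds".

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Adds two bits using nothing but NAND underneath; returns (sum, carry)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```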

I disagree. A chess-playing computer is not aware that it is playing chess; in fact, it is not aware of anything. It “understands” board positions, various metrics for moves, and the rules. It does not understand the concept of a game of chess, a concept one level higher than just the rules.
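(For what it’s worth, here’s a rough sketch of what an engine’s whole “world” looks like: positions, legal moves, and a number attached to each resulting position. The piece values and the one-ply search are simplified for illustration, but note that nothing in the data model stands for “a game being played with someone”, only board states and scores.)

```python
# Illustrative sketch only: a material-count evaluation and a one-ply move chooser.

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(position):
    """Simple material count. `position` maps squares to piece letters:
    uppercase for the engine's pieces, lowercase for the opponent's.
    Positive means the engine is ahead in material."""
    score = 0
    for piece in position.values():
        value = PIECE_VALUE[piece.upper()]
        score += value if piece.isupper() else -value
    return score

def choose_move(position, legal_moves, apply_move):
    """One-ply greedy search: pick whichever legal move leads to the
    best-evaluated resulting position. Move generation (`legal_moves`) and
    board updates (`apply_move`) are assumed to be supplied elsewhere."""
    return max(legal_moves, key=lambda move: evaluate(apply_move(position, move)))

print(evaluate({"e1": "K", "d1": "Q", "e8": "k"}))   # 9: engine is up a queen
```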

But in The Other Waldo Pepper’s example, you might not know you are playing chess until you break out of your conversation and look at the chessboard. I can look at a clue to a British crossword puzzle, go away for a few hours, and when I return I know the answer. Do I know I was solving the anagram or whatever? No way. Heck, when I’m at work I can write down a coding problem, then read the Dope, and ten minutes later I can write 40 perfect lines of code which my subconscious wrote while I was doing something else.
All those things clearly involve intelligence, but not conscious intelligence.

Super good point: a lot of what we do is not “conscious.” Facial recognition, for instance: I’m looking at a crowd in the airport, and suddenly recognize my sister. Absolutely no conscious thinking at all: it’s something the brain does without our direction.

At this point, there’s no way to know how much an AI would do “unconsciously” by routine, and how much would be conscious. This might be a “setting” in a simulation. I would guess that early AI would not have “unconscious processing” the way we humans do. Leaving a problem and letting the unconscious muse over it for a while is a kind of kludgy approach, one given us by evolution. We might duplicate it for AI…or we might skip over it. Just because our minds do something doesn’t mean all minds need to.

Why would you guess that?

I disagree that unconscious intelligence is, necessarily, any kludgier than conscious. As you say, we do a great many things, smoothly and efficiently, without having to work through the steps in the mind’s eye.

I imagine it is possible that AI will match us in general intelligence before learning consciousness.

I’m not sure that’s my example.

My idea went like this: if a chatbot can do a passable job of pretending to be human, then I figure it’d be able to carry on a conversation about whether it’s also playing a game with you right now – because otherwise it’d fail the Turing test as soon as you tried to run that by it, right?

Like, imagine you and I are in the middle of conversing, and you suggest that each post from here on start with the next letter of the alphabet. “All right,” I say, because I’m an intelligent human and can follow along with you, “but why?”

“Because I thought it’d be an interesting game,” you reply.

“Can’t argue with that,” I say, and then continue with a good-sized post.

And so it goes: you make sure your response starts with a “D”, and I reply with an initial “E” – and you throw in a question about whether we’re having a conversation with that quirk, while we have said conversation with said quirk. And I answer by noting that we’re carrying on said conversation with said quirk.

If a chatbot can’t do that – namely, play that game while mentioning that it’s playing that game – then it fails the Turing test, right? I can do it, so you’d be able to spot an impostor in no time flat. But this is a chatbot that can pass the Turing test, so it must be able to make reference to the game it’s playing.

Now take it one step further: you and I are having a conversation, and suddenly you suggest we simultaneously play chess. Being an intelligent human, I respond with an otherwise-conversational post that kicks off with an Nf3 – and you respond to that with an equally-conversational post that includes a d5. And we keep conversing, while carrying on a game of chess; and you eventually make reference to how we’re conversing while playing a game of chess; and I reply by doing likewise.

So I figure that, in order to pass the Turing test, it has to be able to mention whether or not it’s playing that game of chess with you; if it can’t do that, you’d leap to your feet and jab your finger at the monitor and shriek IMPOSTOR!

But having two entire systems – let alone the triple-layering of the human brain with reptilian/mammalian/hominid workings – is a result of evolution, whereas an AI, one presumes, is going to be a result of intelligent engineering.

I think it is obvious that some processes in AI will be that way, though not so much along the lines of the human conscious and unconscious minds as along the lines of thinking processes versus non-thinking processes such as heartbeat and digestion.

An AI might very well have a mathematics package bolted on, just as I can pick up a calculator. But why would an AI be built with dreams, phobias, and Freudian slips?

Defining “intelligence” as general problem-solving…then, yeah, I’ll buy that. We’ll probably build a robot that can, for example, take a washing machine apart and then put it back together again, before we have an AI that understands what clothing is for.

What about an autonomous car? Is an autonomous car more intelligent than a bug? And if so, in what way is an autonomous car running among other cars (and other potential perils) more intelligent than a bug running among other bugs (and other potential perils)?