Is Computer Self-Awareness Possible?

I think so, too – imitative learning is probably a key concept for AI, and brute forcing is about as likely to result in true sentience as bundling a bunch of laser pointers is to create a floodlight. Sure, it might seem sensible at first – might in principle even be possible – but really, intelligence, in my view, depends on a hierarchy of highly ‘chunked’ concepts and their interrelations. That’s why there are no thoughts visible at the neuron level, any more than there are mate strategies visible (to the chess novice) at the move-notation level of chess. Present computers are good at manipulating the move level; they are like lasers, brightly illuminating one tiny spot. To get to a ‘chunked’ level, to the wide (if sometimes perhaps dim) illumination of consciousness, it probably won’t do to just bundle these specialized task-solvers – a great deal more interplay is needed, built up from low-level concepts (neuron firings, chess moves…) through a dynamic learning process, analogous to how you go from viewing a board with figurines on it, to viewing it in terms of legal moves, to viewing it in terms of good (or bad) strategies.

Eh, I got tangled in my metaphors somewhere, but I’m too lazy to unknot…

Well, computation can be formalized in terms of Turing machines, or recursive functions, or algorithms – and the fact that the first and the last of these are provably equivalent pretty much formalizes the identity of hardware and software machines, it seems to me.
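A loose illustration of what I mean by ‘same computation, different formalism’ (a toy sketch, nothing like the actual equivalence proofs): here is addition defined in the style of recursive function theory, and again as a step-by-step, machine-like procedure, and the two agree everywhere.

    def add_recursive(a: int, b: int) -> int:
        # Addition built from the successor operation, recursive-function
        # style: add(a, 0) = a, add(a, S(b)) = S(add(a, b)).
        if b == 0:
            return a
        return add_recursive(a, b - 1) + 1

    def add_iterative(a: int, b: int) -> int:
        # The same function as a step-by-step procedure, closer to how
        # a machine would grind through it.
        result = a
        for _ in range(b):
            result += 1
        return result

    # Different formalisms, one function.
    assert all(add_recursive(a, b) == add_iterative(a, b)
               for a in range(20) for b in range(20))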

I think I’d stand by my reply, though – after all, what is meant by understanding Chinese, if not the capacity to process Chinese sentences and produce appropriate outputs? Yes, I know – the man (when he’s speaking English) doesn’t know what he’s talking about; but that doesn’t mean that there’s nothing there that does. Consciousness is just far less unified than it usually seems – compare blindsight: patients can’t see, and are not aware of, what happens in their field of vision. Yet if forced to guess, they can do better than chance – much better, almost perfectly, in some cases. Indeed, IIRC some patients have been able to train themselves to know when to guess, resulting in a kind of roundabout, effective awareness of the visual stimuli.

Maybe something similar is at work here – just because the English-speaking man isn’t aware of what he’s saying in Chinese doesn’t mean that the Chinese-speaking ‘man’ isn’t.

I’m old enough to remember when ascribing to even the higher animals any of the emotions we humans feel was regarded as anthropomorphizing for sentimental reasons: animals were really just automatons, no matter what we thought – or so we were told.

We now know that’s bullshit, pretty far down the evolutionary ladder.

Maybe it doesn’t reach all the way down to the mosquito, or maybe it does. Or more likely there’s no hard-and-fast line: the mosquito and the digger wasp may be doing more or less programmed behaviors for the most part, while still being somewhat more than complete automatons.

Regardless, the point is that volition isn’t just a property of humans. And the interesting thing about the mosquito is that, unlike the computer, it doesn’t just sit there until told to do something.

Of course it is possible; as I said, there are plenty of chatbots built to pass Turing tests already out there on the net for you to converse with – you can chat with one here. That you might be fooled into thinking you were talking to an actual human being, if you were unaware that you were talking to a program, doesn’t mean that it is a conscious entity.

It’s the standard definition of sexual reproduction that you can find in any dictionary. Of course it would be possible for extra-terrestrial life to reproduce sexually.

How? No life is being reproduced, sexually or otherwise, inside a running program. You can have a program run a simulated model of life sexually reproducing, but it has no actual existence as life; it is 1s and 0s that have been arbitrarily assigned by the programmer to represent ‘life’ for the purposes of the program. When the execution of the program is terminated, they’re just 1s and 0s in memory, waiting to be used for some other purpose by another program.

You’re not being disingenuous; you are misunderstanding what is going on. Nobody says writing a program is literally building a machine. Programs are instructions that are compiled down to machine code to tell a machine what to do. That machine is a computer.

Yes, but we’re not talking about a system that can fool a retarded child. We’re talking about a system that an expert can’t identify as artificial at a rate greater than chance.

I said “writing a program” but I should have said compiling, not writing.

But, yes, people do indeed say that compiling a program is literally the creation of a machine. A computer, also, is a machine – it’s a machine built for the purpose of making it easy to build other machines inside it through the code-writing process.

Once I compile my adding program, there are components inside my computer now configured in a way that functions to perform activities easily interpreted as the addition of numbers.

Components configured in a way that functions to do X… what is that but a machine designed to do X?
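You can even watch that configuration take shape. A minimal sketch, using Python’s bytecode compiler as a stand-in for a native one (the opcodes in the comment are approximate and vary by version):

    import dis

    def add(a, b):
        return a + b

    # Compiling turns the source into concrete instructions; these are
    # what end up configured inside the running machine.
    dis.dis(add)
    # Prints something like (exact opcodes vary by Python version):
    #   LOAD_FAST    a
    #   LOAD_FAST    b
    #   BINARY_OP    + (BINARY_ADD on older versions)
    #   RETURN_VALUE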

I agree there’s no sharp line to be drawn, and that humans aren’t the only animals with volition, sentience, and the like. Hope I wasn’t implying otherwise.

However, a computer doesn’t have to be something that just sits there waiting to be told what to do. That’s the way we often build them, especially desktop computers or the computers embedded in home electronics. But there are machines that run by themselves, executing an endless loop in which they constantly compare the current state of the world (as measured by sensors) with a desired state, and take actions to bring the former closer to the latter.
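A minimal sketch of that loop, with a made-up thermostat standing in for the sensors and goals (all the numbers are invented):

    import random

    DESIRED_TEMP = 20.0

    def read_temperature():
        # Stand-in for a real sensor; here we just invent readings.
        return 20.0 + random.uniform(-3.0, 3.0)

    def control_step():
        current = read_temperature()         # measure the world
        if current < DESIRED_TEMP - 0.5:     # compare with the desired state
            print(f"{current:.1f} C: heater ON")    # act to close the gap
        elif current > DESIRED_TEMP + 0.5:
            print(f"{current:.1f} C: heater OFF")
        else:
            print(f"{current:.1f} C: holding")

    # A real device loops like this endlessly, never waiting for a command.
    for _ in range(5):
        control_step()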

There’s no sharp dividing line for that either, because whether the computer is responding to commands or to sensor readings, they’re all just different forms of input.

Even more to the point, we’re talking about a system that could never be distinguished from a human being in terms of dialogue behavior, ever, by anyone.

I assume Dissonance also thinks that that kind of machine could be built, and could nonetheless fail to have consciousness. Do I have you right, Dissonance?

I don’t think we should go that far.

But suppose you have a 1-hour chat with each of ten entities. Five of these entities are human beings, and five are artificial. At the end of the ten hours, you sort them into two categories, human and machine. To make it fair, we promise to pay each human who gets scored as a human $1000; if they get scored as a computer they get nothing. This eliminates sandbagging, since I could simulate ELIZA fairly well, or heck, just not answer anything. The other option is to use humans scored as artificial for medical experiments, since they’ve proved they weren’t conscious. The point is that the humans have to be trying their best to seem human.

Note that because people are stupid and lazy, we can’t just ring a bell one day and proclaim “The Turing Test has been passed!”. Passing the Turing Test isn’t an event, it’s a process. Otherwise, the Turing Test was passed back in 1970 when the first idiot typed to ELIZA without realizing they were typing with a computer program.
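For reference, ELIZA itself was little more than keyword matching. Something on the order of this toy sketch (a crude imitation, nowhere near Weizenbaum’s actual script) has historically been enough to fool an unwary typist:

    import re

    # A few made-up ELIZA-style rules: match a keyword, reflect it back.
    RULES = [
        (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\b(mother|father)\b", re.I), "Tell me more about your family."),
    ]

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # the all-purpose deflection

    print(respond("I am worried about my program"))
    # How long have you been worried about my program?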

Quoth Frylock:

Oh, this one’s easy: The book of rules is tautologically too long and complex to be memorized by a human. If it could be memorized by a human, then it could be memorized by the simulation implemented in the book of rules.

But suppose that instead of a human memorizing the book, we have an alien who’s much more intelligent do it? Well, then, I would say that that alien has a separate person living in his mind.

I don’t think that follows. You can give the universal Turing machine a program which implements – the universal Turing machine. So why (other than for obvious physical engineering issues) would it be impossible to give a human instructions which represent the behavior of – a human?
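Here’s a toy sketch of the first half of that claim (the example machine, a unary incrementer, is made up). The simulator is itself just an effective procedure, so in principle its own description could be encoded as a rule table and fed back into it:

    def simulate(rules, tape, state="start", blank="_", max_steps=1000):
        # rules maps (state, symbol) -> (new_state, new_symbol, move),
        # where move is -1 (left) or +1 (right).
        cells = dict(enumerate(tape))
        pos = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(pos, blank)
            state, cells[pos], move = rules[(state, symbol)]
            pos += move
        return "".join(cells[i] for i in sorted(cells))

    # Unary incrementer: scan right past the 1s, append one more, halt.
    INC = {
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("halt", "1", +1),
    }

    print(simulate(INC, "111"))  # 1111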

Give me a 10 by 10 grid with a pattern of numbers in it and I can very likely give you another 10 by 10 grid containing instructions for creating the first pattern, with plenty of room left over in the second grid after the instructions have been written in.

There’s no general rule that a system can’t contain instructions for simulating another system of equal size or complexity.
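To make the grid point concrete (the pattern and the encoding scheme are both made up for illustration): a 10 by 10 diagonal pattern reduces to 38 run-length numbers, leaving most of a second 100-cell grid empty.

    # A 10x10 pattern, plus run-length instructions sufficient to rebuild it.
    pattern = [[1 if r == c else 0 for c in range(10)] for r in range(10)]

    def rle_encode(grid):
        flat = sum(grid, [])
        runs, value, count = [], flat[0], 0
        for cell in flat:
            if cell == value:
                count += 1
            else:
                runs += [value, count]
                value, count = cell, 1
        runs += [value, count]
        return runs

    def rle_decode(runs, width=10):
        flat = []
        for value, count in zip(runs[::2], runs[1::2]):
            flat += [value] * count
        return [flat[i:i + width] for i in range(0, len(flat), width)]

    runs = rle_encode(pattern)
    assert rle_decode(runs) == pattern
    print(len(runs))  # 38 numbers of instructions: room to spare in 100 cells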

Since the question “does it have consciousness” is a metaphysical one and not just an epistemological one, all scenarios like the one you describe are open to the objection “But just because it fooled some people into thinking it’s an X, that doesn’t mean it is an X.”

That’s why the scenario I described is so extreme – I’m trying to be clear about what Dissonance believes. Does he just think it’s impossible to engineer something that acts human unless you’ve somehow also engineered consciousness, or does he think there could be something that really does act exactly like a human (not just that it can fool some people sometimes, but that it’s really indistinguishable) and yet still be non-conscious?

But then the problem is that a Turing Test judge can simply say, “It’s a computer!” to every participant, and therefore never accidentally declare a computer to be a human.

The point I’m trying to make is that a really good simulation of human consciousness, if it’s practically indistinguishable from humans, is going to have to enter multiple trials with multiple experts, and be distinguished from humans at no better than chance rates, before it can be said to have passed a Turing Test. One trial won’t do it, because the one judge can simply guess randomly. If the judge flips a coin and guesses “computer” and it’s really a computer, does that mean the computer failed the Turing Test?

Of course, right now computers are so poor at the Turing Test that with any reasonable test, a judge who isn’t an idiot can guess right 100% of the time, assuming the humans aren’t deliberately acting like computers. So the goal isn’t to reach a point where judges guess wrong 100% of the time, but rather 50% of the time.
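To put a number on ‘no better than chance’ (the trial counts below are invented): suppose judges correctly pick out the machine in 32 of 50 trials. A quick sketch of the arithmetic shows that’s still detectably better than coin-flipping:

    from math import comb

    def p_value(hits, trials):
        # One-sided probability of scoring >= hits if the judge were
        # purely guessing (p = 0.5 per trial).
        return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

    # Invented numbers: 32 correct identifications out of 50 trials.
    print(f"{p_value(32, 50):.3f}")  # roughly 0.03: unlikely to be pure
                                     # guessing, so the 50% goal isn't met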

Again, how does this mean the creation of a sentient, conscious entity, rather than the creation of something that is an artificial intelligence – something that can feign self-awareness successfully and is indistinguishable from something actually self-aware, as long as you are willing to ignore your knowledge of what it actually is and how it is actually being accomplished? If I could fool not only a retarded child but you as well that I’ve made a coin vanish only to reappear behind your ear, does this mean actual magic has been performed, or that I’ve successfully performed an act indistinguishable from magic as long as you ignore that I’ve told you it is accomplished by sleight of hand? If it turns out that I am not actually a human being but a program that writes posts on the SDMB, one that has managed to fool you into thinking I’m a person, does that mean that I’m a self-aware, conscious, sentient program, or that I’m a program that has feigned self-awareness, consciousness and sentience well enough to fool you before you found out what was going on behind the smoke and mirrors?

You’re just digging yourself deeper. Compiled code isn’t a machine; compiled code being executed isn’t a machine. A computer isn’t a machine built for the purpose of building other machines inside it. When you compile an addition program, you have not configured components inside your computer. You’ve just told the computer how information at specific addresses is to be interpreted while this program is executing its instructions.

Have you ever taken an AI class in your life? A goal of AI is not creating an illusion of anything; it is understanding how self-awareness and thought work. I’m not a great fan of AI, since the field has been overestimating what it could do since at least 1959, but you are unfairly maligning it.

What are your qualifications to spout such nonsense? I know you can program, but so can my 95-year-old father-in-law. How much do you know about computer architecture? Have you ever been involved in designing a computer? How much do you know about genetic algorithms and other heuristic techniques? Did you know that a researcher at IBM had a program which genetically created machine-language programs in 1959 or so? (Published in the Fall Joint Computer Conference, IIRC.)

Our brains consist of neurons, with lots of connections, which obey fairly simple firing rules. Obviously incapable of thought, right?
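To spell out ‘fairly simple firing rules’: something like this leaky integrate-and-fire toy (every constant is made up) is roughly the level of rule each unit follows. No single unit does anything thought-like; whatever thinking there is lives in the interplay of billions of them.

    def simulate_neuron(inputs, threshold=1.0, leak=0.9):
        # Leaky integrate-and-fire: accumulate input, decay a little each
        # step, and fire (then reset) when the threshold is crossed.
        potential, spikes = 0.0, []
        for current in inputs:
            potential = potential * leak + current
            if potential >= threshold:
                spikes.append(1)
                potential = 0.0
            else:
                spikes.append(0)
        return spikes

    print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # [0, 0, 1, 0, 0, 1]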

I gave arguments for the position. You’re simply adding the word “not” to it and making an assertion of the result.

And you then started saying things about computers “interpreting” things – as though a computer could understand anything well enough to be able to “interpret” it? It’s the people who do the interpreting, not the computer. I would have thought, given the position you’re arguing for, that this would be something you’d immediately agree with.

How is that different from a human brain? If you could peel back the smoke and mirrors and figure out exactly how a human brain works, would that prove that human beings aren’t really conscious, and only seem to be conscious?

My point is that if, by your definition of consciousness, human brains only seem to be conscious, then how about we change the definition of consciousness to mean “whatever it is that human brains do”.

I honestly don’t think it is possible to write a program that could pass a Turing Test. A system that can pass a Turing Test isn’t going to be a program running on a laptop; it’s going to include all sorts of specialized hardware, and it won’t be programmed by humans, it will be programmed by itself. And you won’t be able to trace the state of the system’s program any more than you can for the human brain, because the human brain doesn’t work by running lines of code in sequence. Therefore, I don’t think even in principle you could construct an algorithm to work like a human brain, whether it’s implemented on a laptop or a Chinese Room.

So in other words, you don’t believe that it’s possible by definition for artificial intelligence to be created in a computer or computer-like system?

If that’s not what you mean, then what do you even mean by the bolded section? Because that sure reads like you’re defining artificial intelligence/consciousness/sentience out of existence by fiat.

You didn’t give arguments for your positions; you made a series of statements that are factually incorrect. My pointing out that things are not what you claim them to be isn’t simply me adding the word “not” and making an assertion out of it. It’s me telling you that you are factually wrong.

That you need to have it explained to you what is meant by a compiled program telling the computer how to interpret data stored at addresses again just shows how little you understand how a computer works. Everything is a 1 or a 0 (not literally, but it’s commonly used as shorthand for the absence or presence of an electrical charge in a specific bit). Eight bits make up a byte, with 256 possible values. When a program is compiled, it tells the computer how to interpret the bytes being held at addresses and how long they are. 11001011 has no inherent meaning on its own. It could be the EBCDIC or ASCII collating-sequence value of a character; it could be part of a signed or unsigned integer variable of a specified length, or a non-integer numeric value where the computer needs to be told where the decimal point is. A compiled program tells the computer to reserve absolute addresses of specific lengths and how to interpret the data placed in them. And no, a compiled program is not a machine; it is the program compiled down to instructions in the form of machine code that tell the actual machine, the computer, what to do.
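Incidentally, you can watch exactly that ambiguity from a high-level language. A quick sketch (using Latin-1 for the character reading, since 11001011 isn’t printable ASCII):

    import struct

    raw = bytes([0b11001011])  # the byte 11001011, with no inherent meaning

    print(struct.unpack("B", raw)[0])  # 203: read as an unsigned 8-bit integer
    print(struct.unpack("b", raw)[0])  # -53: read as a signed (two's complement) integer
    print(raw.decode("latin-1"))       # 'Ë': read as a character code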

Artificial intelligence is possible. What artificial intelligence does not mean is the creation of actual sentience, self-awareness and consciousness. I started this by saying a computer is no more likely to become self-aware than a toaster. I’m not dismissing out of hand the possibility that a toaster could actually become sentient; after all, anything is possible. But to believe that a toaster had become sentient, self-aware and conscious, I would require extraordinary proof.