Is Computer Self-Awareness Possible?

Yes, in a sense. The paper has very little to be aware of, and little means of awareness. So for a piece of paper, it is self-aware. Somewhere in this whole thread is the bizarre idea that something not human would have human self-awareness, which, if it did, would mean it is not self-aware, since it isn’t human.

I think this isn’t a good criterion, since humans attribute self-awareness to all kinds of things that aren’t self-aware. Haven’t you ever asked a vending machine why it is doing this to you?

Maybe a better one is that we will test the self-awareness of our computers every day (the way we test the self-awareness of other people) and they will pass.

How is information encoded in the brain? How is information encoded in a computer? How is information encoded in the cosmos?

The cosmos is fundamental.

I would not be entirely convinced that the zombot is self-aware, but mostly in the sense that I’m not entirely convinced that YOU are self-aware.

I’ve only ever been me. I know I’m aware; I have to intuit that from the outside for all others. Including zombots.

As far as I understand it, the brain isn’t digital. Neurons fire when a certain electro-chemical threshold is met. This can be approximated by digital circuitry, but there is an underlying difference between the two kinds of systems.
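
A minimal sketch (my own, not anything from the thread) of the standard leaky integrate-and-fire idealisation: a digital program can approximate threshold firing, but only in discrete time steps and with finite precision, which is the gap being pointed at above.

```python
# Toy leaky integrate-and-fire neuron: the potential accumulates input,
# leaks over time, and the cell "fires" only when a threshold is crossed.
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the simulated neuron fires."""
    potential = 0.0
    spikes = []
    for step, current in enumerate(inputs):
        potential = potential * leak + current   # decay, then integrate input
        if potential >= threshold:               # threshold crossing -> spike
            spikes.append(step)
            potential = 0.0                      # reset after firing
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [2]
```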

ETA: Replying to Kosmik.

In the book The Innovator’s Dilemma - a “classic” in the business world, lauded by the Harvard Business Review (since the author, Clayton Christensen, is on their faculty) - Christensen discusses how new technologies end up crowding out the old:

  • They start off as a completely new approach to an old problem.
  • The new tech can’t compete with the old on the old tech’s terms, and so is often pushed aside for a period of time.
  • Then new applications emerge that can use the new tech and provide a forum in which it can improve in efficiency and effectiveness.
  • By the time the new tech is performing more reliably, it is also less expensive and can be invested in to do the same tasks as the old tech - only better.

In the business world, this explains why masters of the old tech often have trouble seeing and competing in the spaces where the new tech emerges.

Back to this topic: you are supporting my point. Humans anthropomorphize - we do it to our pets, and we name the voice on our GPS. Therefore, debating whether machines are “truly self-aware” is NOT the point - we are more than willing to treat machines that are *not even close to being self-aware* as if they are, and we are doing that already. So the real question is: what happens when the tech achieves certain milestones of performance that bring it even closer to our anthropomorphized ideals?

If computers are not self-aware, what is 127.0.0.1? It is the first-person pronoun for computers with a network interface.
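
For what it’s worth, the joke rests on a real fact: 127.0.0.1 is the loopback address, so traffic sent to it never leaves the machine. A throwaway sketch (the message and port choice are arbitrary):

```python
# 127.0.0.1 is the loopback address: packets sent to it never leave the
# machine, so a computer using it is, in effect, talking to itself.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # "me", on any free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # the machine calls itself
conn, _ = server.accept()
client.sendall(b"hello, me")
print(conn.recv(16))                   # b'hello, me'

client.close(); conn.close(); server.close()
```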

What computers are missing is the ability to create, synthesize, and reason at the level of humans, not self awareness, which is a uniquely human problem. Humans develop self-awareness as our cognitive abilities develop. Computers do it when they begin to execute reflective instructions, because we designed them to have the capacity for self awareness.
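
A toy illustration of “reflective instructions”, purely my own and not anything the poster specified: a Python program inspecting facts about its own running code. Whether this kind of mechanical self-reference amounts to self-awareness is exactly what the thread is disputing.

```python
# A program reading facts about itself at run time, using the standard
# library's reflection tools.
import inspect
import sys

def describe_self():
    """Report on the code that is currently running."""
    frame = inspect.currentframe()
    own_source = inspect.getsource(describe_self)
    return {
        "function_name": frame.f_code.co_name,
        "lines_of_source": len(own_source.splitlines()),
        "interpreter": sys.version.split()[0],
    }

print(describe_self())
# e.g. {'function_name': 'describe_self', 'lines_of_source': 9, 'interpreter': '3.12.1'}
```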

Obviously questions about whether we can prove our existence are shared with computers, because that is all based on the reliability of sensory input or reflection, not self-awareness itself.

I don’t think this is going to be an issue, because by the time we have truly self-aware computers we will have computers that mimic being self-aware (in the sense that other people are) more and more closely. Before too long, smartphones are going to get enough processing power to deduce our schedules and remind us of appointments we should be making, like a smart companion would. When self-awareness is added - possibly to allow the assistant to improve by self-analysis and self-modification of its algorithms - we’ll hardly see the difference.
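
One hypothetical, toy version of what “improve by self-analysis and self-modification” might look like in miniature (the class name and parameter are invented for illustration): the assistant scores its own behaviour against outcomes and rewrites one of its own parameters.

```python
# Made-up sketch of self-analysis and self-modification: the program scores
# its own predictions and rewrites one of its own parameters when it keeps
# getting things wrong.
class ScheduleAssistant:
    def __init__(self):
        self.lead_time_minutes = 30          # how early to remind the user

    def remind(self, minutes_user_actually_needed):
        error = minutes_user_actually_needed - self.lead_time_minutes
        # self-analysis: compare own behaviour with the observed outcome
        if abs(error) > 5:
            # self-modification: nudge the parameter toward what worked
            self.lead_time_minutes += error // 2
        return self.lead_time_minutes

assistant = ScheduleAssistant()
for needed in (50, 50, 50):
    print(assistant.remind(needed))          # 40, 45, 45 -> converging toward 50
```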

A meat brain can be squishy and edible. A silicon brain can’t. :wink:

I’ve come to change my mind about this issue. I used to say “of course,” now I say “we don’t know and can’t know and it doesn’t matter.”

It’s a long argument to have why we don’t know and can’t know–and my thoughts are still in flux about that anyway–but in any case it doesn’t matter. If we are quite certain that a silicon brain will make all the same physical differences in the world relevant to social interaction that a meat brain would make, then there would be no principled reason for treating silicon brains any differently than we treat meat brains. So in my view, whether or not they are conscious is not really an important issue. The answer to the question makes no difference.*

*To add a wrinkle: Treating a silicon brain relevantly identically to the way we treat a meat brain will necessarily involve acting in regards to it on an assumption that it has consciousness the same way we do. But saying this does not amount to an argument that they will have consciousness. It is only an argument that we will have no principled reason for saying they don’t!

You again demonstrate a complete misunderstanding of what is fundamentally going on. AI is not about creating actual self-awareness; it is about creating the illusion of self-awareness. All it is is a machine executing code that has been programmed into it, and it is only self-aware or sentient if you ignore that this is what is creating the appearance of self-awareness. Essentially, you’re the member of the audience at a magic show who believes real magic is going on, even after the magician has told you it is not real magic but the appearance of magic caused by sleight of hand - yet you believe it must be actual magic because you don’t understand how he is doing it. Actual self-awareness is not going on. It is a machine that has been programmed to mimic, to feign, to create the illusion of actual self-awareness. It is only indistinguishable from actual self-awareness if you willingly ignore this. It is not an actual sentient being. What you believe in is complete woo.

That you think I am claiming magic just shows how willing you are to be blinded by smoke and mirrors even when you are told that this is all that is going on. That you think there is something I have to actually refute, or that I am claiming a vitalism, again just shows a fundamental misunderstanding of what is actually going on.

It is what you are saying. We know that self awareness is possible because we are self-aware; therefore it must be possible to create a computer that is self-aware. We are capable of sexual reproduction, which by the way creates sentient beings; does it therefore follow that it must be possible to create computers that can sexually reproduce? If I write a complex multi-faceted program to simulate sexual reproduction of various ‘beings’ that only exist as a bunch of presences or absences of electrical charges in your PC, have I thus created actual sexual reproduction and organic life in your PC?
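
For concreteness, here is a rough guess at the kind of program being described (all names invented, a sketch rather than anyone’s actual code): the “beings” are bit strings in memory, and “sexual reproduction” is crossover of two parents plus occasional mutation.

```python
# Hypothetical simulation of 'sexual reproduction' among bit-string 'beings'
# that exist only as charges in RAM.
import random

def reproduce(parent_a, parent_b, mutation_rate=0.01):
    """Combine two parent genomes into a child genome."""
    cut = random.randrange(1, len(parent_a))          # crossover point
    child = parent_a[:cut] + parent_b[cut:]           # inherit from both parents
    child = [bit ^ 1 if random.random() < mutation_rate else bit
             for bit in child]                        # rare mutations
    return child

mother = [random.randint(0, 1) for _ in range(16)]
father = [random.randint(0, 1) for _ in range(16)]
print(reproduce(mother, father))   # a new 'being', gone when the program stops
```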

You do realise that this is simply begging the question, and circular to boot? You are starting from the premise that AI can only ever create the illusion of self awareness and not actual self awareness, and from there are trying to argue that AI can never create actual self awareness.

I reject your assertion that AI can only ever create the illusion of self awareness and not actual self awareness. Please provide evidence for that claim.

And of course, you cannot construct a valid argument that proves that AI can never be self-aware starting from a premise that AI can never be self-aware.

Define “sexual reproduction”. The standard definitions all refer to the production of offspring. So by definition a physical computer needs the ability to move physical objects in order to reproduce sexually. IOW robots can sexually reproduce. Of course they can.

Have you created actual sexual reproduction: yes. However it isn’t the computer that is sexually reproducing, it is the code within it. The computer is merely the environment in which the organisms exist.

Have you created organic life: of course not. You just told us that there is no organic material involved, so how can this be organic life?

I assume there is meant to be some sort of gotcha in there, but for the life of me can’t see what it is.

The same can be said of organic human brains - they are only executing electrochemical operations, and are only sentient if you ignore the fact that it’s all just atoms interacting with each other.

Your argument only supports the conclusion that we might not create sentience - it does not provide the basis for the conclusion that we cannot, unless there is something about human sentience that is independent of the physical operation of the human brain.

Do you understand how a computer fundamentally works? It is a machine executing code. It is incapable of thought. This is not begging the question. I am not going to provide evidence of a negative for you by proving that computers are not capable of self-awareness; the burden is on you to prove that it is possible to create actual self-awareness rather than programming the ability to act as if it is self-aware. It is impossible to prove a negative, and the claim that this is actual self-awareness is an extraordinary claim.

I’m not. I’m starting from the premise of what computers actually are and what AI actually is. Of course it is possible to make a computer act as if it is self-aware, but this is only indistinguishable from actual self-awareness if you willingly ignore that it is a machine executing code that was written to make it act as if it were self-aware. There are plenty of Turing-test chatbots out there on the net that you can converse with and convince yourself are actually self-aware entities, if you ignore the fact that each is just a script being run to mimic self-awareness.

Good god. The definition of sexual reproduction is not the ability to produce offspring; it is the ability to produce offspring sexually. See Asexual reproduction and Mitosis. How you leap to the conclusion that robots can ‘of course’ reproduce sexually is absurd in the extreme.

No, actual sexual reproduction has not occurred. There are no actual organisms that have any actual existence. The code does not reproduce; it has only modeled sexual reproduction. These ‘beings’ only exist as electrical charges or the absence thereof, traditionally represented as 1s and 0s, in temporary memory, and they vanish when the program is stopped. The idea that the code within it is reproducing is a fundamental misunderstanding of what code is.

The gotcha, which isn’t intended as a gotcha, is that no life has been created, organic or otherwise. This is the essence of the problem: a fundamental misunderstanding of what a computer actually is, what programming actually is, and what AI actually is.

And of course Turing himself and many since then have believed that to make a machine that can pass the Turing test just is to create a conscious machine.

Let’s get a little clearer about what you’re arguing. Do you think it’s possible in theory to make a machine which imitates exactly the dialectical behavior* of a human being in every way, but which is not conscious?

*i.e. behavior having to do with dialogue… in other words, let’s not (at least for now) worry about behaviors like walking around and moving things around and so on.

Also, following up on some of your comments to Blake, I’d like to know whether it would be possible for there to be extra-terrestrial biological organisms which sexually reproduce with each other. If so, then what is your definition of sexual reproduction? I ask because whatever definition you give, I’d wager, could also be made to fit what’s going on inside a certain appropriately programmed computer. It is almost a commonplace* that what we are doing when we program a computer is literally building a machine. If you give me a definition of “sexual reproduction” which doesn’t refer essentially to descent from a terrestrial biological organism, then I can build a machine that does whatever it is you just described. And if I can build a machine that can do it, then I can write a program that does it.

Of course you might think that sexual reproduction does require descent from a biological organism of some kind, terrestrial or otherwise. In that case, we need to know what a biological organism is in your dialect of the English language, so that we can know why no machine constructed by human beings (as opposed to machines which have appeared as a result of natural selection) could ever count as biological.

*I’m being a little disingenuous here, in the sense that I’m not really expressing my own views in this post. I’m not convinced that writing a program is building a machine. (Also not convinced it’s not.) But I think the viewpoint is one that should be taken seriously.

Though people are very quick to let themselves be fooled into believing otherwise. Take Searle’s famed ‘Chinese Room’ – the intent is clearly to make its recipient believe that the operator inside the room merely compares strings of symbols he receives with a lookup table, and outputs the appropriate string of symbols in turn, or at any rate does ‘something like’ that. If however that were actually how the system worked, it would be trivially easy to defeat: just ask “What did I say two minutes ago?” – the room would be unable to produce an answer.
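
Sketched in Python (and in English rather than Chinese, purely for illustration), the ‘pure lookup table’ version of the room has no state at all, so the memory question defeats it at once:

```python
# The stateless lookup-table room: it can only match the incoming string
# against a fixed table, so it cannot answer questions about the past.
LOOKUP = {
    "hello": "Hello to you too.",
    "what's your favourite basketball team?": "The Bulls.",
}

def room_reply(message):
    return LOOKUP.get(message.lower(), "I don't understand.")

print(room_reply("Hello"))                              # Hello to you too.
print(room_reply("What did I say two minutes ago?"))    # I don't understand.
```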

Of course, one might say, this is easy to fix: just add a sort of ‘memory stack’, and include special instructions about how to operate it; that’s just a fairly simple program to add, in the end, and what you have is still ‘something like’ using a lookup table.
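
And the fix described above, roughly: bolt a memory stack onto the same lookup table, with one special instruction that reads it back (again just an illustrative sketch, not Searle’s own formulation):

```python
# The same room with a 'memory stack': every incoming message is pushed onto
# a list, and one special instruction reads it back.
LOOKUP = {
    "hello": "Hello to you too.",
    "what's your favourite basketball team?": "The Bulls.",
}
history = []

def room_reply(message):
    if message.lower() == "what did i say two minutes ago?":
        answer = f'You said: "{history[-1]}"' if history else "You haven't said anything yet."
    else:
        answer = LOOKUP.get(message.lower(), "I don't understand.")
    history.append(message)                      # the 'memory stack'
    return answer

print(room_reply("Hello"))                            # Hello to you too.
print(room_reply("What did I say two minutes ago?"))  # You said: "Hello"
```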

But things don’t end here. The room also has to be self-consistent – so if you ask it “What’s your favourite basketball team?”, and ask it again two minutes later, it has to realize that it has already been asked this, and retrieve its answer from the stack (it already becomes hard not to anthropomorphize the room when talking about it ‘realizing’ or ‘retrieving’ things – the detailed, ‘more correct’ viewpoint, that its operator has to execute a comparison-program matching previously received symbol-strings to the current symbol-string, etc., becomes awkward very quickly). That’s already quite a task to solve. And it gets yet harder: it also has to realize that “What NBA club do you like best?” asks for the same thing again, so it needs to have some sort of representation of the ‘meaning’ of both questions to compare – it must ‘understand’ them on a low level.
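
Continuing the sketch one more step, the self-consistency patch might look something like this: rephrased questions get mapped onto a shared ‘meaning’, and the room stores and reuses the answer it gave the first time (the canonical-meaning table is invented for illustration):

```python
# Map differently worded questions onto a shared 'meaning', and keep the
# room's answers consistent across rephrasings.
CANONICAL = {
    "what's your favourite basketball team?": "favourite_team",
    "what nba club do you like best?": "favourite_team",
}
past_answers = {}

def consistent_reply(message):
    meaning = CANONICAL.get(message.lower())
    if meaning is None:
        return "I don't understand."
    if meaning not in past_answers:               # first time asked: pick an answer
        past_answers[meaning] = "The Bulls."
    return past_answers[meaning]                  # same answer every time it's asked

print(consistent_reply("What's your favourite basketball team?"))  # The Bulls.
print(consistent_reply("What NBA club do you like best?"))         # The Bulls.
```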

Again, some sort of program may be added, and again, you may still have ‘something like’ using a lookup table, but you can keep on playing this game, until you realize that, far from encompassing just a lookup table, the Chinese room, in order to work properly, must actually encompass the same basic features that a conscious mind possesses – memory, understanding, visualization, a concept of self, etc. And from there, I don’t think the conclusion that there actually may be some sort of consciousness embedded in the Chinese room is all that far a leap.

Just out of curiosity, could you elaborate on that? Mathematically, both a machine built in hardware and a ‘machine’ built in software are identical, so on what grounds do you think they might be dissimilar?

To all appearances, so is a brain. It’s got neurons instead of transistors, and its signals are transmitted electrochemically rather than electronically, but these things are completely interchangeable. So how does a brain create self-awareness, if a machine can’t? If your argument against machine self-awareness held any water, it would be equally well an argument against brain self-awareness.

I completely agree. And for the moment, that’s why it’s too complex for us to construct, but none of that means it’s logically impossible.

Personally, I think the AI approach most likely to yield the best results is that of creating a self-organising system that can develop its own cognitive structures and learn to think in the same sort of way that a baby does. This is quite different from the brute-force simulation-of-high-level-thought approach that Dissonance seems fixated upon.
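
As a minimal illustration of the difference (my own sketch, not a description of any real project): a perceptron that starts with all-zero weights and adjusts its own structure from examples, rather than executing a hand-written script of canned responses.

```python
# A tiny 'learn from experience' sketch: a perceptron with no built-in
# answers adjusts its own weights from examples until its behaviour is right.
weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# learn logical OR purely from examples, not from a pre-written rule
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)               # self-assessment
        weights[0] += 0.1 * error * x[0]          # self-adjustment
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in examples])   # -> [0, 1, 1, 1]
```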

What do you mean by “mathematically” here?

But for Searle what’s crucial to note is that the man inside the room doesn’t understand Chinese. No matter how complicated the procedure he follows is, none of it helps him to understand Chinese at all.

There’s the “systems reply” which says “no you’re right, the man doesn’t understand Chinese–but the room itself does.” You’re articulating a version of this.

But in the works people typically refer to when talking about his Chinese Room experiment, Searle already answers this reply. Put the room “inside” the person so to speak. Instead of it being a man in a room, have it rather be a man who has memorized a huge and complex list of instructions. He carries them out perfectly. But of course, ask him in English what he just said in Chinese, and he’ll tell you truthfully that he has no idea. He still doesn’t understand Chinese.

Remember that Searle’s point isn’t “we can never build a thinking machine”–he explicitly affirms that we can–rather his point is that you don’t get understanding out of programming. No program is sufficient for understanding. You need something else.

I don’t agree with him, but I do think arguments against him are not as easy to come by as many assume.