Is Computer Self-Awareness Possible?

No, you are not mistaken: a mechanical calculator and an electronic calculator are both machines. So is a computer. Frylock’s confusion lies in 1) thinking that a program is a machine and 2) thinking that compiling a program somehow reconfigures components inside the computer. Neither of these is in any way true, and both demonstrate a fundamental ignorance of the very basics of computing.

I’m sorry that you’ve gathered this impression; I’ve nowhere stated that writing a convincing chat-bot is anything but difficult. However, just because you can be fooled by a chat-bot into thinking that it’s actually a self-aware, sentient entity in the form of another human being at the end of another keyboard does not make the chat-bot self-aware or sentient. It makes it a program that can feign self-awareness. Look, if I have a sex change and enough plastic surgery to look pretty convincingly like Princess Anne, does it follow that I am in fact Princess Anne? If I meet you and fool you into thinking that I am in fact Princess Anne, has that made me Princess Anne? It would be extremely difficult to do and to pull off convincingly, but being able to fake it would not in fact make me Princess Anne.

No, the explanation is that you don’t know what you are talking about. A program is not a machine, and a compiled program doesn’t configure components, no matter how much you wish this not to be the case. Yet again for the home audience: a compiled program is a set of instructions for the machine (the computer) to execute. If it instructs the computer that the data held at absolute address FFFFEE, for a length of 10 bytes, is to be interpreted as a floating point decimal number with two decimal places and is to be a variable called x, no component has been configured in a particular way. The computer is just being told to reserve a space in memory to hold this variable and how the data held there is to be interpreted. If you are writing in a language that is basic enough or free-form enough to allow you to do something normally unintended, like assigning to x a value of “Frylock”, it will happily do so, and when you next call upon the variable x it will interpret the EBCDIC or ASCII collating sequence value of the data held there (the 1s and 0s of the EBCDIC or ASCII values of the letters F, r, y, l, o, c, and k) as a floating point variable with two decimal places, because this is how the program told it to interpret this data, not because any component of the computer has been reconfigured.
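
To make that concrete, here’s a rough Python sketch of the same point (the byte values, the zero-byte padding and the 64-bit float format are my choices for illustration, not anything in the original example): the very same bits can be read as text or as a floating point number, purely depending on which interpretation the program asks for – nothing in the hardware gets rewired.

import struct

# The ASCII bytes of "Frylock", padded with one zero byte so they fill the
# 8 bytes a 64-bit floating-point value occupies.
raw = b"Frylock\x00"

as_text  = raw.rstrip(b"\x00").decode("ascii")   # read the bits as characters
as_float = struct.unpack("<d", raw)[0]           # read the very same bits as a float

print(as_text)    # Frylock
print(as_float)   # a meaningless number - same bits, different interpretation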

You would be much more convincing if you in fact understood the terminology you are using and had a basic understanding of the subject matter.

You’re not on ignore, I had hoped post 138 addressed this. I don’t know what causes sentience, and I haven’t claimed that there is no possible artificial analogue. What I do know is what a computer is and how it works. The gleam in one’s eye about the possibility of a computer developing sentience and actual self-awareness fades pretty quick once you understand what is going on underneath the hood.

Well, I’ve referenced computer music (or the Oral B CrossAction toothbrush, but nobody but me seems to think it’s amazing that I can brush my teeth with something a computer designed – and I mean ‘designed by a computer’, not ‘designed with computer aid’ – and not know it) already – essentially, you can load up a music database into a computer, have it abstract rules of composition from the data, and on this basis, compose original music. And before anybody claims this isn’t ‘genuine’ novelty, since everything follows from the database (with maybe some randomization), it’s no different in humans – you need to have listened to music and learned its rules (though they may be to a certain extent innate, which isn’t really any different) before you can start composing, too.
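
To give a flavour of how that works, here’s a toy Python sketch of ‘learn rules from a database, then compose’ (the tiny note corpus and the first-order Markov model are my own drastic simplification, not how any of the systems I’m alluding to actually work):

import random
from collections import defaultdict

# A tiny "database" of melodies, as note names; a real system ingests far more.
corpus = [
    ["C", "D", "E", "C", "G", "E", "D", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "D"],
    ["C", "E", "G", "E", "C", "D", "E", "C"],
]

# "Abstract rules of composition": record which notes tend to follow which.
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

# "Compose original music": walk the learned transitions to produce a melody
# that appears nowhere in the database.
def compose(start="C", length=8):
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(compose())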

Frylock is exactly right in the argument he is making. What a program essentially does is take the primitive instructions of a machine and chunk them in a new way, in the form of procedures, which can then be called upon in the same way as primitives. This emulates a machine which has as its primitives the newly-defined procedures, meaning that functionally, there is no difference between the emulation and the real machine – a new machine has been constructed from the primitives of the old one. This is exactly analogous to how a new, mechanical machine is constructed from the primitives of gears, wheels, and so on.

Take as an example a machine whose sole primitive is the Sheffer stroke, A↑B, defined by its truth table:



 A | B | A↑B
 0 | 0 |  1
 0 | 1 |  1
 1 | 0 |  1
 1 | 1 |  0


From it, a program defines the following functions (x, y are variables passed to the function):

NOT[x]: x↑x
AND[x, y]: (x↑y)↑(x↑y)
OR[x, y]: (x↑x)↑(y↑y)
IMPL[x, y]: x↑(y↑y)

It’s easy to see that, for instance, NOT[A] returns 1 iff A is 0, and 0 iff A is 1, and is thus the negation; similarly, AND[A, B] is the conjunction of A and B, OR[A, B] their disjunction, and IMPL[A, B] their implication. Effectively, what you have now is a machine that is functionally indistinguishable from a machine that has these operations built in as its primitives – i.e. on the user level, it is impossible to decide whether these functions are primitives, or built using more primitive notions. Indeed, you could go on and, using these functions, define a new function:

SHEFF[x, y]: NOT[AND[x, y]]

This implements the Sheffer stroke using functions that were themselves defined in terms of the Sheffer stroke – redundant, sure, but perfectly possible.
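
Spelled out in Python, purely as an illustration of the definitions above (a sketch, not anybody’s actual implementation):

def sheffer(a, b):
    # The machine's sole primitive: the Sheffer stroke, A NAND B.
    return 0 if (a == 1 and b == 1) else 1

# New "chunks" built from nothing but the primitive:
def NOT(x):     return sheffer(x, x)
def AND(x, y):  return sheffer(sheffer(x, y), sheffer(x, y))
def OR(x, y):   return sheffer(sheffer(x, x), sheffer(y, y))
def IMPL(x, y): return sheffer(x, sheffer(y, y))

# ...and the primitive rebuilt, redundantly, out of the chunks:
def SHEFF(x, y): return NOT(AND(x, y))

# On the user level, the derived functions behave exactly like built-in primitives:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, NOT(a), AND(a, b), OR(a, b), IMPL(a, b), SHEFF(a, b))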

This is exactly analogous to the way machines are built in the material world: you have certain components that you assemble to yield new functionality. You could, for example, build a device with two ‘input’ slots into which marbles are inserted, and which outputs a marble in all cases except when each slot has a marble inserted. Then, you could join instances of this component together in the way described by the Boolean functions above; I don’t think anybody would call this anything but ‘building a machine from parts’. It’s the same as building a machine from gears and wheels, which have their own unique functional characteristics. There’s no operational difference between building a machine this way and building it out of the primitive instructions of another machine.

Thank you - Interestingly, so do I.

Exactly the same thing happens if you start examining what the human brain is made of, at its lower levels. This simply isn’t a logical obstacle to the development of ‘proper’ AI.

I can’t help feeling that you’re thinking that AI would have to be responding based on predetermined criteria, with predetermined responses. It doesn’t.
We can create artificial analogues of brain components (either realistic ones, or anything else we think might work)
We can assemble those components in a way that we think permits the kind of complex interaction enjoyed by their biological analogues.
We can permit/provoke them to organise themselves in ways analogous to their organic counterparts.
We can stimulate the resulting system with inputs - this would be a conceptually similar process to raising a child, or teaching someone a foreign language or new skill.

Of course, talking about that and doing it are two different things, but there’s nothing about the above approach that dooms it to producing something that merely ‘feigns’ intelligence, in any more meaningful sense than humans feign it.

And what is an instruction?

Answer: It’s a particular configuration of components inside the computer.

If it were not, then it would not be a physical entity or state. And if it’s not a physical entity or state, then computers are magic.

Again: Note that I’m the one giving arguments here. You are making unsupported assertions–assertions which my arguments show to be very implausible.

Are you sure you know what you’re talking about here? If so, then you should be able to answer the following question in purely physical, non-anthropomorphic terms: What is an instruction?

Computers interpret? Or is it people interpreting? If it’s the computer, then you agree with your opponents that computers can understand things, correct? If it’s people doing the interpreting, on the other hand, then you should be able to tell me (if you know so much about how computers work) what interpretation is for a computer in purely physical and non-anthropomorphizing terms.

Sorry for butting in at this point but it’s an interesting subject…

I’d like to take a crack at defining “self-awareness”.

Consider a system that

(1) Has external inputs <a,b,c,d,e,…>
(2) And produces outputs <q,w,e,r,t,…>
(3) And has the facilities of memory and learning.
(4) And the outputs of the system are fed back into the system.

Then “self-awareness” is an emergent property unique to the outputs of such systems.
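
A minimal Python sketch of that structure, just to make it concrete (the state, the “learning” rule and the output formula are placeholders of my own, chosen only to exhibit points (1)–(4)):

class FeedbackSystem:
    def __init__(self):
        self.memory = []          # (3) memory/learning
        self.last_output = 0.0    # (4) previous output, fed back in

    def step(self, external_inputs):
        # (1) external inputs, combined with (4) the system's own last output
        combined = list(external_inputs) + [self.last_output]
        # (3) a crude stand-in for learning: keep a record of everything seen
        self.memory.append(combined)
        # (2) produce an output (here just an experience-weighted average)
        output = sum(combined) / len(combined) * (1 + 0.01 * len(self.memory))
        self.last_output = output
        return output

system = FeedbackSystem()
for t in range(5):
    print(system.step([1.0, 0.5, -0.2]))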

Thoughts?

That’s not the question, precisely. The question is this: if you were so good a fake of Princess Anne that the two of you, standing next to each other, were indistinguishable by any test we could come up with, on what basis other than a priori knowledge would anyone be able to say that one of you is Princess Anne and one of you is not?

You’d come off a bit better if you were not apparently unaware that the machine metaphor for programming is not nearly as uncommon as you seem to think it is.

That’s a pretty trivial difference, IMHO. It’s still being told what to do. It can’t choose a new goal, and it can’t come up with new ways to meet its initial goal that weren’t in its instructions.

The commands tell the computer what to do about the sensor readings. When outside input can get the computer to do something not specifically allowed for in those instructions, we can talk.

What are your criteria for that? I mean, what is something you do that your instructions don’t specifically allow for?

In general, things like genetic algorithms, for example, are pretty good at finding solutions to problems ‘the outside’ confronts them with without those solutions – or even strategies devoted explicitly to finding them – being ‘in their instructions’. Here’s a very simple GA devoted to finding a solution to a specific problem – navigating a randomly created terrain. It’s very restricted, obviously, and its capacity for variation limited – it will only ever come up with cars, not with walkers, and it won’t learn to fly. But those restrictions are imposed by feasibility, they’re not in-principle hard and fast boundaries – given enough computational power, much more general variability is possible; and given enough variation, the solutions a GA can come up with look at least as novel as anything you or I could create.
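
For concreteness, here’s a stripped-down GA in Python (the problem, the encoding and the parameters are arbitrary choices of mine, nothing like the terrain example above – it’s only meant to show the select/mutate loop at work):

import random

TARGET = 25.0  # the "problem": find an x whose square is as close to TARGET as possible

def fitness(x):
    return -abs(x * x - TARGET)   # higher is better

def evolve(pop_size=50, generations=100, mutation=0.5):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the better half of the population
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # reproduction with mutation: perturbed copies of the survivors
        children = [x + random.gauss(0, mutation) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())   # drifts towards 5 or -5, a "solution" spelled out nowhere in the code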

Mangetout and HMHW, thanks for the comments.

I can’t agree with that. Genetic algorithms work in limited circumstances and within very limited parameters. One simply cannot take these algorithms, say “Look, they work under these specialized circumstances!” and then conclude that they can surely be generalized to the point of equaling human inventiveness. That is simply a huge leap of logic – ultimately, a statement of faith rather than something which can be empirically defended.

I once made a computer feel terribly self-conscious, which I guess is a great achievement. I guess I should be proud, but I felt like such a CAD!

I think everyone here agrees more than they think they do, but the biggest hurdle seems to be a consensus on what self-awareness really means in this context.

In the traditional sense of the term, and related to computers, self-awareness simply means the condition of being aware of one’s awareness.

127.0.0.1 is to a computer as returning to the hive is to a bee, but is the bee self-aware? Does he contemplate his existence on the way home?
Lagniappe perspective:

http://news.discovery.com/space/does-quantum-theory-explain-consciousness-110526.html

Of course it can be empirically defended – build more and more complex GAs, and see if their inventiveness eventually bottoms out. Personally, I see no reason why it should – any problem that can be transformed into a fitness landscape – which is any problem for which you can tell better from worse solutions – can in principle be solved genetically; I don’t think I’m saying anything too outrageous here. In other words, if microevolution works, why shouldn’t macroevolution?

Of course, genetic algorithms have their drawbacks – they tend to cluster around local optima, the conditions to end the search are often not well defined, and for decision problems (where solutions are only either right or wrong), they don’t do better than random searches; but it’s not like human creativity is exactly unlimited, either.

To approach this from a different perspective, there’s a paradigm, known as the AIXI agent, which is in a sense the optimal way to solve arbitrary problems – it uses Solomonoff’s concept of universal induction to make the mathematically best possible (in an Occam’s razor sense) guess for what happens next, and then calculates via decision theory the action that maximizes the expected reward, and executes it. This can be shown to be as efficient at solving any particular computable problem as the most efficient problem solver specifically geared towards solving that problem (up to an additive constant).

Now this is only a theoretical result – Solomonoff’s universal prior is uncomputable, and hence, the method can’t be implemented. It merely provides an upper bound for the efficiency of any general purpose problem solver – which, if we aren’t capable of hypercomputation in some form, includes us. But there are computable approximations that can be implemented, that approach AIXI’s efficiency at least asymptotically; now I don’t know if there are any solutions directly using GAs, but certainly, the necessary notions – reinforcement learning, optimization – do lend themselves well to GA implementation. So if we humans are bounded by AIXI, then I would say that one should indeed expect GAs to be able to match our inventiveness.

Aw, that:

is not really fair to Penrose. I mean, I’m not a fan of either his conformal cyclic cosmology, or his orch-OR theory of consciousness, or even his idea that there ought to be some ‘objective reduction’ implementing the quantum wave function collapse somehow linked to gravity – but in the past, that man’s wild guesses have been responsible for creating whole research fields, and his ideas feature in most current quantum gravity programmes – he came up with spin networks, which lie at the bottom of loop quantum gravity, and invented twistors, which are both their own qg programme and feature prominently in modern string theory; plus, he draws beautiful pictures to explain his ideas. I think even if his recent ideas are more of a fizzle than a bang, he’s more than earned the right to be wrong about some things.

Have you actually done that? If not, then you cannot claim that an increasingly complex genetic algorithm will indeed equal human inventiveness someday.

In other words, saying that it MIGHT be possible to empirically defend your claim is not the same as saying that the claim IS empirically defensible.

Your argument assumes that all problems can be reduced to a “fitness landscape.” Now, maybe that’s the case, and maybe not… but for your claim to be defensible, you must first demonstrate that all problems can be approached in this fashion.

And that is just one reason why I think your claim is overly broad. Genetic algorithms can solve SOME problems with limited scope. This does not logically mean that they can someday equal human inventiveness. How would you design a genetic algorithm which can solve a typical Sherlock Holmes mystery, for example? Or which would solve King Solomon’s dilemma with regard to the infamous mothers who were fighting over a child? Or for that matter, how would they prove Gödel’s incompleteness theorems?

Saying “We just need to make these algorithms more complex!” is not a satisfactory answer. It amounts to declaring that a solution exists without giving any reason to believe such a claim.

That sounds like it might be more analogous to disease than consciousness. It’s possible right now to purposely set computers to the task of producing useful outputs their programmers did not explicitly envisage - if you go further than that, then it’s just an outside stimulus breaking the thing in a non-useful way.

Or maybe I’m not quite getting what you meant - could you give an hypothetical example scenario of what we’d be looking for in terms of a computer doing something not allowed in its instructions?

The problem I’ve always seen with the Chinese room thought experiment is that the “way” we process information has a profound impact on whether we call it “understanding”.

Humans receive input and create models in their mind regarding the data just received. The formation of this model, with all of its connections to other data/models and the things that can be inferred from it, represents our understanding. We think a person understands because the simulation in their mind can take the information we project onto it and spit out the same results ours just did, or they can arrive at a different conclusion but point out where our two simulations deviated.

Having a list of responses completely skips this process, so you don’t get what we call “understanding”.

If a computer program were written that processed the information in the same way, ultimately constructing this internal model and then choosing a response based on other, similar inputs (goals, morals, etc.), then you would have a system that we would say “understands” Chinese.

Basically, Searle constructed a hypothetical that doesn’t understand. It doesn’t really prove much.

Searle stipulates that you can put any program you like in the Chinese Room. You’ve described a program in your own post, which you’ve said is sufficient to constitute the entity following it as an understander of Chinese. But, says Searle, the guy in the room, who’s following the program you’ve described, doesn’t understand Chinese. Hence, that program (and, eo ipso, any program) isn’t sufficient for understanding.

Searle’s probably sympathetic with the thrust of your statement that the way the information is processed is crucial to the question of whether the thing understands or not. I think (here I’m speculating) that he’d make a distinction between program-following and other means of information processing. I recall he makes a point somewhere or other that a river is processing information in a literal sense, yet we wouldn’t say it’s following a program. Similarly, it seems Searle is reaching for a way to make the point that, likewise with understanding, program-following won’t get you it, but some other kind of information processing (because really everything is information processing) will. The way the information is processed is important – because it’s got to be some “way” of processing information that isn’t just following a program.

I think a Wittgensteinian-flavored suggestion can be made in Searle’s favor here – it may be that understanding requires not just following a rule, but something more, namely, being governed by a rule. The man in the Chinese room follows the rules in the books in the room, but he’s not governed by them. That may be the crucial difference between him and something that understands Chinese. (And we may have here the beginning of an answer to Searle – a human in a room follows the program but isn’t governed by its rules. But a computing machine programmed the right way might be governed by those rules, and certainly doesn’t follow them in the same way that you or I follow rules.)

But up to now, it is – no genetic algorithm has as yet hit a hard and fast ceiling limiting its creativity, so what’s the justification for believing there is one? It’s like those people who accept the existence of microevolution, but deny that macroevolution happens because (or so they say) it hasn’t been demonstrated in the lab; but that’s not sufficient grounds for introducing such an assumption.

As I said, any problem for which one can tell better from worse solutions can be turned into a fitness landscape – all you need to do is favour selection of the better solutions, and add penalties to worse ones. And for problems for which you can’t tell better from worse solutions, well, you really can pick any solution at all, and need not bother with GAs.
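
To make that concrete, here’s a toy fitness function of my own invention (the items, weights and penalty factor are made up): more packed value is better, exceeding the weight limit is worse and gets penalised – and that’s already a fitness landscape a loop like the one sketched earlier can climb.

# (value, weight) pairs and a capacity limit -- purely hypothetical numbers
ITEMS = [(60, 10), (100, 20), (120, 30), (40, 15)]
LIMIT = 50

def fitness(selection):                     # selection: one 0/1 flag per item
    value  = sum(v for (v, w), s in zip(ITEMS, selection) if s)
    weight = sum(w for (v, w), s in zip(ITEMS, selection) if s)
    penalty = max(0, weight - LIMIT) * 10   # push worse solutions downhill
    return value - penalty

print(fitness([1, 1, 0, 0]))   # value 160, weight 30 -> fitness 160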

The reasons I gave for believing such a claim – that human inventiveness is bounded from above, and that GAs likely can approach this bound – were in the part of my post you didn’t quote.