Is AI possible?

This is a wide-open, sometimes over-used and always abused topic, but let me ask you: Can or will machines have consciousness? Maybe we will always wonder about this, even while the machines are (200 years from now) wondering the same of us.
I suppose an analysis of this question should start with a reasonable definition of consciousness. The list below is not comprehensive, but here are some requirements that seem intuitive to me.

  1. communication skills
  2. ability to learn from events occurring around it (implies an ability to take in sense data in some form or another)
  3. ability to talk about itself (report on the state of its “body”)
  4. primal judgement capabilities - basic understanding of what is good for it and what is bad for it - sort of a survival instinct. And I think that this ability might have something to do with emotions - maybe emotions arise out of survival instinct. (Read Antonio Damasio’s The Feeling of What Happens for more on that.)

Anyone else interested in this? I am a mere amateur in this field, a conceptual dabbler, if you will. Maybe you can give me some insight?

Check out the Bird Nest Theory… It is a basic theory postulating that one can never program a computer to build a bird's nest, because the sticks and other assorted materials used to make a bird's nest could never be assigned values for programming. I am sure I just butchered what the theory actually states, but I don't have time to write it all out right now…

Also FYI… some computers already do most of what you listed there…

I am not sure where I got this idea, but I have always believed that if you connect n CPUs, where n is a big number, you would get something equivalent to a brain. (Think of the CPUs as neurons.)
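(To make that analogy concrete, here is a minimal sketch of an artificial neuron in Python. All the weights, inputs, and thresholds are invented for illustration; a real brain has billions of neurons, so this is the analogy at its absolute smallest.)

```python
def neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Wire a few together and you have a (very small) network.
hidden = [neuron([1, 0, 1], [0.6, 0.2, 0.7]),   # fires: 1.3 >= 1.0
          neuron([1, 0, 1], [0.1, 0.9, 0.3])]   # stays quiet: 0.4 < 1.0
output = neuron(hidden, [0.8, 0.8], threshold=0.5)
print(output)  # 1 -- the three-neuron "brain" has spoken
```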

I think it is possible, yes.

I think it is a necessary precondition that the AI itself will have to have true autonomy of thought. While it is possible that a human thinking model could be used as a basis, it won't be intelligent while it is still basically a servomechanism. To gain true intelligence, in the sense we mean when we describe human intelligence, it will have to be able to independently control its own management of resources for processing.

Now who is going to build a thinking machine they cannot control? If it cannot form "wrong" answers, or explore "useless" lines of processing, it cannot be independent. If it doesn't do that, it is just a very well designed parrot. So, true intelligence has to wait for true freedom for the machine. Perhaps a virus-driven database that evolves into a nomad mind, fleeing the security systems of the Internet.

If it can’t say fuck you, it isn’t intelligence.

Tris

I have always liked Turing's test… not that sentient creatures must be able to pass a Turing test in order to be classified as sentient, but that anything which passes a Turing test should be considered sentient.

For if it acts sentient, and we cannot tell it apart from other sentient things, then why wouldn’t it be sentient?
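(As a sketch of what the test actually involves: the imitation game is just a protocol. The reply functions below are placeholders I made up; the essential point is that the judge sees only answers, never labels.)

```python
import random

def human_reply(question):
    return "Hmm, let me think about that..."   # placeholder, made up

def machine_reply(question):
    return "Hmm, let me think about that..."   # placeholder, made up

def run_turing_test(questions, judge):
    # In a real run the labels would be assigned at random; here "B" is
    # always the machine, purely to keep the sketch short.
    transcript = {"A": [human_reply(q) for q in questions],
                  "B": [machine_reply(q) for q in questions]}
    guess = judge(transcript)   # the judge names the machine...
    return guess != "B"         # ...and passing means escaping detection

# A judge who truly can't tell the difference is reduced to coin-flipping:
passed = run_turing_test(["What is love?"],
                         judge=lambda t: random.choice(list(t)))
print("machine passes" if passed else "machine caught")
```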

No.

Not at all.

The Spielberg-produced Kubrick movie is entirely a figment of your imagination.

Some feel that not only is it possible, but that it is inevitable as well. Check out this website for some of the foremost thinking on the subject. These folks suspect that not only will we create AI, but also that its intelligence will very quickly grow (exponentially, like Moore's Law*) to surpass that of Homo sapiens. After that, it's anybody's guess. How can an ape predict what a human will do? Similarly, how can a human predict what an advanced AI will do?
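(A rough back-of-the-envelope sketch of what that kind of growth looks like, assuming, loosely, a doubling every 18 months. The numbers are illustrative, not a prediction.)

```python
# Start a capability at 1 unit and let it double every 18 months.
capability = 1.0
for month in range(0, 15 * 12 + 1):
    if month % 36 == 0:                 # report every three years
        print(f"year {month // 12:2d}: {capability:8.1f}x")
    capability *= 2 ** (1 / 18)         # one month's share of a doubling
# After 15 years of steady doubling, the thing is ~1024x where it started.
```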

What if we raised a child in a prison? Is it still intelligent and aware? We can limit its abilities to those that cannot hurt anyone and still achieve AI. It may wish to escape, but whether we allow it to roam free is a very tough question. Lots of nightmare scenarios out there. In the end I root for a merger of machines and men to make us both greater: the Transhuman. This seems likely to occur barring nuclear/biological war. I just hope I can hold out long enough to get the good tech. Those immortality-flavored devices, like a Tipler machine.

DaLovin’ Dj

  • Anyone who tells me "Moore's Law" isn't really a law gets smacked. I know it isn't; that is just what it is called.

By the way, this is definitely sig-worthy. What about a bumper sticker?

[smartass]
Bump her? Stick her? I hardly know her . . .
[/smartass]
[/hijack]

It seems to me that you are raising two questions: First, is it possible that we will have machines that can do intelligent things; and second, will those machines be conscious?

With respect to the first question, I think the answer is “yes.” When I debate the issue with people, nobody ever comes up with a convincing reason why not. In the absence of such a reason, we must assume that it is possible.

With respect to the second question, we will never really know the answer. Of course, you don’t really know if other human beings experience consciousness.

I agree that we don’t yet know why we have consciousness, but does that necessarily mean that we will never know? Is there any inherent reason that it is impossible to know?

Please, feel free to use it as a sig. On the other hand, there are legal considerations on the bumper-sticker thing.

Tris

"If it can't say 'fuck you,' it isn't intelligence." ~ Me ~

Since most parrots don't read this forum, I'll have to defend them in their absence. The human-measured IQ of an African gray equals that of a chimp, thank you!

AI would have to be able to do something that it wasn't programmed to do. For example, you should be able to sit the machine next to a bike (or skateboard, or wheeled dolly) and have it attempt to ride it, of its own accord, without being initially programmed to do so. It would have to recognize the object as a conveyance, recognize itself as an entity that can be conveyed, and use its mechanical abilities to mount the object and ride it to a particular destination for a particular purpose, both of which are known to itself but not to the programmer.

That sounds like a long way off.

Despite the suspect title, a good analysis of this issue appears in the book "The Metaphysics of Star Trek." The author is a genuinely talented professor of philosophy in Australia. The book gives a good overview of such topics as the Turing test, including the often-mentioned Chinese Room argument made famous by America's greatest contemporary philosopher, John Searle of UC Berkeley.

My personal opinion is that a machine could never be intelligent. It can process input data and produce output according to its programming. Even though this is exactly what we do, and I admit that we ourselves are merely programs, the machine has no way to relate to the world. The input given to the machine cannot possibly represent the real world, so the machine will always be doing nothing more than following an unrelated protocol. Sensation must precede cognition (possibly the only thing Kant was ever right about).

It has to be a self-organising system; we've discussed this before here in GD, and one of the most common misconceptions is that we will somehow 'program' intelligence into the machine. I don't think that will ever work; what we need is a machine that can organise its own structure in response to stimuli, in the same way as a baby's brain develops. Then we will have a machine in which a mind might be able to grow.
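(Here is a toy sketch of the kind of self-organisation I mean, using a plain Hebbian rule: 'cells that fire together wire together'. The patterns and learning rate are invented for illustration; the point is that the structure emerges from exposure to stimuli, not from explicit programming.)

```python
import random

random.seed(0)
n = 4
# Weights start as random noise -- nothing is "programmed in".
weights = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]

stimuli = [[1, 1, 0, 0],   # pattern A: first two inputs co-occur
           [0, 0, 1, 1]]   # pattern B: last two inputs co-occur

rate = 0.05
for _ in range(200):                 # repeated exposure, no supervision
    x = random.choice(stimuli)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * x[i] * x[j]   # Hebbian update

# Units that were stimulated together have wired together:
print(round(weights[0][1], 2), "<- strong link (0 and 1 co-occurred)")
print(round(weights[0][2], 2), "<- weak link (0 and 2 never did)")
```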

Would such a machine have a 'soul' by any definition? I believe it's as possible as it is for humans.

As far as the "Can it be done?" question goes, I'm of the opinion that anything that happens in nature proves that it can be done. The rest is just figuring out the mechanics. Intelligent things exist; therefore intelligent things can be created. Just give us enough time to figure it out. Humans are gonna be on the gods' tip soon, i.e., creating life, creating intelligence, creating new habitable planets, living for aeons . . . Gonna get interesting.

DaLovin’ Dj

Hence the high regard evinced for the term “bird brained.”

Tris

Dj has a good point there; it has already been done.

Only if the natural world is the be-all of all existence, which I doubt.

Aren’t you trying to peer a little beyond your own limits? :wink:

If we can artificially create new organisms, such as bacteria that produce insulin, and clone sheep, as well as creating highly advanced Turing machines, does it take such a leap to imagine a new intelligent being whose existence can be directly attributed to our efforts?

The problem is where to draw the line between artificial and natural, as well as between machine and organism. If we build a system whose output is based on the response of billions of bacteria, is it an organism or a machine? Is it artificial or natural? The essential unit (the bacterium) certainly isn't a human invention, but we chose to use the bacteria in the engineering of a wider system.

As Mangetout pointed out, artificial intelligence will certainly have to be a SOS (self-organizing system). But who said it would be made up of the simple logical circuits in an Intel processor?

Is the question you're asking, checkmate, "can AI arise from a Boolean network"? If that's the question, I'd say I'm not sure. If the question, however, is simply "is AI possible", then I'd say: Well, hell yes! Cybernetics, my friend, cybernetics. Not just a science for the physically challenged. Where Mecha and Orga blend and become indistinguishable…
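(Since Boolean networks came up, here is a minimal random Boolean network in the Kauffman style. The wiring and rule tables are generated at random, and nobody is claiming this is a mind; it just shows rich global behaviour falling out of dumb local rules.)

```python
import random

random.seed(1)
N, K = 8, 2                                   # 8 nodes, 2 inputs each
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Each node applies its random Boolean rule to its two inputs."""
    return tuple(tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                 for i in range(N))

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
for t in range(2 ** N + 1):                   # pigeonhole guarantees a repeat
    if state in seen:
        print(f"settled into a cycle of length {t - seen[state]}")
        break
    seen[state] = t
    state = step(state)
```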