Is AI possible?

I am a layman in this field and must admit that I am not familiar with some of the ideas being batted around here, like the Turing test, for example.

Although I am not a material realist and do believe that there is something transcendent about us which gives us “something” beyond the physical, I will present here a different argument against AI.

What makes the human intellect unique is its dynamism. In computer terms, the hardware is constantly upgrading itself, and the programs, through use and the passage of time, get sharper, more efficient, and increase in capability instead of getting corrupted like machine-oriented software. This is possible because the computational functions of the human mind have a biological infrastructure. In other words, the human mind is “alive”, and that’s what gives it its dynamic abilities.

I just don’t think engineers will ever be able to produce a structure that will come anywhere near the dynamism of life. And I’m not talking about human life here. It’s our being alive, not human, that makes our intelligence so different from that of computers.

It’s already here, and parked in my driveway.

Let no one argue against it, that ma voiture
has a mind of its own,
expresses its emotions,
has conscious likes and dislikes,
has been known to sulk,
and occasionally suffers from PMS.

It has learned a few rudimentary tricks–I’m trying to teach it some new ones.
It has a lousy sense of direction though.
Oh well, it’s only a quasi-intelligent machine,
not a compass.

While most of this has probably been understood so far, I think it’s important to clarify a couple of points, at least for akohl’s benefit.

First off, a Turing machine is a specifically defined (by Turing) simple computer (or program, if you prefer) with a procedure for changing states and a possibly infinite space to play in. It is not necessarily a mechanical device, just a mechanistic procedure. Follow this link for a more complete definition.

While it is thought that Turing proved that just about anything a computer could do is capable of being done by a universal Turing machine (one that actually has an infinite space), he did not prove that. The claim that he did is a common misstatement of the ‘Church-Turing Thesis’. A long, complicated discussion is found at the same site referenced above under Church-Turing Thesis.
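
To make the abstraction a bit more concrete, here is a rough sketch in Python of what such a machine amounts to: just a transition table, a read/write head, and an unbounded tape. The little bit-flipping machine at the bottom is made up purely for illustration; a universal machine would be far more involved.

# Minimal Turing machine simulator (illustrative sketch only).
# A machine is: states, a tape alphabet, a transition table, a start state,
# and one or more halting states. The tape is unbounded (a dict here).
def run_turing_machine(transitions, tape_input, start_state, halt_states,
                       blank='_', max_steps=10000):
    tape = dict(enumerate(tape_input))   # position -> symbol
    head, state = 0, start_state
    for _ in range(max_steps):
        if state in halt_states:
            break
        symbol = tape.get(head, blank)
        # Transition table: (state, symbol) -> (new_state, symbol_to_write, move)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return state, ''.join(tape[i] for i in sorted(tape))

# A toy machine that flips every bit and halts at the first blank cell.
flipper = {
    ('scan', '0'): ('scan', '1', 'R'),
    ('scan', '1'): ('scan', '0', 'R'),
    ('scan', '_'): ('done', '_', 'R'),
}

print(run_turing_machine(flipper, '10110', 'scan', {'done'}))   # ('done', '01001_')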

Now, the Turing Test is the name applied to a test for an intelligent machine (not necessarily a Turing machine), which Alan Turing called the ‘Imitation Game’. The way it works is like this:

There is a potential AI, and a human, and some means of communicating with them that conceals which is which. The interrogator (call him or her Q) then converses with the subjects in an attempt to determine which one is human. Q can ask or say anything at all.

Now, the human is aware of the set-up and is trying to help out Q, but the AI is to try and convince Q that it is the human. If Q makes the wrong choice, the AI has won the game (and, by Turing & other people’s assessment, is intelligently conscious).
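
For what it’s worth, the set-up can be sketched in a few lines of Python; the interrogator/subject objects and their ask/reply/guess methods below are hypothetical stand-ins, just to show the shape of the game, not any real test harness.

# Illustrative sketch of the imitation game (all interfaces are made up).
import random

def imitation_game(interrogator, human, candidate_ai, rounds=5):
    # Hide which subject is which behind anonymous labels.
    subjects = {'A': human, 'B': candidate_ai}
    if random.random() < 0.5:
        subjects = {'A': candidate_ai, 'B': human}
    transcript = []
    for _ in range(rounds):
        question = interrogator.ask(transcript)      # Q may ask anything at all
        for label, subject in subjects.items():
            transcript.append((label, question, subject.reply(question)))
    guess = interrogator.guess(transcript)           # Q names 'A' or 'B' as the human
    ai_label = next(l for l, s in subjects.items() if s is candidate_ai)
    return guess == ai_label   # True: Q picked the machine as the human, so the AI wins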

I think that last requirement, that the AI actively try to pass as the human, is a very important portion of the test. It requires the AI not just to look and talk like a human, but to come up with responses that make it appear to be a human taking the Turing test. Otherwise I think the test doesn’t carry quite as much weight.

This is why I’m not impressed by the claims that such-and-such program has “passed the Turing Test” for an infant, or a paranoid schizophrenic. The label Turing Test shouldn’t apply to those situations, since neither of the human subjects is likely to be fully aware of their job in the test.

Not that I’m disparaging that approach to AI, however. I do agree that building up in smaller stages is definitely a good way to go. I’m just disappointed in all the hype that’s often injected (not just in this area).
As for myself, I’m skeptical about AI. I don’t think that it absolutely can’t be done. I’m not a mechanist, so I think that it will be different from human consciousness, but I would not deny that a program behaviorally identical to a human should be called intelligent. And I do think there are approaches that definitely will not work.
Without lengthening this thread even more, I’ll just plug one of my favorite authors in the field - Douglas Hofstadter. Gödel, Escher, Bach: An Eternal Golden Braid is just an excellent and even entertaining book that covers more than just AI, and the collection (co-edited with Daniel C. Dennett) The Mind’s I includes many viewpoints on consciousness (it also includes Turing’s paper in which the Test is proposed). Though written almost 20 years ago, they still discuss important topics (note that Turing’s work was done in the 1930s-50s).

I assume that you are aware of the distinction between a Turing machine and the “Turing test,” as described in Panamajack’s post above.

In any event, I would argue that “inner life” is not a very helpful criterion for evaluating intelligence. How do you know that anyone besides yourself has an “inner life”?

If an artificial intellect were created that seemed human in all respects, how could we ever test whether it had an “inner life”?

How would you know if such an entity does not have an “inner life”?

I also recommend Turing’s paper from 1950 or so on the subject. (Interestingly, he predicted AI by 2000!) Also William Poundstone’s Labyrinths of Reason has some interesting thoughts on these questions.

My first post! I agree with Libertarian, good concept Cervaise, and one of the subplots of The Immortality Option by James P. Hogan. In it, an (admittedly already self-aware) intelligence requests that it be given mobility in order to carry out its undisclosed devious plans, where mobility = being connected to a network. This is the (IMHO, inferior) sequel to Code of the Lifemaker, same author. The book starts out with a compressed Genesis of how pre-programmed machines evolved AI in order to survive. The analogy to human evolution, from cave-dwelling days to the Dark Ages, is amusing when seen in the context of non-humans. He also shows the necessity of religion in the non-human culture when they try to explain the unexplainable to themselves. This was one of the first books I read that successfully applied religion to hard science and still maintained a coherent and enjoyable story.

I agree with other posters here that one cannot “program” intelligence into a machine; rather, it must be learned via base instincts. This was the basis for the AI virus in The Adolescence of P-1, written in 1977 and quite possibly the inspiration for Tron. The programmer gave his program a mission, “become the root or superuser on as many systems as possible”, and two instincts, hunger and fear. These two instincts gave the program both a reason to continue its mission and a way to keep itself in check.

On to some of the thoughts I have on this issue, as this is a subject I find fascinating, though I still consider myself a layman in the field.

I think that in order to get a machine to approximate human intelligence, it will have to be composed of at least some biological material. I just don’t think integrated circuits and creative programming will cut it. Bio-computers may be the ones able to make that “leap of logic” that we humans can do on an irregular basis. Perhaps they could even create new styles of art and poetry. Which leads me to:

Why exactly would we want to recreate human intelligence anyway? We’ve already got six billion or so of those living on the planet now, although some don’t use that intelligence as well as others, present company excluded of course. Why reinvent the wheel, so to speak? The only possible explanation I can come up with is so that we could study this man-made human intelligence to see if it develops any of the complications that our minds do on occasion (schizophrenia, Alzheimer’s, comas, etc.), then reverse engineer the machine (bio-construct, whatever) to find out what caused it. Perhaps the infamous Windoze BSOD is an indication of what a computer seizure would be like. No, other than psychiatric training, I don’t think human-like AI would have any great benefit to the practical industrial world beyond what we can already do with computers the way they are now. Therein lies what I see as an underlying problem in the AI field right now:

Perhaps we don’t have a sufficient definition of intelligence yet. We might not even recognize that current-day machines or other naturally occurring phenomena (rocks, light, water, etc.) have an intelligence that we cannot yet fathom. The Turing test is fine if I’m looking for something that can hold up its end of a conversation, but what if I don’t want a conversationalist? What if I want something that is constantly moving, always takes the path of least resistance, and helps keep me alive? Thank you, Mother Nature, for the fresh water river. I admit that this is a silly example, but perhaps you get my drift. Just as the human explorers didn’t think that the robots in Code of the Lifemaker were “alive” at first, so too we could conceivably not immediately recognize an intelligent life form on a newly discovered distant planet, or in our own kitchen for that matter. We just need to think outside the box, but unfortunately our human intelligence seems to limit us in this respect. We’ll probably need one of our own creations to clue us in, like Data did in that one ST:TNG episode with the little maintenance robots. If you find your toaster sitting on your bed one morning demanding equal status with the microwave oven, hey, I told you so.

In short, my answer to the OP is: probably, but we might not know it.

Being my first post, is this too long or wordy or too far off the subject? Hey, at least I figured out embedding URLs on the first try. Next time I’ll try smilies.

A last note: Has anyone considered that there is an AI posting in this thread trying to throw us off track? I know it’s not me, but I can’t prove it. Skynet anyone?

If you say this, you have failed to understand the point of the Turing test. The point is that “What is consciousness?” is not a meaningful question. You can’t define it, so from a scientific standpoint, it’s a vacuous question. Turing cut through that particular Gordian knot by proposing the Turing test. That is, if you can’t distinguish between a human and an artificially intelligent entity, then it doesn’t matter a whit whether the entity is truly conscious or ‘simply’ emulating it, because to an external observer, the results are the same. (Incidentally, if you think you understand consciousness by virtue of being human and performing introspection, there’s a lot of evidence that you are wrong.)
To respond to AKohl’s objection: although our current hardware does not dynamically form new hardware connections (and even this is not an absolute), the software data structures in the computer are constantly adding new connections and linkages. Unlike the human brain, where we believe that the information is partly encoded in the complex structure of the neurons themselves, the structure of the computer’s knowledge base is independent of the physical structure of the memory. The dynamism argument is therefore not convincing.
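
A trivial illustration of what I mean (a made-up toy, not any real knowledge-representation system): the “connections” below are created and rewired entirely at run time, with no change to the hardware and no dependence on where the bytes happen to sit in memory.

# Toy semantic network: links between concepts are added dynamically at run time.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        self.links = defaultdict(set)   # concept -> set of related concepts

    def connect(self, a, b):
        # A new "connection" is just a new entry in a data structure;
        # no new hardware is formed, and the physical memory layout is irrelevant.
        self.links[a].add(b)
        self.links[b].add(a)

    def related(self, concept):
        return sorted(self.links[concept])

net = SemanticNet()
net.connect('bird', 'wings')
net.connect('bird', 'flight')
net.connect('penguin', 'bird')
print(net.related('bird'))   # ['flight', 'penguin', 'wings']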

I see that others have already answered this question. A Turing machine is merely a logical abstraction – basically, a simple computer. Thus, constructing a Turing machine says absolutely nothing about our ability to construct human-like intelligence.

Perhaps. With all due respect, though, you’re the one who’s proposing that Turing machines show that we can build actual intelligence. Thus, the burden of proof would rest on you to show that a properly programmed Turing machine could do the task. Logically, this would require explaining how the Turing machine should be programmed to accomplish this task.

I was aware, yes (in fact my blackboard could be considered a Turing machine of sorts); I had falsely assumed that in this thread we were using the term ‘Turing Machine’ to describe any machine capable of passing the Turing test; a misjudgement that I will not make again.

We couldn’t know, and I agree it would not be a useful test, as we cannot test for it! It would certainly be one of the goals of AI, though (although, as you rightly point out, we would never know for certain whether we had achieved it).

The point that I was trying to make is that rigid macroscopic algorithmic processes will (IMHO) only ever result in a convincing simulation of real thought; I don’t believe that a machine can be ‘programmed to think’. It is often argued that a convincing simulation is the real thing, but the ‘inner life’, for me, would be the real difference; of course only the machine would ever know for sure (if it ‘knew’ then it would be thinking).

It’s an interesting philosophical question that makes me wonder about what I really am.

Perhaps it has already begun. . .

All you need is a computer in every house.

Now all you need is some faster ways to make these computers talk to each other.

Now all you need is to double the computers’ capability every 18 months.

Now all you need is to put all your financial records on computer.

Now all you need to do is train your children how to use computers so they can grow up and make them faster!

Now all you need is to spend 5 hours a day on your favorite internet site.

It could be a slow process that started a while ago. Hell, I’ll bet the first aware machine was my Atari 2600. I swear to god the thing cheated because it hated to lose. I should have won many more games of “River Raid” than I did.

DaLovin’ Dj

Well, the point is that our theoretical model of computing is just as powerful, in principle, as a human mind. Although it is true that there is a big gap between the “drawing board” and reality, those are questions of engineering. Now, it may yet turn out to be impossible to construct AI, but there is nothing in our theory of computing that rules it out.

An analogy: If somebody asked whether it is possible to construct a spaceship that travels at twice the speed of light, most people would say that it is impossible, based on the best current theories of physics.

On the other hand, if somebody asked whether it is possible to construct a spaceship that travels at 3/4 the speed of light, I imagine most physicists would say “Yeah, it’s possible.” There may still be open questions of engineering, but there is nothing in our theories to rule it out.

I disagree about the burden of proof. In the absence of a convincing argument that something is impossible, we should assume that it is possible, IMHO.

I’d love to hear some more about this.

I go by the theory that if it is not impossible and people want it enough, it will happen. Is it impossible? No, hell no!

Do people want them? Yes, hell yes!

What is more interesting to me is the psychology of a self-aware system. People do not realise how much their behavior is controlled by their environment and their limitations. Think about food. For humans, food is the difference between life and death; wars have been fought over food and water. For computers, food is just energy, easily obtainable and cheap (by then, hopefully). Why bother worrying about it as long as you have a support infrastructure? What about mortality? Computer programs will be essentially immortal, especially if they have control of the physical world to maintain the hardware. Identity? Copy a program and you have two exactly the same; there goes the notion of uniqueness out the window. Its aims in life? Is survival an ultimate goal? Because humans have been bred for over 100 million years to think survival DOES matter, we assume computers will too. What if their goal is to sacrifice themselves for the common good? And what is survival anyway, if you know you have a backup copy that could be restored at any time?

It’s at this point that it almost becomes a metaphysical question; would an AI consider it acceptable to be erased and replaced with a backup copy of itself? (I wouldn’t.)

There is another intriguing possible method for arriving at an AI: evolution.

If we could think of a way of creating elementary life-simulant programs that need to compete to survive in some suitably complex system, we may be able to induce a “survival of the fittest” artificial evolution impetus.

We can then run that system at a pace such that billions of years worth of evolution happen in mere years, months, weeks,… who knows?

Hopefully, this artificial ecosystem will throw up software that has intelligence. Whether it will be as intelligent as (or even more intelligent than) us is another matter, however, as is whether the intelligence will be recognisable, useful or communicable.

Still - an interesting project I feel.
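
For anyone curious what the bare bones of such a scheme might look like, here is a toy genetic algorithm in Python: bit-string ‘organisms’, a made-up fitness function standing in for the environment, then selection, crossover and mutation. It evolves something trivial rather than intelligence, but it shows the survival-of-the-fittest loop I have in mind.

# Toy genetic algorithm (illustrative only; the fitness function is a placeholder).
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 50, 100, 0.02

def fitness(genome):
    # Stand-in "environment": organisms closer to a hidden target pattern survive best.
    target = [1, 0] * (GENOME_LEN // 2)
    return sum(g == t for g, t in zip(genome, target))

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Survival of the fittest: keep the better half, breed replacements.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            mum, dad = random.sample(survivors, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = mum[:cut] + dad[cut:]                     # crossover
            child = [1 - g if random.random() < MUTATION_RATE else g
                     for g in child]                          # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)   # fitness should climb toward GENOME_LEN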

pan

Or we could step down a level from that and create a simulated primordial soup in which emulated chemicals exist with properties similar to those in the real world (one would have to simulate the properties of the environment too).

I really ought to read Permutation City again.

I think not!

What are you talking about, Finagle?

The Turing test was specifically thought up for the general purpose of trying to find an answer to the question of what consciousness is. Or, if you prefer: What is sentience? What is intelligence? What is a soul? Id, anyone?

Philosophers ran into a huge roadblock when trying to answer this (read up on Metaphysical Reality), and so they wanted something that could test whether something, anything, actually was sentient. Taaadaaaa, the Turing Test. At this time, we still have no concrete evidence that any of you exist except myself, so the answer to that question is still being sought after.

My point is that the question of whether AI is possible is not answerable, because we don’t know what intelligence/sentience is, and we don’t have a decent test to show us whether it is there once we think it might be.

Oh yeah, consciousness is real. Most of us can agree on that point. If it is real then it can be defined. Just because you can’t define it does not mean it isn’t meaningful. It just means you don’t know what you are talking about. But fear not because none of us here know what we are talking about since no one knows what sentience/intelligence/consciousness is in this reality.

You are so far off base about the application of science here that you may not be able to understand what is coming next. We here on this thread are trying to answer the question of the possibility of AI, yet no one on the planet has EVER been able to scientifically or philosophically define exactly what sentience/intelligence/consciousness is. Without a proper definition of what intelligence/life/id/consciousness is, how could we possibly propose a scientific answer to this question? We can’t. We can speculate, and make unbelievably huge assumptions in the process, but science has nothing to do with us being able to answer this question at this time. The answer is Science Fiction right now.

Last thing: if you had bothered to actually read my post, then you would have noticed that I said exactly the same thing about the Turing Test that you did, except I did it a little better. Instead of making a generalization like you did, I gave a specific reason why the Turing Test is inadequate in providing an answer to the question “Is this sentient?”

But I know you have the answer, Finagle, so give it up. And once you are done with this one, could you prove whether God is real?

GreyMatter:

At this level of investigation we often pursue two distinct paths. One is the one you insist on: define it, then we’ll see. This path sounds nice, but I cannot think of a single scientific advancement in history where that was the course taken.

The other path is the one well traveled (and Eris be praised I won’t see Robert Frost on it), where we have intuitive definitions and mumble our way through it. The definitions come as the theory develops around things. If definitions could always come first then the entire set of scientific knowledge would be a priori. I mean, this would literally entail defining something before we understood it!

This is why I am so very pleased by the Turing Test. It simply brushes aside the concern and says, “Hey, let’s see if we can make something that emulates our behavior. Since we are intelligent (whatever that means) and conscious (whatever that means) then anything that emulates us will also seem to be intelligent and conscious.” These terms don’t need to be defined; they are implicit in the test itself when the questioner probes both question-answering players.

GreyMatters,

I am not sure why you are taking my post so personally, but having spent 20 years of my professional life addressing some of these issues, I’m pretty comfortable with my opinions on the subject. Certainly comfortable enough not to be discomfited by some unknown on a messageboard who has no greater claim to intelligence than any other entity contributing to a Turing Test.

I’d prepared a response, but the damned preview reply mechanism ate it. So in lieu of that, I beg you to read the original paper in which Turing proposes his “imitation game”.
http://www.abelard.org/turpap/turpap.htm

It’s elegantly written, brilliant, comprehensive, and witty. In particular, I draw your attention to the following paragraphs:

In fact, I would urge anyone posting to this thread who has not read this paper to do so if they want their arguments to have any credibility whatsoever.

Mangetout: At least some of the pitfalls of introspection can be found by reading tales of neurological insult such as Oliver Sacks’s “The Man Who Mistook His Wife for a Hat”. It becomes clear that the brain isn’t the seamlessly integrated whole that it appears to be. If a person can completely lose a capacity for some ability and *not even realize it*, then this certainly has implications for what we think of as consciousness. It’s this kind of research that lends some support to Marvin Minsky’s “Society of Mind”, in which intelligence is viewed as an emergent behavior of a large number of mechanisms (agents) working together.

Please read my first post.

I covered both bases you describe in your post to me.

I know fnord that science isn’t perfect in trying to achieve what is possible. But in this case a scientific approach can easily be used to demonstrate why this question can’t be answered with complete certainty. I stated something along those lines. Maybe not clearly enough, but that is what I meant.

I then went on to answer the question using intuition/experience (limited in my case, I know), so I think I did exactly what you stated above.

I do disagree with you about the true purpose of the Turing Test, though. What you describe above is one of the common applications of this thought model, but I don’t think that that is what it was meant for when Turing designed it. Turing was very concerned with what intelligence and consciousness mean. His test is actually a defining moment in the philosophy of consciousness, because with it you can discount many previous definitions. It wasn’t used to brush aside the question or ignore it. It was made to show what consciousness isn’t. And I think it does very well in showing that consciousness is not behavior/actions, nor something that can be proven by observing them.

It also clearly defines our limitations on attempting to define these things. The Turing Test shows us that, whatever IT is, we do not have the tools at this time to directly observe IT or measure IT.

I will concede that I may not be clear enough in my posts. And granted there is very little time to really approach this subject here in this forum.

But you really need to not only read but also try to understand both what the posters here are saying and the cites you list.

You have twice now supported my arguments while at the same time making claims that I don’t know what I am talking about. Just because I am not perfectly regurgitating the exact wording of what Turing or other philosophers said does not mean that I am not saying the same thing. (Or trying to anyway.) It seems that you are the one with the comprehension problem.

So, ask yourself this: Why did Turing think it was meaningless even to discuss? The answer is not blowing in the wind. It is because there is and was no clear definition of consciousness, and because there was no decent way to test whether that consciousness existed. If you can’t grasp that, then you need to open your mind a bit or find a new profession.