Is AI possible?

I think you misunderstood. I said that many scientists believe in a soul – that is, a supernatural component to human beings. I did not say that the soul is under the purview of science – that is, a component of physics, biology or chemistry. Those are two entirely different claims. One can be a scientist without believing that science can explain everything.

As for the other recent objections raised in this thread, I hope to get around to them this weekend. Very briefly though, some of them are thought-provoking, but I think that most of them have missed the boat. For example, pointing out that we did not have to learn to recognize the color blue proves nothing, since I already acknowledged that some components of intelligence can be pre-programmed. In contrast, intelligence itself is largely emergent – that is, it is something that emerges, rather than being pre-programmed.

Regarding neural nets – as I said before, I’m not terribly impressed by their use to support hard A.I. For one thing, their structures are specifically selected to produce a desired result. Second, they are useful for quantifiable patterns (e.g. a camera image), but I think it takes a huge leap of faith to suggest that they could explain how we learn non-quantifiable skills – how to write a heart-wrenching novel, for example. And third, while neural networks can be useful for recognition (in a limited, tightly controlled sense), I think it would be reckless to assume that they can be used to produce insight, creativity, intuition or various other basic components of intelligence.

Therein lies the problem. I think that the extollers of hard A.I. have made mountains out of molehills. Neural network-based pattern recognition can be impressive and very useful, but I think it takes a huge leap of faith to suggest that it demonstrates our ability to manufacture true intelligence.

Incidentally, Roger Penrose offered some similar ideas in his text, The Emperor’s New Mind. Penrose is himself a materialist, so his arguments are NOT based on any form of theology. He has his share of detractors, but I think they have missed the boat, as evidenced by some of the reviews of his book at Amazon.com. (Example: The frequent appeal to neural networks as “proof” that hard A.I. is feasible – a claim which is clearly jumping the gun – or the unproven assertions that the human mind is “computable.”)

That doesn’t quite sound right to me, though. If something exists, it can be explained, right? Science is a search for explanation, not a, er… limited domain of topics, I guess. If souls exist, then they are not outside science. They are outside current understanding of such things as biology and physics and the like, but if proof of souls were present, and an explanation for them could be formed, then that would fit under science as well as any other topic.

However, proof of souls and the like doesn’t seem to exist, while the evidence that does exist points toward the human brain being a biological machine. Regardless of what people believe, I’m going with what there’s evidence for, and the evidence quite clearly points that way.

(And on a personal note, I also believe that souls exist in some form. However, I believe that souls would fit into the universe in a natural and rational way.)

Of course, even if souls -were- required for something to be truly intelligent, that would only be a stumbling block assuming that they could not be artificially created (And I really doubt anyone has evidence either way on this one :slight_smile: ) or naturally “settle in” in a constructed being.

In any case, it still doesn’t address how a non-natural component can be a required part of something natural, and why said non-natural component would therefore prevent any sort of duplication of that natural intelligence. Seems about as reasonable as saying a clone or genetically-engineered being would be unintelligent because, being an artificially created being, it wouldn’t have a soul.

Same can be said for the human brain. DNA lays out the blueprints for the brain and establishes important functions (breathing, heartbeat and other life-sustaining functions, the senses, etc.), and from then on the brain develops through learning. The structure of the brain and how it operates is specifically selected by DNA to produce this desired result.

Seems like oversimplification. But I happen to be in a good place for this example: I’m learning to write stories. Now, I’m not too good at it. But I learned the English language and grammar over many years, being told what to do until I learned the basic skills for it (Pattern recognition; I do this, and I’m told it’s good. I do that, I’m told it’s not. So, I try to do what’s good). Then I read other people’s stories and started seeing what I like (Which was a combination of factors, but mostly from what I had experienced in the past and “learned” to be good). So now, I’d like to make more of what I like, because what I like is good (In the simple sense). From there, it’s a process of trying, failing, learning, and finally succeeding when I come out with something I like.

It isn’t a simple pattern, even for a computer. It’s a very complex pattern that takes most humans a few decades of development to reach. It’s a pattern that has developed so long and so deep that it no longer resembles a pattern from the surface.

How’s that? Intuition is mostly taking what knowledge one has and reasoning what the best course of action is, and/or what is most likely to happen. Computers already do that for simple patterns, and I see no reason why they could not develop into doing it for more complex patterns. Insight seems like a simple application of knowledge and extrapolating from there. Creativity depends on the viewer; One person may think something is incredibly creative, while another may have seen it done a hundred times before. If anything, creativity would seem to require a certain degree of unpredictability by others’ perceptions. I’ve played a few games where I would consider the computer “players” to display a very, very limited degree of this, not behaving in the way they’re expected to. In some cases, the computer does much better because of it.

Now, I wouldn’t consider that on the same level as human creativity as far as degree, but there are times that it is indiscernible from human creativity (Though, again, usually on a very, very small scale). Now, if the computer here were a neural net, it could learn from this. Given enough experiences, it seems reasonable that it might learn that such creativity is a good thing and strive to do it more often, much as a person might (And there are lots of people out there who are pretty uncreative and just follow along with what other people do as much as possible, because it’s what they like to do, or what they think is good).

With current tech, sure. But again, remember how quickly computers advance. Some decades ago, a multi-room computer could do simple mathematical problems in a few minutes, or run very simple programs if you left it running all night long. Nowadays, a computer could do that in milliseconds. I don’t think neural nets demonstrate any ability to manufacture a true AI, but they do demonstrate that the technology does have the potential to do so. It might take a few decades to build up computing power to the level that it’s sufficient for the load that will be put on it, and another couple decades from then for the computer to learn up to the point of an adult, but it seems reasonable enough that it could happen eventually.

It’s not a simple program, not even as “simple” as the most complex programs running now. With the neural-net “learn-to-AI”, it’s a program so complex that it takes the machine and the entire environment around it years or decades to program itself.

Can you offer a specific test that would distinguish between pre-programmed and emergent behaviour?

**Phoenix Dragon**, I appreciate your questions and objections, and I think that some of them are thought-provoking. However, I think that there are some pervasive fallacies that keep popping up in defense of strong A.I. I don’t have time this weekend to deconstruct them thoroughly, but here’s a brief sampling.

I think that argument is fallacious on several levels. Most notably, it’s not evidence. Rather, it is a mere assumption, and a thoroughly unsubstantiated one. In fact, I daresay that it’s inherently unprovable, insofar as there are limitations in the extent to which we can verify the accuracy of various explanations. Hence, I do not think that it constitutes evidence.

This fallacy – confusing assumptions with evidence – pervades many of the arguments raised so far. Consider the example of neural networks, which we discussed earlier. As I said, some are suggesting that neural networks demonstrate that we can create true intelligence – but clearly, that is a matter of mere conjecture. It is belief – a leap of faith, even – rather than evidence.

Ditto for claims such as:

That’s an unfounded assumption, and a false one. Science only deals with the material world, and strict science only deals with that which is repeatable and testable via the scientific method. In fact, if one claims that science can explain everything – EVERYTHING! – which exists, then the burden of proof rests on the person making that claim.

Ditto for the claim that the human brain IS proof that we can create hard A.I. That is, quite simply, circular reasoning. It ASSUMES what we are trying to argue, and it ASSUMES that this task can be accomplished.

Many posters here are clearly convinced that constructing hard A.I. is just a matter of time. As ethnicallynot said, “If the question, however, is simply is ‘AI possible’, then I’d say: Well hell yes!” Or, consider lucwarm’s claim: “In the absence of a convincing argument that something is impossible, we should assume that it is possible.” With all due respect though, I think that such confidence is clearly unwarranted. Without a preponderance of evidence, success should not be considered the default position.

In fact, hard A.I. requires three assumptions: (1) that the human mind is entirely material in nature, (2) that the method through which it works is understandable by human means, and (3) that it can be feasibly replicated. Even if we grant assumption (1) (which I believe to be false), the remaining two assumptions are yet unproven. In fact, Dr. Roger Penrose – one of the world’s foremost physicists and mathematicians, and a devout materialist – provides several arguments for (2) and (3) being false. Hence, I daresay that such extreme confidence in our ability to achieve hard A.I. is greatly misplaced.
lucwarm asks:

I provided several such examples. The machine must be capable of learning to recognize itself in a mirror, for example, and come to the realization that the image it is viewing is itself. Of course, such an experiment would require having the machine communicate this thought to the outside world, but that should not be a severe obstacle – especially if one claims that hard A.I. is indeed feasible.

[sub](In fact, in order to make this an honest test, the machine should ideally be physically distinguishable from the other test objects, but should not be deliberately given a unique identifying mark, such as a UPC code… but that’s a separate issue from the question of self-awareness.)[/sub]

I like the way zwaldd put it, earlier in this thread: “AI would have to be able to do something that it wasn’t programmed to do. For example, you should be able to sit the machine next to a bike (or skateboard, or wheeled dolly) and have it attempt to ride it, on its own accord, without being initially programmed to do so.” We program our computers all the time, by loading the desired software into them. Pre-programming them does not make them intelligent.

As I said, time precludes an exhaustive reply at this point, but quickly…

I think that misses the point. If human beings have supernatural souls, then human beings are NOT strictly natural. Rather, they are beings with natural components (their physical bodies) and non-natural ones (their souls). In fact, this distinction is implicit in the English language, since we differentiate between people dying of natural causes (e.g. a fatal disease) and unnatural causes (e.g. being strangled by a rabid ex-lover).

Hence, there is no contradiction in postulating that human beings have a supernatural soul. A contradiction only exists if we describe humans as being 100% natural. Not only is this description unnecessary, it also runs counter to linguistic precedents.

Wait a minute. I’m pretty sure they’re using natural and unnatural as in “of nature” and “not of nature”, not material or supernatural.

But it would contradict the description of humans having natural intelligence if the means for that intelligence are in part unnatural.

I would say that the reason science “only” deals with the material world is because the material world is all that there is any good evidence for. Everything in the material world seems to function perfectly within the material world, without unexplained influence (And when an unexplained influence has been found, it’s eventually been explained – Exempting current theories that are still in the process of being proven and/or disproven, of course). If there were a non-material world affecting this material world, then the influence should be apparent, and if that influence were present here in the material world, of course that influence (And by relation, its source) would be under the “domain” of science. But where is it? There hasn’t been any of this influence seen anywhere. If there isn’t any good evidence for it, what reason is there for it to even be a factor here? The burden of proof is most definitely on the ones saying that this non-material-world influence is the reason that a seemingly wholly natural-world process shouldn’t work.

All the solid evidence points to #1 being true, and there is no solid evidence to contradict that. #2 seems odd to me… Why wouldn’t the operation of the brain be understandable? We’ve got neurosurgeons who can do remarkable things, and there are already rudimentary brain-to-computer interfaces (Very rudimentary; the only example that comes to mind right now is a paralyzed person who had an implant that let him mentally control a mouse pointer). Is there anything known (In the known, material universe) that can’t be understood?

#3 seems like the only true difficulty in the way of AI, and it seems to me to be a simple limitation of understanding and technology. Again, can you point out any single thing in the (known, material) universe that cannot be replicated with sufficient understanding of how it works and sufficient technology?

Or do you mean that it may take too much effort to be feasible? If that’s so, that’s not an argument against AI, but an argument about how hard it would be to do.

It would have to have some reason to do so, though. If you put a kid who’s never seen a bike being ridden next to one, chances are pretty high they’re not going to figure it out, or even know what it does. The only thing my friend’s kid does with a bike is pull it over on himself; he doesn’t know that it’s something to be ridden.

Now, if that AI were set next to a bike (Assuming, of course, that it had the capability to ride it), and had some reason to try riding it (Such as seeing everyone else riding a bike and figuring that it might be better to ride instead of whatever other type of motion it has, or being told by someone else to ride the bike), then it might attempt to learn to do so.

I wish I had a cite to the article I read a few years back but, um… It’s been done. There was a neural-net/robot constructed that was not programmed to use its attached manipulator arm, but instead learned how to. It learned up to simple tasks with it, and even started to show a little initiative (Noticing that it was always asked to do a certain task, it did the task on its own instead of being told to). It was planned to eventually add more onto its “body” as it learns. I just wish I could find something on it now; I’ve been curious how it’s advanced since then…

Well, the examples you provided were heavy on generalities and light on specifics.

How would you test whether a machine has “come to the realization that the image it is viewing is itself”?

If it is enough that the machine output something like “I realize that’s me,” then I agree we have a specific test. I would argue that it’s not much of a stretch to have computers capable of making such a recognition and having such an output.

But you seem to be implying that something more is necessary. You seem to draw a distinction between recognizing oneself and coming to the realization that the image one is viewing is oneself. Assuming that a device has satisfied the first thing, how would you test whether it has satisfied the second?

Well, I’ll try to finish the argument since JThunder seems disinclined to answer my last question.

It’s tempting to require certain qualities/achievements in a proposed A.I. device, e.g. consciousness, realization that an image is oneself, etc.

But on closer inspection, these qualities/achievements turn out to be vague and ill-defined. Moreover, there is no coherent way to test for these qualities/achievements. If a seemingly intelligent entity appears before us, there is no way of really knowing if it has these qualities/achievements or not. This is true even if the entity is another person!

Thus, such criteria are not terribly useful. And in evaluating whether A.I. is possible, it is not very helpful to ask whether an A.I. device might be capable of such achievements.

So, to paraphrase Turing, we should concern ourselves with the output (and input) of such devices, and see if they can produce (simulate?) intelligent behaviour.

Indeed, the point would be that simulate, produce, contain, have… these are not words that can pertain to private phenomena. We may test appearances, and if by all appearances it is a thing, then that is the most we can say.

Searle lovers harp on the Chinese room. But the point is, Searle has never been in his own head; he has no idea if his own head is a Chinese Room. He has no idea, really, that he even has a brain, unless he’s got a detachable skullcap, in which case: holy shit.

The question of whether private phenomena exist is as metaphysical as ever. Science can only quantify and qualify appearances. :frowning:

Patience, lucwarm, patience. I’m still catching up on a host of threads.

In brief, I think there have been some serious misconceptions here regarding the nature of science. For one thing, science doesn’t focus on the material world because that’s all that exists. Rather, it focuses on the material world because that’s all it’s equipped to directly analyze – and it can only handle a limited subset of the material universe.

As for the alleged vagueness of concepts such as “consciousness,” that is hardly an obstacle – after all, we all know what consciousness is, even if we can’t formulate it mathematically. Moreover, such arguments are self-refuting. If we object to the mention of “consciousness” because of its alleged vagueness, then one must object to the mention of “intelligence” as well – which renders the entire OP pointless.

More to come, once I’ve caught up a bit more.

Well, I kinda had the feeling you were dodging my last question since (IMHO) it’s so devastating to your argument. My apologies if I misread you. (I note that you still haven’t answered the question!)

Here it is again:

**How would you test whether a machine has “come to the realization that the image it is viewing is itself”? **

**Do you know if I am conscious? How do you know?**

**Turing came up with a test for intelligence that is satisfactory to me. Can you propose a test for consciousness?**

I’m happy to read and respond to what you write, but why not take a stab at answering my question?

**How would you test whether a machine has “come to the realization that the image it is viewing is itself”? **

I think I would like to say no, but I am inclined to say yes (theoretically it is, anyway). What is the brain on a scientific level? Assuming there is no god and science explains the world over, then the brain is nothing more than different cells and chemicals placed in with energy in the perfect order to create the perfect reaction. So if it can be done in nature, then why can it not be done in a lab using supercomputers and wires? I think before we can even begin to ponder that question, we must have a clear understanding of the body and its functions, so such a scientific breakthrough is still far off (and thank goodness, because, put bluntly, AI would SUCK).

I have to admit, I’m kind of curious… Why?

As I said days ago, that’s not a serious obstacle. Simply have the machine communicate this to you. This could be as simple as asking “Who or what is that image that you see in the mirror?” If the machine was not told that the image was itself, then we must conclude that it arrived at this conclusion on its own.

As I said before, have the machine communicate this info. One straightforward means would be to ASK THE THING. Alternately, program the device to let you know when it has determined who is in the mirror. (If it is a truly intelligent machine, then it should understand your instructions.)

And there is a clear distinction between “recognizing oneself” and “coming to the realization that the image it is viewing is oneself.” We can already teach computers to recognize various objects, especially when these are given identifying marks. In contrast, the latter requires recognizing oneself without being told that the image in the mirror is, in actuality, its very own self.

I most certainly was not dodging it. In fact, I had already responded to your question, days ago… and even if I had not, I think the answer is fairly self-evident. Unless, of course, hard A.I. proponents don’t really believe that machines can be made to understand the question “Who is that in the mirror?”

I think you’re changing your tune, but there’s no need to debate it. If that’s your test, then I would argue that the approach I suggested would pass the test.

That’s an interesting distinction you propose. So, for a device to recognize itself, it’s ok to identify the image in advance? I could write a program in C in 10 mins that does that.

In any event, now that you’ve specified a test, it seems pretty clear to me that the test you propose (both parts!) could, in the near future, be passed by devices that don’t even pass the Turing Test.

Not even remotely. As I said, having the machine recognize itself is fairly easy. What I specifically requested, however, was a means for it to recognize itself and realize that the image corresponds to itself. (“Realization,” by necessity, indicates a lack of foreknowledge. That is, it must not have been previously informed that the image in question is itself.)

Remember, it’s one thing for the machine to see itself in the mirror and match that image to a database entry. Such feats, as I said earlier, can be programmed. It’s another thing for the machine to realize “Hey, wait a minute! That image that I’ve been seeing in the mirror – It’s ME!”

For the sake of argument, I would permit that. Why? Because mere recognition is not the real challenge here. The challenge is realization – making the intuitive leap to understanding what the image means in real-world terms, outside of a mere database abstraction.

Consider the cognitive development of an infant, for example. Infants can learn to recognize a wide array of objects under varied conditions, which is itself fairly impressive. However, that does not compare to what happens as they look at themselves in the mirror. Eventually, they don’t just recognize the image as something they’ve viewed before. Rather, they come to realize that the images in the mirror correspond to themselves, which is another achievement altogether.

“Recognition” (in the sense used in computer vision) is ultimately just pattern matching. It can yield impressive results, but it is merely algorithmic in nature, and ultimately, it merely matches the image with some data entry. In other words, it does not embody any understanding of what that image means.
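
To make that concrete, here is a rough sketch of the kind of lookup I mean. (This is purely illustrative: the “signatures,” the labels and the distance measure are invented stand-ins, not any real vision system.)

```python
# Purely illustrative: "recognition" as matching an image against stored entries.
# The signatures and labels below are made up for the sake of the example.
import math

DATABASE = {
    "coffee mug":            [0.9, 0.1, 0.3],
    "wheeled dolly":         [0.2, 0.8, 0.5],
    "unit 7 (this machine)": [0.4, 0.4, 0.9],  # the machine's own image, labeled in advance
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(signature):
    """Return the label of the closest stored entry: nothing more than pattern matching."""
    return min(DATABASE, key=lambda label: distance(DATABASE[label], signature))

print(recognize([0.38, 0.42, 0.88]))  # prints "unit 7 (this machine)"
```

The lookup returns the right label every time, yet nothing in it amounts to the machine grasping that the label refers to its very own self.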


Remember, we can only judge “understanding” by the device’s output.

And I don’t see why it’s cheating to identify the machine’s image to itself along with other images. (Note that most human infants have their own images identified to themselves.)

In any event, assuming that we have basic, decent, image recognition techniques, I think there’s a fairly straightforward way that a device could recognize itself without ever having had its own image identified to itself.

The device could be programmed to move its appendages and take note of whether the image it was looking at moved at the identical times. If so, then it would identify the image as itself.

Now, you may say that this approach is some kind of pre-programmed cheat, but keep in mind that all we care about is the machine’s input and output. For all we know, the human mind may be a collection of 1000s of similar “cheats” that work very well together.
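
For what it’s worth, here’s a rough sketch of that idea. (Everything in it is an illustrative assumption: the toy “scene” functions stand in for a camera plus image-differencing, and the numbers are arbitrary.)

```python
# Purely illustrative: deciding "that image is me" by motion contingency.
# The scene functions are toy stand-ins for real sensors and actuators,
# not any actual robotics or vision API.
import random

def mirror_scene(i_moved):
    """A mirror: the observed image moves exactly when I do."""
    return i_moved

def other_robot_scene(i_moved):
    """Some other machine: its movements have nothing to do with mine."""
    return random.random() < 0.5

def looks_like_me(scene, trials=50, threshold=0.95):
    """Wiggle an appendage at random moments and check whether the image moves at the same moments."""
    agreements = 0
    for _ in range(trials):
        i_moved = random.random() < 0.5  # randomly decide whether to move right now
        image_moved = scene(i_moved)     # observe whether the image moved at that moment
        if i_moved == image_moved:
            agreements += 1
    return agreements / trials >= threshold

print(looks_like_me(mirror_scene))       # True: the image always moves with me, so "that's me"
print(looks_like_me(other_robot_scene))  # almost certainly False
```

Run against the mirror it concludes “that’s me”; run against some other machine it almost never does, and in neither case was it told in advance which image was its own.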

This is getting to sound like some philosophic version of “Who’s on first?”

JThunder:

How do you know?

Let me explain this to ya:

See, a mere algorithm lacks understanding.

That’s because it’s pre-programmed, not emergent.

We know it’s not emergent because it’s pre-programmed.

Since it’s pre-programmed, it cannot be emergent.

Since it’s not emergent, it lacks a sense of self.

Without a sense-of-self, it cannot come to the realization that the image it observes is, in fact, itself.

Therefore it’s not emergent.

So it’s not real intelligence.

Simple, eh?

(Sorry JThunder, I couldn’t resist.)

Until we understand consciousness we cannot create it.