Is AI possible?

I’m trying to answer as many of these objections as time will permit, so I’ll try to focus on some of the major points. (Quite frankly, many of the other points miss the boat completely, as I’ll be touching on briefly.)

I especially want to address the following, as I discuss one of Dr. Roger Penrose’s arguments against the feasibility of hard A.I., based on mathematical logic (one of his areas of expertise).

How did you come to that conclusion? Can you cite any specific experiments which show that the human mind is entirely material in nature? Exactly how would one construct such an experiment? I will not accept experiments which merely demonstrate a material component to the human mind.

Besides, I believe that there most certainly are scientific reasons to believe in the existence of a human soul.

It may or may not, but that’s not the point. Before we conclude that hard A.I. is feasible, we need to know that the brain’s operation IS understandable. To say “Well, of course we can understand it!” is premature and a bit arrogant, especially since we are nowhere close to that goal.

No, it’s more than that, as I’ll explain in a moment. Besides, it’s inadequate to say “Don’t forget that technology is advancing!” (as was stated in an earlier post). Of course technology is advancing, but that doesn’t mean that these challenges are solvable. Vehicular technology is advancing admirably, but does that mean that we’ll be breaking the light barrier? Clearly, that would be a premature conclusion.

Okay, you’ve committed two fallacies here. First, you’re asking about the material universe, whereas we have not established that the human mind is purely material. And second, the responsibility would fall on hard A.I. proponents to show that all material things CAN be replicated – after all, they are the ones who are claiming that the human mind is just a fancy machine.

Once again, it boils down to burden of evidence. If we don’t know how to do something, and have nothing but vague proposals of how to begin, should we really be saying “Of course it’s possible!”? Or would a more cautious scholar say “Maybe, maybe not”?

In fact, here’s a challenge for you. Earlier, lucwarm argued that “our theoretical model of computing is just as powerful, in principle, as a human mind.” Well then, what of the following problem, which Dr. Roger Penrose pointed out? (Once again, I’d like to remind you that Penrose is a materialist, a world-class physicist/mathematician, and a colleague of Stephen Hawking himself.)

In brief, Penrose makes use of Godel’s incompleteness theorem, which demonstrates that within a mathematical system, one can have a statement which is clearly true, but unprovable from within the system’s set of theorems and axioms. In other words, it is clearly true, but not algorithmically provable. As Penrose said, “It is these insights that cannot be systematized–and indeed, must lie outside any algorithmic action!”
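
For reference, here is the standard textbook statement of the theorem (this is the usual formulation, not Penrose’s gloss on it):

```latex
\textbf{First incompleteness theorem.} Let $T$ be a consistent, effectively
axiomatizable formal theory that interprets elementary arithmetic. Then there
is a sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T .
\]
Moreover, if $T$ is sound, then $G_T$ is true in the standard model of
arithmetic, yet unprovable in $T$.
```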

By necessity, a Turing machine only allows for algorithmic actions. In other words, no matter how complicated its operations may be, it is ultimately algorithmic in nature. That is true even for a parallel-processing, data-sharing Turing machine (which, as Turing proved, would be more powerful only in terms of speed, rather than capability). So clearly, there are at least some aspects of human intelligence which can not be adequately captured by our mathematical models of computation.
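
To make “ultimately algorithmic” concrete, here is a minimal Turing-machine simulator (a toy sketch; the rule table is made up for illustration). However elaborate the rule table becomes, the run itself is the same mechanical loop:

```python
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    # Sparse tape: unwritten cells read as the blank symbol "_".
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]   # look up the one applicable rule
        tape[head] = write
        head += 1 if move == "R" else -1
    return None   # gave up: no halt within max_steps

# Rule table for a machine that flips 0s and 1s until it reaches a blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm(rules, "0110"))   # "1001_" (trailing blank from the final move)
```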

Correct, but since that’s not the nature of my stance, that point is irrelevant.

Oh come on. erislover and I crushed your arguments about consciousness/realization/etc. Just concede you lost that point and move on.

You’re confusing algorithms and formal logic. I could easily construct a computer program that outputs the statement “1=2.” But constructing a computer program that validly proves that 1=2 is a whole different kettle of fish.
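
To make the distinction concrete, here is a toy sketch (the “proof checker” and its single inference rule are invented for illustration): printing a falsehood is trivial, while deriving it under the rules is another matter entirely.

```python
def output_claim():
    # Trivial: any program can *print* a falsehood.
    print("1=2")

def check_proof(premises, steps, conclusion):
    # Toy proof checker with one rule: from "x=y" and "y=z", infer "x=z".
    known = set(premises)
    for a, b in steps:
        lhs1, rhs1 = a.split("=")
        lhs2, rhs2 = b.split("=")
        if a in known and b in known and rhs1 == lhs2:
            known.add(f"{lhs1}={rhs2}")
    return conclusion in known   # accepted only if actually derived

output_claim()                                                # prints "1=2" -- no proof involved
print(check_proof(["1=1"], [], "1=2"))                        # False: never derived
print(check_proof(["1=x", "x=1"], [("1=x", "x=1")], "1=1"))   # True: derived by the rule
```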

Now, if you constructed a device that (1) only reasoned using valid logic; (2) started with a consistent set of premises; and (3) outputted the results of its reasoning, it is true that Godel’s incompleteness theorem would apply.

But why should limitations be imposed on AI devices to which humans are not subject? (1) People (even mathematicians) make mistakes in reasoning all the time; and (2) People hold inconsistent beliefs. (also, people lie!!)

Remove these constraints, and your argument crumbles.

Oh, and by the way, I ran a few Google searches, and it would appear that Penrose’s arguments have been largely rejected in the scientific community.

This is from his entry in an online encyclopedia:

It’s quite obvious that the brain is material (or “in part,” if you want to say that). Experiments have confirmed that you can manipulate the brain in certain ways that reliably result in certain reactions. Early neural interfaces are already in progress (I believe I mentioned one earlier). The brain seems to work in complete accordance with the principles of science. All of this is pretty good evidence that the brain is a material entity. Further, the test you propose (proving that there is no supernatural element to the brain) is effectively worthless: one can’t prove a negative. However, with all this evidence, I challenge you to find any comparable evidence of a non-material component to brains. And since you want experiments, have you found any experiments that show that the brain has supernatural components?

We have evidence of a material brain, and no evidence of supernatural components to it. It seems pretty reasonable to conclude that the brain is material, until someone can provide evidence to contradict that.

Not loading for me. Care to summarize?

And again, why wouldn’t it be understandable? Do you know of anything else in the universe that is not understandable? If not, what evidence is there that it could not be understood eventually? (Didn’t I already ask this?)

And it’s also not a fair comparison. Taking something with mass faster than the speed of light breaks the laws of physics as they’re understood now, while the human brain certainly does not. There seems to be a slim chance that FTL travel will be possible, but that’s a far cry from understanding and/or replicating something that seems to fit quite well into the way the universe works.

Until there is evidence that it has non-material components, it seems perfectly reasonable to assume for now that it is material, as all other objects seem to be. There is no evidence to suggest otherwise, and until there is, saying that such a non-evident factor should affect this is as reasonable as saying 2+2 might not equal 4 because there might be some extenuating circumstance, which we can’t understand and have no evidence for, that makes it equal 89.

And I would point out, as I had in the part you quoted, that we seem perfectly capable of replicating anything else in the universe, given sufficient technology. Do you have any reason why this would not be the case for the human brain?

Except we do have a basic understanding of how it works, and there are already some projects underway and showing development. The evidence so far seems to indicate pretty well that it is possible.

Sounds a lot like a variation of the earlier description you had that sounded so perfectly spot-on for a human brain. And what lucwarm said, too.

Take an example computer. A very good one, say, one with basic algorithms for how to do a number of simple actions. Further, the computer is capable of updating these algorithms on its own as it senses the results of those actions, so as to perform them better. Eventually, these algorithms would grow as more information is added to them, handling more circumstances. Some additions are ineffective or counter-productive; if the computer senses this, it may try to correct by removing that part, or by adding a new part to cover the new circumstance. Now then, how does this differ from the way the brain works, apart from scale?
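
For what it’s worth, here is a bare-bones sketch of that self-updating scheme (everything in it is illustrative): the machine keeps a rating for each action, acts, senses the result, and revises its own ratings accordingly.

```python
import random

actions = {"A": 0.0, "B": 0.0, "C": 0.0}   # the machine's learned value of each action

def environment(action):
    # Stand-in for the outside world: action "B" happens to work best.
    return random.gauss({"A": 0.1, "B": 1.0, "C": -0.5}[action], 0.2)

for step in range(1000):
    if random.random() < 0.1:                      # occasionally try something new
        choice = random.choice(list(actions))
    else:                                          # otherwise use what has worked
        choice = max(actions, key=actions.get)
    result = environment(choice)
    # Self-correction: nudge the stored rating toward the observed result.
    actions[choice] += 0.1 * (result - actions[choice])

print(max(actions, key=actions.get))   # almost always "B" after enough steps
```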

I was just asking to see if that’s what you were saying.

P.D., don’t forget: I specifically said that experiments which demonstrate a mere material component are unsatisfactory. We are, after all, discussing whether the human mind is purely physical in nature.

Moreover, your comments have been about the human brain, which is circular reasoning. Nobody denies that the brain is material, so if we assume that mind = brain, then we are begging the question.

Please try again, as I suspect that a mere summary would open up unnecessary questions that are already answered in that article. Besides which, I have enough difficulty finding time to address these myriad points, especially since several were covered in earlier postings.

One of the main points, however, is the nature of free will. If human beings are merely machines, then how does one explain free will? One does not, after all, blame a tree for falling on one’s car, whereas one could blame a human being for crashing into that same vehicle. (See http://www.angelfire.com/mn2/tisthammerw/rlgn&phil/soul.html .)

Yes you did, and I already explained why those questions are irrelevant. (HINT: If you’re going to insist that the human mind can be understood, then it’s not enough to say “Well, tell me why it might not be!” Additionally, if one is to insist that hard A.I. is feasible, then it’s insufficient to say “Well, tell me why it isn’t!”)

I think you’re overstating the case. As shown earlier, free will most certainly does violate the known laws of physics.

Moreover, even if we set the light barrier aside, the analogy still holds. It’s illogical to say “Vehicular technology is advancing greatly, so eventually we will be able to travel at half the speed of light!” That conclusion is possible, but simply isn’t warranted. Similarly, it’s illogical to say “Computer technology is advancing dramatically, so eventually, we WILL be duplicating real intelligence!”

And furthermore, the barriers to true A.I. are more than just technological in nature. Rather, there is the lack of even a basic model of true intelligence. We have achieved some success in pattern recognition and rule-based systems, but that’s nowhere close to a complete model of intelligence.

Again, consider Godel’s theorem. How can we count on algorithmic procedures to find the “truth” of a mathematical statement, when Godel’s theorem explicitly proves that such statements can be (a) algorithmically unprovable, yet (b) clearly true?

Again, I disagree, since we are talking about intelligence, a property which other objects don’t have. Ditto for free will, another property which is absent from your run-of-the-mill objects. Besides which,

Quite simply, your premise is erroneous in several ways. We can NOT legitimately claim that we can duplicate anything in the universe, given sufficient technology. We would like to believe that, but unless we know what that technology might entail, such a claim is meaningless. It’s like saying, “I can do anything I want, as long as I find a way to do it.” Clearly, that is speculation, not evidence.

Second, we are talking about human intelligence, not the human brain. Please keep this distinction in mind. The two may or may not be equivalent, but until this is established, please don’t insist that they are.

And finally, we’ve already identified key ways in which the human mind – and thus, human intelligence – is unique from regular objects. Hence, the statement “We can build other objects” does not necessarily imply that “Eventually, we can build human intelligence too!”

I’ve said it before, and I’ll say it again. Many of the arguments raised in favor of hard A.I. are mere speculation and belief. We’ve just seen several examples of that. For example, the statement “We can build anything we want, if we have the right technology” is belief, not evidence. It sounds good, and it might (might!) even be true, but it does not make for a valid argument.

Moreover, it is a mere truism, on a par with saying “I can do anything I want, as long as I find a way to do so!” Ho-hum. When you break them down, such statements amount to nothing more than “proof by impassioned assertion.” They sound convincing, but they have no logical power of their own.

Free will and determinism are not necessarily dichotomous. Compatibilism is the idea that they are able to live in harmony. I don’t know whether or not I am a compatibilist, but I highly doubt anyone has ever demonstrated that free will can violate physics, and certainly not in this thread.

I think that the “free will” criterion is subject to the same criticism as many other distinctions proposed in this thread.

How do you know if an entity has free will or not?

The author of the article on souls proposes the following test:

Now, I could easily construct a device that will perform an action that is totally unpredictable. Does the machine “feel” that it’s making a decision? Who cares.
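
For what it’s worth, here is a two-line sketch of such a device (it draws on the operating system’s entropy pool; “totally” unpredictable is a stretch, but it is unpredictable to any practical observer):

```python
import os

# The "decision" is tied to hardware-derived entropy, so no observer can
# predict it before it happens.
action = "swerve left" if os.urandom(1)[0] % 2 == 0 else "swerve right"
print(action)
```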

I think Turing would argue that if we can build a machine that behaves as if it has free will (however we decide to test it), that’s all that really matters.

It’s not that free will can violate physics per se, but that physics can not account for free will. An object whose behavior is determined exclusively by the laws of physics has no free will.

To revisit an example I cited earlier, one does not blame a tree for falling onto one’s car, since the tree was merely enslaved to the laws of physics. However, we would blame a human being who trashed that same car, either deliberately or through gross neglect. Why? Because that person had a choice in the matter. That person has free will. If we believe that the “decisions” of human beings are merely the natural consequences of physical laws – laws over which they have NO control – then we have no business blaming them for anything.

I happen to believe that the laws of quantum mechanics allow free will to co-exist with the laws of physics, for reasons too complex to discuss right now. However, there must still remain a source for that free will, and materialist explanations can not account for that.

Balderdash. You are giving yourself way too much credit, for reasons already discussed.

And you’re greatly misrepresenting my claim, and that of Doctor Penrose. Penrose is not talking about arbitrary statements such as “1=2.” He is talking about statements which are demonstrably true, yet unprovable through algorithmic means. That is VASTLY different from the argument that you presented.

Additionally, Penrose (and Godel) proved that there are non-axiomatic mathematical statements which are quite obviously true, but inherently unprovable through algorithmic means. And yet, human beings can see that these statements are true. ERGO, in discerning their truth, human intelligence must use some technique which is NON-algorithmic in nature.

Ah, but here, you are putting words in my mouth. I never said that AI devices should never make mistakes, and neither did Dr. Penrose. I think that you have either misunderstood or misrepresented Penrose’s argument.

In Penrose’s scenario, the problem wasn’t that algorithmic techniques can make mistakes. Rather, the problem is that algorithmic techniques are inherently incapable of determining the truth of these statements. Obviously, this is quite different from merely complaining that computers might make some mistakes here and there.

And BTW, I’m fully aware that there are scientists who reject Penrose’s conclusions. However, I have yet to read ANY seasoned argument against them. Every complaint that I have read involves tangential claims like “But neural networks can do great things!” or misrepresentation of his argument, as evidenced by the reviews of his book on Amazon.com.

sigh Okay. So nobody denies that the brain is material. What solid evidence do you have that the mind is separate from the brain and/or that there is a supernatural component? Can you perform some experiment that proves this? If there isn’t any evidence for it, how can you expect others to believe your argument? As for the bits you have mentioned, I’ll get to them now:

Okay, now it loads… And I really can’t believe THAT is the best support for it (or actually, I guess I can…). It doesn’t follow reason well at all. The argument that without a soul, people could not be the same person after a few years is simply absurd. Things change over time; that doesn’t mean they’re someone else. They’re the same person still. Different, but still the same person. Hell, I’ve changed out almost every single part in my computer, but it’s still my computer. If that article were right, shouldn’t it be something else now? (I have no clue what. Maybe a toaster :rolleyes: )

Then they go into consciousness. Basically, he says that consciousness must exist because we don’t know what it’s like to be a bat, even with “full physiological information” about bats. Well, that seems about as absurd. Nobody contests that a computer is wholly material and physical, but I doubt you know exactly what it would be like to be a mainframe running a few dozen different programs, do you? And a brain is a good deal more complex than “simple” programs like that, so you should have an even easier time with the mainframe.

And then, of course, free will. The second article you linked to states that the reason free will must exist is that humans can start their own causal chains… But that isn’t entirely true, is it? People choose what to do based on input. Like right now: I’m not going to go typing this whole thread for absolutely no reason, am I? No, I’m typing it in response to what you posted. Sometimes it gets incredibly complex; something happening in one’s life creates a memory, which later starts one thinking about something similar, slowly adding or changing bits and pieces by what seems “right,” until eventually a full-blown story is formed. “Free will” seems to simply be the ability to reason from input and select a good action based upon that. The human mind, being very complex, makes this a VERY deep process, much more complicated than current computers, as an example. The human mind chooses what it feels is the best option from experience, but not necessarily what is, absolutely, the best option.

Then there were “near death experiences.” I seem to recall that most of these were already debunked, and in any case, the examples given were incredibly vague; I’d appreciate some examples of what was said and done, rather than an opinion (from a rather biased source, I may add) that they were fully detailed. However, it’s already recognized that people will sometimes perceive things while sleeping, even if it’s very fuzzy perception, or not even consciously recognized (like whispering into someone’s ear when they’re sleeping). The same thing is believed to occur occasionally for coma victims, who don’t consciously perceive something, but their natural body functions do, and they wake up with a (usually rather distant) memory of it. This would be the fuzziest part, since there’s no evidence for or against, except for the statements of people who were unconscious, likely having some sort of shock to the nervous system, possibly with reduced blood flow to the brain (a common effect of shock, right?), and already in a traumatic experience.

In any case, proving or disproving a soul (which isn’t something I’m arguing against, mind you, despite the poor reasoning I see in that article) doesn’t prove or disprove anything in regard to this thread, i.e., whether intelligence is material in nature.

And the comparison of a tree falling on a car and a person hitting a car is also flawed. I doubt you’d blame a person if some external factor made them fall on a car, either (like being pushed by someone else). The reason a human would be at fault for hitting a car is that a human can make choices based on what it feels is the right thing to do. But a tree falling on a car is no more of a “choice” than a human being pushed onto a car.

So are you saying you don’t have any reason why the human mind can’t be understood, or just that you aren’t going to present it? Humans are making large leaps forward in understanding how the brain works, and that effort seems to have strong momentum. So with significant parts of the brain understood, and more of it quickly being understood, what reason can you present as to why it can’t be understood? Again, it strikes me the same as that whole 2+2 example I gave. If there is no solid reason to believe so, don’t expect a lot of people to buy into it.

You contend it does, but it doesn’t have to. Free will is the ability to make choices; my computer can make choices. The human brain simply does this on a much more complex level (that is, if you’re going by the scientific reasoning of how the brain works; and without evidence for something else, that seems the most reasonable explanation).

Amusing that you picked something that is completely possible (probably even with MODERN technology). Feasibility is certainly an issue, since it wouldn’t be too effective if it took three hundred years to accelerate to that speed, but it is certainly possible, especially with some of the long-endurance engines under development. So yes, traveling at that speed is completely possible, feasibility aside. Kind of like how you could build a structure two miles tall, but it would require so much effort that it would be rather infeasible to do so.

But the understanding of it is continuing to advance, and the construction of neural-net computers shows at least a basic model of intelligence (That is, learning ability, even if it’s fairly crude at the moment). It is incomplete, yes, but we’re learning more about it every day.

The same way we do. Some would contest that 1+1 cannot be proven to equal 2. Yet we all know it does. So does my computer. So effectively, my computer already gets past Godel’s theorem in this case. Of course, I’ve only heard what seems like a rather simplified version of Godel’s theorem (searching the web only turned up a lot of results that seem ill-reasoned, and didn’t help much), so if you’d care to explain -how- this argues against a computer intelligence, please do.
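
For what it’s worth, here is roughly what “my computer can check that 1+1=2” cashes out to: a mechanical verification in a Peano-style encoding (the encoding here is mine, purely for illustration). Note that the check is itself an algorithm, which is exactly the point in dispute.

```python
def S(n):                # successor: n + 1
    return ("S", n)

ZERO = ("0",)
ONE = S(ZERO)
TWO = S(S(ZERO))

def add(m, n):
    # Peano addition: m + 0 = m;  m + S(n) = S(m + n)
    if n == ZERO:
        return m
    return S(add(m, n[1]))

print(add(ONE, ONE) == TWO)   # True
```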

Regardless, it is still reasonable, without any evidence in support of non-material components to the human mind, to conclude that the human mind is material in nature. Again, evidence to the contrary is welcome.

Why? What evidence is there that the two are separate (or, indeed, that the mind even exists separately from the brain)?

Now you’re misrepresenting what I said. I said that we could reproduce anything we know, given sufficient technology. It might not be too feasible, but it is possible. Unless you have an example of something that might not be possible? Remember, when you do, that “impossible” does not mean “insanely hard to do.”

Basically, your whole argument says that AI should be impossible because of a number of factors, with no evidence to support these factors (many of which may not exist, or at least not the way you portray them), and yet I’m being unreasonable by not accepting these without question? And even more, that they counter what evidence is present to support the possibility of AI? I’m not even arguing about the existence or lack of existence of souls, but the evidence seems to favor that, regardless of their existence or non-existence, the human brain operates entirely inside the laws of physics, and that there isn’t a mind separate from the brain.

:confused: I missed this part.

From what I remember of Penrose, he required that consciousness and will involve the collapsing of wave functions in some manner that wasn’t clearly defined. In his book The Emperor’s New Mind he went to great lengths to challenge strong AI, but didn’t seem to offer any other suggestions.

It is not certain, IMO, that humans have somehow solved the halting problem, which should be logically impossible. Hell, I can make a program that will always terminate in a finite number of steps and give some answer, whether or not that answer is correct.
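
A toy sketch of that point (all names here are mine, purely illustrative): a checker that is guaranteed to terminate and always gives some answer, at the price of sometimes being wrong.

```python
def guess_halts(step_fn, state, budget=10_000):
    """step_fn advances `state` one step and returns (new_state, done)."""
    for _ in range(budget):
        state, done = step_fn(state)
        if done:
            return True        # definitely halts
    return False               # a guess: the program might halt at step budget + 1

# Example target: a countdown that halts after n steps.
countdown = lambda n: (n - 1, n - 1 <= 0)
print(guess_halts(countdown, 100))      # True (halts well inside the budget)
print(guess_halts(countdown, 10**9))    # False -- wrong, but we did terminate
```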

It is also not clear to me that incompleteness theorems demonstrate that intelligence is not algorithmic in nature. Who is saying that intelligent systems must be complete, or consistent, or able to solve all problems? That seems rather unfair since we cannot do it ourselves.

I don’t see why not. It doesn’t seem important to me whether or not you had a choice in the matter of killing a person or smashing into a car. But I don’t know that we need to get into free will debates here, do we? Or is non-deterministic free will necessary for intelligence?

Hogwash, says I, and Penrose proceeds to bring up the view of universals called ‘Platonism’ (also known as ‘Transcendental Realism’). Though I am very pleased to discuss the nature and ontological implications of different views on the subject of universals, believing that algorithms are intelligence does not necessarily imply a form of dualism. If we accept, instead of Platonic Realism, Immanent Realism, then we are under no obligation to abandon a completely materialistic view of intelligence, and rightly so (IMO). Intelligence is a physical phenomenon, IMO, and as an immanent realist I feel that all universal properties, if they exist at all, must exist only “in” things; in other words, if intelligence exists, and multiple things can have intelligence, then intelligence only exists in things. Specifically, for the argument at hand, any algorithmic thing is intelligent in some capacity.

Now, I don’t propose to speak for the entire AI community in any respect, but Penrose isn’t digging all the way down IMO. That book was a great read, though.

Just wanted to link the universals debate I started some time ago. It was very interesting, IMO, though it got a little heated in the end (over whether or not color is real! lol)
The Problem of Universals

:frowning: Much of that thread got eaten in the Great Board Hack… it used to be three pages! Damn it all…

Among other things, I described a way that a device could pass the “self-recognition” test you proposed. As far as I can tell, you didn’t bother to respond.

I used “1=2” as an example to demonstrate that there is a distinction between the output of a device and the result of formal logic. This example shows that algorithmic processes can give outputs that are not only unprovable, but actually false!

But if you think I somehow “cooked the books” by choosing this example, why don’t you come up with an example of something that is truly “demonstrably true, yet unprovable through algorithmic means” and we’ll discuss it.

Well, why don’t you give me an example of a statement you can see is true but is inherently unprovable. I’ll try to think of a way that an AI device could output that statement.

Well, you’re relying on a mathematical theorem - Godel’s theorem. Godel’s theorem applies only if a certain set of assumptions is true. If you agree that some of those assumptions do not hold, then the theorem does not apply.

Obviously, you cannot avoid this problem by leaving the assumptions unstated and, when somebody raises the problem, accuse them of putting words in your mouth.

Ummm, I’m not “complaining that computers might make some mistakes here and there.” I’m complaining that humans make mistakes (and hold inconsistent beliefs), and therefore, by arguing that Godel’s theorem applies to computers, you are implicitly holding computers to a more difficult standard than humans.

A computer that made mistakes, held inconsistent beliefs (and lied) would not be subject to Godel’s theorem any more than humans are.

If you disagree, then why don’t you tell me a statement that is true but unprovable, and I’ll try to show you how an unconstrained AI device could output that statement.

Well, apparently it’s more than a few scientists. It’s the majority of the scientific community.

Note that the only reason I inserted this “argument from authority” is that you spent some effort building up Penrose’s qualifications.

I agree. IMHO, humans tend to optimistically hope that there are simple solutions to complicated problems. This is true in some cases, but not always.

If you consider Lewis Carroll’s famous “pork chop problem” for a few minutes, it becomes obvious that there are very strict limits on human “insight.”

Similarly, we can imagine a halting problem with 100,000 lines of code, full of interconnecting loops, gotos, and if-then statements: 1000s of variables all interacting in 1000s of ways.

If there is no elegant way of simplifying the problem, then our only hope is to trace out the program’s steps for a while and see if it returns to some previous state. And of course such an approach is not guaranteed to work.
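
A sketch of that brute-force approach (illustrative only): record every complete machine state, and stop if one repeats, since a repeated state in a deterministic program means it can never halt. On a program with thousands of interacting variables, it is memory, not logic, that gives out.

```python
def trace(step_fn, state, max_steps=1_000_000):
    seen = {state}
    for _ in range(max_steps):
        state, done = step_fn(state)
        if done:
            return "halts"
        if state in seen:
            return "loops"     # revisited a previous state: provably never halts
        seen.add(state)
    return "unknown"           # gave up: neither outcome observed

# Example: a state that cycles 0 -> 1 -> 2 -> 0 -> ...
cycle = lambda s: ((s + 1) % 3, False)
print(trace(cycle, 0))   # "loops"
```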

Agree.

Um.

Pork chop problem?

Ok, what conclusion is consistent with and demanded by the following statements:

Have fun!!

[The Wrath of the Swarm]: “AI? I’ve yet to see natural intelligence on this world.”

Better late to a thread than never.

Did anyone ever get around to defining intelligence? Or at least attempting to define the prerequisites of consciousness?

The Turing test, as presented, seems specious … I can’t tell the difference between a diamond and a cubic zirconia … are they the same? But “behavior” can be used to probe the internal world of how a thinking machine operates … and that is a whole kettle of fish of another color!

I like Steven Pinker’s stabs at definitions in “How the Mind Works”:

He goes on to point out the different meanings of the word “consciousness”:

Others are less pessimistic about the ability to define how sentience emerges out of information access. Hofstadter, in his famous GEB, takes the view that a self-symbol (see Pinker’s consciousness meaning #1) evolves into a self-subsystem, and discusses how various cognitive subsystems report to it and how it interacts with the outside world.

He goes on to suggest that consciousness (in that sentient sense, meaning #3 of Pinker) is an emergent result of self-referential feedback loops:

(Sadly, he does not credit the huge body of work by Stephen Grossberg on Adaptive Resonance Theory, a neural network system which proposed much the same thing, along with much else.) Varela and Churchland (see the “Self - an urban legend?” thread for references and links) also propose that the loopiness of the system is the origin of sentient consciousness.

If this line of thinking is valid, then consciousness can, in fact, be quantified by measuring the degree to which a dynamic informational system is self-entangled in these “Strange Loops.”

This does not, however, imply that such a consciousness is at all understandable to ours, any more than we can understand what it is like to think like a dolphin. Intelligence and consciousness may be present in a thinking machine, without being very human-like.

Here are the links:

The “Self” thread.

Churchland’s article in Science: “Self-Representation in Nervous Systems” (You can get the abstract for free; the article needs a subscription or payment. I’ve pasted some pertinent bits in the “Self” thread.)

And an article on “self” and “consciousness” by Varela

My point with Pinker’s definition of intelligence is that, with such a definition, intelligent machines do exist, even if their domains of intelligence are not the same as, or as large as, humans’. (Similar to whales, who keep track of thousands of individual creatures and objects in huge volumes of ocean in order to accomplish tasks of salience to them, according to rational rules, and whose intelligence we can’t understand.) To define intelligence as what we do that others cannot do is arrogant and self-serving.

“Creativity” is another thing altogether: the application of a pattern from one intelligence domain (or subsystem) to another (novel) data set with a surprisingly good fit, as one means of accomplishing goals. I don’t know of any AI that models this in any rigorous fashion … Napster for AI algorithms?

I see that my Science link doesn’t work, so here is the abstract and snippets as placed in the other thread: