Is AI possible?

Well, one way to do it would be to look at a problem that requires “genuine intelligence” to solve, and propose the outline of an algorithm that would solve it. IMHO, this would make for an interesting debate. However, I don’t want to be accused of “cooking the books,” so I leave it to you to propose a problem that requires “genuine intelligence” to solve.

I would really appreciate it if you would identify a few. Why not?

Personally, I believe that “intelligence” is the result of massive “brute force” processing in your brain. You are not aware of it, but it happens nonetheless.

Well, I’ll take a look (with an open mind) if I get a chance.

Well, ya got me there. Certainly I think that ethnicallynot is confused.

Oooh!, oooh!, I know! - how about getting it to try to convince a person th… (only joking)

This is a good suggestion, lucwarm. Perhaps something that would require innovation and truly inventive/creative reasoning, but I can’t think of a suitable task.

I don’t think that determinism in the machine need be a limiting factor; not if it interacts with the non-deterministic outside world in any way.

Non-deterministic outside world? Where is that?

How about this one: Learning the concept of object permanence, without having this concept pre-programmed. This is something that infants learn to do.

Here’s another, related challenge: To recognize oneself in the mirror AND to learn that the image in the mirror corresponds to oneself.

Hell, my computer already knows about object permanence. It keeps insisting I install software for the same stinking integrated sound card that I’ve decided not to use. I wish it would forget about it! :stuck_out_tongue:

Funny, erislover. Funny indeed. :slight_smile:

How about abstract concepts? How would a computer envision, say, the concept of an imaginary color? Or better yet, the concept of a concept?

IMO, such (ahem) concepts are beyond the ability of our computational models to encode, much less generate. One could create a data structure and say “I define this to be the concept of a concept,” but that strikes me as an act of blatant cheating.

All things considered, I think this brings up a very important question:

The question being, what’s the difference?
In essence, a human can be defined as a very, very complex organic machine. If that organic machine then creates another machine, organic, inorganic, or something in-between, which is indistinguishable from any other human being (in the long term, including daily behavior, etc.), then how would it be reasonable to call one of these very complex machines intelligent/sentient/conscious/whatever and the other not? And what difference does A.I.-vs-I. make?

Artificial intelligence is artificial. Mere intelligence is not.

The point behind “artificial intelligence” is that we are trying to duplicate natural intelligence, using the resources available to us. That’s why it’s fallacious to say “We already have artificial intelligence! It’s inside our brains.”

At this point, I would consider that to be a gross assumption. In fact, many people – including a great number of scientists – believe that human beings are MORE than just their material bodies. Many scientists and philosophers also believe that a supernatural component (i.e. the soul) is the best way to explain such phenomena as emotion and free will.

Thus, the argument “Well, human beings are just machines, so surely we can build a comparable machine that’s also intelligent” strikes me as a subtle form of question-begging.

I see what you are saying, but if the outside world is deterministic, then it shouldn’t be a problem that the computer in which the AI resides is also deterministic.
So do you really think that the universe in which we live is deterministic at every level?

I believe in a relatively homogeneous universe. If it is deterministic at one point, it is deterministic at all points. But I don’t know if it is deterministic at even one point.

JThunder:

We don’t know. In fact, when a computer does do this we probably won’t be too sure either. We don’t know how we do it, either, but of course the hope is that once we understand the machine we’ll understand ourselves, too.

Any computer with a relatively sophisticated compiler, like for C++, say, already has a very powerful method for expressing data abstraction in our terms. I’d say that’s one way to have a concept of a concept… of course, you presume that a “concept” is a real thing when it needn’t be. I mean, by the time we get down to the nitty gritty of intelligence, the number 8 isn’t floating around in our heads. And it wouldn’t be on a computer, either. At least in a computer we can say that the number 8 is probably represented by 00…01000. Is that a concept?
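
To make the data-abstraction point concrete, here is a rough C++ sketch; the type names (Concept and so on) are purely illustrative assumptions on my part, not anything from a real AI system:

```cpp
#include <string>
#include <vector>

// A made-up "Concept" type: the compiler lets us bundle a label and its
// relations under one abstraction, but underneath it is still just bits.
struct Concept {
    std::string name;                   // a label we attach, e.g. "eight"
    std::vector<Concept*> relatedTo;    // links to other concepts
};

int main() {
    Concept eight{"eight", {}};           // "the number 8" as a structure...
    Concept number{"number", {&eight}};   // ...and a "concept of a concept"?
    // Either way, both objects end up as bit patterns like 00...01000 in memory.
    return number.relatedTo.empty() ? 1 : 0;  // trivial use of the objects
}
```

Whether a structure like that counts as having a concept is, of course, exactly the question.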

Why not?

Well, I object to the problems you propose to the extent that they require us to “look inside the box.”

Certainly the problem of recognizing an object is a fair challenge. To put the problem in very concrete terms, our hypothetical AI device would be placed in front of a mirror, with its visual element (a camera?) pointed towards the mirror. Our AI device would have distinguishing characteristics, such as a serial number, to make it unique. We would input the command “Identify” into the AI Device, and the proper output would be something like “me” or “me reversed.”

I’m not an expert on pattern and image recognition algorithms, but it doesn’t seem so far-fetched that a computer system could accomplish this task, even without being able to pass the Turing Test. It could use one of the more successful edge extraction / image recognition algorithms that are floating around out there, combined with lots of processing power. The system could be supplied in advance with images of itself, in reverse, from many different angles.
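
To make that concrete, here is a very rough sketch of the kind of comparison I have in mind; the image format, similarity measure, and threshold are purely illustrative assumptions:

```cpp
#include <cmath>
#include <string>
#include <vector>

// Treat a grayscale image as a flat vector of pixel intensities (0.0 - 1.0).
using Image = std::vector<double>;

// Mean absolute pixel difference between two same-sized images; lower = more alike.
double difference(const Image& a, const Image& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += std::fabs(a[i] - b[i]);
    return a.empty() ? 1.0 : sum / a.size();
}

// Compare the camera frame against stored, mirror-reversed images of the
// device taken from many angles; answer "me reversed" if any is close enough.
std::string identify(const Image& cameraFrame,
                     const std::vector<Image>& reversedSelfImages,
                     double threshold = 0.1) {  // threshold chosen arbitrarily
    for (const Image& self : reversedSelfImages)
        if (difference(cameraFrame, self) < threshold)
            return "me reversed";
    return "not me";
}
```

A real system would obviously need edge extraction and far more robustness than a raw pixel comparison, but the shape of the test (input a camera frame, output “me” or not) is the point.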

Well, if one is to propose that our mathematical models of computation can solve this problem, then it’s only fair to “look inside the box.”

I don’t think that’s satisfactory. For one thing, that would basically amount to programming the device to recognize itself – and as we discussed earlier, that’s not true artificial intelligence, since it involves directly imbuing the computer with the necessary algorithms. Second, such techniques only amount to recognizing a key and deliberately distinct feature, which is quite different from recognizing oneself. Third, such techniques would only amount to recognizing a given object. The computer must do more than that; it must come to the realization that the given object is actually its very own self.

I’ve had a great deal of experience in such matters, and while pattern recognition algorithms do exist, they all require fairly tightly controlled circumstances – especially where visual input is concerned. Moreover, none of these algorithms imbue the computer with a sense of “self” – of recognizing that an object is oneself – unless this capability is directly hard-coded into the pattern matching routines.

In other words, it’s nothing but a vague hope… which ultimately demonstrates a point that I raised earlier. We can’t conclude that it must be possible, simply because we don’t know it to be impossible. That would not be a scientifically honest claim to make.

In your reply, you also talked about how it might be possible to represent “the concept of a concept” numerically. I have several philosophical objections to such an approach, not the least of which is that this involves reducing the concept to a mere number – in effect, pre-programming the concept into the computer. I think that the pursuit of true “intelligence” would require a lot more than that.

Sigh, then you missed my objection. The number “8” isn’t represented as a number in our heads, and it isn’t represented by a number in a computer’s machinery, either. What we call the number eight is electrical signals interpreted by other electrical signals, producing electrical and electromagnetic effects. Is there room for a concept in there? Is there no room for a concept in there but there is in our heads?

I’m not sure I understand your philosophical problem with preprogramming. We are preprogrammed to understand objects through interaction with a specific portion of the electromagnetic spectrum, a specific frequency band of sound waves, and a specific level of force (in touch). We also interact with certain molecules in our noses and tongues. This is all preprogrammed.

So?

We don’t know what makes a concept a concept. As GreyMatter stressed several times (under the mistaken impression that we didn’t get it the first time) these terms aren’t really defined.

We don’t doubt that they are there. The question then becomes: in what circumstance could we say that a machine could have a concept? Turing said, “Well, if it convinces us it has a concept then that is certainly where we should start.” (well, maybe he didn’t say exactly that ;))

Well, it’s one thing to talk about how an algorithm solves a problem. It’s another thing to require that a computer “knows” or “feels.” We can discuss how a computer might meet a challenge. But, for reasons presented so eloquently by Mr. Turing, the challenge itself should present the device with an input or set of inputs, and look at the device’s output.

I don’t see why that’s cheating - if the program accomplishes the task, then what’s the problem?

How do you think you recognize yourself? It’s not a conscious process, but there can be little doubt that, on a sub-conscious level, you are looking at each feature and comparing it to a mental image of yourself.

I agree that I gave our hypothetical computer a deliberately distinct feature, but that was only to level the playing field, since computers can be likened to sets of identical twins. So I don’t think this is a critical point.

In any event, I’m not sure what you mean by saying “recognizing oneself,” as if there is something metaphysical about it. You observe an image, and you either conclude “yeah, that’s me” or “no, that’s not me.”

If the computer outputted something like “yeah, that’s me,” would you believe that it had come to that realization? What if your friend looked at a picture of himself and said “yeah that’s me”?

See, we’ll never know the inner consciousness/feelings of AI Devices. So there’s not much point in worrying about them.

If they process input and give intelligent output, that’s all we should care about in deciding whether we have AI or not.

I agree, but with more programming and more processing power, those circumstances can be broadened, no?

See, I think that the Device’s “sense of ‘self’” is irrelevant. How do you know whether anyone besides you has a “sense of ‘self’”? If somebody built a working AI Device, how would you know that it does not have a “sense of ‘self’”? What specific test would you give it to decide?

I fully understood your point. However, the very questions that you raise underscore the weakness of the hard A.I. position. We don’t know how a concept might be formed within one’s electrical interactions, and so it’s foolish to presume that this CAN be done using computer circuitry. Once again, the burden of proof would rest on those who claim that it can be done.

No, they aren’t. These are things that we learn how to do, just as infants learn such concepts as object permanence and the significance of their reflections. Therein lies the difference. Unlike computer programming, human intelligence is an emergent quality. There may be an element of preprogramming involved – the optic nerves being properly wired to the visual centers of the cortex, for example – but much of what we consider to be “intelligence” emerges throughout life.

Consider a desktop PC, for example. We can give it the capability to understand a Microsoft Word document, simply by loading the appropriate software (i.e. by pre-programming it). Does this constitute true intelligence, though? No self-respecting cognitive psychologist would claim that it does.

There is a very limited sense in which computers can learn, through neural networks and similar procedures. However, such techniques don’t impress me as examples of genuine intelligence. A successful neural network, for example, gives the appearance of “learning,” but in reality, it is hardwired in a very specific and deliberate way, so as to ensure that its self-adjustments will (probably) yield a satisfactory result. Ditto for inference-based schemes and other traditional forms of A.I.
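
To illustrate what I mean by hardwired self-adjustment, here is a minimal perceptron-style sketch in C++; the learning rate, epoch count, and update rule are the standard textbook choices, used here only for illustration:

```cpp
#include <vector>

// A single artificial neuron: weighted sum of inputs, thresholded to 0 or 1.
int predict(const std::vector<double>& weights,
            const std::vector<double>& inputs, double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += weights[i] * inputs[i];
    return sum > 0.0 ? 1 : 0;
}

// The "learning" is a fixed, pre-programmed update rule that nudges the
// weights toward the desired answers. The network never decides how to
// adjust itself; that rule was written in advance by the programmer.
void train(std::vector<double>& weights, double& bias,
           const std::vector<std::vector<double>>& examples,
           const std::vector<int>& targets,
           double rate = 0.1, int epochs = 50) {
    for (int e = 0; e < epochs; ++e)
        for (std::size_t n = 0; n < examples.size(); ++n) {
            int error = targets[n] - predict(weights, examples[n], bias);
            for (std::size_t i = 0; i < weights.size(); ++i)
                weights[i] += rate * error * examples[n][i];
            bias += rate * error;
        }
}
```

The self-adjustment is real, but it happens only along the exact lines we laid down; whether that deserves the word “learning” is precisely what’s in dispute.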

Moreover, pattern recognition is but a tiny component of intelligence – one which even lower animals can perform, in their own special ways. What about learning new skills, such as the ability to write entertaining and profound novels? What about learning novel ideas in one’s field, such as science or art? What about learning abstract concepts (such as the sense of self), instead of having these directly pre-programmed into one’s data structures? For that matter, what about learning to compose intelligent, well-researched and persuasive arguments regarding the nature of human intelligence?

For these reasons, and more, I think it is rather foolish to assert that our computational models must surely be up to the task of developing true intelligence.

I don’t see how. The strong AI position is that intelligence is revealed through complicated algorithms, including our own. There are some interesting arguments against algorithmic behavior representing intelligence. That’s fine. What does this have to do with AI, exactly?

We measured temperature before we knew that heat was caused by molecular motion. Any other proposition flies right in the face of all scientific inquiry.

We know. That’s why we are asking our detractors to develop a test that would convince them. We already know what would convince us. (not that I am a supporter of strong AI)

I didn’t learn to see blue as blue. My eyes gave me no choice.

I am so utterly confused by this statement that I don’t even know where to begin trying to understand it.
~Do you mean that programs aren’t getting more complicated?
~Do you mean that research in artificial intelligence isn’t searching for the creation of more complicated learning algorithms?
~Do you mean that humans don’t have a learning process, that intelligence just “happens”?
~Do you mean “if it can be expressed in an algorithm then it isn’t learning”?
~Do you mean that computers can never act in a manner that would convince you they are intelligent?

A beautiful description of the human brain if I ever read one.

Nice eris.

DaLovin’ Dj

A great number of scientists? I have a very hard time believing that non-natural elements are commonly accepted as valid scientific reasoning. Haven’t seen anything about supernatural components in biology, chemistry, physics, etc. And of course, from the scientific POV: Where’s the proof?

And, um… Is it just me, or does requiring non-natural (supernatural) components for “natural” intelligence seem a little odd?

In any case, there are neural nets and other learning computers right now. Chances are, if (when) AI is developed, it will most likely be created first as a relatively simple learning program as a basis, and taught much as a human would be. And IIRC, there are already projects under way doing just this, showing that computers can learn skills instead of just being programmed to do so. With how computers continue to advance in capability (for instance, my calculator has more computing power than major universities did some decades ago), I imagine neural nets will most likely continue to improve.