Constructed Intelligence

Yes, but the domain was obviously restricted. After all, I wasn’t asking what you thought about the price of yams in Uruguay, or about the latest “Star Wars” movie. I was asking what people thought about the OP, or, more precisely, about the topic of the OP. The topic of the OP did not deal with consciousness. I thought I was being clear in the OP by saying: “The question is: Do you think that entities can, at least in principle, be engineered in such a way that they exhibit general intelligence in their actions?”

Well, of course people can say other things. The point is that no being can ever be certain that other things are conscious. They can believe they are certain, and can say they are certain, but they will be mistaken. Unless, of course, you define “certain” as a measure of belief rather than knowledge (i.e., justified true belief). Then of course people can be certain of anything, including that 2+2=3.14159265358979323846.

But this is really off-topic. I suggest you start a different thread if you are interested.

Well, deliberately, to be sure they didn’t make those assumptions. The problem comes in when we create a machine and then ask ourselves, “Is this thing intelligent?” The Turing test, for example, takes human intelligence, demonstrated through interaction, as its criterion (sketched below). What if we created a whale- or dolphin-like intelligence? The assumptions we make lie in what we think intelligence is: like us. Like our brains, with parallel pathways. Like our speech, with abstract definitions (like intelligence or blue ;)). Like our interaction.
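
To make that concrete, here is a toy sketch of the test’s structure. It is purely illustrative: the judge, human, and machine objects are hypothetical stand-ins (assumed to have ask/reply/pick_machine methods), not any real implementation. Notice what it measures: nothing about the machine’s internals, only whether its side of a conversation can be told apart from a human’s.

[code]
# Toy sketch of the Turing test's structure (illustrative only;
# 'judge', 'human', and 'machine' are hypothetical stand-ins).
import random

def turing_test(judge, human, machine, rounds=5):
    # Hide the identities behind anonymous labels.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}

    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        for label, respondent in respondents.items():
            question = judge.ask(transcript[label])
            answer = respondent.reply(question)
            transcript[label].append((question, answer))

    # The judge sees only the transcripts: interaction is the whole
    # criterion, and the standard is "indistinguishable from the human."
    guess = judge.pick_machine(transcript)
    return respondents[guess] is machine
[/code]

The point being: nothing in that loop ever asks what the machine is, only how it talks.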

The question remains: how would we verify that this machine is intelligent? To verify it we almost need some standard it must equal or exceed. That standard, implicitly, is human intelligence, gauged by what we want computers to do in the first place: human tasks. We don’t want computers chatting with monkeys; we want them analyzing the stock market (which they already do), playing games, knowing proper grammar (which they almost do, though that quest is futile), spell checking, running factories, etc., etc. We want machines to do things for us: that is the essence of technology.

The question still remains, though: Is that intelligent? I don’t want to bring in the “Bald Man Problem” here (the sorites paradox: no single hair’s loss ever makes a man bald), but I feel we will have created intelligence before we recognize it as such. At each stage we will have created something better, then something better still: interactive databases, etc., etc. And then one day some guy is going to amaze the world: AI has been here for x years. We just never noticed it.

Does intelligence refer to self-awareness at all? Can one have intelligence without consciousness? And (for the religious) can one be conscious without a soul? [My answers to the above: sort of, no, and obviously[sub]since I am not religious[/sub].]

And, of course, once we have created this intelligence (which I feel is both inevitable and desirable), what rights are afforded to it?

BlackKnight,

I would agree with ARL that these were implied assumptions through the stated goals and predictions of AI, which are well documented. If they had sat down and said, “we want a superfast small computer to drive a car and play chess using a strict formal processing model,” it would never have been labelled AI to begin with. It was when they let their sci-fi imaginations loose and approached a very philosophically complex problem with simple machine language that critics pointed out their theoretical failure. The fact that computers are where they are today is not due to AI’s assumptions about intelligence; it is simply the case that these tasks are computable.

Note: I am impressed with computers, but not amazed, because they can still be used against us very easily, and computers have no capacity for ethics, no way of identifying with their victims. This all kinda reminds me of the exuberance of the medical profession claiming to cure all those diseases, when most are preventable in one way or another. As such, strident medical research might actually lead to undesirable patterns of behavior involving governing or cloning. If they had a pill to cure weight gain, for instance, would this lead to gluttony? Do we really want to prolong old age? It raises many questions. Likewise, one may ask: AI is an attempt to cure what? Humanity?

To answer your question, my whole point was to suggest that AI may be an example of raising the value of humanity through the failure to artificially reproduce it. AI could be seen as an attempt to worship the machine, to make it master. That is the meta-ontology, the major assumption of it all: that we humans are mere tools of our own tools.

Interesting point. My roommate has an intense hatred for the coming time when we humans will have largely automated our lives. He feels threatened by computers à la The Matrix, and definitely ascribes a personality to evolution, stating that we must evolve to stay ahead of the computers.

Hardly.

Even if evolution had a personality, which it doesn’t, I’d side with you: we were designed to design the next step. (Not that you implied evolution had a personality, of course.)

1. Thanx to BlackKnight. I was indeed thinking of Brooks, Attila and Genghis. Though I had heard of Cog, I had no idea it was made by the same man. This approach seems most likely to me: input data via the senses and allow the intelligence to self-organize (roughly sketched below).
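
For the curious, here is a bare-bones sketch of the behavior-based idea behind robots like Attila and Genghis. This is my own illustration, not Brooks’s actual code: his subsumption architecture runs layers as asynchronous finite-state machines wired together with suppression signals, which this toy version compresses into a simple priority loop, and the sensor and command names are made up.

[code]
# Toy sketch of Brooks-style behavior-based control (my assumptions:
# real subsumption layers run concurrently and suppress each other;
# here that is simplified to a first-layer-with-an-opinion-wins loop).

def avoid(sensors):
    # Reflex layer: back off when an obstacle is close.
    if sensors["obstacle_distance"] < 0.2:
        return ("turn", 90)
    return None  # no opinion; let another layer act

def wander(sensors):
    # Background layer: just keep moving.
    return ("forward", 1)

def control_step(sensors, layers):
    # No central world model or planner: coherent behavior emerges
    # from raw sensing plus layered reflexes.
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command

# One control tick: the avoid reflex overrides wandering.
print(control_step({"obstacle_distance": 0.1}, [avoid, wander]))
# -> ('turn', 90)
[/code]

As I understand it, the self-organizing part in Cog’s case came from letting learning tune behaviors like these against real sensor data, rather than hand-coding a model of the world.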

2. I think Bunny’s list of assumptions is relevant. We have to agree on what intelligence is before we can agree on whether an entity is intelligent. An AI researcher must have an idea of what they are trying to build. If I were building a boat, I would have to decide between sails and engines. Nite’s link clearly illustrates researchers proceeding from two very different assumptions.

I’m assuming that Dopers are familiar with the universal qualities of life; what would be the universal qualities of intelligence?