Are the problems with creating AI practical or theoretical?

Not sure if this is GQ material.
From my position as a larval hacker, it seems to me that everything a human can do, a computer can do, if it’s abstracted enough. Take ‘knowing’ what a sentence means. A computer can tell us that the sentence “White are black.” is screwy, but will let “White is black.” pass. Could we not hook up a grammar checker to a database of concepts that includes exclusive definitions for white and black? It would be difficult, of course, but I see no theoretical reason why it wouldn’t work.
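
Something like this toy Python sketch is roughly what I have in mind; the ‘database’ here is just a couple of made-up exclusivity rules, nowhere near a real knowledge base:

```python
# Toy concept database: pairs of concepts treated as mutually exclusive.
EXCLUSIVE = {
    frozenset(["white", "black"]),
    frozenset(["hot", "cold"]),
}

def contradicts(subject, predicate):
    """Flag 'X is Y' when X and Y are listed as mutually exclusive concepts."""
    return frozenset([subject.lower(), predicate.lower()]) in EXCLUSIVE

print(contradicts("White", "black"))  # True  -> "White is black." gets flagged
print(contradicts("Snow", "white"))   # False -> nothing in the database objects
```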

Or take reading, f’rinstance. Could we program a camera to constantly scan for a certain arrangement of pixels colored differently than the ones around them?

Tell ya what, **robertliguori**: try programming a computer to build a bird’s nest. And when you get overly frustrated, do a Google search for AI and Bird-Nest Theory. That should explain why ‘theoretically’ AI as seen on TV is not possible.

side note

I actually had the chance to speak with a prof at CIT who was working on a functional proof to disprove the Bird-Nest Theory. She was having a very difficult time putting it in Occam’s razor terms.

I’m going to go by your title more than your post here:

  1. Theoretical: We do not really “know” what sentience, consciousness, and sapience are, so trying to create them artificially is likely to be rather difficult.

  2. Practical: If you are trying to create a human-like (or even animal-like) intelligence, consider the limitations of computer hardware compared to the human brain’s storage space, the lack of adequate sensory input, and the lack of anything “useful” for an AI to do (they are not programmed to do anything from birth, just to exist).

Beyond that, how does one create a database of concepts? How do you express the thought, not the color, “purple” in ones and zeros? How do you decide which concepts are accurately described? How do you create an exclusive concept of “itchy”?

This has been done. It’s called a text-sensitive scanner or something like that. It’s a program that takes scanned images and peels out the letters. But that is a very simple [and expensive!] trick. Getting the computer to understand the letter has not yet been accomplished.

Well, neither are we. And yet we learn, and do useful things. We can program a computer to ‘learn’ in simulated environments. Computer power is growing, and quickly. Besides, human minds represent things much differently than computers do. How many pages of text can a human remember? And we have digital cameras. Why not just set one to take a picture every 30th of a second?

Well, I don’t have Adobe Photoshop handy, so I can’t give you the RGB or CMYK representations of purple. But a computer doesn’t have to ‘know’ what purple is, as long as it can recognize it as a color and ‘look up’ colors in a digital encyclopedia.
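
All I mean by ‘look it up’ is something like this crude sketch; the table of named colors and their RGB values is invented and approximate:

```python
# Tiny "digital encyclopedia" of named colors and rough RGB values.
NAMED_COLORS = {
    "purple": (128, 0, 128),
    "red":    (255, 0, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
    "white":  (255, 255, 255),
    "black":  (0, 0, 0),
}

def nearest_color_name(rgb):
    """Return the table entry whose RGB value is closest to the input."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(NAMED_COLORS, key=lambda name: dist_sq(rgb, NAMED_COLORS[name]))

print(nearest_color_name((120, 10, 130)))  # -> "purple"
```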

What do you mean by ‘understand’ the letter? We operate by a few simple rules of grammar. A computer may not ‘understand’ these rules, but it can still proofread papers for us.

But we eat, sleep, dream, and form emotional attachments to our parents before we can even understand what those things are. “Useful” was probably not a good term, but it was the only one I had.

As to your other response, I question how an intelligence can exist that has no actual mind. You want to talk about A.I., but you still need an understanding of what you are talking about. In any case, look at grammar engines. They still cannot accept uncommon modes of expression, even though those aren’t particularly complex. And while a computer can “read a letter,” it has no knowledge that the larger word group (which it does not even realize exists, just prints) is actually a visual shorthand for an idea, a concept.

Now, AI may well be possible. It just may not be very easy.

This problem occupies my mind quite often. On one hand, I am sorta mystical or spiritual in that I don’t believe in the Strong AI theory… on the other, the external (objective) manifestations of consciousness do lend themselves to a behaviorist (and essentially functionally describable) explanation.

Humans are sort of notoriously inconsistent in many ways (the way they make choices, for example), which (I think?) demonstrates that our brains handle information which seems the same in different ways.

I am currently of the mindset that the AI limitation is a theoretical one, but practically speaking we will be able to create any number of AI routines (eventually) which suitably mimic our forms of being. Does this leave us with a distinction without any difference? [unsatisfied shrug]

I just don’t know what to think.

One of the classic AI sentences (due to Chomsky) is “Colorless green ideas sleep furiously.” Syntactically correct; semantic garbage. Will a computer let the sentence pass? Not necessarily. Even fairly simple representations of this sentence wouldn’t pass muster. E.g., if you had a slot for “color,” you’d notice that you were trying to fit two things into that slot. And you’d probably notice that “sleep” requires an actor with attributes (consciousness, animacy, and so on) that the subject, “ideas,” does not possess. The computer would probably respond in the same way a human would, e.g., “Huh?”
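
A minimal sketch of the slot idea, assuming a hand-built toy lexicon (every entry and feature below is invented purely for illustration):

```python
# Each word carries a few features; verbs restrict what their subject must be.
LEXICON = {
    "colorless": {"type": "color"},
    "green":     {"type": "color"},
    "brown":     {"type": "color"},
    "ideas":     {"animate": False},
    "dogs":      {"animate": True},
}

VERB_RESTRICTIONS = {
    "sleep": {"animate": True},   # sleeping demands an animate subject
}

def check(adjectives, noun, verb):
    problems = []
    # Slot check: at most one filler for the "color" slot of a noun phrase.
    colors = [a for a in adjectives if LEXICON.get(a, {}).get("type") == "color"]
    if len(colors) > 1:
        problems.append(f"two fillers for the color slot: {colors}")
    # Selectional restriction check: does the subject have what the verb needs?
    for feature, value in VERB_RESTRICTIONS.get(verb, {}).items():
        if LEXICON.get(noun, {}).get(feature) != value:
            problems.append(f"'{noun}' lacks the {feature} attribute '{verb}' requires")
    return problems or ["passes"]

print(check(["colorless", "green"], "ideas", "sleep"))  # two complaints ("Huh?")
print(check(["brown"], "dogs", "sleep"))                # passes
```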

The mechanics of reading are pretty much solved; that is, we have pretty darned good OCR (optical character recognition) devices that will transform text into ASCII internal representations with very good reliability. Is it perfect? No, but what remains is mostly detail.

Computer vision is much, much harder. (Put a diagram in the document to be OCR-scanned, and it’s another story.) But we don’t need to get into that right now.

To really answer the OP, it’s a lot of both, but mostly it’s practical issues. Theoretical issues would certainly come up, but really, since the early 1980s we’ve had techniques that could handle language, discourse, learning, deduction, reasoning under uncertainty, and many of the facets of “intelligence” well enough to be getting on with.

The biggest problem is that most AI is currently done in academia (by starving grad students :slight_smile:) for specific projects or to test a specific theory. Building an artificially intelligent entity would be (IMO) a Manhattan Project-level effort. It would be very expensive and it would take a long time. (Think how much effort is involved in creating a modern operating system, and recall that we know what an operating system is supposed to do.)

There’s also some question about whether we have powerful enough hardware to handle the task. (My opinion is that we probably do, as long as we relax real-time requirements A LOT.)

If you invested this effort, would you get something that was human or anything like it? No. But you could probably get something that could handle very complex computational tasks, e.g. understanding a conversation well enough to generate appropriate responses, programming based on verbal instructions, and so on.

**Phlosphr**: Can you give a cite for “Bird Nest Theory”? I googled it and didn’t come up with anything relevant. But I don’t see any particular obstacles to building a bird’s nest as long as you recall that it’s largely an opportunistic task rather than an algorithmic one. That is, just as sculpting an elephant is removing all the bits of stone that don’t look like an elephant, weaving a nest can be considered as adding a twig or bit of string to every part of the nest that doesn’t look like a nest.

Hmmmmm. Alright. Declare a series of twig objects in a 3D modeling environment that resemble twigs, pine needles, etc. Make a 3D graph that expresses the general shape of a bird’s nest. Have the computer position the twigs randomly within the graph. Count the amount of e-twig inside the graph versus the amount outside it. Repeat until the nest is shaped to within the desired number of digits of precision.
It would take a fraking long time, but so what? Computers are getting damn fast nowadays.
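
Here’s roughly the loop I’m imagining, shrunk down to twig center points instead of real twig meshes; the nest shape and the 95% target are just stand-ins:

```python
import random

def inside_nest(p):
    """Toy nest shape: the lower half of a spherical shell, radius 0.7 to 1.0."""
    x, y, z = p
    r = (x * x + y * y + z * z) ** 0.5
    return 0.7 <= r <= 1.0 and z <= 0

def random_twig():
    """Drop a twig center at a random point in the working volume."""
    return (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 0))

def fit(twigs):
    """Fraction of twigs whose centers currently sit inside the nest shape."""
    return sum(inside_nest(t) for t in twigs) / len(twigs)

twigs = [random_twig() for _ in range(200)]
while fit(twigs) < 0.95:               # the "desired precision"
    i = random.randrange(len(twigs))
    if not inside_nest(twigs[i]):      # re-place only the twigs that missed
        twigs[i] = random_twig()

print(f"fraction of twig material inside the nest shape: {fit(twigs):.2f}")
```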

There is a difference between true AI, where a computer is able to learn, and simply creating a huge database of every possible response to every possible situation.

Yes, we can program a computer to randomly pile sticks on top of one another into a nest-like shape. I would hardly call that intelligence if it is simply aping a set of pre-written instructions.

I am not an expert on the Bird Nest Theory, but algorithmically speaking, there is no step-by-step method of finding the components of a bird’s nest. Bird nests are made of nothing that is exactly the same: a piece of hair, straw, a plastic widget, a piece of a leaf, etc. And it is impossible to program a computer to operate within a non-distinct parameter. (I think)

I’m searching for a site…

Well, this is rather odd. But I started a thread on this one year and four months ago to the day, and only 9 minutes off the hour of my last post to this thread.

Anyway, I am unable to link to it, but do an SDMB search for **bird nest theory**, any date, with my username. You should get three threads. In the one I started is a link to the site for the bird nest theory.

It is very interesting stuff…

robertliguori: With respect, I think you are missing the point of AI entirely; sure it’s possible to create systems that are adaptive within the parameters of their assigned task, it’s possible to create algorithms that arrange things sensibly according to a set of complex rules that we define, it’s possible to make devices that store far more raw information than a single human ever could.
But none of those things necessarily requires sentient intelligence.

The ultimate goal of AI (as far as I’m able to tell) would be to create a structure that is able to think for itself in a creative way that was not necessarily anticipated by its designer and also (although we would have no sound way to verify it) would have ‘inner life’ - it would have an internal sense of ‘self’ just like we do (well, I do for sure and everybody else appears to :wink: )

While you’re searching for bird’s-nest theory, you might as well also search (here or on Google) for “Chinese room Searle”; here’s a random explanation of the argument in question. Pretty much fatal to the idea of AI, IMNSHO. Of course, you can argue (and lots of people have) that the main sticking point there is our own inadequate understanding of consciousness. All you have to do is resolve that completely, as a general case, and the question of whether AI is possible will be solved as a side effect. Of course, that won’t help you figure out how to build one.

I’d bring up Occam’s razor in response. If a computer worked as a functioning Chinese room (could process data from the physical world, manipulate data, etc) would it matter on what level ‘understanding’ took place? My eyes don’t “understand” the text I read, and my hands don’t “understand” the text I type.
(This is probably evident from the fact that I’m not getting this.) But I, as a whole, do understand it. So, what’s the hang-up here?
Re the bird’s nest: If we have a finite set of starting conditions, a finite number of ways of processing those conditions, and a finite number of solutions, why not just tell the computer to do random stuff till it finds a solution, and then analyze how it got a series of solutions?

The Chinese room sounds just like a Turing test. Turing says that if an observer who is interrogating the computer can’t tell whether he’s talking to a person or a computer, we can declare the computer to be intelligent.

If we can’t tell whether it’s a Chinese speaker inside the room or an English speaker with a fancy rulebook, is the distinction even important? He can answer questions and carry on a conversation in Chinese; for all intents he is a Chinese speaker.

Now, if the rules don’t cover certain subjects, you might be able to tell that the person inside the room really doesn’t know Chinese; but that just means you need a better rulebook. (“What do you think about stem cell research, Mr. Room?” “I do not want to talk about that. Do you like shopping?”)
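
The rulebook in that scenario amounts to a lookup table with a deflection for anything it doesn’t cover; a toy version (with invented entries) looks like this:

```python
# Canned question-and-answer rules, plus a deflection for anything uncovered.
RULEBOOK = {
    "how are you?": "I am well, thank you. And you?",
    "what is your name?": "My name is Mr. Room.",
    "do you like tea?": "Yes, very much. Do you prefer green or black?",
}

DEFLECTION = "I do not want to talk about that. Do you like shopping?"

def reply(question):
    """Look the question up; deflect if the rulebook has nothing for it."""
    return RULEBOOK.get(question.strip().lower(), DEFLECTION)

print(reply("How are you?"))
print(reply("What do you think about stem cell research, Mr. Room?"))  # deflects
```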

Exactly! I remember reading somewhere, back when IBM was using Deep Blue to play chess, that the computer considers every possible move on the board out to the next ten moves or something like that, while the human master only considers one or two pieces on the board at a time. Theoretically, given a fast enough computer, you could ‘solve’ chess, that is, compute all of the possible moves for every possible game and create a DB out of it (or is this one of those unsolvable problems because there isn’t enough matter in the universe to build such a computer?). I don’t think such a brute-force approach to thinking gets you close to being intelligent.
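
For a rough sense of why computing ‘every possible game’ blows up, take the commonly quoted average of about 35 legal moves per chess position and raise it to the number of half-moves you look ahead:

```python
BRANCHING_FACTOR = 35  # rough average number of legal moves in a chess position

for depth in (1, 5, 10, 20, 40):
    sequences = BRANCHING_FACTOR ** depth
    print(f"{depth:>2} half-moves ahead: ~{sequences:.2e} move sequences")
```

Even a modest look-ahead is astronomical, which is why real chess programs prune the tree and lean on heuristics instead of enumerating everything.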

Well for one thing, that would be a decidedly unintelligent way to approach the problem. Brute force may work, but I would hardly describe it as intelligence.

Based on the other responses in this thread, I see that I’m hardly alone in that assessment.

Anyone read stuff by Daniel Dennett on this? He seems to have plenty of good things to say on this subject.

I agree that Brute Force isn’t very intelligent. The Chinese box is an example of a Brute Force solution to using language to communicate. If AI has any hope of creating even a Weak example, it cannot simply rely on this sort of lookup method.

However, robert has a good point in that it’s not entirely clear that human beings aren’t themselves just glorified Chinese Boxes, in some way, where language is concerned. I may be able to pull up an “understanding” when confronted with a phrase in English, but I have no idea how I do it: for all I know there is some underlying recognition algorithm at work. Whatever it is, it’s not Brute Force, because it’s fairly sloppy. To draw an inappropriately conclusive analogy: it’s not at all like “Perfect Chess” (wherein every move is predicted), but rather much more like the simplified algorithms that most computer chess programs use.

We have the same problem with math savants who can calculate extremely large sums or products in their heads without (according to them) knowing how they are doing it. Clearly, they don’t have the famed “understanding” of what they are doing. But they can do it all the same: just like the Chinese box example suggests a computer does (at its best). Some strange mechanism is at work here, yet it runs on the same general hardware/software as all the rest of the human mind: is there a connection?

Along these lines, we have plenty of fascinating examples of what happens when whatever it is that makes the human mind function as it does partially breaks down. The title story of Oliver Sacks’ “The Man Who Mistook His Wife For a Hat” is very illustrative here. Are stories like this evidence for the human mind’s rooting in complex algorithms that can come undone, leading to massive and uncorrectable misunderstanding? If there is no “mechanism” to consciousness, then how can it “break” in such a strange way?

Not a lot of good answers here, I’m afraid. Searle hasn’t even begun to convince me (I have no idea what he means by “mental contents,” and I don’t think he does either), but then his detractors haven’t made much of a case for their own side.

Robertliguori

After reading some of your comments on responses, I’m wondering whether you meant artificial consciousness, rather than artificial intelligence.

And thus we illustrate a reason that this debate will never be resolved.

There is no consensus as to what meaningful definitions of “intelligence” or “consciousness” or “self” would be, let alone how to measure whether such things have been achieved.

In my book, AI HAS been achieved, as I agree with Pinker that all intelligence means is the ability to solve novel problems in pursuit of a goal. Not that it looks or acts human (à la Turing), nor that it is as good at particular kinds of problems as humans are.