What is the state of AI?

Yes, but there’s quite a bit of difference between what we don’t have and what we can’t have.

Agreed about “search”. I am often incredulous that so much AI is nothing more than that. However, the following strikes me as odd (and I apologize in advance for hijacking):

One of the things about many philosophy people (disclaimer - I have an undergraduate degree in philosophy) that drives me crazy is when they’re not precise with this argument. Clearly, there’s a difference between “a difficult time” and “impossible”. For instance, I went to a philosophy colloquium recently, where someone said (paraphrasing), “Why don’t the AI people just give it up already?” Which was infuriating to me; I mean, computer science - as a discipline - has only been around for (about) 60 years. I was tempted to respond, “Why don’t the philosophy people just give up already on the mind/body problem?” How long has it been since Descartes?

And I’m not saying that there aren’t good arguments against the very possibility of AI. Just that many philosophy people tend to be (groundlessly) overly dismissive.

See, this is another one. I’m sure there are other possible foundations for this view, but here are a few. If one believes biological sentience “evolved”, I’m not sure why - in principle - the same can’t be done on a computer. If one believes that “God” created (that is, programmed) biological sentience, then the argument used above can be applied. If one wishes to claim sentience is “magic”…well, then I don’t think there’s a suitable counter. But then, there’s not much argument possible when you rely on such an axiom to begin with.

As far as “other inferences about life” go, I’m curious as to what they’d be. Granted, acknowledging the possibility of AI may require a severe reorganization of the “web of belief”, but so be it. So did a heliocentric universe.

A further note: I’m not trying to be confrontational, nor do I really want to hijack this. I’d just like to hear whatever there is to be said. <fezik>Perhaps start a new…thread?</fezik>

It’s not the hardware that’s the issue, but the software. Not even the most fervent proponent of AI claims that we have come anywhere near self-awareness today, but this hardly means that we can’t ever do it, with the proper software. I know many of the philosophical arguments against AI, and it seems that as we learn more about the brain, it is looking more like the sort of entity people like Dreyfus call non-intelligent. But that doesn’t mean that it will happen soon - we might be at the same point as a Victorian space program, not even truly aware of how hard the problem is.

Or perhaps she was guarding against the common fallacy of thinking that there is just a short hop from doing something intellectual (like addition) that people do and being intelligent. As we have grown more accustomed to computers this fallacy has diminished, but in the '50s people were expecting computers to become aware any minute. (Not computer scientists, of course.)

Did you know that IBM has a policy against naming hardware components after body parts? Memories are called arrays in IBM, for instance. I don’t know when this policy started, but I expect it was to prevent the same sort of fallacy Ada Augusta was guarding against.

Do you think intelligence is Turing computable? If not, why not?

Is this a new one squeezed in between NM and AZ? Where they think no one will notice :wink:

Didn’t Turing propose arguments for why this is not applicable in his initial paper? It’s been a while since we did the philosophy behind it, but it was definitely either Searle or Turing who countered this argument.

Von Neumann pioneered self-reproducing cellular automata, and it has since been shown that in Conway’s Game of Life (and more easily in other cellular automata with different rules) one can “build” AND, OR and XOR gates and “gliders” (to simulate wires) and so forth, and that from these it would, in theory, be possible to do everything any computer can do - although a simulation of a full computer would be gigantic. I.e., there would be nothing different, in principle, between a cellular automaton and an electronic computer. And a cellular automaton can be run on a pad of paper (see the sketch below). Would you consider such a thing sentient? The idea seems absurd, but wouldn’t that be the conclusion one would be forced to accept?
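To make concrete just how little machinery is involved, here’s a minimal Life stepper in Python - my own toy sketch, not from any library, so the names (`step`, `glider`) are mine. The glider it advances is the standard pattern that the gate constructions use as a travelling “signal”:

```python
from collections import Counter

def step(cells):
    """Advance one generation; `cells` is the set of live (x, y) cells."""
    # Count how many live neighbours every candidate cell has.
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Life's entire rule set: birth on exactly 3 neighbours, survival on 2 or 3.
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in cells)}

# The classic glider. After 4 generations it reappears one cell further
# along the diagonal - that steady drift is what lets glider streams act
# as the "wires" running between the gates.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)
assert pattern == {(x + 1, y + 1) for (x, y) in glider}
```

That a rule set this trivial turns out to be computationally universal is, of course, the whole force of the question.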

Yes, absolutely. The paragraph immediately before the one I quoted earlier states:

This site contains the complete Ada Augusta Notes.
http://www.fourmilab.ch/babbage/sketch.html

I’m trying to make a distinction between intelligence and sentience or consciousness. I don’t know if intelligence is possible without sentience, but I no longer think sentience is Turing computable (though I’m certainly not without doubt). I’ve come to this tentative conclusion not from any insight that absolutely rules it out so much as from a complete inability to see how it could be done, perhaps inspired by Searle’s Chinese Room gedanken experiment (Chinese Room Argument | Internet Encyclopedia of Philosophy) and similar thoughts - I just can’t see how this sense of self could ever be programmed. And there’s just no getting round that consciousness is something very odd that needs dealing with, and going by the if-it-walks-like-a-duck-and-talks-like-a-duck variety of the Turing Test just doesn’t cut it for me.

I know of the Penrose bio-quantum theories (btw, is a quantum computer Turing computable?) and the arguments that consciousness would reside in the whole or the system or something equally diffuse, and I find them both unconvincing - and really, if you’re going to believe in that, you might as well believe in a soul.

Also, it seems to be un-provable. As Digital Stimulus said, functionalism seems to be the only possible proof, which I consider massively inadequate (the Turing Test even more so - I’ve known nerds who couldn’t pass the Turing Test to save their lives, as well as people who were fooled by Eliza; see the sketch below). Though this is not a problem only for computers, as nobody is ever able to prove that anybody else is conscious - for all I’ll ever know, I may be the only sentient being on earth.
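Since both the Chinese Room and Eliza come down to blind rule-following, it’s worth seeing how shallow that rule-following can be. Here’s a toy Eliza-style responder in Python - my own few-rule sketch, nothing like Weizenbaum’s actual script, with `RULES` and `respond` being names I made up:

```python
import re

# Rules are tried in order; {0} is filled with the captured text verbatim.
# (Real Eliza also swapped pronouns - "my" -> "your" - but the principle
# is the same: surface pattern matching with no model of meaning.)
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # the all-purpose fallback

print(respond("I am worried about my exams"))
# -> How long have you been worried about my exams?
```

That something this shallow fooled anyone says more about the if-it-talks-like-a-duck standard than it does about the program.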