How far can programming go?

You are correct. These boards are awesome. I should subscribe.

Hmm, I never interpreted a computer’s potential pass of the Turing Test as proving the computer can think, just that thinking was not necessary in order to simulate a convincing conversation. Similarly, the computer that defeated Kasparov didn’t prove the computer knew how to play chess; it proved that chess could be reduced to an (admittedly large) set of simple instructions and procedures that could be executed rapidly enough on a fast computer to defeat a human world champion in a reasonably short period of time.
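To illustrate the point about reducing a game to simple instructions, here’s a minimal sketch in Python (my own toy example, nothing like Deep Blue’s actual program): exhaustive minimax search on one-pile Nim, where players alternately remove 1 to 3 sticks and whoever takes the last stick wins. The same mechanical recipe, scaled up with clever pruning and enormous speed, is essentially what beat Kasparov.

```python
def minimax(sticks, maximizing=True):
    """Return +1 if the maximizing player can force a win, -1 otherwise.

    Pure mechanical recursion: try every legal move, assume the
    opponent replies perfectly, and keep the best outcome.
    """
    if sticks == 0:
        # No sticks left: the previous player took the last one and won.
        return -1 if maximizing else 1
    moves = range(1, min(3, sticks) + 1)
    if maximizing:
        return max(minimax(sticks - m, False) for m in moves)
    return min(minimax(sticks - m, True) for m in moves)
```

No understanding of the game is involved anywhere; the procedure simply enumerates positions. (For this game it rediscovers the known pattern that multiples of four are losing positions for the player to move.)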

If you don’t mind my asking, what would you consider as proof that a computer can think? Or do you feel that it is not something which can be proven?

Honestly, I don’t know. I predict there will come a time when a computer will prove so adept at simulating thought that it may be easier to treat it as we would another adult human.

I expect proving the existence of free will, though, which I’d assume is an essential part of independent thought, will remain purely a philosophical matter. It’ll be debated but not resolved.

Perhaps some mainstream AI researchers feel it’s crackpot work, but certainly not most. Now, granted, attaining anything approaching true human-like ability is pretty far off, IMO, and perhaps that’s what you mean. And I’d agree with you fully, if that’s the case.

However, for example, there’s plenty of speech processing work being done. I’ve used Festival for speech production and Sphinx for speech recognition. I’m working with a group that’s using them along with WordNet/VerbNet for parsing and semantic interpretation. Lots of linguistic work is underway - just recently, there’s been a flurry of research on deriving semantics from statistical associations among sentences, articles, etc. (sorry, no cites handy). I’d think the aim of all of it is to eventually reach human-like capabilities, even if the claims at this point are meager. (One would hope we’ve learned some sort of lesson from the overblown claims of Minsky, Turing, and Samuel. At least, I think it was Samuel; perhaps it was Newell or Simon?)
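Since the statistical-association idea is easy to show concretely, here’s a toy sketch of my own (not the actual research I mentioned, and far cruder than real systems): represent each word by its co-occurrence counts with other words in the same sentence, then compare words by cosine similarity. Words used in similar contexts end up with similar vectors.

```python
from collections import Counter
from math import sqrt

# A tiny made-up corpus just for illustration.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog ate the bone",
    "the cat ate the mouse",
    "stocks fell on the market today",
    "the market rallied as stocks rose",
]

# Build a co-occurrence vector per word: counts of the other words
# appearing in the same sentence.
vectors = {}
for sentence in corpus:
    words = sentence.split()
    for w in words:
        ctx = vectors.setdefault(w, Counter())
        for other in words:
            if other != w:
                ctx[other] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

On this corpus, "cat" and "dog" come out far more similar than "cat" and "stocks", purely from distributional statistics - no dictionary, no grammar, no meaning anywhere in the code.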

Now, granted, this leaves open the debate as to whether a computer that can identify semantic similarities actually knows or understands anything in a meaningful way. But, IMO, that just leads to establishing one’s beliefs – if you believe there is something “extra” in human consciousness, you’d say that AI is impossible. Personally, I believe the opposite. If I have time, I’ll follow up later with more. We’re trying to get an entry together for the AAAI05 social robotics competition in a few weeks and there’s way too much to do…

I’m not too sure that free will in humans will ever leave the realm of philosophy. And with all the talk of destiny, fate, and especially karma I hear, I’m not too sure how much of the world really believes in it anyway. I myself tend to go with the idea “if it looks like a duck and quacks like a duck, I’ll call it a duck until a better match comes along.”