Poll: will sentient machines akin to those in science-fiction ever exist?

…you left out the situation leading to what I’ve thought was one of the best exit lines in S-F’dom, from Frederik Pohl’s Gateway. (The original story was pretty good, but then, alas, sequels happened. Sigh.)

I won’t quote the actual line, but I will spoiler a description of the situation:

[spoiler]A machine sentient or sapient enough to know it’s a machine - or at least model the knowledge - and know that although there’s no reason it can’t live forever, it will never “really” be alive.

(Me, I’m going to forget those sequels ever existed. What sequels, you ask? Exactly so, I answer.)[/spoiler]

No matter how sophisticated, it will always be a mechanism, and its behavior will be limited to what it is deterministically programmed to do. Life is unique.

Oh, that’s ridiculous. If you really think it’s necessary to be non-deterministic, just make decisions weighted by a random source, like the decay of a lump of uranium.
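(If anyone wants to see how trivial that is, here’s a minimal sketch, with the OS entropy pool standing in for the lump of uranium; the options and weights are invented.)

[code]
import random

def nondeterministic_choice(options, weights):
    # SystemRandom draws from the OS entropy pool (os.urandom) rather than a
    # deterministic seed, a stand-in for something like radioactive decay.
    rng = random.SystemRandom()
    return rng.choices(options, weights=weights, k=1)[0]

# The weights are still part of the program; only the draw is non-deterministic.
print(nondeterministic_choice(["turn left", "turn right"], [0.7, 0.3]))
[/code]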

Call it what you want – sapience, AI, or something else. As far as I’m concerned, an acceptable definition for an intelligent machine is one that can make a better decision than an average human being with a given amount of information.

Personally, I think we’re already there. Consider the thermostat. When it gets cold the thermostat turns on the heat. A human being might turn up the heat, or bitch about how the weather bunny didn’t warn us it would get so cold so soon.
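(The thermostat’s entire decision procedure, as a toy sketch; the setpoint and hysteresis band are arbitrary numbers I picked.)

[code]
def thermostat(temperature_c, setpoint_c=20.0, band_c=0.5):
    # Classic bang-bang control: heat on below the band, off above it,
    # otherwise leave things alone.
    if temperature_c < setpoint_c - band_c:
        return "heat on"
    if temperature_c > setpoint_c + band_c:
        return "heat off"
    return "no change"

print(thermostat(17.5))  # "heat on", with no complaints about the weather bunny
[/code]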

For every person who thinks we are underestimating how far we are from achieving AI, there is one who is overestimating the intelligence of humans.

We are mechanisms. The only difference is that we’re made of carbon and water instead of steel and silicon, and our gears are smaller. Have you taken a close look at the tiny cellular machines that make us up lately? They’re every bit as deterministic and pre-programmed as any other macroscopic object in our universe.

We are intelligent machines debating whether or not it is possible to build intelligent machines. The answer, of course, is yes. The real question is whether our kind of intelligence is worth the effort to replicate in silicon. If we do not develop human-like AI, it will not be because it is impossible, but because our technology never made it sufficiently practical to build as opposed to specialized AI mechanisms which are simpler, cheaper, and more useful.

But I think we’ll get around to it someday, if only for the novelty.

Yes, but those decisions are determined also.

By what? If you can answer that, I think there are a lot of physicists who would love to speak with you. :dubious:

As far as we can tell, our thoughts and behavior are more deterministic than radioactive decay, which may very well be completely random; we’re just intricate biological clockwork.

The machine must respond to the randomness in some fashion. Random information does not translate into action on its own. Say I am writing a slot machine game, where the outcome of each pull of the lever is determined by radioactive decay. It still has only a limited range of predetermined actions based on that random seed: triple sevens, bar blank bar, etc. It isn’t sentient.
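(A quick sketch of that point; the reel combinations are made up, but the shape is what matters: the random bits only select among outcomes enumerated in advance.)

[code]
import random

# The possible outcomes are fixed at design time; the random bits only pick among them.
OUTCOMES = ["seven seven seven", "bar blank bar", "cherry cherry lemon", "blank blank blank"]

def pull_lever(random_bits):
    # random_bits could come from radioactive decay, a hardware RNG, whatever;
    # the mapping from bits to outcome is still completely predetermined.
    return OUTCOMES[random_bits % len(OUTCOMES)]

print(pull_lever(random.getrandbits(32)))
[/code]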

Just like our own neurons. A neuron can fire, or not; it can release a few specific neurochemicals, or not.

You are presuming our innate superiority to machines by assigning us qualities we don’t have.

The poll is a red herring. I do believe that intelligent machines are inevitable (assuming we don’t nuke ourselves or the like before then) but what we call AI these days will have nothing whatsoever to do with it. It’s not a software problem. What makes the human brain capable of its feats is not its programming. Newborns are not capable of much, to give the obvious example. What gives the brain its capability is the vast amounts of data storage and parallel processing. It’s pretty much a given that we will, eventually, be able to produce something of similar capability, and at that point, ta-da, intelligent machines. It’s a hardware challenge, and the fact that -we- exist proves that it can be done, by the exact same methods if nothing else.

I think the answer is yes, but it will involve much, much pain and will require far more interdisciplinary cooperation than I see now. Right now a good portion of AI (AI as we’re thinking of it here, not Googlebot AI) is split even within itself between two goals: using a computer to pretend it’s a human brain and thereby model why we may do certain things (generally the branch that people more interested in human psychology focus on), and attempting to replicate the results, regardless of whether a human brain could technically operate that way. And of course there’s even less crossover with outside fields that could improve our results significantly; there are certainly neuroscientists and the like participating, but even talking with one for a few minutes can provide a lot of insight.

Of course, if you saw the work involved in making these entities you’d see why we’re somewhat hindered: it’s effectively programming every single atomic unit of rules you can think of (including what to ignore and what not to), and the side effect is that the end result is severely limited in its applicability. This hangs up everything, from single-application AIs (“play soccer” or “drive to a point in the desert”) to more complex machine-learning ones. The trick, I think, is finding the more fundamental rules. One of the fascinating things about human intelligence is the ability to intuitively grasp rules even when they bear little relation to anything you’ve seen before. The classic example is the old Wile E. Coyote/Road Runner cartoons. After seeing one or two you can almost always predict what’s going to happen once the initial “something goes wrong” phase occurs (i.e. nothing is ever ignored; if you see it, even if Wile E. gets rid of an obstacle, you WILL see it again and things WILL get worse). Even our best efforts so far only work for a limited set of scenarios and don’t come close to that adaptability. If you could figure out a way to tell the machine not how to figure out how Looney Tunes works, but how to figure out how to figure out (that wasn’t a typo) how Looney Tunes works, you’d probably be only a couple of stone’s throws away from the end result.
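(A caricature of what that rule-by-rule programming looks like; the soccer conditions and actions are invented, not taken from any real system.)

[code]
# Each rule is a hand-written (condition, action) pair. Nothing outside this
# list can ever be handled, which is exactly the brittleness being described.
RULES = [
    (lambda w: w["ball_visible"] and w["ball_distance"] < 1.0, "kick"),
    (lambda w: w["ball_visible"], "walk toward ball"),
    (lambda w: not w["ball_visible"], "scan field"),
]

def decide(world):
    for condition, action in RULES:
        if condition(world):
            return action
    return "do nothing"  # anything the programmer never anticipated falls through here

print(decide({"ball_visible": True, "ball_distance": 3.2}))  # "walk toward ball"
[/code]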

To fix this I think we need more crossover with the neuroscience and biology fields than we already have; an extra focus on analog computing could also help, though I wouldn’t defend that last point in a debate. With AI research right now we’re essentially fumbling around in a dark room looking for the light switch: we know what we have to do, turn on the light, we just can’t find the right switch to flip, and finding it will require either a huge stroke of luck or a big breakthrough after observing an existing model (the human brain) closely enough. I’m not sure which will hit first. It’s entirely possible we could get true sapience from a typo or two (giving “it’s not a bug, it’s a feature!” a whole new meaning) or from a wild guess that’s then isolated and studied, though if pressed I’d place my bets on the breakthrough coming from the neuroscience side of the equation (applying it and figuring out the correct language to make it so would still be a royal pain).

Also, most of the current cognitive architectures quite frankly suck; most research is done with an existing architecture, and that can be a huge limitation. I’m working with Soar right now and it has some definite flaws. For example, if a set of instructions is always executed, Soar can “chunk” them into a universal rule triggered by certain stimuli to simulate “learning”, but the chunk only works for those specific stimuli. If you add one to a number on the input, even though the exact process would be the same, it can’t tell that the situation is similar and has to execute those instructions for that specific data set a few times before it chunks them. For example, if you taught it addition with 2 & 3, it wouldn’t use the chunked ruleset for 2 & 4, or even for 3 & 2 (different order). You can get around some of this by going into the source and hacking some probability-theory stuff in there, but like I said above, that’s basically the duct-tape-and-bubble-gum solution. As soon as you get into another scenario, or expose the machine to a scenario that doesn’t operate on real-world rules (and as such probably won’t have the same probability of occurring a certain way), it breaks down. We’re basically waiting on something that lets us tell the computer how to calculate its own probabilities and how to best network all its experiences with minimal hardcoding.
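(Here’s roughly that behavior, sketched outside of Soar, so the details are my caricature rather than Soar’s actual machinery: a learned shortcut keyed to the literal inputs it was learned from.)

[code]
# Toy illustration of over-specific "chunking" (not actual Soar semantics).
chunks = {}

def slow_add(a, b):
    # Stand-in for the long chain of elaboration and decision steps.
    return a + b

def solve(a, b):
    key = (a, b)  # the learned shortcut is keyed to these literal values
    if key in chunks:
        return chunks[key], "used chunk"
    result = slow_add(a, b)
    chunks[key] = result  # "learn" the shortcut, but only for this exact case
    return result, "did it the slow way"

print(solve(2, 3))  # slow way the first time
print(solve(2, 3))  # chunk fires
print(solve(3, 2))  # same arithmetic, different key, so back to the slow way
[/code]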

Of course, I say that now; as soon as said breakthrough comes to pass (if ever), we’ll probably just find something else to bang our heads against the wall figuring out. Emotion may be it, though I think if you get the rules fundamental enough, emotion will evolve naturally out of the reasoning. But that’s a purely personal pseudo-philosophical viewpoint, not necessarily an entirely reasoned one.

Oh, and as a slight post-script I should note that I know a little more about the cognitive-development side. There are certainly approaches that don’t even try to build basic rules: I’ve seen papers that basically record every possible unit of data about voice pitch, timing, etc. (the one I’m thinking of was from Japan, Osaka maybe, and dealt with aizuchi, the filler words) and effectively code a very, very strict input/response tree. It appears quite intelligent and able to converse, but it’s about as realistically smart as a premium toaster, depending on how you define “smart” at least. Granted, you may eventually get a Turing-test-passing entity out of this method, but I don’t think it’s a very elegant solution. I’m prepared to eat those words some day, if necessary.
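(A minimal sketch of that kind of strict input/response lookup; the cues and canned replies are invented. It can look conversational while containing nothing you’d call understanding.)

[code]
# Hard-coded cue -> canned-response table; anything not in the table falls
# back to a generic filler, much like the aizuchi systems described above.
RESPONSES = {
    "long pause": "Mm-hm.",
    "rising pitch": "Really?",
    "falling pitch": "I see.",
}

def respond(cue):
    return RESPONSES.get(cue, "Uh-huh.")  # default filler for anything unknown

print(respond("rising pitch"))  # "Really?"
print(respond("sneeze"))        # "Uh-huh."
[/code]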

Disclaimer: 1 AM, barely proofread post.