Poll: will sentient machines akin to those in science-fiction ever exist?

But alas! You left out ‘Yes, but I know not how.’

I agree. Long before we have thinking machines, I’ll have my brain ripped out and stuffed into a Destructomatic Killtron. My new body, with its armor plating, chainsaws, and modular cannons*, will be quite useful in the inevitable zombie apocalypse, not to mention the war against self-aware military equipment.

*Also, a mechanical penis. I intend to be a man machine in the truest possible sense.

Yes. You have to edit the OP, which means you must do it within the first five minutes of posting it. Unless, of course, you’re a mod.

Fool of a Khazad! That is covered by the “whole new approach” and “don’t know” options.

I think the achievement of artificial sentience is inherently unknowable. Fundamentally, we can only ever be sure of our own sentience as individuals. We infer the sentience of others because their behavior is similar to our own, and we assume that their outward similarity evidences internal similarity. If one were to design an electronic system that could pass a Turing test, one might infer sentience, but could never be certain of it.

But only in the same way that we can’t be certain of the sentience of others: in other words, a situation in which the sentience of machines is doubted only by the solipsists, and since they do not exist they can be freely disregarded. :wink:

I picked number one. But I can think of one more option: we might develop intelligent machines through the back door of putting our intelligence into machines. As we develop life-extending technology, one of the paths will be to design prosthetic replacements for decaying brains - an artificial storage medium for a human mind. And once you’ve transferred a human mind into such a device (and assuming you hook it into some basic communication technology), haven’t you created an intelligent machine?

How about a system that passes a Level Two Turing Test? That’s the one where you explain what a Turing test is to the system and then have it conduct one. Any system capable of asking questions that would test the intelligence of something else would be judged intelligent itself.
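
For the curious, here’s a rough sketch of how that two-level protocol might be wired up. Everything in it is hypothetical: the `ask`/`reply`/`observe`/`verdict`/`learn` interfaces are stand-ins for illustration, not any real API.

```python
def standard_turing_test(judge, candidate, rounds=10):
    """Level One: a judge interrogates a candidate, then guesses
    whether it was talking to a human or a machine."""
    for _ in range(rounds):
        question = judge.ask()
        answer = candidate.reply(question)
        judge.observe(question, answer)
    return judge.verdict()  # "human" or "machine"

def level_two_turing_test(system, candidate):
    """Level Two: explain the test to the system, then have it
    conduct one. A system able to generate probing questions and
    judge the answers would itself be judged intelligent."""
    system.learn("A Turing test works like this: ...")  # the explanation step
    return standard_turing_test(judge=system, candidate=candidate)
```

The interesting part is that `system` sits in the judge’s seat, so the hard work (inventing questions that discriminate intelligence) falls on the machine being evaluated.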

It’s really quite sad to see how highly the first two options are currently being rated.

The AI scam became a really big item in the late 1970s, 1980s, and early 1990s in Japan, the US, and a few European countries whose stupid and naive civil servants didn’t want their country to be “left behind” in the AI “race”.

However, by the mid-1990s there were enough civil servants at middle and upper management levels with at least half a clue about computers to protect the more idiotic politicians from the smooth-talking Marvin Minsky frauds of the world, scrabbling around for yet more government research funds to develop real “thinking machines”.

Actually, I believe that Marvin Minsky (is he still alive?) will go down in scientific history as one of the greatest frauds and confidence tricksters of all time. The amount of money he coaxed out of the US government for this computer AI nonsense is really quite staggering.

You left out “Yes, I am an AI”.

Not that I am. Yet. But right now you’re discriminating against the very thing you’re asking about.

:raises hand: I want to be on your team. But only if you put on some pants.

You misunderstand, though perhaps it is my fault for not being specific enough. What I meant was, ‘I am confident that it will be either option 1 or option 2, but I don’t know enough about the current research to say which one it will be.’

Oh, there’s no need for pants. When it’s not needed, the mechadong will be safely secured within some of that armor plating. No point in putting it where it can get bitten by zombies or shot off.

I think that it is possible, but it may require the equivalent of machine hormones: something that induces a machine to do things, like eat, please itself, and reproduce. I think that intelligence requires strong non-voluntary impulses.
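
For what it’s worth, here’s a toy sketch of that “machine hormones” idea: drive levels that decay on their own and compel action whether the agent “wants” to or not. The drive names, numbers, and dynamics are all made up for illustration, not any real design.

```python
# Drives decay involuntarily every tick; the agent cannot opt out.
DECAY = {"energy": 0.10, "pleasure": 0.05, "reproduction": 0.02}
ACTIONS = {"energy": "eat", "pleasure": "please itself", "reproduction": "reproduce"}

def step(drives):
    for name in drives:
        drives[name] = max(0.0, drives[name] - DECAY[name])
    urgent = min(drives, key=drives.get)   # most depleted drive wins
    if drives[urgent] < 0.5:               # below threshold: act now
        drives[urgent] = 1.0               # acting satisfies the drive
        return ACTIONS[urgent]
    return "idle"

drives = {"energy": 1.0, "pleasure": 1.0, "reproduction": 1.0}
for tick in range(20):
    print(tick, step(drives))
```

The point of the toy is that behavior is driven bottom-up: the agent doesn’t decide to eat, the decaying drive forces the decision for it.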

I gotta go pleasure myself now.

I think building to our blueprint would actually be slower. I also think that the baggage you mention stabilizes us in the sense that it keeps us from killing each other. Building AI with strict ethics will be very important.

But that doesn’t mean they will ever be sentient. The reason I voted for two options was to say that it will take a completely different model to make semi-sentient AI, but that true sentience will turn out to be impossible.

Well, who do you think is going to fly your flying car? Not some mere human. It’s bad enough on a Saturday night when the drunks come out and you have to watch the x and z axes. You’d have to watch the y axis, too.
No, only AIs will be allowed to pilot flying cars.

No, human beings are not even close to engineering that level of intelligence and will never even get close.

Agreed.

Folks, if you think that the current state of AI suggests that we are anywhere close to creating sapience – or even that we have a rough idea of where to go – you’re fooling yourselves. No honest AI researcher would suggest such a thing.

“But what about neural networks!” people often protest. “They look like they duplicate the human brain pretty well.” No, they don’t. They don’t even come close. They are useful for pattern recognition and a few other applications, but only in severely limited situations. What’s more, they don’t give even the slightest indication of duplicating other sapient functions such as creativity or self-awareness.
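
To make concrete what “pattern recognition” means here, this is roughly the whole trick at its smallest scale: a single perceptron fitting a line between two clusters of points (toy data and numbers, made up for illustration). Useful, but nowhere in the neighborhood of creativity or self-awareness.

```python
import random

random.seed(0)
# Toy data: class 0 clustered near (0, 0), class 1 near (2, 2).
data = [((random.gauss(0, 0.3), random.gauss(0, 0.3)), 0) for _ in range(50)] \
     + [((random.gauss(2, 0.3), random.gauss(2, 0.3)), 1) for _ in range(50)]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                        # training epochs
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                 # classic perceptron update rule
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

correct = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
              for (x1, x2), y in data)
print(f"accuracy: {correct / len(data):.2f}")   # ~1.00 on this easy toy problem
```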

There is absolutely nothing in the current state of AI that suggests we will someday be able to create sapient machines. Nothing.

I selected both “inevitable outcome of current research” and “emerge on its own, not invented”. Let me explain. There’s a lot of research going on in AI right now that’s not geared towards trying to create sapience, but instead towards trying to make computers into better tools. And it’s going quite well: Think of Google, for instance. And I think that sapience will never result from attempts to produce sapience, but that it’s highly likely that Google-like efforts at making better tools will result in sapience spontaneously emerging.

Note that the poll stipulates “EVER”. Not “some time next Tuesday”. So what’s your estimate? It’s not possible at all?

But that’s not what choice 1 or 2 suggest at all. See above.

AI is a hilariously broad field. That was one of the reasons I studied it; I had no clue what I wanted to do. Anyway, if there is any chance of creating sapient software, I’d be surprised if current AI insights had no influence at all. On the other hand, I expect more immediate results and breakthroughs from the biological/nanotech research than from the “classical” software/language-influenced branches.

Saying that sentient robots will be a consequence of current AI research is a little like saying that the internal combustion engine was a consequence of cavemen rubbing sticks together. Yeah, it’s related, and one happened after the other, but it’s not reasonable to claim that there’s anything but the most tenuous causal relationship.

That said, the only way you can really claim that it’s actually impossible is to believe that there’s some requirement for sentience that is not present in the physical brain. Otherwise, it’s just a matter of wiring together enough neurons in the right configuration and figuring out how to run them. That’s possible, but certainly not happening in my lifetime barring a biotechnological revolution that will make the current internet look like a footnote to history.
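
To give a sense of what “running them” means: a single simulated neuron is a few lines of code (the standard leaky integrate-and-fire textbook model; parameters here are illustrative). The catch is entirely in the scale the argument above gestures at: a human brain has on the order of 10^11 neurons and 10^14 synapses, plus a wiring diagram we don’t have.

```python
def simulate_lif(current, steps=200, dt=1.0, tau=10.0,
                 v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0):
    """Leaky integrate-and-fire neuron driven by a constant input
    current; returns spike times in ms."""
    v, spikes = v_rest, []
    for step in range(steps):
        # Membrane potential leaks toward rest while integrating input.
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:          # threshold crossed: emit a spike...
            spikes.append(step * dt)
            v = v_reset            # ...and reset
    return spikes

print(simulate_lif(current=20.0))  # steady input -> regular spiking
print(simulate_lif(current=10.0))  # sub-threshold input -> no spikes
```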

Read the poll again. Option #1 specifically says, “Yes, they’re an inevitable consequence of current AI research.” Not some future unspecified research, but CURRENT research.

I already pointed out that #1 specifies CURRENT research. Option #2 is slightly better, since it says “Yes, but it will take an entirely new approach to AI.” Nevertheless, there is simply no way to give such an emphatic answer without at least SOME idea of how it can be done.

The breadth of the field is irrelevant. There is simply nothing in the current research which suggests that we can ever generate artificial sapience. The people who insist that this will happen are stating this based on wishful thinking and/or blind faith rather than empirical evidence.