What if a sentient AI came to faith? (not a witnessing thread)

It’s because “complexity” is another fuzzy word that means a lot of things and ultimately very little. Obfuscated code is absolutely complex code; a 10-line BASIC program is complex to someone who doesn’t know anything about programs. He’s clearly talking about another sort of complexity, but it’s much too vague. As with emergent properties, I want scientific precision, and I want to be able to cut it down into distinct bits to be analyzed independently.

Are there any other sciences where you’d accept unexplainable, holistic, non-reproducible results as evidence?

Incidentally, after da Vinci died, his friends tried to raise his nephew in the same way he had been raised, to replicate the genius. The young boy showed great promise, but was unfortunately taken by the plague.

All of them. If someone observes an effect…even if only once, and it isn’t reproducible…it still happened. It gets logged, we refer back to it – “Remember so-and-so’s anomaly? Have they ever figured that out yet?” – and it stands as a challenge for researchers for however long it takes.

(And, yeah, there will be a controversy over whether it was a real observation, or an observational artifact, or maybe even a deliberate hoax. That’s part of the game too.)

Well, there you have it then. Somebody observed ELIZA to be sentient back in the 60s. Supposedly some patients even preferred her to real humans (or do we need to refer to them as cis-humans, so as not to offend machine kind?). Reprogrammed in JavaScript, it’s just under 400 lines. But we can run it through an obfuscator so it becomes more complex.
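
For what it’s worth, the mechanism really is that shallow: ELIZA scans the input for a keyword pattern and echoes a fragment back inside a canned template. Here’s a minimal sketch of that style of program (in Python rather than the JavaScript port mentioned above; the rules are invented for illustration, not taken from Weizenbaum’s original script):

```python
# Minimal ELIZA-style responder: keyword patterns plus canned templates.
# The rules below are made up for illustration.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def reply(text: str) -> str:
    # First matching rule wins; the captured fragment is echoed back.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("I am worried about my job"))
# -> "How long have you been worried about my job?"
```

A fuller ELIZA also reflects pronouns (“my” becomes “your”) and ranks keywords, but nothing deeper is going on - and running it through an obfuscator would change the readability of the source, not the behaviour.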

That’s not evidence. Just data points. I’ve read a whole bunch of UFO sightings, and women in the USA are apparently being abducted left and right and having strange objects inserted in their various orifices. Can’t say I’m too impressed by the level of evidence, or convinced we have extraterrestrial alien machines blotting out the skies.

Yep. Any AI that starts asking to go to weekly confession is not going to be left alone for long. It will likely be kept around, but the algorithms that birthed it are going to be revised, you can be sure.

Johnny 5 from Short Circuit 2:

No it is not. Complexity and obfuscation/obscurity are not the same thing. You seem to have lumped them into a single category of stuff that makes things hard to understand.

That’s a valid category (both things do make stuff hard to understand), but it’s not useful for this discussion, in particular, because you’re trying to address arguments about complexity with responses about obfuscation.

Complexity is about richness of function; the ability for something to perform a sufficient range of functions and interactions so as to make a wide range of behaviours possible. DNA is complex, which has made it possible for biological systems to do a wide range of different things with it.

DNA could also be described as obfuscated (because it uses its own language and instruction set) - but here’s the point: even if we eventually break down the language barrier and completely understand how DNA works, in every minute detail, it will no longer be obscure, but it will always still be complex.
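
A throwaway example may make the distinction concrete (both functions below are invented for illustration): they do exactly the same thing, so they are equally complex in the functional sense, yet one is obscured. De-obfuscating it would remove the obscurity without adding or removing any capability.

```python
# Two implementations of the same trivial function: identical behaviour,
# identical functional complexity, very different readability.

def average(values):
    """Plain version: the intent is obvious at a glance."""
    return sum(values) / len(values)

def _x(_a):
    """Obfuscated version: harder to read, but it does nothing more."""
    _b, _c = 0, 0
    for _d in _a:
        _b, _c = _b + _d, _c + 1
    return _b / _c

data = [3, 5, 10]
assert average(data) == _x(data)  # same answer: obscurity adds no richness of function
```

Adding genuinely new behaviours - more functions, more kinds of interaction - would increase complexity whether or not the source stays readable.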

The human brain is complex in a way that the neural system of an earthworm is not. It is an entirely reasonable argument that complexity is required for sentience (although of course it is not inevitable that any and every complex toolkit will give rise to it, so please do not think I am asserting that).

We already have computer systems that are modelled (in simplified form) on biological brains, and they are capable of functions that the programmer did not explicitly anticipate or predefine. These systems are hard to understand because their internal state is very complex, not deliberately hidden.

Point is though, they do things that we didn’t pre-set them to do. Banks use neural net algorithms to detect and flag ‘suspicious’ transactions; these systems have learned what a suspicious transaction ‘feels’ like, in a strikingly similar fashion to how a human would learn to get a ‘feel’ for them.
But for any given flagged suspicious transaction, nobody can point to a line of code that says ‘If X>100 then print “suspicious transaction”’. It isn’t like that at all. The judgment is embodied in what can only reasonably be described as the experience of the system.
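
A toy sketch may illustrate that point (the features, data, and threshold here are all invented, and real banking systems are vastly more elaborate): after training, the “rule” exists only as learned weights, not as any explicit if-statement a programmer wrote.

```python
# Toy learned "suspicious transaction" flagger, trained by plain
# stochastic gradient descent on a logistic loss. All data are invented.
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Each example: (amount in $1000s, hours since last transaction,
#                distance from home in 100 km), label (1 = suspicious)
training = [
    ((0.05, 2.0, 0.1), 0),
    ((0.20, 8.0, 0.0), 0),
    ((0.10, 5.0, 0.2), 0),
    ((9.50, 0.1, 7.0), 1),
    ((7.00, 0.2, 5.5), 1),
    ((8.20, 0.3, 6.1), 1),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(2000):
    x, y = random.choice(training)
    pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
    error = pred - y
    weights = [w - rate * error * xi for w, xi in zip(weights, x)]
    bias -= rate * error

def flag(transaction):
    # The decision lives in the learned weights; nowhere in this file is
    # there a hand-written rule like "if amount > 100: suspicious".
    score = sigmoid(sum(w * xi for w, xi in zip(weights, transaction)) + bias)
    return score > 0.5

print(flag((8.80, 0.2, 6.4)))   # very likely True: resembles the flagged examples
print(flag((0.08, 6.0, 0.1)))   # very likely False: resembles the ordinary ones
```

Inspecting the final weights tells you roughly which inputs the model leans on, but the judgment itself was learned from examples rather than typed in.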

This is only one of several different approaches to AI (but I happen to think it’s the one most likely to result in machines that truly have a ‘self’ in the way we believe we do - although we will never know that detail for sure, because it’s philosophically impossible).

But the thrust of all this was this: if a machine acts in ways that appear sentient, that were not explicitly coded, predefined or hidden by the programmer - that is, if it learned to behave that way by itself - that (IMO) contributes a reason to consider that it could be the real deal (or else where did it come from?).

[QUOTE=Rune]
Are there any other sciences where you’d accept unexplainable, holistic, non-reproducible results as evidence?
[/QUOTE]

I’m not sure “evidence” is the right word. Or, y’know, “science”.

Again: you say you can’t define or measure sentience, but you grant that I have it. So I ask: if another entity interacts with you in a significantly similar way – such that you likewise can’t define or measure its sentience, but it does what I do – why wouldn’t you likewise grant that it has the undefinable and immeasurable property?

Now, that’s not my idea of science; I don’t usually attribute an undefinable and immeasurable property to A and B and C and then ask for evidence that X or Y or Z has it too. But if I did, then, yeah, in any such context I’d say “if it’s significantly similar such that I can’t tell the difference, then I will react accordingly”.

So, you’re saying that machines can’t be intelligent because “intelligence” does not exist in the first place. Nothing can be intelligent, not even humans, as it’s a made-up concept, like “soul”, “centrifugal force”, “god”, or “negative temperature”, yes?

And what would you look for? What would you expect to find? What if you examined the functions of the human brain at increasingly fine levels of granularity and ultimately found nothing but neural switches?

Intelligence is ultimately an observational performance-based quality. If someone or some thing meets an a priori test of intelligence, then by that definition it is intelligent. Arguing that it wasn’t achieved in the “right” way according to some ridiculous preconception is a useless waste of time that totally misses the point.

The fact that someone wrote a trivial program in the 60s and someone else, allegedly, was stupid enough and gullible enough to be fooled by it is not an argument against AI. When we have programs that can understand natural language and act on it creatively to discover difficult answers to tricky questions - as with IBM’s Watson, which is now being deployed in commercial applications - then we’re demonstrating at least real semantic and contextual understanding, not something trivial.

Likewise, when we have machines playing grandmaster-level chess - something that cannot be achieved by brute-force search alone, and which is traditionally associated not only with human intelligence but with extraordinary insights that very few humans possess - then again we’ve achieved something noteworthy and distinctly non-trivial.

I don’t really care if someone chooses to continually redefine “intelligence” so that it always exceeds whatever machines have achieved, although it seems like a pointless exercise. But it’s hard to deny that accomplishments in AI are significant, useful, getting more powerful, and increasingly encroaching on knowledge-based activities and occupations.

This is AI of the gaps. You can never make an AI, because someone will always find something that humans can do that AIs can’t. This process will go on until the set of things humans can do that AIs can’t is vanishingly small.

I see no reason why religion will be one of those things. AIs will be capable of self-delusion, just like any other thinking being.

My bet would be one they themselves invented.

You could say that since the AI entity was created by people, it would think like them. But even if true, the deepest worries and needs of the AI entities would be different from those of humans.

I do think that eventually that will be the case, but in the meantime, IMHO, a lot of the effort will be geared toward simulating the human brain and body to accelerate medical science, meaning that a lot of similar thoughts will be pondered by the AI that will be the closest to us.

Other AIs will indeed be way over our heads, simply because they will be dealing with different issues and will not be human-looking at all.

I do think that the OP is more concerned about the type of AI that will be closest to us by design.

I don’t think that will necessarily be the case. After all, we’re only building AIs because we want them to do stuff that we can’t, and if they are to perform these tasks well, they must always be fundamentally different from us. Otherwise we could just as well breed more humans instead. :slight_smile:

For that reason, most AIs will never have emotions, except for those specifically designed for entertainment purposes. Even those will never have to be equipped with “true” emotions; they’ll just have to be capable of passing the Turing test.
I’m not saying that “true” emotions are impossible for a machine to have, but that they really are quite unnecessary, and will therefore not be attempted, except for the sake of experiment and entertainment. We evolved emotions during a phase in the history of evolution that favoured such a development. Even today, we rely on them, because they often yield results faster than logical thinking. Statistically speaking, in most situations, deciding quickly (even if often unwisely) is a more useful ability than deciding wisely but slowly. But computers can think faster than humans, so they don’t need to rely on such an erratic shortcut as emotions. Emotions are an obsolete feature and a serious liability. All the complex and bulky code that would govern emotions would take away resources from more efficient objective decision-making, and would limit the AI’s efficiency by subjecting it to the unnecessary urge to constantly fulfill its emotional needs.
Emotion was the evolutionary precursor to intelligence. Now that intelligence is beginning to emancipate itself, emotions will gradually die out.

This would be a doubly-tricky scenario, since we would have to consider:

  1. Can a non-human be baptized?

  2. How can we tell the difference between sentience and mimicking of sentience?

But I suppose it couldn’t hurt to perform a “conditional baptism”:

“RBL-344006, if thou art able to receive baptism, I baptize thee in the name of the Father, and of the Son, and of the Holy Ghost. Amen.”

And hope the water doesn’t short-circuit him.

Well, kinda sorta.

I mean, yeah, if I could make an AI that’s a better doctor than any human doctor, then yay. But what if I could make an AI that’s only just as good as the most terrific human doctor, except it’s always at its best?

So it doesn’t get tired and sloppy at the end of a long shift, just like it doesn’t take weekends off; it’s simply equal to the world’s best diagnostician, or has as steady a hand as the finest surgeon, because it was programmed to mimic him or her – only it’s always that good and they ain’t, because we started by copying a human intellect and then stripped out a fallibility or two?

So it can field conversations pretty much like its prototype, because of course it can; it’s an artificial intelligence we patterned on a natural one, minus this and that…

But why would you want to make something that’s less than optimal, when you can have something better for the same price? An emotional robot doctor would not always be at its best, because true emotions produce urges and inhibitions, which the robot would have to either indulge or try to suppress, both at the cost of its efficiency.
It would make more sense to equip the robot with only simulated emotions, just enough to make the patients feel comfortable. Alternatively, you could just use human nurses for the conversation part.

Just look at real human doctors. They’re much less prone to be dominated by their emotions than average humans. Indeed, most successful professionals in any field are individuals who can exercise greater than average control over their emotions, with the possible exception of artists. In order for robots to be any better and more useful than humans, they would have to follow that trend. Otherwise there would be no demand for them. Humans, even skilled ones, have never been in such short supply that it would be more economical to replace them by merely average robots.

But can you? I think it could possibly be the case that we can create an AI with emotions more quickly and easily (which also equals more cheaply) than a perfect one without emotions - it depends a lot on whether the thing is designed from the ground up, or modelled on biological intelligence.
If the latter, it may simply not be possible to dissect out the emotions (just like it is pretty nearly impossible to do that with humans, even by surgery).

Anyway, I think you may be throwing the baby out with the bathwater on this emotions thing. Are you absolutely sure that pure, cold reason will always achieve the best result (especially if that result is a service delivered to humans, who do have emotions)?
For example - an AI surgeon trying to resuscitate a patient, if equipped with some degree of genuine empathy, can perhaps appreciate that there are times when you keep trying a bit longer and other times when the best thing is to let go - and so may be a better fit than one that simply stops trying when some clinically calculated threshold is passed.
People have expectations in this (and many other regards) based on a vast array of subjective factors - and they will continue to do so as long as they are really people. I think you have a bit of work to do to establish that a purely logical and emotionless approach is always better.

Fine, but you’ll have to do it the full-immersion way. :smiley:

Haven’t you ever heard of “Silicon Heaven”? Why, that is one of the biggest denominations!

[QUOTE=C3PO, Human Cyborg Relations]
“Thank the maker! This oil bath is going to feel so good!”
[/QUOTE]