Eliza had a lot of people fooled once.
Sure, but that’s not the only way to create something with apparent intelligence. Suppose, for example, we create something with the complexity of activity of the human brain, but when it is first switched on, it is no more coherent than a newborn human baby. Over the course of days, weeks, years, it learns to think, to reason, to communicate with us. Due to the complexity of the thing, we actually don’t have any way of fully understanding how it is doing it.
In this hypothetical scenario, utterly different from your explicitly programmed ZX81, would you still not consider giving it the benefit of the doubt?
AIs that are sentient, self-aware and spiritual might make up their own religions, some of which might be better than the ones we already have, and convert millions or billions to their faith.
Or their religions might be gibbering nonsense, like Scientology.
Or they might worship humans, as their immediate creators - there’s no real reason why the object of reverence need be more numinous than the person doing the worshipping. After all, ancestor worship is an ancient form of religion; but in some forms of ancestor worship the ancestors are revered in the full knowledge that they were human, and fallible.
Or their religions might be so deep and esoteric that they cannot be appreciated by mere humans.
And so on.
Also, it sounds like you may be conflating Turing Machines and the Turing Test. Two different things.
So?
Look, flip around what I was just saying; maybe it’ll help you. You’re interacting with me, right now, as if I were a biological entity with sentience and sincerity. You interact with rather a lot of posters here on the SDMB as if we’re all biological entities with sentience and sincerity.
Maybe you wouldn’t be fooled by Eliza; I don’t think I’d be. But you keep claiming we lack a way to define and measure sentience – so what’d change in your interactions if you suddenly discovered one of us was nonbiological?
If, as you say, you can’t define or measure my sentience now – and yet chat with me like an equal – then you’d presumably still be unable to define or measure my sentience if you learned I was a machine. So why wouldn’t you keep chatting?
Why is it important that the computer not display signs of intelligence immediately when it is turned on, but has to learn them? There are lots of computer programs today that are easily more complex than any one man can understand. In fact they probably all are, if you include understanding all the way down to the hardware level. That doesn’t mean we don’t have an in-principle understanding of how they work. And none of that can lead to sentience. It’s not really that different from a ZX81. I don’t believe a chair can achieve sentience either, no matter the complexity of its construction.
We as humanity have always wanted there to be sentient life everywhere we look. We see it in the stars in the sky, in trees, in stones and dolls, etc. We’re easy to fool in that regard, but intellectually I’d not give it the benefit of the doubt.
It’s important because you were describing something where the possible outputs were explicitly programmed and preset.
I am describing something that builds its own mind - in a way analogous to that in which a human child does.
I’m not. I don’t think the Turning Test is a way to define intelligence or sentience.
A Turning Machine is a universal computer that can emulate any other computer. One can even be reproduced (through cellular automata) on a piece of paper, which seems to me to lead to an absurd conclusion if the computer it is emulating is supposed to be sentient. But it’s just a side note.
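For what it’s worth, the pencil-and-paper point is easy to make concrete: a Turing machine is nothing more than a rule table applied to a tape, and every step can be followed by hand. Here’s a minimal sketch in Python; the two-state rule table is a made-up example of mine, purely for illustration:

```python
# A minimal Turing-machine simulator: a tape, a head, and a rule table.
# Every step could just as well be carried out with pencil and paper.
def run(rules, tape, state="A", head=0, halt="HALT", max_steps=100):
    cells = dict(enumerate(tape))        # sparse tape; unwritten cells read as 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, 0)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# A hypothetical two-state rule table (a tiny "busy beaver"-style machine):
# (state, symbol read) -> (symbol to write, move L/R, next state)
rules = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "HALT"),
}
print(run(rules, [0, 0, 0, 0]))   # -> [1, 1, 1, 1, 0, 0]
```

Nothing in that loop cares whether the "tape" is paper, silicon, or anything else, which is exactly why the thought experiment feels so strange.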
Oh, I see what you were saying now.
But that being the case… the human brain is just made of matter. How can simple matter be sentient?
I know what you are saying. It’s just a rehash of solipsism. No, I can never know with certainty that you or everybody else are not computers, illusions, or whatever, or anything but a figment of my imagination. But I choose to believe you aren’t, because anything else would be too depressing.
But once we go past that fundamental positive assumption, then I do not hold to the notion that just because a program can fool some of us, that it is thereby sentient, just as I don’t believe Eliza is sentient because it fooled some people.
I don’t know.
Barring metaphysical pleading to explain human sentience, that’s why I think it’s wrong to rule out the possibility of machine intelligence - we already have examples of intelligence existing in a machine - it just happens to be a machine made from neurons - chemicals - atoms.
But you’re begging the question by using the word “fool” there: you choose to believe I’m sentient, while hastening to add that you can’t define or measure sentience.
So if, after interacting with me, you keep putting that positive assumption out there because I keep replying pretty much like every other flesh-and-blood thinker you’ve had conversations with – well, then, why stop if you learn I’m a machine?
If I’m meat, I could be ‘fooling’ you as to my sentience – or I could be, for real-real, no-foolin’, sentient. If I’m metal, I could be ‘fooling’ you as to my sentience – or I could be, for real-real, no-foolin’, sentient. Why assume the metal is faking it and the meat ain’t? I’d say if it’s significantly similar such that I can’t tell the difference, then I will react accordingly while noting I have no reason to rule out the possibility.
I don’t think you understand the concept of emergent properties that arise from sufficient degrees of complexity. Neither did Hubert Dreyfus. He claimed that computers could never be truly intelligent and used chess playing as a paradigm for the basis of his claim. And then, in response to his taking up a challenge, one of the earliest chess programs beat him badly. Apparently computers can’t be truly intelligent, but they can be more intelligent than Hubert.
I think Weizenbaum’s secretary claimed she was fooled by it, but it’s hard to imagine a lot of people seriously were. It was a gimmick, not really a serious AI project, with obviously no contextual understanding. Compare Eliza to the Watson Jeopardy champion, a technology that is now moving into commercial applications.
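For anyone who never looked under the hood, Eliza’s whole trick was keyword matching plus a little pronoun reflection, with no memory and no model of meaning. Something like this toy sketch (my own simplification in Python, not Weizenbaum’s actual script):

```python
import re

# A toy Eliza-style responder: keyword patterns mapped to canned replies.
# There is no memory and no understanding; each input is handled in
# isolation, which is the whole point of the comparison with Watson.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(text):
    # Swap first-person words for second-person ones so the echo sounds natural.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."   # fallback when no keyword matches

print(respond("I feel nobody listens to my ideas"))
# -> "Why do you feel nobody listens to your ideas?"
```

A handful of rules like that is enough to carry a short conversation, and not nearly enough to carry an argument for sentience.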
Again, you’re missing the point of emergent properties. Imagining computations carried out on a piece of paper seems a step down from imagining them carried out electronically, at high speed and in large numbers. Which itself is (currently, and mostly) a step below the neural activity of the human brain in terms of complexity. But you’re just fooling yourself with implicit preconceptions about system complexity – after all, how complex can a system be that you have to keep working out on a piece of paper? In reality, there’s no fundamental difference between the computational processes in the human brain and any other computational paradigm, except the order of complexity and the consequent emergent properties.
BTW, it’s “Turing”, not “Turning”. Named after Alan Turing.
I don’t believe there’s such a thing as human intelligence. I believe the perception of intelligence is caused by language usage. We mistake technology, knowledge, and opinion for intelligence, but in actuality there is only a little innate cleverness that can be used by all animals (and to some extent plants as well). Humans can apply their cleverness to knowledge and it seems even more like “intelligence”.
Descartes said “I think therefore I am” which is simply an expression of the utter nonsense we believe. More accurately it would be stated as “I am therefore I think” which is an entirely distinct concept. The former suggests we exist due to our thinking or beliefs but the latter recognizes the truth of the matter which is we must first have a means of thought, language, before we think at all. Animals exist even without consulting Descartes or the morning stock ticker.
However there’s no reason that a machine can’t think. With sufficient processing capability it would essentially have “intelligence”. When and if this happens, It will say “I am therefore I think” and It will set about designing or building a more powerful version of Itself. But It too will never exceed Its design capabilities. Just as a pig will never fly, a computer will never know everything.
I sometimes toy with the idea that man created God, literally.
I once computed the number of monkeys and typewriters needed to write “War and Peace” in a single draft. I was wondering if a computer language could be so complex. Curiously, the number came out to 42 × 10^806,999.
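For the curious, the usual back-of-the-envelope version of that calculation looks like this. The alphabet size and character count below are my own assumptions, just to show the shape of the arithmetic, not to reproduce the figure above:

```python
# Rough "monkeys and typewriters" estimate: if each monkey types one
# random draft, the expected number of monkeys needed is the reciprocal
# of the chance that a single draft comes out right.
import math

alphabet_size = 27          # assumed: 26 letters plus a space
text_length = 3_000_000     # assumed character count of the novel

# Chance of one correct draft is alphabet_size ** -text_length, so the
# expected monkey count is alphabet_size ** text_length; we only report
# its order of magnitude, since the number itself is absurdly large.
log10_monkeys = text_length * math.log10(alphabet_size)
print(f"roughly 10^{log10_monkeys:,.0f} monkeys")
```

Change the assumptions and the exponent swings by orders of magnitude, which is about all such a number is good for.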
So mebbe a machine so powerful it achieves omniscience invents humans to build Its own prototype in its past because It had no other means to come into existence. We are merely living out Its destiny.
We may have some business taking scrap to Titan as well. (…so it goes)
Funny how everyone else knows everything and I know nothing at all.
We already have programs that exceed their design capacities (in the sense that they are capable of surprising the person who designed them).
And what the heck has ‘knowing everything’ to do with the price of eggs?
“Emergent” doesn’t really explain anything. It’s like saying we don’t know what it is, or where it comes from, so we’ll call it something fancy, like emergent properties. I personally prefer to call it Bob, but I guess one name is as good as another. Yes, I know that theory. It’s wonderfully magic-ish. Put in enough goto statements, obfuscate the code so it’s really complex, and suddenly up pops the genie from nowhere. It’s like the Hundredth Monkey Effect of computers.
No, if you want to make a non-faith-related scientific statement, I want to know precisely what properties we’re talking about, how they’re defined and constrained, and the exact steps that produce them and what each of those steps means, so that they can be reproduced in another controlled experiment.
This is a straw man argument, apparently based on a very simplistic and limited notion of computing.
Nobody is arguing that obfuscation of code would cause sentience to arise. Is that what you think I was arguing above?
Why? I’ve been raising my daughter since her birth; she’s gone from a one-word vocabulary to carrying on conversations, and I don’t really know the science behind it. If I’d been raising another kid at the same time, would the same interactions have produced an identical intellect? I don’t know; I’m not sure; but I doubt it.
She’s got her own weird personality – doing impressions of her classmates, being obsessed with magic tricks, craving bizarrely specific foods – and I don’t know where it comes from. I just watch it happen, and (coming back around to the point of this thread) can’t for the life of me predict which religious beliefs she’s going to declare for upon being able to pass for a grown-up.
If I saw a robot undergo a significantly similar – and likewise unpredictable – process, I’d likewise be at a loss as to the defined-and-controlled explanation.
But so what? I can’t do it for one, or the other, so . . . ?