:smack: Whoops, I’m embarrassed. (I did think “overwhelming misguided agreement” was a bit of an odd thing to say…)
I’m not sure about this analogue-vs-digital business. Neurons are somewhat digital in nature – in response to whatever the conditions are, a given neuron either fires or doesn’t fire. There’s no in-between state that I’m aware of … is there?
I’ve basically stayed out of the philosophy and stuck to the math, but all this red herring analog vs. digital discussion makes me want to say:
Meh. As far as I’m concerned, quoting Dijkstra, “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
When we get to the point where computers act in a manner externally indistinguishable from those beings which are archetypes of “intelligence”, then linguistic norms will settle one way or another as to whether the term gets applied to computers as well (in most contexts or in some contexts or whatever). But it’s not as though it’s some concrete external truth out there for us to discover. It’s just a question of how people employ their language.
Indistinguishable? Dijkstra was showing no imagination. If a program passes the Turing Test, we have to decide whether we have moral obligations towards it. You may dismiss this too, but many will not. If it is decided in the affirmative, then a situation could arise in which a human is legally punished for abusing a specimen. If it is actually self-aware, then that could be just. If it is not, then a wrong will have been done.
Hahaha, man. There really is truth to what you were saying.
Actually, there’s no reason to bring Cantor’s sets into this. 2^x (where x is infinite) is not the only quantity that is > x. In fact (and this is where I expect you will disagree), the correct way to formulate infinite numbers is that x+1 is also > x, as are 2*x and x^2.
The proofs that claim things like x^2 = x (such as the quantity of integers vs the quantity of integer tuples) are all based on a confusion between a true “infinite number” and the concept of a “boundless” one. It used to be that the difference between the two was understood, but some prominent early mathematicians overstepped themselves in their excitement to claim the two the same. (Back then, a common argument was that infinities do not exist at all, only boundlessnesses. The mathematicians wanted to proclaim that infinities exist, which they do. But they went too far and called everything infinite.) Unfortunately, they did not realize the many subtle discrepancies that equating the infinite and the boundless causes. The proofs that x^2 = x are not invalid, but they deal with the case when x is “the boundless concept,” not “an infinite number.” The quantity of integers is boundless; it is not infinite. Boundlessnesses can be multiplied by two and you’ll of course still get something boundless. You can square it, you can add 1, etc., and you’ll still get back to “boundless.” It is not a number. It is a concept. But any actual number, any real infinity, behaves ordinarily. An inf^2 algorithm, not just a 2^inf one, will also solve the halting problem for inf inputs.
Do you think this is different from the operation of the human brain? Do you think the brain is not deterministic?
The brain is a computer: it takes external and internal input and produces output. It’s a mathematical function. Same as digital computers, although they employ different mechanisms to map input to output. Discussions of morality and subjective experience are interesting at one level of abstraction, but they don’t change the underlying fact that it’s just a machine mapping input to output.
There is more and more evidence that conscious awareness is the “explanation” of what the brain just determined for the benefit of the inwardly pointed observer. The real action appears to be happening before we are aware of it.
Phew… wipes sweat from brow
I agree (barring the Generalized Continuum Hypothesis, which I am not inclined to accept for typical contexts), but I chose 2^x as it would be the number of entries in a database of hardwired answers for every program of size x bits.
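To spell out where that 2^x comes from (nothing deep, just counting): a program of size x bits is one of the possible bit strings of length x, and a table with one hardwired answer per possible program needs one entry for each of them.

```latex
% Number of possible programs of size x bits, and hence of table entries:
% each of the x bit positions can independently be 0 or 1.
\[
  \#\{\text{bit strings of length } x\} = |\{0,1\}^x| = 2^x .
\]
```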
I do not necessarily disagree. Which is to say, there are many different systems we might talk about where it is useful to speak of “infinite numbers”. In typical systems of cardinality, we have that some infinite quantities satisfy x+1 = x. But in systems like the hyperreal numbers, all the usual arithmetic properties continue to hold of the infinite numbers, including x+1 > x and so on. There is no one-size-fits-all notion of “infinite numbers”, or even of “numbers”. There are different notions for different purposes.
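To make the contrast concrete, here is a rough side-by-side of how the two systems treat an infinite quantity (standard facts, stated without proof):

```latex
% Cardinal arithmetic (sizes of sets), with \aleph_0 the size of the integers:
\[
  \aleph_0 + 1 = \aleph_0, \qquad
  2 \cdot \aleph_0 = \aleph_0, \qquad
  \aleph_0^{\,2} = \aleph_0, \qquad
  \text{yet } 2^{\aleph_0} > \aleph_0 .
\]
% Hyperreal arithmetic, with H any infinite hyperreal: the usual order
% properties survive.
\[
  H + 1 > H, \qquad 2H > H, \qquad H^2 > H .
\]
```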
I might not be as prejudiced against your views as you suspect. But if you would like to help bring me to a sympathetic understanding, I think you will need to put some effort into explaining the careful details of your views more accessibly.
It may or may not be just a question of how people employ their language. But even if it is, those kinds of questions can be very important–even matters of life or death in certain cases. It’s a question of how linguistic norms will settle this thing one way or the other. Will there be violence and rioting in the streets, or rational discussion and negotiation, or decision by fiat from the powers that be, or what?
We can’t just take a faithful attitude toward the working out that linguistic norms are wont to do–because the working out of linguistic norms is very much something we do. It doesn’t happen independently of us. We participate in that working out. And it seems best to me that we take responsibility for that.
I know that seemed a bit melodramatic, and I have no adequate way to answer for that. About some things, I’m melodramatic.
Without assuming there’s a metaphysical fact about sentience that settles how the language ought to be used, I can characterize the problem as follows. I myself tend toward the more “liberal” view that if it seems sentient, and contributes in the same way sentience does, then we ought to treat it as sentient if only to be careful that we don’t accidentally kill a fully human-like self-consciousness when we delete its program. And I can adduce non-metaphysical arguments for this position taken from the liberal tradition, concerning the value of diverse ways of life coexisting together and so on. It is useful for me to treat my fellow human beings not as things that ought to be used. If a computer program can be like that in relation to me as well, then there you go–I ought to treat it as though it is not a thing to be used at my whim.
Then there could be the more “conservative” view that since it’s not really like us at all, it shouldn’t be regarded as one of us, etc etc. (“Conservative” is really not a fair word to use here, sorry Conservatives!)
So that’s kind of a “political” argument that avoids metaphysicalizing the issue by asking if the thing is “really” thinking. But even this “political” argument at least (by being offered) acknowledges that the issue is an interesting and important one, one that it’s worth arguing about. If I don’t offer it, then for all I know, “linguistic norms” will “end up” working things out such that sentient programs are actually only quasi-sentient and should be deleted at our whim. That would be bad.
Anyway, that said, I’m not convinced there’s not a job for metaphysics to do here, if only in order to explain why even a radically different (physically) sentience is still like us, indeed metaphysically, in all the right ways.
-FrL-
I completely agree with this point. To point out a recent fictional example among many: in Robert J. Sawyer’s book Mindscan, a variant of Searle’s argument is used in court to claim that humans whose minds have been transferred into “mindscan” robots are merely zombies who act like conscious humans, and the mindscans are stripped of their rights to control their property and whatnot. Sure seems wrong to me – if an entity acts like it is conscious, we should treat it as such, I say, unless it can somehow be shown that it really isn’t conscious and experiences no qualia or emotions or whatever.
Frylock: I guess I can’t disagree with any of that. I mean, we could end up with our moral practices not tightly correlated to our linguistic practices in this area, in the sense that we might end up calling computers “intelligent” but still not feel any qualms about damaging them, or refrain from calling computers “intelligent” but still have taboos on damaging them. But I suppose it’s likely that the two will generally end up correlated to some significant degree.
And I agree that discussions like these are part of how linguistic norms get settled. But, I guess, the point to me is that, if the only substantive reason we have for being interested in the question of whether to call computers “intelligent” is its indirect coupling to our interest in the values of various moral stances regarding computers, then we are liable to confuse our discussion by focusing too much on red herrings. If what we’re really interested in is mores, then let’s discuss mores directly, and not worry about such illusory debates as whether an analog carrier is requisite for proper application of the adjective “intelligent”; that is, let’s not worry about them except insofar as we have some independent reason to suspect, or want to say, that whether an entity is analog or digital has any moral relevance.
What if, to simulate choice, it makes some decisions based on an RNG seeded by the time of day, the exact temperature, etc.? You could then reproduce its behavior in one very specific circumstance, but not in all of them.
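A toy sketch of the kind of thing I mean (Python, with the function name and the temperature input invented purely for illustration):

```python
import random
import time

def decide(options, time_of_day, temperature_reading):
    # Seed the RNG from the time of day and an "exact temperature"
    # reading, as described above.  Same readings, same choice.
    seed = hash((time_of_day, temperature_reading))
    rng = random.Random(seed)
    return rng.choice(options)

# With both readings pinned down you can replay this one decision
# exactly; without knowing them, the behavior looks like free choice.
print(decide(["comply", "refuse"], time_of_day=time.time(),
             temperature_reading=21.4))
```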
The question is, how can we tell if someone else has an interior consciousness? Only by applying the test, which we do all the time without thinking about it. I still don’t see how you can tell that a computer does not have consciousness, besides being a priori convinced it does not. Do you think our minds are anything other than a type of process running on our brains? If you do, what is not physical about them?
The “spirit” of this simulation would run wonderfully slowly - but so would the pencil and paper simulation of the computer. I don’t quite get where the coffee enters here. If we simulate a brain, we’d also have to simulate the inputs to the brain - the feeling of hunger and thirst, the smell of coffee, the sight of the coffee. The mapped neurons would already know how to drink, and whether it wants cream or sugar. And where is our self-awareness located? Do you think it is in the brain, or in the soul? I don’t believe in God, so no, I don’t think God has anything to do with it. My kids didn’t seem to be self aware at birth, but grew into it. It seems to be a function of the increasing complexity of our neuronic connections. Is there any evidence for another cause?
To me, maybe. But life existed on this planet for nearly a billion years without consciousness. I’m not sure we have any more intrinsic meaning than that.
A friend of mine, a famous analog expert, used to laugh at us digital guys and say that when you get fast enough everything is analog. Given Planck time everything is digital at the quantum level. So we may be living in a digital universe after all, like Ed Fredkin said. In any case, connections in our brain have multiple but not infinite levels, so even they aren’t totally analog. The bottom line is I fail to see the real basic difference between a big enough computer, with enough parallelism, and the brain.
I’ve got two dogs, one of whom is a genius with the ability to plan ahead to trick the other out of a bone. But as far as I can tell, neither is aware. Any evidence a dog is? A chimp is another story - I’m of two minds about whether they are aware, even mistily aware.
When you dream, can you direct the dream? I can’t, which I take as evidence that our subconscious is not self-aware. It gives me answers, but never actually communicates, which is more evidence.
It all boils down to where you stand on the mind-body problem. If there is a soul, then AIs can never be truly intelligent. If not, I see no reason why they can’t be.
(Just to head it off, to say “Well, whether an entity is analog or digital might well have moral relevance because of course whether an entity is intelligent or not has moral relevance, while, also, it is plausibly reasonable to limit our application of the term ‘intelligence’ to analog phenomena” seems to have a tail wagging a dog somewhere. It equivocates on the fact that the “of course” half is a statement about one sense/use of the word “intelligent”, while the “also” half is a note about a narrowing we could choose to undertake, for reasons quite possibly detached from that first sense/use, in carving out another sense/use of the word “intelligent”. That is to say, while there are moods in which we might say the first half, and there are moods in which we might say the second half, we don’t necessarily really mean the same thing by “intelligent” in both of these. Yet, by framing the discussion this way, we risk losing sight of such possible distinctions and context-dependencies. That is the sort of danger I think we run into by spending too much time on questions like “What is the nature of Intelligence?”, to the detriment of many of the applications for which we might have thought these questions to actually be useful.)
[It’s like spending a lot of time arguing over whether chess is a “sport”. Well, there are times when it’s reasonable to speak that way, and times when it’s not, and situations where no line has been drawn, but you can draw one if you like, but maybe it’s questionable as to what that really gains you?, and … . Trying to sit down and figure out the essence of “sport”, once and for all, is fruitless. For most applications for which it may be thought that knowing whether chess is a sport might be useful, it is far more productive to forget about drawing unjustifiably sharp boundaries around the word “sport” (whether narrow or broad) and simply tackle the underlying question directly.]
Descartes believed in phantasms. He thought the mind was a machine, just balls bouncing around, but on top of that existed something else that did the feeling, the observing. The phantasms did what soulless particles could not possibly do.
His idea had a fatal flaw. The phantasms weren’t meant to, couldn’t, affect the balls. They were passive observers, immaterial spirits. If the phantasms (or the observesence, as I called it, or the soul) were the only thing capable of feeling, what did the talking? The balls and gears talked about feeling, but the phantasms were the ones actually doing it? What a disconnect!
A machine that goes through the motions actually feels as well. It must be so. (Although you could argue about how many motions it’d have to go through, and that some trivial ones, like throwing out sentences by way of a lookup table, wouldn’t suffice.)