I confess to being a little lost here. You quoted my response to post #18 which stated…
I’m not sure why you took issue with it - especially since it conveyed little more than a forecast of my gut reaction to the situation being described.
PC
I’m not going there. That’s why I was trying to leave the “soul” out of it to explore the mental aspects of humanity.
A machine, I thought, was a good mechanism, since I think the vast majority of spiritual believers would agree a machine has no soul.
The OP has since clarified that he did not mean, in post 18, to say that the machine is sentient; rather, he meant that the machine itself would be saying “of course I’m sentient.”
But on re-reading the thread, it’s become unclear to me whether the OP is asking, in general, about sentience or about soul-possession, whether s/he’s sometimes confusing the two, or what.
Right now I’m revising my reading of the thread as follows. I think maybe the OP’s trying to ask “What if there were a sentient thing that had no soul? Should we still give it rights etc?”
But I’m not sure about that reading, because in some places in the thread the OP has said s/he doesn’t mean to build it into the scenario that the machine is sentient, just that it acts as though it were sentient.
In either case, my answer to the OP is in my first post in this thread–I advocate the “benefit of the doubt” approach.
I took issue with a point in your post because I thought you were presupposing an answer to the question of sentience which the OP didn’t intend to have presupposed. But now I’m confused about what the OP intended, so never mind.
-Kris
-FrL-
I think post #39 makes my underlying motives very clear (although I thought the first post already contained these thoughts).
In other words, what behaviors make us human? Are those behaviors enough to define humanity? If you do not have those behaviors (as in the brain damaged case) are you any less human?
The machine was a way for exploration without resorting to aliens or similar biological beings.
To be honest, Frylock, I didn’t read your first post. The “human” terminology struck me as an issue that had already been addressed. I never made it as far as your clarifying note at the end. Now that I have read it, I don’t see much difference in our positions.

PC
I quoted the bit I think is the false dilemma, and that was all I was addressing - you asked Snarky_Kong an either-or question, when the answer could just as well be “both” - it requires a combination of the machinery and the behaviour for an entity-type to be considered truly human. Each is necessary, but not sufficient, as it were.
I understand now. Thank you for explaining.
Is this your opinion, then, that the meat part is important, too?
If so, then what is the meat doing that the silicon cannot?
I miss SentientMeat.
The more I think about it, the more it’s clear that what matters is the ‘spark’ of intelligent self-awareness that each of us has, and which each of us assumes that all other people have. We make this assumption through observations of complex behavior in people and ourselves, and have learned to assume that all living humans have this ‘spark’, even when they aren’t acting in intelligent self-aware ways, such as when they are sleeping.
Machines, on the other hand, are not perceived as having this ‘spark’. Even as the behaviors and simulations that computers and other machines are able to exhibit increase, we are able to watch this incremental increase, and recognize that at no given technological increment has this magical, unexplained ‘spark’ been added. So, we are naturally reluctant to assume that even the most erudite robot would be more self-aware than the average toaster.
I think that the reason people will be resistant to call machines people, equal in every way but squishiness to humans, is that they won’t perceive the machines as having this ‘spark’ of intelligent self-awareness. As noted above, we assume that all living humans do, pretty much regardless of their particular behavior or state; we assume machines don’t. I don’t think most people take the identification process any further than that, which is why we think that even brain-dead vegetative people in perpetual comas still have this ‘spark’.
Machine intelligences would have a very hard time convincing most people that they were ‘alive’, ‘people’, or anything else of that sort. Perhaps a generation that had grown up around them would have a better chance; and if we ever definitively nailed down how the physical processes in the human brain create cognition, that would probably help too (particularly if the first robot intelligences were based on them).
Hmmm. Depends on how it is implemented, I suppose. I don’t see how a quantum computer could compute anything that isn’t Turing computable. And I haven’t thought about non-deterministic Turing machines in decades.
I’m involved in reliability, and quantum computers give me the willies. Happily I will be long retired when they start getting used (if ever) so I can attend conferences and tsk tsk.
I don’t have one. The reason I avoid free will debates is that they depend on what I see as a somewhat arbitrary definition of free will. Practically speaking, they are equivalent.
Determined here means determinable post hoc, with a full record of the event.
How much does quantum uncertainty, or the randomness of molecular motion, propagate to the macro world, making it impossible to predict the exact course of a paper airplane?
If everything in our world is deterministic, how could there be free will except by assuming some “spiritual” force outside our world? And why would that be any freer than we are? However, if our actions have a random component, then they are at least not predictable. Is that free will? That gets back to my question. It is functionally indistinguishable from true free will, in being unpredictable.
That means every if statement has free will. That’s not a very good definition.
I’d say free will implies intelligence. Otherwise dice have free will. Given that, would a purely deterministic computer that passed the Turing test have free will? How about one which makes some decisions influenced by the ten-thousandths place of the air temperature - a value well beyond the accuracy of the thermometer and therefore effectively random?
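To make that concrete, here’s a minimal Python sketch (the function names and the 21.3-degree baseline are made up for illustration): a fully deterministic program whose choice nonetheless turns on a digit the thermometer can’t actually resolve.

    import random

    def read_thermometer():
        # Pretend instrument: accurate to roughly 0.1 degrees, so the
        # ten-thousandths digit of the reading is effectively noise.
        return 21.3 + random.uniform(-0.05, 0.05)

    def decide(options):
        # Deterministic logic whose branch hinges on a digit that lies
        # beyond the thermometer's accuracy - i.e., on randomness.
        reading = read_thermometer()
        digit = int(reading * 10000) % 10   # the ten-thousandths place
        return options[digit % len(options)]

    print(decide(["act", "refrain"]))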
If not, do you believe that any computer can exhibit free will? Do we? If unpredictable things do not lead to free will, I don’t see how we get to it without a spiritual component - and I think we share the same view of that idea.
I agree with that. The snails I step on are alive, but not very intelligent. So far, intelligence implies life but life does not imply intelligence.
But we’re allowed to pull the plug on the truly brain dead - in fact organs can be harvested from a brain dead person with a still beating heart. We do want to make absolutely sure before we do that. The fact that we treat humans with respect doesn’t negate the point. Some people have funerals for their pets, but that doesn’t make them intelligent.
She was intelligent at one point - your cat never has been. And, like I said, we want to be extra sure. But in the end, she was terminated the way a sick cat would be.
The big difference is self awareness, which would be a part of any reasonable Turing test. My very smart dog doesn’t have that - a developmentally delayed person does.
Definition 1: impossible to predict, even with godly knowledge.
Definition 2: self-determined.
Definition 2a: self-determined, mostly deterministic but with random elements.
Definition 2b: self-determined, and fully deterministic.
Definition 3: completely undetermined, even by your inner state.
(There may be other definitions I don’t know about; I’m no expert.)
The first definition exists to excuse a god from creating bad people and setting them loose on the world. The second definition is more aligned with the common use of the term; its ‘2b’ variant is occasionally used to bypass the omniscience/free will paradox. The third seems silly and clearly doesn’t apply to humans, but I’ve heard people argue for it anyway.
I don’t see how these definitions are ‘equivalent, practically speaking’. 1 and 2b are contradictory, even. (And 3 is of course nuts.)
Isn’t…everything determinable post hoc? “Which way did the pachinko ball bounce off that peg?” “It says here in the full record it went left!”
Typically I’ve heard the question being whether the universal state at time T+delta is fully calculable based only on complete knowledge of prior times, T and earlier. That is, whether randomity exists or not.
Assuming your butterfly effects are happening in the right place, any randomity is theoretically enough to make it impossible to predict the course of the airplane. Which may or may not matter to whether you have free will, depending on your definition.
Depends on how you define “true free will”, I suppose. I like definition 2 myself; as noted, it seems to me to most closely match the common use of the word. And definition 2 works whether or not we have randomity - either way we have free will. The only functional difference is in whether a theorized God is not omniscient, or whether he’s ultimately responsible for all evil (and one of those must be true).
I’m placing this part first, since my answer to it is important for the definitions.
For the pachinko ball, when you rewind the tape you can tell which way the ball is going to go before it bounces. Closer to humanity, don’t we try to determine what made criminals or political leaders or historical figures do things? On Thursday we read reports of why the market reacted the way it did Wednesday, but we never can figure this out on Tuesday.
Similarly, since we seem to decide to do things before we are aware that we have, if we could observe and interpret the state of the neurons in a brain, we should be able to predict a person’s actions. But can we predict the state that caused the subconscious action? If the state arrives through some degree of randomness that is inherent in the brain, what mechanism either predicts this randomness (no free will) or creates the impulse to action in spite of it (free will)?
Now, for your definitions.
I don’t think this says anything about free will. If god does not play dice - and so can’t guess the results of a throw - an action might still be determined, deterministically, by the outcome of a roll. It can’t be predicted, but it still isn’t free. Yes, it can be predicted at the last second, after the roll, but I can also predict that the pissed-off guy is going to hit me, and that doesn’t say anything about whether his impending action is freely determined or not.
We can reject 3, since we observe that no human is totally free to do anything.
Let’s consider a non-free-will case, where actions might be determined post hoc, including from the results of random events, but with inputs from the external world. You could, post hoc, explain every action as a function of these. It is self-determined in the sense that no entity is forcing anything on you, but there is no self, as an ego, to determine anything. How do you distinguish between the case that our actions are caused by a purely mechanical process, with randomness, and the case that they are caused by a homunculus sitting in our brain and willing things? When you say self-determined, you are assuming a self, but we’re wondering whether this self is free or not. In either case, we cannot determine if an action was caused by an autonomous self or soul, or by non-autonomous chemical and physical action.
If we found some sort of non-physical driver, then we could say that this is not controlled by physical action, and is the self in self-determined. Such a thing doesn’t appear to exist. But I’m not claiming that no self exists - just that it is undecidable.
As I said, that’s my brother; I personally don’t believe that free will exists at all. Not for a human, a computer, or a god if there are any.
That’s a good catch, as was Voyager’s pointing out my misuse of the term “Turing machine”. I was gonna let the latter slide without mention, as the whole thread is bound for semantic wrangling; one more imprecision would make nary a difference.
Of course you’re right to bring up the halting problem. But I might take issue with it…at the very least, with the way you’ve described it. In principle, no machine exists (or can exist) that is able to determine, for an arbitrary program, whether it will complete. A particular program may well not be subject to undecidability. Not that I expect it would make a difference for a machine of the caliber required for this thread.
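For anyone who wants the undecidability argument spelled out, here’s the standard diagonalization sketched in Python; halts() is a stand-in for the impossible general oracle, stubbed only so the sketch parses:

    def halts(program, arg):
        # Hypothetical general oracle: returns True iff program(arg)
        # eventually terminates. No correct implementation can exist,
        # which is exactly what the sketch below shows.
        raise NotImplementedError

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on itself.
        if halts(program, program):
            while True:     # oracle said "halts" - so loop forever
                pass
        else:
            return          # oracle said "loops" - so halt at once

    # Consider paradox(paradox): whichever answer halts() gives is wrong,
    # so no general oracle exists. Note the impossibility is only for
    # arbitrary programs; a particular program may still be analyzable
    # (e.g., "while True: pass" obviously never halts).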
That said, it’s still fascinating to me to consider levels of introspection. In addition, machines have certain properties that may allow clever (or even not so clever) hacks to get around such issues. For instance, duplication of a program (as is done at the calculation level in fault tolerant systems and combined with a voting scheme) may provide a mechanical analogue to Dennett’s Popperian creatures that allow an image of themselves to “die in their stead.” Or, at the very least, interrupt programs that run long enough to be considered undecidable (i.e., “dead”). A favorite paper of mine (not “mine” in that I wrote it) that explores related issues can be found here.
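As a toy illustration of that “die in their stead” idea, here’s a Python sketch using the standard multiprocessing module (the voting half of a real fault-tolerant scheme is omitted, and the timeout threshold is arbitrary):

    import multiprocessing

    def suspect_computation():
        # Stands in for a program we can't prove will ever halt.
        while True:
            pass

    def run_with_watchdog(fn, timeout=0.5):
        # Run fn in a separate process - an expendable "image" of the
        # computation - and abandon it if it overruns its time budget,
        # rather than letting the main program hang forever.
        proc = multiprocessing.Process(target=fn)
        proc.start()
        proc.join(timeout)
        if proc.is_alive():
            proc.terminate()
            proc.join()
            return "abandoned (treated as non-halting)"
        return "completed"

    if __name__ == "__main__":
        print(run_with_watchdog(suspect_computation))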
But none of the above really contradicts your point. So I’ll express my gratitude to you for bringing it up, concede that I overstated my case, and leave this (probable hijack) be.
Being meat. We are not just our brains - we are our brains in our flesh bodies, and also our unique cocktails of hormones and the like. Being human is a process within a context, not an object.
The Halting Problem is a red herring and I say meh. For example, humans are subject to the Halting Problem too: if you think of humans as capable of carrying out deterministic procedures clearly specified by instructions in your language of choice (let’s say, some kind of English), the same proof goes through: no English instructions describe a procedure by which a human can determine, of any given input English instructions, whether they describe a human-procedure which eventually terminates.
Like I said, meh. It’s just a little parlor trick (well, not just, but as far as its relevance here goes).
Seems like a purely semantic argument to me. There are a bunch of different senses of the word “humanity”. Maybe a silicon life form could duplicate the behavioral aspects, but you’d have to ignore the definitions that include our biological specificity. The silicon lifeform is what it is. If you want to put a label on it, it’s probably more useful to use a more precise term like “sapient” or “sentient”. I don’t think a Turing robot would count, though - those are just programmed to have similar responses to humans, but they don’t have the same self-recognition or experience as humans or human-level artificial intelligences.