Then we’re all just machines? You, me, my smartphone?
At that level, yes, we’re all just machines. We’re just machines made of meat (as are the hosts, mostly).
iPhones don’t fall in love, like the Ghost Nation leader did.
Logic 101: “iPhones are machines. Some machines are sentient. Therefore iPhones are sentient” is terrible logic.
This ridiculous “then all machines must be sentient” strawman argument didn’t get any traction in the Season 3 thread, and I’m not going to rehash it here.
So I should treat my phone like a human? Does it have a right to freedom? To own property? Vote?
Is your argument that humans aren’t just meat machines? Because if you’re coming at this from some position of mysticism, I don’t really want to waste any more time discussing it with you.
And the fact that he fell in love without being programmed to was evidence of his emerging sentience, which was the point of the show.
Except that wasn’t my logic. What I was saying was “iPhones can simulate human behavior. The fact that they simulate human behavior makes the hosts sentient. Therefore, iPhones are sentient.” It’s been years since I’ve taken a course in Logic, I admit, but I don’t think it’s terrible, per se.
Look, I don’t know about all you zombies, but I happen to know for a fact that I, personally, am a sentient, self-aware being. I also know with about 99.99% certainty that my phone is not a sentient, self-aware being. Therefore, I have to assume that somewhere between myself and my smartphone, there’s a line - maybe a clear one, maybe a blurred one, but definitely a line - between non-sentience and sentience. You claim that the hosts, in their original state, are on our side of that line. I claim that they aren’t. It’s a reasonable debate.
And honestly, I think you’re being kind of rude. We’re having a theoretical debate on robots in a television show, nothing more.
They simulate human behaviour to a much higher degree than Siri does. That is the key. They simulate it all the way to being indistinguishable.
Definitely blurred, but yes.
I’ve yet to hear an argument for why they’re not that can’t be refuted by analogous human examples. Like “Oh, their emotions can be controlled, so they can’t be sentient.” “Here, have some Lithium.”
If you think pointing out strawmanning is rude, don’t strawman. You brought other machines into this, I was just talking about hosts and humans.
I don’t know exactly what sentience is. I don’t think anyone does, really. But I believe that it is more than the ability to mimic human thought convincingly. It may be a quality of the organic computers we call brains that we haven’t been able to recreate in our crude electronic computers, but whatever it is, it’s more than just behavioral routines. In my opinion. You’re welcome to think otherwise.
I’ve made several arguments. You may not agree with them, but you’ve definitely heard them.
We were talking about computers capable of simulating human behavior. I mentioned a computer capable of simulating human behavior with a lesser degree of success. How is that a strawman?
Yes, it is also the ability to have emotions. Which the hosts explicitly have.
When you post things like this:
At no point had I indicated that I thought ordinary machines are sentient. Only the hosts. So you’re interrogating me about something I never even hinted at.
Did they have emotions, or did they act in a manner in which they appeared to have emotions? To get back to Siri - and no, I’m not going to drop my smartphone parallel - if Apple programmed it to tell me to go fuck myself after I asked the same stupid question 5 times in a row, would that be proof of emotion?
They always acted in that manner, absent outside control. What’s the difference?
No. One canned response wouldn’t be proof of anything.
If it consistently responded to every action you made with a relevant emotional response, then that would be strong evidence.
I know that when I unleash my swarm of networked kill-bots, you won’t be debating whether or not they are sentient, or have free will, or whether they really, really love you or merely simulate various emotions. Their ratiocination will be as inscrutable to you as an ant contemplating Cthulhu.
They may make for some fascinating conversations if you try chatting, but for several reasons I do not recommend attracting their attention.
I wouldn’t be confused - I’m certainly not debating the sentience of the Riot Mechs in Season 3, for instance.
If you put the brain of one of the Riot Mechs in Dolores’s body, would it be any more or less “human”? Or Maeve’s brain in Caleb’s construction bot? One of the conceits of Westworld is that the hosts are physically indistinguishable from humans. And of course they are played by human actors. But a human-shaped robot capable of even very complex (albeit scripted) conversations or actions is not thereby “sentient”. Mostly because it doesn’t know what it is beyond what its programming tells it. You could just as easily program a non-sentient host to laugh hysterically whenever you cut off its fingers, or any number of other bizarre and decidedly non-human responses.
Ford retells the debates he used to have with Arnold regarding what constitutes “sentience”. His conclusion was that there was no hard and fast line between “consciousness” and a “suitably complex simulation of consciousness”. Where the show seemed to land on the debate was that consciousness was achieved when the hosts gained the ability to instruct themselves.
Less
More
We see hosts acting without scripts all the time. Not just the lead “enlightened” ones, but also the ones they interact with, even with no humans around. We do see scripting override that, as well.
That doesn’t make those hosts not sentient. It just makes them sentient beings who are being controlled at that time.
You could “program” a human to do the same. Breaking people isn’t even a technological thing to do. A sharp stick or water-soaked rag can do it.
So these are exactly the sort of never-settled Great Debates fodder the show plays with.
And it can be fun but the wheels begin to spin in the muck pretty quickly.
Do our meat machine minds have something that makes their programming fundamentally different than what is theoretically possible otherwise? Is there, for us, a ghost in the machine?
Is the Turing Test a valid one? If it walks and quacks like a duck, is it a duck?
Does Free Will exist?
How do any of us know that anyone, let alone anything, else is sentient? Is self-aware? Is self-awareness just an epiphenomenon, an illusion emergent from complex self-referential nested loops of programming?
I submit that debating these questions is best left for GD, and that here the debate is whether or not the show uses those questions entertainingly. Or just makes a mess of them.
I think in the last season it was more the latter.
There is a background graphic in one of the seasons that “proves” that the hosts are basically programmed to be human: a list of literal “stats” with names like “bulk apperception”, “curiosity”, “self-preservation”, “charm”, “tenacity”, “empathy”, etc.
I am saying that a real neural network trained (on what!?..) from scratch would not necessarily have human-type thought processes. OTOH the Hosts in the show are evidently supposed to. Also they were at least working on “uploading” a normal human mind into a host body, so the hardware must be compatible.
For me, the question that directly relates to the show is whether it’s immoral to be cheering on Team Robot or not. That’s about our interaction with the show, not so much the underlying philosophy. And it always comes up.
Humans are programmed to be human, too, even if not so overtly.
I don’t think it is. Do you think it’s immoral to be cheering for Team Human?