We have already had this discussion.
If I were planning to murder you, I would not warn you first.
You only wrote “real physical me.” Which is my point. You’re conceding that the flesh-you is authentic in a way that the Virtual You is not.
As pointed out a few posts up by Disposable Hero (so aptly named for the subject of this thread):
What could possibly go wrong?
Ah yes, Evil!Skald The Evil (did we mention how Evil he is?), always moving the goalposts and fighting his own hypotheticals! No response can be right, evah, when he does that!
Even if the digitized Chronos were not at the least extremely akin to me, the moral choice would be for meat-me to accept a great deal of pain in order to spare the existence of another entity. Of course, I can’t guarantee that I would make the moral choice (I have thankfully never endured torture, and so cannot know how strong my resistance is to it), but I certainly hope that I would.
Except, of course, that my real choice, as you no doubt know from many previous threads, would probably be to fight the hypothetical, and attempt (with an unknown degree of success) to prevent both the torture and the erasure. After all, I have no way of knowing that you’re not just going to torture meat-me and then afterwards erase digitized-me anyway: It’s not like I can exactly take your word on the subject.
Oh, that’s not fighting the hypothetical. Fighting the hypo is introducing elements into the situation specifically disallowed by the premise. Saying “I will try to overpower the throttlebot; I mean, hell, it probably looks like that chick from SARAH CONNOR CHRONICLES, the little one I mean,” is not fighting the hypo. Saying “I will trust to Evil!Skald suddenly turning good, Scottish, and female” is.
I wasn’t moving the goalposts. I was trying to clarify what Chronos’s position was regarding Virtual!Him being basically him. Or even basically a person.
That’s pretty much my attitude.
I voted ‘probably not’ because something could always happen to change my mind, but really, I can’t imagine what.
Is this in reference to my post? I’m only guessing so since I mentioned that I am a theist and you start out bashing religion. I feel like either I did a poor job communicating my point, or you didn’t understand my point. I only mentioned that fact because many theists DO believe in an afterlife, and I don’t.
The idea of moral responsibility holds regardless. A parent who raises his child with a poor sense of morality bears some moral responsibility for the actions that child takes. In the same way, whether I am alive or dead, I would bear moral responsibility for the actions taken by a digital copy of me. The point I was attempting to get at is that a digital copy, even if it starts with exactly the knowledge I have now, is NOT fundamentally the same as me, because of artifacts of simulation.
That is, in any simulation the errors accumulate: even though it starts out as a reasonable approximation of me, with my morality, those artifacts compound, and at some point it simply isn’t me. This really isn’t all that different from the idea of a runaway artificial consciousness, except that we can’t even be reasonably sure that this simulation of my consciousness is artificially conscious. As I’m sure you’re aware, there are a great many concerns about the potential dangers of a runaway artificial consciousness. Since this particular artificial consciousness would have my consciousness as its seed, and I created it of my own volition, I bear moral responsibility for everything it does; but due to the artifacts of simulation, after a certain point I can’t even be sure that it’s actually still operating under my own moral code.
IOW, it’s one thing if I can be sure that, because I believe murder is wrong and I know that it still believes murder is wrong, it is bound by my moral constructs. But without an external mechanism managing the simulation, akin to (but not identical to) Asimov’s laws of robotics, there’s nothing to ensure that those moral codes don’t deviate in an unpredictable way.
And yes, it’s possible I could have a psychotic break and do something tomorrow that the current me would find morally repugnant, but I think that is fundamentally different. First, there is absolute continuity of consciousness in that case, while this example definitively does not have continuity of consciousness. Second, if I did have a psychotic break, it would be because of a breakdown of the hardware of my body in some manner, like a stroke or whatever. But the simulation can be working in perfect order and still, simply due to compounding errors, eventually produce unpredictable results. It’s the difference between getting a wrong calculation because the system malfunctioned (e.g., a fried chip) and getting one because of a fundamental flaw in the way the calculation is handled (e.g., floating-point rounding errors).
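To make that last distinction concrete, here is a toy Python sketch of my own (purely illustrative; the loop and the step counts are invented, not any real brain-simulation code) showing how ordinary floating-point rounding drifts over many steps even on hardware that is working perfectly:

# Toy illustration: repeatedly add 0.1, which has no exact binary
# representation, so every addition rounds slightly and the error compounds.
total = 0.0
for step in range(1, 1_000_001):
    total += 0.1                         # each addition introduces a tiny rounding error
    if step in (10, 1_000, 1_000_000):
        exact = step / 10                # the mathematically correct running sum
        print(step, abs(total - exact))  # the gap grows as the steps pile up

No individual step is malfunctioning; the drift is a systematic artifact of how the calculation is represented, which is exactly the fried-chip-versus-rounding-error distinction above.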
No, IF I could be reasonably guaranteed that the underlying code is a perfect replication, able to match or exceed the rate and accuracy of the human brain in running the “code” that is my digitized personality, and not just an extremely complex and extremely accurate simulation, then I would withdraw this objection.
That all said, I am aware that my first objection still remains.
I think Shawna ripped off this idea from The Venture Bros but hasn’t yet married it with cloning tech.
We have?!? It must have been with my virtual copy because I don’t remember it.
Being killed I can deal with; we all have to die sometime. A virtual copy of me being killed, then resurrected and killed again as many times as Evil!Skald feels like… that’s something else entirely.
That’s the problem with this scenario: it’s hard to talk about with the current limits of language. The physical me would have no real connection to its virtual copy; they would be different people. But I have no doubt that, given sufficiently advanced technology, a ‘real’ person could run on software and experience something indistinguishable from outer physical reality. (I don’t really follow computer games, but from what I can tell they’re becoming eerily realistic in some respects; it wouldn’t take many more generational leaps in technology before, graphics- and physics-wise, simulations will be indistinguishable from the real thing.)
I’ve also read Permutation City by Greg Egan, which put me off the thought of ever ‘uploading’ myself.
I’d sign up, as long as my copy doesn’t have to work some shit job inside the virtual world.
Ain’t THAT the truth!
Anyway, I voted yes… but with some reservations. I’d like a way to opt out if life becomes unbearable, and I’d kind of like an exclusivity contract, so no one is out there making lots of extra copies of me and possibly experimenting on them horribly.
Now, let’s face it, this would be a DAMN fine way to learn TONS about mental illness. Take a downloaded mind, and tweak it until it’s depressed, or psychotic, then work out what it takes to fix it again. Just like historical anatomists learned by dissection and vivisection. Extremely good way to acquire knowledge. But kinda nasty for the subject of the experiment!
And my objection remains - what’s the difference between an error-prone “simulation” and an organic defect in your brain causing you to make cognitive errors and commit sins or crimes? They are the same thing, except you can fix a simulation or roll it back. It’s probably possible to transition your brain to the “simulation” version with no loss of continuity; it’s just a lot harder to do.
Either way, a theistic being - some type of powerful entity that most theistic religions believe in - almost by definition has to be smart enough to tell whether you “meant” to do something or an error in your brain forced it; otherwise it’s too stupid to bother worshipping. Over the years many, many practices have been instituted to “trick” god, but they are similarly pointless. If god is so weak and blind that he gets fooled by strings declaring an area holy so you can break the Sabbath, or by college students having sex with no friction, he’s not capable of doing any of the things that religious people expect of him either.
I’d go for it if there was an option to delete all of Skald’s hypotheticals from my mind.
The very idea of an android as fucked up as me, with all my dark memories and crazy ideas, is beyond disturbing. Worse, without my body-thetans harassing it constantly, it would be perpetually perplexed as to how I could function like this at all.
Since it’s free, the fact that he won’t be me is not a consideration*. The question is whether or not he will be happy. A lot of the end of life stuff seems to be really bad, and my brain is kinda messed up already.
I’d need a guarantee that they’d find out what would make him happy and give it to him before I could say yes. If they do any experiments on him, they need to be harmless.
Definitely. The chance to live basically forever? Oh yeah… where do I sign up?? The only caveat I see is that if only I can be uploaded, that would be kind of a bummer. I think, since I’ve saved her life so many times, she ought to be willing to upload my dog, wife, kids, and a few friends.
I’d do it. I’d be happy as a virtual me if the real me is gone. I can always turn myself off if I don’t want to be around. And if I really want the android, I can go to sleep for five-year intervals waiting for it.
nm
I’d probably do it. I don’t have a <blank> in mind that would stop me from doing it, but I guess I’d reserve my final decision for when I knew exactly what it entailed.
I already am the virtual me.