Ending death for most people in the United States is feasible. What's your excuse?

Summing up the arguments above:

a. Maybe the brain is doing something so complex, so magically difficult to reproduce, that it can’t ever be emulated.
Maybe, but that’s not where the current evidence is pointing. The essential reason is that even if neurons were doing the most delicate, esoteric quantum operations, the evidence says their primary mechanism for communicating the results of those operations to other synapses is all-or-nothing pulses separated by fuzzy amounts of time. The data also show that neurons aren’t particularly reliable, which essentially disproves any such theory: even if some neurons were using quantum physics to make decisions, each neuron only gets one weighted vote, and those ‘votes’ are routinely destroyed by noise, by other neurons that have failed and emitted spurious signals, or by other glitches. That drowns out any chance of such effects mattering to the outcome. Basically, Septimus, you didn’t “nail” anything. The overwhelming consensus of evidence says you’re totally wrong.
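To put a rough number on that argument, here’s a back-of-the-envelope sketch in Python (every quantity below is an illustrative assumption, not a measured value): model a population decision as a majority vote of noisy, all-or-nothing spikes, and compare the shift a tiny per-neuron effect could produce against the spread of ordinary spike noise.

```python
import math

# Toy model of the "one weighted vote per neuron" argument.
# All numbers are illustrative assumptions, not measurements.
N = 1000      # neurons, each casting one all-or-nothing "vote" (spike / no spike)
P = 0.5       # baseline spike probability per neuron
BIAS = 0.01   # hypothetical per-neuron (e.g. quantum) effect, applied to ONE neuron

# The bias shifts the expected vote count by just BIAS votes, while
# ordinary spike noise spreads the count with a standard deviation of
# sqrt(N * P * (1 - P)) votes.
shift = BIAS                              # ~0.01 votes
noise_sd = math.sqrt(N * P * (1 - P))     # ~15.8 votes
print(f"signal: {shift} votes, noise sd: {noise_sd:.1f} votes")
print(f"signal-to-noise ratio: {shift / noise_sd:.1e}")  # ~6e-04
```

On these made-up numbers, the hypothetical effect sits about three orders of magnitude below the noise floor, which is the quantitative version of “one noisy vote per neuron drowns it out.”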

b. Ok, so maybe it’s possible, but it wouldn’t really be “you”. You would be dead.

Yes. But less dead than the alternative.

c. Old people are useless.

Yes. But this is because their brains are failing. Old people with new, digital brains, after a recovery period where they learn to use their new brains, would most likely be smarter than any human being alive today.

d. Since the science isn’t 100% certain that this would work, I’d rather take the certainty of being a corpse in the ground.

Ok. Stupid move, though.

e. I’m going to personally attack you, the poster, instead of contributing to the discussion.
Have fun, but it doesn’t make you correct.

One minor problem - they can reproduce what they see, but if there are features of the brain, or interactions, that the scan does not catch, they will not get a true copy. Even if they reproduce that small section, they can only test the copy against what they think is there in the original.

Same problem with the rat’s brain. You have to scan everything to be sure you have a good result. And while you say “just before death”, slicing to scan kills you real fast. You’d hope that the last section you scan will be in as good shape as the first - but that isn’t likely. Plus, many people at the point of death suffer from degradation of the brain, so you might get something showing the effects of oxygen deprivation, for instance.

I’m not saying this isn’t possible with some sort of noninvasive scan, but it ain’t going to be in my lifetime or yours, or probably in that of my 18-month-old grandson. It is a bit early to license.

Summing up your response:
You might do better quoting directly; it would give you a better chance to respond to what was actually said, instead of the strawmen you have ready responses for.

BTW, when I talk about things the scan misses, I’m not talking about a soul or any such nonsense. One of the things you can do in chip debug is physical failure analysis, which involves slicing through silicon to see if you can spot the source of the defect. This is trivial in comparison to the brain, but still really hard, and it certainly does not detect anything like neuron connections. It works best at finding shorts between signal lines caused by blobs of material, or missing vias (vertical connections between one layer of signal routing and another). And reproducing a chip this way is nigh unto impossible.

BTW, if you can make one copy of yourself you can make several. Are all of them you? They then go on to lead different lives, with different experiences and memories. If one of them is copied again later, is that still you as well?

And if you are still alive, which of you is you? I contend, as in a story I read, that the duplicates are unique individuals who, at the time of duplication, would be considered under the law as newborn babies. If this were indeed achievable, I can’t imagine what kind of legal status could be awarded to a newly created human, no matter what memories or identity it feels it may have.

A pointless sci-fi distinction. If you actually had the technology to do this, you would resync copies with data transfers, such that post-transfer, all copies share the same mind state. At which point, removal of surplus copies causes no loss.

If you could do such a ‘live upload’, it would be philosophically preferable, because the digital “you” and the remaining dying meat “you” would share common information. However, the physical technology to do this is at the extreme edge of what is even maybe physically possible. (Copying a living brain is basically magic, and I concede this.) Copying a preserved brain that was alive until seconds before being paused through chemicals or freezing is achievable, though it would take an extraordinary effort to perform such a copy using technology that could be purchased or developed in the near future.

Sorry, but I can’t figure out if you think a copy is the same person, or a person at all, so I can’t agree or disagree with what you are saying. But as I stated above, I don’t believe under any circumstances that a copy is you. It’s a copy of you, and I don’t want any copies of me made, and I’m pretty sure no one else does either.

:dubious: What’s my excuse for what? I have no objection to researchers working on some form of virtual “ending death” if they want to. Good for them, sez I.

But you don’t get to dictate to other people what subjects they must be interested in or must ignore in favor of subjects that you’ve decided are more interesting.

I think longevity extension research sounds kind of interesting, but I’m not so childishly terrified of the fact of my own mortality that I think it’s the only thing worth thinking or talking about. I don’t need any “excuse” for maintaining that opinion.

Who’s “you” in this instance? If multiple copies of “you” at some point acquire separate individual post-copy experiences, then ISTM that the original “you” has no right to “resync” the other individuals who originally started out as copies of “you”.

Ok. Imagine I make a copy of you tomorrow. You don’t know the copy exists, and anything that happens to the copy might as well be happening to a stranger.

I do the resyncs using a network interface when you go to sleep that night. You wake up in the morning, and you are momentarily confused, as you remember being both Tripolar1 and Tripolar2, and you have to check your surroundings to see which one you are. I’ve done it in such a clean way, though, and inserted additional subsystems into your neural network to handle it, that you can keep the two contexts straight and remember what is on the agenda for both Tripolar1 and Tripolar2.

Now, the next day, I do the sync again, but this time when you wake up in the morning, you remember Tripolar2’s experiences abruptly ending in pain, because someone murdered Tripolar2. But you are still “him”. You still remember all his doubts and fears and joy and everything else. The day after that, I’ve built Tripolar2 a new body (pretty easy if everyone is built from metal and plastic…) and copied off his branch again. After that night’s resync, you remember being Tripolar2, recovering from ‘death’, and the emotions involved.
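For what it’s worth, here is a minimal sketch of what such a ‘resync’ could look like as a pure data operation, under the entirely hypothetical assumption that episodic memories can be serialized as timestamped records. Every name here is invented for illustration; nothing like this exists.

```python
from dataclasses import dataclass

# Hypothetical model: a memory is a timestamped record tagged with the
# branch (Tripolar1 or Tripolar2) that experienced it.
@dataclass
class Memory:
    timestamp: float   # when the event was experienced
    branch: str        # which copy lived it
    event: str         # what happened

def resync(log_a: list[Memory], log_b: list[Memory]) -> list[Memory]:
    """Merge two diverged memory logs into one shared history.

    After the merge, both copies load the same log, so each remembers
    being both branches in order, with the branch tag preserving the
    'which one was I?' context described above.
    """
    return sorted(log_a + log_b, key=lambda m: m.timestamp)

tripolar1 = [Memory(9.0, "Tripolar1", "went to the office")]
tripolar2 = [Memory(9.5, "Tripolar2", "ran the errands")]
for m in resync(tripolar1, tripolar2):
    print(m.timestamp, m.branch, m.event)
```

Note that this is a merge, not an overwrite: neither branch’s history is lost, which is what makes the “Tripolar2 was murdered but you still remember being him” step in the story coherent.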

Living as such a ‘collective’ mind, you would soon stop being bothered by small losses.

You may refuse to accept this example, but the physical fact is that your consciousness already is such a collective system, made of billions of separate components that are sometimes lost and sometimes replaced, so you can’t validly complain about more of the same.

My consciousness (and I don’t consider consciousness to be the big deal many make of it) is formed from a tightly coupled system. You are talking about a loosely coupled system that is nothing like me. I don’t remember recovering from death; I only have access to the memories of some copy of me. Whatever I know, remember, or have access to, I also know that once TriPolar1 died, he is no more and I am not him. I don’t know what I am, but I would know I’m not the original, or I would know that I am totally insane instead of only partially, as I normally am.

If you are imagining some kind of enhancement of my brain that allows me to have additional independent nodes that I am synced with, I am only still me up to the point where those other nodes can operate independently. Anything they do by themselves is not something I did, no matter the syncing process. As it is, I can read and hear about the experiences of others and imagine them in my mind, but they aren’t part of my experience, and not part of me. You are just arguing about the degree of information I can get from other sources and how real it seems to me, but any independent actor is not me.

You are assuming many things as a given. We have absolutely no idea what it would subjectively “feel” like to live as a consciousness through digital means, whatever those means would actually end up being, assuming it’s even possible.

What if you can’t quite simulate the delicate balance of hormones and neurotransmitters like dopamine, serotonin, norepinephrine, GABA, et al., and you’re in a miserable state of despair or maddeningly swinging bipolar states? Maybe without the stimulus of a real, physical self, you find you’re in a numb, abject hell. Maybe there’s a Jaunt-effect of lagging or overclocked CPU cycles that makes 1 second of real time feel like 1,000 years. Maybe there are a billion other issues we can’t fathom, because this technology isn’t even close to being a reality yet. But I’d bet that if the time comes when we produce hard AI or are able to copy a human mind, there’ll be issues like these that are non-trivial and may not even be solvable. Jumping to the conclusions you’re putting forth here is meaningless until we do know exactly what it’s like to be a digital mind.
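For scale, the arithmetic behind that “Jaunt” scenario is simple (purely illustrative, and it assumes subjective time scales linearly with the emulation’s clock rate relative to real time):

```python
# How much faster than real time would an emulation have to run for
# 1 second of wall-clock time to feel like 1,000 subjective years?
# (Illustrative arithmetic; assumes subjective time scales linearly
# with the emulation's clock rate.)
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds
subjective_seconds = 1000 * SECONDS_PER_YEAR
wall_clock_seconds = 1.0

speedup = subjective_seconds / wall_clock_seconds
print(f"required speedup: {speedup:.2e}x real time")   # ~3.16e10x
```

So the nightmare version needs roughly a ten-billion-fold clock mismatch; a much smaller accidental mismatch could presumably still be disorienting.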

I should say my above post is more in response to all of SamuelA’s posts, and not only the one I quoted.

However, he seems to treat digitally copying a mind as a foregone conclusion, when we have no idea if it is or isn’t. And even if it is possible, running on a medium other than the meaty human brain may fundamentally alter the state of consciousness as we know it. And there may be no getting around that.

The evidence is overwhelming in support of the idea that:

a. The tiniest details of neurons are irrelevant, due to all the signaling noise.
b. If you preserve a brain in such a way that the physical structure, except for the tiniest details, is still present, you can obtain all of the information the brain uses to make decisions, encoded in the matter itself.
c. If a and b are true, you can copy a mind. It’s a tautology, in fact.

So yeah. Since I assume a and b, c must be true. I think your mental error is assuming that if we can’t do this copying in the immediate near future, it’s not worth considering. You’re doing a form of mental discounting.

But preserved brains, assuming the preservation is done using something really stable like plastination or LN2, would still be intact, still containing the information in them, centuries from now. In order to disprove my tautology, you must produce evidence that the physical matter of the brain encodes details so fine that you cannot recover them with the technology likely to be available over the next 300 or so years.

Hence it’s a foregone conclusion. It’s not a guarantee, but it’s pretty damn likely.

Part of what I’m saying is that it might be a huge moral and ethical quandary to even go so far as to “switch on” the first copied mind. If something like abject, unrelenting despair, or the “Jaunt-effect”, is even a remote possibility, but a possibility nonetheless, I wouldn’t feel right ever letting such research move forward. Subjecting somebody to something so horrifying is something I couldn’t get behind.

It wouldn’t matter if the original mind gave consent either, since it’s the copied mind you’d be subjecting to it, and a copy is an entirely different entity from the original. Also, if that copied entity cried to be terminated, to be put out of its misery, now you’re into the waters of euthanasia, and of whether such copies even have full human rights.

So, not only have you assumed copying without destruction, you’ve assumed copying cheap enough to be on every bedside table. You might be getting a little far ahead of the research here.
We’d also have to learn how our sense of time works, because memories of two different but simultaneous events might cause all sorts of problems.

There is another slight problem. The sensing of all neurons has to be done at the same time, or the brain would have to be frozen - which would cause many parts of the body to fail, and so is not practical.

To give a more plausible example, say you want to capture and reproduce the state of a microprocessor, and have some magical way of scanning all memories and state elements remotely without changing their states. If the processor is running, the last memory element you scan may be far advanced in time from the first, and the state you get when you build your copy processor is not consistent; the whole thing would crash. Now, in microprocessors you can stop or extend the clock and might get away with it, but I don’t think our brains have any clock control circuitry built in.
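That consistency problem is easy to demonstrate in software. Here’s a minimal toy sketch (a stand-in for the scanning scenario, not real chip tooling): a three-stage register chain maintains an invariant at every clock tick, a register-at-a-time scan of the running device captures a ‘torn’ state that violates it, and a scan with the clock stopped doesn’t.

```python
import threading
import time

# Toy stand-in for scanning a running chip. After every clock tick the
# registers satisfy the invariant r0 == r1 + 1 == r2 + 2. A scan that
# reads one register per pass while the clock keeps ticking mixes values
# from different ticks, so the "copy" violates an invariant the real
# device never violates.

class Device:
    def __init__(self):
        self.r = [2, 1, 0]                 # satisfies the invariant
        self.lock = threading.Lock()
        self.running = True

    def clock(self):                       # free-running clock thread
        while self.running:
            with self.lock:
                self.r[2], self.r[1] = self.r[1], self.r[0]
                self.r[0] += 1
            time.sleep(0.001)

    def read_one(self, i):
        time.sleep(0.01)                   # scanning is slow vs. the clock
        with self.lock:
            return self.r[i]

    def scan_while_running(self):          # one register per pass: torn state
        return [self.read_one(i) for i in range(3)]

    def scan_stopped(self):                # "stop the clock", then read all
        with self.lock:
            return list(self.r)

def consistent(r):
    return r[0] == r[1] + 1 == r[2] + 2

dev = Device()
threading.Thread(target=dev.clock, daemon=True).start()
torn = dev.scan_while_running()
clean = dev.scan_stopped()
dev.running = False
print("running scan:", torn, "consistent?", consistent(torn))   # almost surely False
print("stopped scan:", clean, "consistent?", consistent(clean)) # True
```

The brain-scanning analogue is exactly the objection above: with no way to ‘stop the clock’ on a living brain, a slow scan yields a mixture of states from different moments.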

When people write science fiction they skip over stuff like this for the sake of the story. You can’t do that in real life.

Right, which hits upon what I’m saying. In order for research into such things to advance, you’d most likely have to fundamentally alter the way the brain works so it can run on a digital medium, which inherently introduces fundamental changes to the brain/mind system itself. How can we be sure we’re not waking up some sentient, human entity into some otherwise unknown hell-scape of subjective consciousness?

And if we have, then what do we do?

I should first point out that I am in the camp that thinks ‘uploads’ are possible. To be entirely honest, the technology to achieve them is likely to be hundreds of years in the future, not just around the corner. No-one alive today will see them happen.

But I can’t quite grasp why you think that these uploaded entities would want, or need, to ‘resync’ with each other regularly. If you have created a copy of your mindstate, you should recognise that this copy is a new, unique individual, and not force it to accept experiences from yourself, or from other copies. If your copies are regularly swapping memories and opinions with each other, they are going to have two or more separate histories to deal with, and remember two or more series of events occurring simultaneously in the past. That way lies confusion, if not madness.

Because it stops death. That’s the reason. Sharing memories as a collective with your close peers means that hardware loss, which cannot be prevented even with hyper-advanced technology (hyper-advanced technology also means way better weapons and riskier activities), causes no loss of existence.

I do agree that a hi-tech future could be a risky environment, if you consider the amount of energy and fabricating power that any individual might have access to. And a working spaceship could make a devastating kinetic weapon. Terrorists and spree killers would love it.

Are you suggesting then that humans living in a hi-tech future should become group minds, sharing experiences so that they can’t ever lose their memories in death? This sounds like you are suggesting that humans should stop being human.

Perhaps this might not be the best route to go down. An intriguing idea, nonetheless.