Mind transfer question: preservation of personal identity vs dichotomy between versions

To be fair, if someone secretly put you to sleep, took you to a lab, and ripped out your brain, replacing it with circuitry onto which it had been copied, the “original” you is now destroyed, disposed of in a waste bin. You just can’t tell any difference. And no one outside can, either.

Not to challenge the entire point, but the rules obeyed by individual neurons are very complex.

An individual neuron is an entire network in its own right, with local spiking across the dendrites, information flow between its various subregions, and so on. Synapse strength is regulated by epigenetic changes to the DNA. Chemical signalling has properties (e.g. fast diffusion) that support the overall computational requirements, and multiple types of signalling happen simultaneously (e.g. electrical and chemical gradients, signals encoded in amplitude and in frequency, etc.).
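To put that complexity in perspective, here is a minimal sketch of the *simplest* standard neuron abstraction, a leaky integrate-and-fire (LIF) point neuron. The parameter values are illustrative, not fitted to real data, and the model deliberately captures none of the dendritic, epigenetic, or chemical machinery described above — which is exactly the point about how much a faithful copy would have to reproduce.

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, r_m=10.0):
    """Leaky integrate-and-fire point neuron (illustrative parameters).

    input_current: external drive per time step (arbitrary units)
    Returns the membrane voltage trace and the spike times (ms).
    """
    v = v_rest
    voltages, spikes = [], []
    for step, i_ext in enumerate(input_current):
        # Leaky integration: decay toward rest plus external drive.
        v += (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        if v >= v_threshold:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset               # instantaneous reset, no refractory period
        voltages.append(v)
    return voltages, spikes

# Constant drive for 100 ms of simulated time produces regular spiking.
volts, spike_times = simulate_lif([2.0] * 1000)
```

A single scalar voltage with a hard threshold stands in for the whole cell here; every mechanism listed in the paragraph above (dendritic subunits, epigenetic synapse regulation, concurrent chemical signalling) falls outside this model entirely.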

Right, and those should be no problem with a galaxy full of autonomous self-replicating factories, right? :wink:

Let me explain this to you as clearly and frankly as I can, because you still don’t get it. Let’s start with my V-2 analogy. So you think that someone looking at a V-2 in 1944 should quite properly have thought “a bigger version of this will take humans to the moon, no question about it”. Really? Only a moron would think otherwise? Would any non-moron then conclude – notwithstanding how little we knew about space medicine or rocket or spacecraft engineering – that a bigger version of the V-2 would also take us to Mars? To Jupiter? To Proxima Centauri? To distant stars across the galaxy? To other galaxies far, far away? To the edge of the universe? Just how much is a non-moron supposed to extrapolate with certainty about mankind’s future among the stars by looking at a World War II era German missile?

Here’s the more general point. As several posters have previously noted, you have a habit of jumping into random discussions of highly complex topics like this one and issuing ridiculously speculative and simplistic proclamations about exactly how the problem will be solved, with absolute confidence and a stunning lack of awareness that the predictions of futurism are historically almost always wrong, even when made by knowledgeable experts on the subject.

We’ve heard from you repeatedly about galactic networks of self-replicating autonomous factories; more recently about how climate change isn’t a problem for the global food supply because we can all eat algae, and how climate change isn’t a problem at all because geoengineering will fix it all up; and now about how mind uploads are not only possible but you’ll show us exactly how it’s done. It’s getting tiring. I would think that the poor track record of even well-informed futurism should instill some humility in those inclined to such prognostications, and all the more so in unfounded casual armchair futurism apparently based on reading too much sci-fi.