Is there a chance that I will l[i]ve forever?

Well, just because it’s bound to come up, there’s also the idea of quantum immortality: if the ‘many worlds’ interpretation of quantum mechanics is true, then whatever happens, as long as there is a nonzero chance of surviving, your subjective experience will always be of you surviving, since in all ‘branches’ where you don’t survive, there is no subjective experience.

Consider the ‘quantum suicide’ experiment: you’ve got a gun in front of you, its firing mechanism keyed to some quantum event that gives it, say, a 50% chance of firing every ten seconds, such as the repeated measuring of the spin of an electron from an ensemble of electrons not in an eigenstate of the spin measurement. What happens after ten seconds will be that the world ‘splits in two’ (at least, on the most naive readings of many worlds): one in which the result of the measurement was spin up, the gun fired, and you died; and one in which the result was spin down, the gun didn’t fire, and you lived.

Of course, or so the argument goes, what you’ll experience then will necessarily be the branch in which you survive. But this is true of anything that could cause your death, and thus, your experience will always be that of survival, no matter how slim the chances. Thus, only situations in which your likelihood of dying is flatly 100%—so-called ‘cul-de-sacs’—can conceivably kill you, and it’s claimed that no such situations exist. (It sounds like these cul-de-sacs are basically what you’re looking for.)

However, there are a couple of problems with this argument: first of all, life or death rarely hinges on this sort of clear-cut quantum decision. Second, even the simple example above is never as clean-cut as described: the gun may misfire, the detector may malfunction, an earthquake may destroy your lab, aliens may teleport you out at the last minute—and so on—all of which at some point becomes more likely than your experiencing spin-down result after spin-down result.
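To put rough numbers on the point that mundane escapes eventually dominate, here’s a quick sketch; the one-in-a-million per-round escape probability is just a figure picked for illustration, not anything measured:

```python
# Illustrative numbers only: compare the chance of surviving n rounds of
# the quantum suicide experiment (50% per round) with the chance of at
# least one mundane escape (misfire, malfunction, earthquake), assumed
# here to have a one-in-a-million probability per round.
def survival_odds(n, p_escape=1e-6):
    p_quantum = 0.5 ** n                   # n consecutive spin-down results
    p_mundane = 1 - (1 - p_escape) ** n    # at least one mundane escape
    return p_quantum, p_mundane

for n in (10, 30, 50):
    q, m = survival_odds(n)
    print(f"n={n}: quantum survival {q:.2e}, mundane escape {m:.2e}")
```

On these made-up numbers, by around twenty rounds the boring failure modes are already the more likely route to survival.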

But most importantly, dying is not very likely to be an all-or-nothing affair: even after the gun has fired, it’s not necessarily going to be lights out for you; you may be gravely wounded, but still, for the moment at least, have some remaining (and not likely to be very pleasant) experience. However, at that point, there may be no way out: this experience will gradually diminish, until it eventually fades completely—but there will be no sharp, all-or-nothing transition, and without such a transition, the argument doesn’t work.

This is a good thing: because while, if the argument worked, you’d be guaranteed immortality, in all likelihood your experience isn’t going to be a pleasant one. You’ll be crippled, maimed, sick—anything, just as long as you manage to hold on to some thread of experience. As far as eternities go, this one will be rather more like hell than heaven.

But this probably doesn’t answer your question. Do cul-de-sacs, i.e. situations in which you are certain to die, exist? I think this is a very hard question to answer. In any kind of situation, it seems to me, you could come up with some kind of narrative that would allow your experience to continue in some fashion, if you’re sufficiently unfazed by its astronomical unlikelihood. You can always create swampman scenarios: you die, but somewhere, somehow, a bunch of molecules rearrange themselves by chance, producing a perfect copy of you, continuing your subjective existence.

Of course, at some point, the universe will end in some way; and for a sufficiently violent ending—a big crunch or a big rip, for example—it stands to reason that this will be the end of all life, as well. But in the case of a big freeze, for instance, while the energy available to do any kind of computation with—and thus, to run whatever hardware by then supports your consciousness—will become smaller and smaller, this merely means that the time between computational steps will increase ever further, but there won’t necessarily be a last one. (Indeed, Frank Tipler has speculated that in the case of a collapsing universe, since the processing capacity of such a universe would diverge, an infinite amount of computation will occur, including a perfect simulation of every conscious entity that’s ever lived; though let’s say that there are a couple of steps in his reasoning that seem more than a little dubious to me.)

IMO it’s equally misleading to say cancer cells age as to say they don’t age.

Cancer cells usually have mutations that allow them to regenerate telomeres, so it’s true that they can go through an indefinite number of generations. If we define ageing purely in terms of telomeres, they don’t age.

But why would we define it that way? The reason loss of telomeres in healthy cells is a problem is that it means functional DNA will get damaged and the cell line may end.
But that’s what’s happening all the time with cancer cells. They usually have mutations that make them more susceptible to further mutations and unable to repair/signal DNA damage when it does occur (these mutations are usually among the first to occur; they are prerequisites to allow mutations to stack up).
IOW cancer cells are damaged to fuck and there are many cell lines ending all the time.

First of all, rephrase it to “live to see 5000”.

Second, I think there is an actual chance, one that is above 10%, that you could, if you took the right actions, “live” to see 5000. The reason “live” is in quotes is that it is a philosophical question whether or not you actually lived.

The mechanism is simple: when you reach an age at which you have a terminal illness (it needs to be a controllable one), you go to Arizona and have yourself frozen by Alcor. If you have more than 30 years left, there is the hope that companies other than Alcor may be formed, and they may have more resources than Alcor currently has.

There are people who have been continuously frozen since the 1970s. In the face of this empirical evidence (people frozen for more than 30 years), it seems at least possible that you could remain frozen for another 100-300 years.

Why do I say 100-300 years? That is an eon compared to the technological advances made over the last century. And the mechanism to revive you is describable using today’s technology, and may be attempted much sooner than that.

The mechanism is simple: using an automatic tape-collecting ultramicrotome (or comparable machines that have yet to be developed), they slice your brain into millions of thin slices and scan them at extremely high resolution. The mechanism described here: http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf uses electron microscope beams, but there are more detailed methods.

Once they have a detailed enough mapping of the connections between every neuron, and exactly which membrane proteins are in each connection, it is theoretically possible for an electronic emulation of your mind to be run. Specialized computer hardware that is purpose-built for high-speed emulation would be used to run “you”. If it is done correctly, you would, perceptually, wake up as if you had never died, and have access to all of your faculties. You’d obviously have enormous inherent advantages in that form, such as the ability to back yourself up to storage media, transmit yourself over a network, and so forth. Living to 5000 seems entirely doable. (Living past the sun going red giant is just an engineering problem, even.)
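For a rough sense of the scale of the slicing job, here’s a back-of-envelope sketch; the 15 cm and 30 nm figures are my own illustrative assumptions, not numbers taken from the linked report:

```python
# Back-of-envelope: number of sections if a brain roughly 15 cm tall is
# cut into 30-nanometre slices (illustrative figures only).
brain_height_m = 0.15
slice_thickness_m = 30e-9
n_slices = brain_height_m / slice_thickness_m
print(f"{n_slices:.0f} slices")  # prints "5000000 slices"
```

“Millions of thin slices” indeed.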

The problem, of course, is that a copy made from the freezer-damaged organic mush that was your brain, even if software can computationally reverse all the frost damage and/or a bit of editing can fix other problems, is still sorta a copy. You could reasonably argue that your existence ended when you died, although the problem with this argument is that you have to concede that you have died many times throughout your life, because your neurons change state without your consent many times, and they stop operating for hours every single night.

No, your brain does not switch off at night; the difference in activity level between waking and sleeping is much smaller than people popularly imagine.

Furthermore, we are only unconscious in the sense of low sensitivity to external stimuli.
If we’re talking conscious as in consciousness, the parts of the brain we associate with personality and conscious thought are largely active throughout sleep, like everything else. The key difference is that they aren’t laying down new memories.

The universe itself is not going to live forever so you can’t live any longer than that.

This is arguable; during some phases of sleep, activity diminishes greatly.

The point of that statement was to establish that your existence is not tied to traveling electrical impulses, contrary to some belief; rather, your memories/personality are properties of the physical states of the synapses in your brain. If you are given a general anesthetic, the neurons do become quiescent, and even the ones associated with breathing will stop firing.

In either situation (waking from sleep or drugs), when you awaken again, those retained states are what make you you. Waking as a computer emulation can be thought of as waking but with different atoms.

Which is quite different from “stop operating for hours every single night”.

Well, the truth or otherwise of this statement is a point of contention in philosophy of identity. If you could hypothetically switch off my brain and at some future time switch it on again (which neither sleep nor drugs do, of course), the person who wakes will be qualitatively identical to me, but will they be one and the same subjective instance? In this case, with the same brain and atoms, most people would say yes, but things become murky with just about any more complex scenario.
Waking as a computer emulation is just one such murky situation.

This idea intrigues - and deeply worries - me. I’ve been trying to create a fictional religion based around this idea, but it has some pretty disturbing consequences.

If you die in a multiverse or infinite universe full of identical copies of you, there will often be some alternate version of your consciousness that survives somehow. But even if you die in a circumstance that does not allow any of your alternate versions to survive, an identical version of you might still emerge, somewhere, simply due to random events.

In fact, if it is possible for a random version of you to materialise in the infinite, infinitely-prolonged universe, it is also possible for an infinite number of versions of you to emerge. Some of these random versions will emerge into situations where your future consciousness will thrive and continue to exist for the foreseeable future. Other versions of you will be recreated into an existence which is some kind of living hell.

The concept of the Boltzmann Brain allows for entities of arbitrary complexity to be created at random at immensely long intervals, although these would become infinitely more common in an infinite universe. The least successful of these events would last mere femtoseconds or less, killing the recreated consciousness almost instantly - but of course the recreated consciousness could be recreated once again elsewhere and elsewhen - after zero subjective time.

Such a recreation could live a strange flickering existence, passing from random fluctuation to random fluctuation; the quality of life for these instantiations could range from marvellous to horrific and everything in between. And there would be an infinite number of them - an infinite range of future versions of you in a diverse range of heavens and hells.

This, my extension of quantum immortality (which is almost certainly not original to me), is a daunting prospect, worse than any of the excesses of religious afterlife that I’ve read about elsewhere. All I need is a label and it’s ready to go.

I don’t think you understand the basic premises.

Moore’s law is exponential, not additive. In the next 20 years or so, the roughly-every-two-years doubling cycle of Moore’s Law will start to produce some really powerful computers that will make everything that came before look like an abacus (assuming Moore’s law keeps going that long). Here’s why:

It’s all about the second half of the chessboard…and that’s where we’re about to be, assuming Moore’s Law keeps going long enough.
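The chessboard image refers to the old rice-on-a-chessboard story: square k holds 2^(k-1) grains, and the second half of the board dwarfs everything before it. A quick sketch:

```python
# Rice-on-a-chessboard doubling: square k holds 2**(k - 1) grains.
first_half  = sum(2**(k - 1) for k in range(1, 33))   # squares 1-32
second_half = sum(2**(k - 1) for k in range(33, 65))  # squares 33-64
print(first_half)                 # 4294967295 (about 4.3 billion grains)
print(second_half // first_half)  # 4294967296: the second half dwarfs the first
```

The first 32 squares together hold about 4.3 billion grains; the second 32 hold over four billion times that.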

The other point I was making is that computers are already being used to model drug interactions. This isn’t wild-eyed futurism. This is already happening.

Find actual persons who think you have a chance, and then see what they are willing to pay you for that chance.

My point was not to compare computers and drugs. My point in mentioning Moore’s law and computers was in my next sentence: “human beings are not made of silicon, with simple binary circuits.”

Advances in health care will not come at the speed of advances in electronics.
We all expect our computers and electronic gadgets to improve drastically every 12 months. But health care doesn’t improve every 12 months–and never will.

In electronics, it is easy… all the manufacturer has to do is continue the usual, well-known processes… making the same circuits they already do, just shrinking them and speeding them up.
In medicine, it’s hard. We barely know which questions to ask, and have NO idea what processes to use…
Sure, super-computers will help design drugs. But that’s only one small issue in medicine—drugs. For the real issues, we have virtually zero understanding of how to even begin. Issues such as: how to grow body parts, how our brains remember information and emotions, why we need to waste one third of our lives sleeping, and how we grow old.
Living cells are not silicon… they use chemical energy, not electrical energy.
There is no “Moore’s law” for chemistry.

The greatest revolution in health care came from the invention of antibiotics and vaccinations—half a century ago. I predict that nothing so revolutionary will occur again within the next half century.
The OP said he saw a TV show claiming that it will only be a few decades till miraculous wonders occur in medicine, just like two decades brought us miraculous wonders in computers.

I say that it’s all fantasy—like flying cars and atomic fusion.
Health care will improve gradually, and without any dramatic flashpoints.
And we’re all going to die at about age 85-90.

The same one your parents gave. Because I said so.

If you define “having a continuous line of living cells uniquely traceable to you” as living, yes it’s possible to live forever.

I don’t know much about cancer, but are you saying the DNA of cancer cells changes? Doesn’t that make comparisons between generations pointless? I thought the point of using cancer cells in research was that they’re a common standard, even in laboratories around the world.

Even there, it’s gotten harder. There are just limitations - it’s hard to make a silicon wafer layer thinner than an atom thick - you can’t cut the atoms in half; physics doesn’t work that way.

OK, I’ve said it before, and I guess I have to say it again–the difference between the chemistry of immortality and the chemistry of drug effects is one of degree, not type.

You are correct in that we barely know which questions to ask. That’s not the point. We don’t have to ask the questions, because computers can already learn from trial and error.

In around another 6-8 years, CGI movies will look indistinguishable from live action movies. “Impossible!”, you may cry. However, 2020 has been accepted as a general date for this for quite a few years now. All that has to happen is that Moore’s Law continues until then.

So what makes 2020 such a special date? Because making CGI look like live action requires a certain computer speed. After another 3-4 doublings of speed, we’ll be at that point.

This is why I say the difference between modelling drug interactions and conquering aging is merely a difference of degree. There is a specific computer speed at which aging becomes a trivial problem, just like there was a certain computer speed at which beating the world’s best human chess players became a trivial problem.

Every problem that can be reduced to math becomes trivial at a certain computer speed. And everything that matters can be reduced to math.

Except that computers, no matter how advanced, still only answer the questions you ask them. If you ask a computer to model a certain set of biological interactions, it can do that… But that won’t tell it that there’s another important biological interaction that it should have been modeling but didn’t. Experimentation can do that, but that’s still limited by the fundamental limitations you’re looking to subvert: A biological experiment can require multiple generations of living organisms, which, if the organism you’re looking at is humans, might mean decades or centuries.

Or, you can posit that the computer models are so sophisticated that all of the relevant properties are emergent on the base model. But that would require a model that modeled the entire human body at the level of the interactions between atoms, at least. And now you’re pitting the exponential growth of Moore’s Law against a problem that’s also of exponential complexity, and the model again takes longer than the direct experiment.
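The tension here can be made concrete with a toy model (all the numbers below are placeholders of mine): suppose simulating n interacting units costs 2^n operations, and hardware speed doubles every two years. Then the wait until an n-unit simulation is feasible grows only linearly in n.

```python
import math

# Toy model: a simulation of n interacting units costs 2**n operations,
# and hardware speed doubles every two years (Moore's-law assumption).
# The wait until an n-unit run takes about one second grows linearly in n:
# exponential hardware growth buys only linear growth in problem size.
def years_until_feasible(n, ops_per_sec_today=1e15, doubling_years=2.0):
    ops_needed = 2.0 ** n
    if ops_needed <= ops_per_sec_today:
        return 0.0
    doublings = math.log2(ops_needed / ops_per_sec_today)
    return doublings * doubling_years

for n in (60, 80, 100):
    print(f"n={n}: about {years_until_feasible(n):.0f} years to wait")
```

On these assumptions, every extra 20 units of problem size costs another 40 years of Moore’s Law, indefinitely.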

Thank you for understanding what I’m trying to get across. :slight_smile:
There’s no need to do your experiments on humans. Mice should suffice; they suffer from aging in very similar ways to people. In fact, a good deal of the anti-aging research already happening is being done on mice, because they’re so similar to humans in the relevant biochemistry. In addition, they live for about 3-5 years, so aging occurs MUCH faster.

You see, this isn’t really a question of “beating aging in humans”. It’s really a question, first, of

  1. beating planned senescence, which is part and parcel of the genetics of ALL of the vertebrate lineages (in mammals more so than cold-blooded species, roughly speaking, but even the longest-lived cold-blooded creatures age)

  2. Beating other problems like heart disease, cancer, dementia, etc. that go along with aging, but aren’t necessarily part of planned senescence

Those two may really be the same problem, but I don’t think we can say for sure, yet.
Planned senescence exists to combat disease and (to a much lesser extent) allow for evolution. However, there’s nothing magic happening in your genes. It’s just biochemistry, and quite model-able, with a fast enough computer.

And you don’t need to model interactions at the atomic level, AFAICT. You only need to do it at the molecular level, because aging happens at the cell level, not at the atomic level.

Again, thank you for understanding what I’m talking about. I appreciate that.

Cancer cells in a tumour, in a body, mutate frequently.

This is one reason why, without treatment, the long term prognosis for many cancers is very poor. Because if some event occurs which acts to reduce the size of a tumour, there will likely be some cells resistant in some way to that event. So remissions are often temporary.

On the subject of cancer cell lines used in research, I don’t know much about that. I am aware there are lines that have been used for decades. I don’t know why they don’t mutate.

Actually, nothing ages similarly to humans. Controlling for body size and metabolic rate, humans live for about four times longer than one would expect. By the time the average American woman dies, there is not one single non-human mammal on the planet that was alive when she was born. Studies on mice (or elephants or Rhesus monkeys or chimpanzees) are very useful, to be sure, and can help us get a grip on the fundamental problems we’ll need to solve, but eventually, we’re going to need to run human tests.

I don’t see why the planned senescence of humans would be all that different from that of mice or other mammals. Certainly mice suffer from atherosclerosis, heart disease, cancer, etc. just like we do.

Why would the fundamental aspect(s) of planned senescence be any different? We share a common ancestor, which inherited planned senescence from ITS ancestors, and so forth. Why would the mechanism have changed all that fundamentally?

And I agree, human studies would have to start at some point. However, I posit that, by that point, the human studies would be largely confirming what we had already discovered.