Is the Fermi Paradox becoming more acute?

It’s not. You’re calculating the chance of being right about being within a given subset of minds. That is, of all the minds ever to exist, those that think to themselves, ‘I’m within the final 95% of minds to ever exist’ (supposing they in fact do) are trivially right 95% of the time.

That’s just as true for the set of minds that do ever ask themselves that question; it’s just not a particularly interesting set of minds. Perhaps the doomsday argument will be forgotten at some point in the future and won’t occur to anyone else, so while you’ve correctly reasoned that you’re within the final 95% of minds that ponder the doomsday argument, that simply doesn’t tell you anything interesting. (In particular, nothing about the total number of humans ever to live.)

The doomsday argument for ‘people that ponder the doomsday argument’ never tells you anything about whether humanity reaches the stars; it simply gives you an upper bound for the number of people to ponder the doomsday argument, which just isn’t terribly interesting. As a practical difficulty, it’s also kinda hard to figure out how many people have pondered the doomsday argument so far, and hence to obtain a valid answer.

First of all, we know exactly the fraction of times the doomsday argument gives a correct answer: 95% of all humans, if they were to consider themselves to be within the final 95% of all humans, would in fact be right about that.
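A trivially small sketch makes this concrete: whatever the total number of humans turns out to be, exactly 95% of birth ranks fall within the ‘final 95%’ (the one-million total here is just an arbitrary stand-in):

```python
# For a hypothetical total of N humans, count how many birth ranks lie
# in the final 95% (i.e., rank beyond the first 5%).
N = 1_000_000
in_final_95 = sum(1 for rank in range(1, N + 1) if rank > 0.05 * N)
print(in_final_95 / N)  # → 0.95, for any choice of N
```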

Second, any study with a certain p-value is expected to give a false result in a precise fraction of cases (in the large-number limit). If you, say, repeat a study checking whether there’s any correlation between jelly beans of a certain color and acne, it will yield a wrong result about one in twenty times if the p-value criterion for a significant result is 0.05. That’s the same as with the doomsday argument.
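The jelly-bean point can be checked with a quick sketch: under the null hypothesis a p-value is uniformly distributed, so a 0.05 significance threshold produces a false positive about one time in twenty (the seed and trial count are arbitrary):

```python
import random

random.seed(1)
trials = 100_000
# Under the null, a test's p-value is uniform on [0, 1], so the fraction
# of 'significant' results at the 0.05 threshold is about 5%.
hits = sum(random.random() < 0.05 for _ in range(trials))
print(hits / trials)  # ≈ 0.05
```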

Fine, then we won’t make it to the stars in the simulated universe. Whether you want to call a given universe ‘real’ or ‘simulated’ is just a meaningless distinction; what’s real to us is defined via the relations it holds to us, whether those are carried by stuff made of bits and bytes or atoms and electrons.

P(E | IT) = P(IT | E) P(E) / P(IT), where E = we’re in the end fraction of humanity and IT = reliable interstellar travel exists.

I thought the argument here was a civilization that had just created a reliable way to colonize the stars. So IT = 1. The prior probability that we’re in the second fraction of humanity is 0.5. Taken as given that a planet-bound species is in its second half of all existing members, the probability that it develops reliable galactic travel is close to zero.

This last is for the same reason that if I drop two healthy mating pairs of birds into a large safe nature preserve with plentiful food, the chance that those four are among the end fraction of birds to exist in that preserve is small. Species procreate to the limit of the carrying capacity of their environment. If the possible environment is entire galaxies, the future carrying capacity of that dwarfs the population on the planet before the tech was invented. Or to say it another way: given that the few birds we see are among the last of their species, the chance they are in a safe environment with an enormous carrying capacity for their kind is small.
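The Bayes update being described can be put into numbers; the conditional probabilities below are made-up illustrations of ‘close to zero’ and ‘close to one’, not values derived from the argument itself:

```python
# E  = we're in the end (second) fraction of humanity
# IT = reliable interstellar travel has been developed
p_E = 0.5                # prior from the post above
p_IT_given_E = 1e-3      # hypothetical: a late-stage species rarely develops IT
p_IT_given_notE = 0.999  # hypothetical: an early species almost surely will

# Total probability of IT, then Bayes' rule for P(E | IT).
p_IT = p_IT_given_E * p_E + p_IT_given_notE * (1 - p_E)
p_E_given_IT = p_IT_given_E * p_E / p_IT
print(p_E_given_IT)      # ≈ 0.001: learning IT all but rules out being 'late'
```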

Dupe

Can’t the same argument be made about winning the Powerball lottery? The odds are overwhelmingly against it…but someone does, actually, win.

Maybe we’re just “those lucky guys” who live at the bottom of the J curve. (Bummer! I want super-duper-future tech!)

I don’t get it.

That’s not quite what we’re talking about, though. Or rather, I think we’ve been talking about slightly different things. The doomsday argument, to me, essentially just says that an observation of being within the early batch of humans (or whatever reference group) will bring down the likelihood that humanity will ever reach a large total number. This remains the case if I have access to additional information that increases my antecedent probability of humanity growing to a large number/persisting long into the future (as long as that information isn’t certain).

That is, we’re interested in P(N|n), with N being the total number of humans, and n being ‘your’ number. This is given by

P(N|n) = P(n|N)*P(N)/P(n).

There, P(n|N) = 1/N seems to me mostly unassailable—it would be hard to argue that there is some metaphysical reason for you to be born preferentially among some certain subset of humans. Hence, all else being equal, an increase in N will yield a decrease in probability.

But is all else equal? Let’s take a simple test scenario. Suppose we have two hypotheses, which say N1 = 60 billion and N2 = 60 quadrillion, respectively. That means we can compute P(n) as

P(n) = P(n|N1)P(N1) + P(n|N2)P(N2).

Suppose then you learn that you’re within the first 60 billion humans. That means P(n|N1) = 1, since you’re certainly within the first 60 billion humans if only 60 billion ever live; if, on the other hand, there will be 60 quadrillion humans ever, you being within the first 60 billion has probability 10[sup]-6[/sup]. Suppose now you want to calculate the probability of humanity making it to 60 quadrillion, given you’ve found yourself within the first 60 billion, that is:

P(N2|n) = P(n|N2)P(N2) / [P(n|N1)P(N1) + P(n|N2)P(N2)]
= 10[sup]-6[/sup] * P(N2) / [1 * P(N1) + 10[sup]-6[/sup] * P(N2)]
= 1 / [10[sup]6[/sup] + 1]

If we thus consider both the high and low number to be equally likely, we get an overwhelming likelihood that, given that we are ‘early’, we won’t reach the high number.

But, and here’s the part I didn’t sufficiently appreciate, it’s indeed possible to drive up this likelihood, provided one’s prior for N2 is sufficiently larger than for N1. Concretely, if P(N1)/P(N2) ~ 10[sup]-6[/sup], things pretty much cancel out, and we’re left with a 50/50 shot between both options.
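As a sanity check, both cases can be computed directly; this is just the formula above wrapped in a function, with 60 billion and 60 quadrillion as the two hypothetical totals:

```python
def posterior_N2(p_N1, p_N2):
    """Posterior for N2 (60 quadrillion) given a birth rank in the first 60 billion."""
    like_N1 = 1.0           # P(n | N1): certain to be in the first 60 billion
    like_N2 = 60e9 / 60e15  # P(n | N2) = 10^-6
    return like_N2 * p_N2 / (like_N1 * p_N1 + like_N2 * p_N2)

print(posterior_N2(0.5, 0.5))    # ≈ 1e-6: with even priors, N2 is all but ruled out
p2 = 1 / (1 + 1e-6)              # prior ratio P(N1)/P(N2) of 10^-6
print(posterior_N2(1 - p2, p2))  # ≈ 0.5: the strong prior restores even odds
```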

So, if learning that humanity develops interstellar flight sufficiently increases one’s confidence in humanity reaching vast population numbers, then yes, one is justified in assigning a higher likelihood even after learning of one’s early status. But, first of all, this must be a rather large increase—one would have to be (slightly more than) 99.9999% sure that humanity makes it to a quadrillion. Moreover—which was the point I was trying to make—even in such a case, finding out that one is among the first 60 billion would reduce that solid certainty to even odds, which is, all things considered, a rather steep drop!

The reason I wasn’t quite clear on that was that I didn’t really think about the Bayesian version of the argument, but rather, in the form of Richard Gott’s ‘Berlin Wall’ formulation: for any given day you choose to visit the Berlin Wall during the time it stood, the probability is 50% that it will stand between 1/3 and 3 times longer than it has already stood. If you’ve visited it at 1/4 of its duration, it will continue to stand for three times that time; visiting it at 75% of its total time of existence, it will stand one third longer. Half of all visitors to the Berlin Wall during its entire time of existence, reasoning as above, would turn out to be right.
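Gott’s claim is easy to verify by simulation; the sketch below drops visitors at uniformly random moments in the wall’s lifetime and counts how many would be right that the remaining time lies between 1/3 and 3 times the elapsed time:

```python
import random

random.seed(0)
T = 1.0          # the wall's total lifetime (units don't matter)
trials = 100_000
right = 0
for _ in range(trials):
    t = random.uniform(0, T)  # elapsed lifetime at the moment of the visit
    remaining = T - t
    # Gott's prediction: remaining time between 1/3x and 3x the elapsed time.
    if t / 3 <= remaining <= 3 * t:
        right += 1
print(right / trials)  # ≈ 0.5
```

The condition holds exactly when the visit falls between the 25% and 75% marks of the lifetime, matching the 1/4 and 3/4 examples above.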

Now suppose—and this is, I think (now), what eburacum45 was getting at—you visit it on November 9, 1989, shortly after Günter Schabowski’s somewhat premature announcement of the opening of the GDR’s borders. You probably wouldn’t be tempted to declare that the wall is likely to stand for another third of its lifetime, let alone three times as long. In fact, you might rather confidently predict that it won’t last the day.

Does this negate the above argument? Not directly: you could still reason that way, you just would be within the 50% of observers to be wrong about that. It’s hard to escape the notion you’re making a mistake when reasoning that way, however. You’re no longer an observer arriving at a random time; you’re at a special point in the wall’s existence.

Similarly, one might reason, once one learns of humanity’s capability of interstellar flight, one learns that one is at a rather special time, in the same sense, of humanity’s existence—an early time, necessarily. But I’m not sure that’s correct. After all, it could be the case, even after learning that humanity is capable of interstellar flight, that humanity’s end is nearer rather than farther—perhaps, as in the Dark Forest, it’s this capacity that will cause others ‘out there’ to notice us, leading to a rather swifter ending.

So I’m not sure this works as a counter to my argument. After all, the original argument was that we live at a time when no space-flight has been developed, even though this would be unlikely if space-flight will eventually be developed (which I think still stands). We can hence take this as an indication that it’s unlikely for there to ever be large-scale space-flight, which, if generalized beyond humanity, would also allow an answer to the Fermi problem.

But I appreciate that the issue is more subtle than I initially thought.

I have to admit that we do not have a reliable method of interstellar travel yet, so we cannot yet reliably escape into the galaxy. The Fermi paradox seems to suggest that we won’t.

Nevertheless I think we should try our best to perform this escape trick, even if the probabilities are against us.

100 percent! I believe it should be mankind’s primary mission, and everything humans do should be in service of this mission. Children should be raised to accept it as the purpose of their existence. Otherwise, all of this competition between people, companies, countries and other ways that we group ourselves is just “playing in the kiddie pool” while the tsunami approaches. The tsunami may be a billion years away, but we might need that long! Or it could come a lot sooner.

Silly joke. VR = Video Game = Donkey Kong; Spice = Melange = mind-enhancing drug mined in Frank Herbert’s Dune. Therefore, if you want to get high on Spice (and who wouldn’t), enhance your prescience and hard-ons, you’ve got to harvest it in *real* reality, not *virtual* reality.

Ah I see, I’ll have to watch it, sounds interesting.
But I still think fully immersive vr will transform human civilization like no other revolution in history. The internet x1000

Sorry for taking so long to reply, but if the population is exponentially increasing, as it is today, the probability that someone in the past was correct about being in the final 50% is much less than 50%. Someone in the 1800s who believed the world was going to end soon might have thought they were definitely in the final 50%. They’d be way wrong.

No. The claim is that, out of all humans to ever live, 50% are right about being in the final 50%. This is true even if nobody’s been right about it yet, and won’t be for another million years. Hence, if any given human were to make that claim, they’d be right with probability 50%.

I agree. I look forward to what VR can offer. If it incorporates Smell-O-Vision, Taste-O-Vision and Feel-O-Vision, I’m gonna invest in a virtual spouse, because by golly my non-virtual marriage was rife with too many aberrations and glitches. If that doesn’t work out, I’m told there are some good inflatable latex alternatives.

It will fix all those glitches, and you’ll be able to fly anywhere like Superman, every hottie is yours, every meal is your favorite, anything you desire. Drugs, travel, physical possessions will be pointless. Your only motivation will be to stay inside vr. Why would you wanna be natural?

Virtual reality is a trap, as Arthur Clarke realised in his 1949 story The Lion of Comarre. I’d be prepared to imagine that 90% or more of all advanced alien civilisations fall into this trap. But would all of them?

I am rather appalled by the suggestion that most advanced species, presumably including Homo sapiens, will embrace “Virtual Reality” as their final goal, and not bother with any further progress.

Heroin is considered better than VR by its connoisseurs. Is humanity’s great goal then to build an automated apparatus to keep us all fed while we doze our lives away on a heroin high? (With our robots presumably researching just to find better and better drugs.)

Agree and disagree.
I would agree with you that the idea that we abandon the outside reality entirely and focus on VR is a disturbing thought. Luckily there’s no reason to think that would happen.

However, I do think VR will be a big part of any sentient species’ future…and indeed a big part of any sentient species’ experience, period. Meaning: we already create our own realities in many ways, both in the abstract sense of writing stories, making game environments, etc., and in the more concrete sense of fashioning a world where understanding cultures and man-made systems, and being able to use human inventions, makes up 99% of our lives, despite all of it being stuff we invented.

We already make our own reality in many ways. It’s just that we’re about to gain a lot of power in how we do that. There are plenty of positives to this, as I’m about to allude to. But of course many dangers too.

Heroin is better than VR as we know it.

<long tangent on why hyperreal VR will be better than in fiction>

In fiction, VR is always just shown as a poor facsimile of the real world. Less vivid, one simple environment, and just emotionally scratching a single itch (i.e. horniness).

But let’s imagine where VR could be in a few centuries time, with a direct neural interface…
The VR would be more vivid than your life right now. Because it wouldn’t have many of the limitations that your senses have. It may be able to add new senses.
It would be much more diverse than the real world, because you would be getting to experience the combined output of any human’s imagination, curated only by some measure of what worlds and concepts people find the most enjoyable and fascinating. And finally it will be more social – because you could share realities with any number of people from anywhere in the world any time. Not just for leisure but work, study etc.
<end of tangent>

There’s definitely a danger of us losing some touch with the fuzzy, monotonous real world. But personally I don’t see any reason why it would be an “instead of” and not a “as well as” relationship.

Very well said. After a species goes hyper real vr, it seems the pressure will be on to go full mind uploading to avoid dying, and from there a species only needs energy to survive. Not sure how we’d detect such civilizations, they’re in the ether.

A virtual civilisation requires two things, energy and a substrate. Unless the substrate consists of invisible material (dark matter?) we should be able to detect it, and also the waste heat from the processing. I would expect the construction of virtual utopias would be an addictive pastime, far worse than heroin, and this would lead to a certain amount of expansion over time.

Having said that, a virtual civilisation might find little benefit in expanding to new stars. If the guys over at Alpha Centauri are making new virtual utopias to enjoy, we’d have to go there to experience them - and travel will always be an expensive and time consuming option, so is likely to be unpopular with people who are accustomed to staying at home in a state of bliss.

The thing about fully immersive VR is: if I go in, I go in all the way. No bio/machine interface for me. What if the guy in charge of emptying my colostomy and catheter bags goes on strike? Sure, some say I’m full of shit, but not literally and I want to keep it that way. And being suspended in a vat of liquid tethered to electrodes sounds like a fast road to turning into a tub of lard. We need to exercise our organic muscles. I don’t think doing the breaststroke in the vat will cut it.

No, if I’m going fully-immersive VR, I want my consciousness fully uploaded into a perpetually self-repairing quantum supercomputer (or, at least a Samsung Galaxy S10 if that’s a cheaper option), till the heat death of the universe. Maybe longer. I don’t want a mere decade or two of cyber-fun; I want virtual immortality.

But, there-in lies the rub. Is my uploaded consciousness really me, or is it [del]Memorex[/del] somebody else? Inquiring minds want to know.

We’ve debated this conundrum on this forum ad nauseam, typically in the form of the Star Trek Transporter thought experiment. It never gets resolved, because it can’t be resolved. Sure, it’s me who gets transported (to all observers), but is it really “me”? Or, do I die and some newborn guy who looks and acts like me…and thinks he’s me, get transported (or, in this case, uploaded)? I don’t know. Nobody knows. I don’t believe anyone can ever know, even after the fact.

I believe there’s only one certain way to tell if the real “you” gets transported (or uploaded) and that’s only if it fails…and only if there is an afterlife (I wouldn’t bet on that). In that case you’ll be looking down (or looking up for you sinners) and see your doppelganger living your life. That’s when you lament, “damn, I just wanted to play Donkey Kong in virtual reality and now some stranger is schtooping my wife!”