Idea for what the Great Filter could be

I think we’re getting a bit sidetracked here. Whether SamuelA’s scenario is plausible is largely beside the point.

Fermi’s Paradox is saying: an intelligent species could leave evidence of its existence across the galaxy in very little time compared to the age of the galaxy. Why do we see no such evidence?

Now, most descriptions of Fermi’s Paradox imply that such evidence should be likely to exist based on what we know, or even that we should expect the galaxy to be “teeming” with life, but in fact none of that is necessary.
It’s enough to say that hypothetically there *could have been* evidence, and right now we don’t know why there isn’t, as there are too many unknowns about the likelihood of life, intelligent life, spacefaring life, their motivations, etc.

The “great filter” idea is that there is a single factor largely responsible for that absence. I think in a discussion like this it’s fine to say “I think the great filter is X” (though some may question whether X really works as a fundamental barrier).
It’s also fine to say “I think there’s no great filter; it’s just a combination of factors”. Or simply “Who knows?”

What I take exception to is the claim that “Fermi’s Paradox is stupid because it makes so many assumptions”, because that’s a misunderstanding of what the paradox is about. It’s simply asking the question: why do we not see anything?

Actually I do think that it is possible to make self-replicating systems that don’t mutate, using multiple error-checking systems as SamuelA suggested - in fact these non-mutating systems are probably the only ones that would fill a galaxy.

Unfortunately they would also be the most boring. As Robert Freitas pointed out, we could have an arbitrary number of stable, self-replicating probes in our asteroid belt right now, and we wouldn’t see them; they’d just sit there, repairing themselves and never, ever, ever changing.

It isn’t - you’re correct, though you didn’t mean to be, that such a species would be immune to random change caused by radiation. But there are other forms of change.

It might seem unwise to send out beings you no longer control who can gather antimatter, but play this out a little bit. Assuming technology is equal, who wins a scuffle between a starship and the home defenders of a star? The defenders always win; they have a mass advantage of many orders of magnitude. That’s assuming the defenders aren’t static on an easily predicted planet, but are spread out in basically a Dyson cloud.

A starship is a special-purpose vehicle: it starts out huge at departure, and what arrives at the destination star is basically a rowboat - just barely enough remaining hardware to land on an asteroid with the right element mix and begin again.

So being first is the only way to win.

As for why we don’t see this - I agree. This is weird. Either something profoundly major about our predictions of the immediate future of our species is wrong, or we’re the first in this galaxy, no matter how unlikely that may seem. My descriptions of immortal starships are mainly to shoot down the alternate hypothesis that all the little green men out there are just too busy watching reality TV or too suicidal to even leave their home planet.

That line begs to be uttered by Paul Reiser in a movie trailer.

We don’t need to look beyond our own world to answer the question of whether a non-wood ecosystem would still lead to the discovery of fire:
Before vascular plants, the dominant terrestrial ‘flora’ was likely the fungal or lichenoid genus Prototaxites. And as any buckskinner or Iceman would tell you, despite not having cellulose, fungi can indeed burn.

We’re in an artificial construct. We were intentionally placed here. Maybe a science project on how civilizations expand throughout a galaxy.

There’s also the creative solution in the Spin/Axis/Vortex trilogy, where earlier-arriving von Neumann probes place barriers around all the places where intelligent life evolves, inside which time runs many orders of magnitude slower than outside.

I believe the Universe may be chock full of civilizations that have achieved interstellar travel, but few if any in our galaxy (the subject of the Fermi Paradox).

I also believe there is a universal limit on how far any civilization can travel (with or without robotics, etc.): intergalactic distances are out of reach. Certainly, it’s not even theoretically possible, AFAIK, for matter or communication to reach us from galaxies receding from us faster than light, which is the vast majority of the Universe.

If true, then we can only potentially be contacted by civilizations within the Milky Way that have achieved interstellar travel, have the desire and resources to make their presence known and have had time to get here.

If you take a pessimistic estimate of the number of potentially habitable planets in the Milky Way, let’s say ~10 billion, well, that’s a big number, but not astronomically big (heck I have over 400 x that number of cells in my body—and I’m not all that fat).

And, if you take an ultra-conservative approach to the Fermi Paradox/ Drake Equation (e.g. not one, but multiple great filters, or bottlenecks), then it’s not so surprising that we haven’t been eaten by our intra-galactic brethren…yet.

If I had to pick the Great Filter, it would be Keeping Up with the Kardashians—picking up that TV signal would scare away even the Borg.

This.

Call me pessimistic, but I think the Great Filter is technology/complexity-induced self-annihilation. (In the spirit of the OP, I'll consider fire to be technology: the ability to manipulate chemical reactions.)

On a human generational timescale, the human-induced world-ending risks, or HIWERs (nuclear proliferation, climate change), are kept on the edge of acceptable.

On a geological timescale, this level of risk is disastrous--even a 1 in 100,000 chance per generation of a HIWER manifesting compounds to roughly even odds that we won't be around in another 2 million years. And this does not account for the likely acceleration of technology. Even if we dodge the force-leverage, nuclear-type threats, there's complexity-induced collapse (as a near-term example, AI getting out of hand).
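A quick back-of-the-envelope check of how that per-generation risk compounds (a sketch; the ~25-year generation length is my assumption, not something stated above):

```python
# Rough check of how a small per-generation extinction risk compounds over
# geological time. The 25-year generation length is an assumption.
p_event = 1 / 100_000          # chance per generation of a world-ending event
years = 2_000_000              # horizon from the post above
generations = years // 25      # ~80,000 generations

p_survive = (1 - p_event) ** generations   # probability no event ever occurs
print(f"{generations:,} generations, survival probability ~ {p_survive:.0%}")
# ~45% at 1-in-100,000: roughly a coin flip, and the odds collapse quickly
# if the per-generation risk is even modestly higher.
```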

I can't believe this kind of reasoning doesn't get more attention--certainly in this thread, but also in general. Do people just assume the HIWERs will decline in the future? If so, how will this happen? By magic? Will humans no longer (magically) be viciously competitive? Or will technology (magically) no longer have the ability to raze the planet?

Once the printing press was invented, the chance of a fresh dark age dropped precipitously. The reason is that, no matter what happens, the information needed to recreate the gradually accreting tech base of civilization has been copied so many times that it is vanishingly unlikely to be lost.

Think about it. Post printing press, there were thousands of copies of Newton’s books, of steam-engine manuals, and of other key pieces of knowledge. By the 20th century, public libraries meant the number of copies was enormous - instead of a single Library of Alexandria that could burn down, an event would have to destroy tens of thousands of libraries or more.

Yes, nuclear weapons were invented, but there were never enough, even at the peak of the Cold War, to destroy every library, much less every human. They could realistically knock entire nations back to the dark ages, but even at the Cold War’s 1986 peak there were enough world powers that wiping them all out was not possible.

Digital technology is taking that a step further. Now you can have the entire library in a tablet, locked in the trunk of a car or something, so even an EMP can’t get to it. Only copyright law has prevented us from copying everything smoothly and freely, but you can download free sources like Wikipedia with a few clicks (and the text is under 10 gigabytes).
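As a concrete illustration of “a few clicks” (my sketch, not the poster’s; the exact dump filename below is an assumption - check the listing at https://dumps.wikimedia.org/enwiki/ for the current one):

```python
# Minimal sketch: stream an offline copy of English Wikipedia's article text
# from the public dump server. The file name below is an assumption; consult
# https://dumps.wikimedia.org/enwiki/ for the current listing.
import urllib.request

DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
            "enwiki-latest-pages-articles.xml.bz2")

def download_dump(url: str = DUMP_URL, dest: str = "enwiki-articles.xml.bz2") -> None:
    """Stream the dump to disk in 1 MiB chunks so it never has to fit in memory."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        while True:
            chunk = response.read(1 << 20)
            if not chunk:
                break
            out.write(chunk)

if __name__ == "__main__":
    download_dump()
```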

The next logical step - artificial intelligence - means that libraries need not contain just dry text explaining how to do something. You will be able to store an artificially intelligent agent that can actually perform the given skill, if hooked up to an appropriate robot. If we ever find a way to copy human minds, you could keep entire human personalities on tap for when their skills are needed. At that point, combined with compact factories made with nanotechnology, you could literally fit a civilization restart kit into a very small space - and it would be self-replicating.

So maybe you’re right, and a terribly destructive event will ruin everything - but your math is wrong. We are probably no more than 100 years from such a “civilization restart kit” being possible. So the event has to be highly probable and has to happen in our near future; after that, it will be too late for any collapse to be permanent.

But you understand how that's optimistically selective--the same advances that lead to your seed kit only exponentially worsen the possibilities for technology-related global risks. Consider, for instance, that the same AI advances that make your seed kit feasible could lead to an AI that decides the seed kit is a problem precisely because it would let the "unwelcome" humans rebuild their civilization.

A working definition of technology could be "increased leverage per unit individual" (think of total factor productivity in the [total economic output](http://en.wikipedia.org/wiki/Total_factor_productivity) formula). This is fundamentally destabilizing. I remember Carl Sagan cautioning against asteroid-steering mechanisms to push an asteroid out of Earth's path. Why? Because he felt the technology to do so would be dangerous in the wrong hands (someone steering an asteroid into the Earth).
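For reference (my gloss, not the poster's wording): total factor productivity is the residual term $A$ in the standard production function

$$ Y = A\,K^{\alpha}L^{\beta}, $$

where $Y$ is output, $K$ is capital, and $L$ is labor. "Increased leverage per unit individual" then corresponds to a rising $A$: more output for the same labor and capital.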

Well, maybe. But even if, say, the newly sentient AIs run amok and take over the planet, you’ve just replaced one boss (sentient humans) with another. Ditto if they set off a nuclear war and rise Skynet-like from the ashes. Both are terrible tragedies for humans, but from the perspective of outside observers this is just a footnote, since the new beings send out the immortal starships just as humans would, once population pressures reach the point where it’s worth it.

Yes, but despite being ubiquitous, fungi don’t dominate the landscape. That role is taken by plants that can build structures stiff enough to attain large size. And if for any reason an alien analogue to the tracheophytes evolved to build itself primarily out of proteins rather than cellulose, you’d have forests full of poorly flammable or non-flammable plants. However, at this point I can’t say how likely or unlikely that is.

I think your theory has some merit. Most folks tend to look way further into the future for a cause; I think we should consider possibilities like yours more seriously. Many variables were responsible for our civilization’s evolution, and fire definitely helped quite a bit. I also often think about what life would be like on Earth if that asteroid had never wiped out the dinosaurs - this planet might still be populated by huge lizards, with no mammals in sight. Or hell, Earth would definitely be a very different place without our Moon. Etc…

We are witnessing the climax of the Great Filter here on this planet. The development of life is based on evolution, which is based on survival of the fittest and selfishness. As civilization becomes more powerful and sophisticated, it inevitably carries with it the inherent selfishness and drive to dominate that lead to war and environmental destruction.

If we didn’t have so much empathy we probably would already have ships heading out to other stars. But caring about others consumes a lot of man-hours.

Firstly, survival of the fittest doesn’t necessarily mean the most selfish survive. The whole reason social species exist is that cooperation is often a better strategy.
Secondly, humans are not “evolving” to become more selfish within the timeframe of human civilization.

There are fewer wars now than at any time in recorded history.
Environmental destruction remains a huge problem, it’s true, but it’s only relatively recently that we acknowledged it as a problem and tried to do anything about it. 60+ years ago we only cared insofar as it directly affected human health, or affected the number of animals we could hunt or fish.
So the empirical data seem to show the opposite of the trends you’re describing.