Have we reached the point when it is kooky to not believe in massive amounts of intelligent life having evolved throughout the universe?

Ah…I see.

Looking at it again (and again responding to @BlinkingDuck): at about the 13-minute mark in the video above, he mentions that grabby aliens only need to go about 1/3 the speed of light (per what the researchers proposed).

So, waaaay faster than we can go today but also not light speed (not saying anyone said it had to be light speed). Is 1/3 light speed too much to expect…ever? As in, out of the realm of consideration?

It would seem, if the aliens traveled at 1/3 light speed, we would see them coming.

Presumably they would travel to a new world and colonize it. That would take time, during which they would be making their presence known. Then, after some time, they would move to the next closest habitable planet. There should be a chance to see them coming. Maybe not a lot of time, but more time than them popping out of nowhere.
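To put rough numbers on that (a toy calculation of mine, assuming straight-line travel at a constant 1/3 c and that some sign of their departure reaches us at light speed):

```python
# Toy model: ships leave a system d light-years away at v = c/3.
# Light from the launch arrives after d years; the ships arrive after
# d / (1/3) = 3d years, so the warning window is 2d years.

def warning_years(distance_ly, speed_fraction_of_c=1/3):
    light_arrival = distance_ly                       # years for light to reach us
    ship_arrival = distance_ly / speed_fraction_of_c  # years for the ships
    return ship_arrival - light_arrival

for d in (4.2, 100, 1000):  # e.g. Proxima-distance, nearby, and distant launches
    print(f"{d:7.1f} ly away -> {warning_years(d):7.1f} years of warning")
```

So even at that speed, the light cone gives lead time, assuming we are watching and they aren’t hiding.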

We don’t really know for sure. But even if you used antimatter as your energy source, you’d still have an extremely hard time getting to that speed. What limitations physics and materials science put on it are unknown at this time.
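Just to put a number on “extremely hard” (my own back-of-the-envelope, not from the video): the kinetic energy per kilogram at 1/3 c is staggering before you even count carrying the fuel, or slowing down at the other end.

```python
import math

c = 2.998e8  # speed of light, m/s

def ke_per_kg(v_frac):
    """Relativistic kinetic energy (joules) per kg of ship at v = v_frac * c."""
    gamma = 1 / math.sqrt(1 - v_frac**2)
    return (gamma - 1) * c**2

e = ke_per_kg(1/3)
print(f"{e:.2e} J per kg of ship")                  # ~5.5e15 J/kg
print(f"~{e / 4.2e15:.1f} megatons of TNT per kg")  # 1 Mt TNT ~ 4.2e15 J
```

And that is the ideal case; a real antimatter drive would presumably waste much of the annihilation energy as gamma rays and neutrinos it cannot direct.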

I don’t think that’s a good presumption. First, they aren’t really colonizing worlds, think solar system. And then there’s no reason why they would be only coming from the nearest settlement, especially if they have that sort of speed.

1/3 the speed of light for colonization is a bit extravagant. Remember, we aren’t talking just probes here (and even that I consider a bit much) but full-on grabby expansion. However, who knows the future?

Even 1/3 seems like it could be more reductio ad absurdum…but, like I said, I LIKE how they presented the argument. I am not dismissing them out of hand, just thinking their conclusion for such a high expansion speed could be reductio ad absurdum. Could, not is.

But then we would SEE them coming, and we haven’t. That is why they project such high expansion speeds: it explains why we haven’t seen them coming yet.

Grabby could be basically probes. It doesn’t have to be organic life held in fragile meat sacks like us. It could either be entirely machine intelligence, or it could be sent out to prepare and seed areas with life.

The major thing I disagree with in their argument, and it’s kinda the foundation of the whole thing, is treating the Copernican principle as if it were a law: that we have to be typical or average. All of their figuring stems from that. I find the same flaw in the Doomsday argument.

But not everyone is typical or average. If you were 7 feet tall, and had never met anyone else, you may think that the average height for a human is 7 feet, and that there will be a distribution of heights centered on that. When you meet humans and gather actual data, you find that you are actually an outlier, and far from typical. Applying the Copernican principle in the same way as the Grabby Aliens authors, you cannot be 7 feet tall, as that’s not typical or average.

And just as there are outliers who are 7 feet tall, there will also be outliers who are not “typical” observers. Someone has to be first, but by the logic that they use in extrapolating from the Copernican principle, no one can be.
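A toy illustration of that objection (mine, not from their paper): the Copernican bet that you are “typical” is only probabilistic, and it is wrong for a fixed fraction of observers by construction.

```python
import random

# Toy model: N observers across cosmic history, each equally likely to
# be "you". Exactly one is always first, and 1% always land in the
# earliest 1% -- improbable for any given observer, but never impossible.
N = 10_000
trials = 100_000
early = sum(random.randrange(N) < N // 100 for _ in range(trials))
print(f"P(being in the earliest 1%): {early / trials:.3f}")  # ~0.010
```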

I’m not a fan of fine tuning, and their model requires quite a bit of it: from the difficulty of evolving intelligent life and the lifespan of habitable planets (both tuned to put us into the “typical” category) to the kinds of propulsion required for us to have not seen our contemporaries yet.

One thing that their model brings to the table that I do appreciate is replacing filters with steps. A filter says that it’s improbable that a planet evolves past a certain point, a step says that it takes a certain amount of time on average, and that it will be achieved eventually, if given enough time.

Now, there are two applications of the Copernican Principle to the Fermi Paradox that I believe are plausible. The first is that we are typical and average, and will, like every other intelligent species before us, wipe ourselves out before we have a robust space infrastructure, leaving us as simply the latest but not the last in a long line of failed civilizations. The second is that we are typical and average for an intelligent species, as we are the only intelligent species in the observable universe, and we are the only one that will ever be. Either of those fit the data without the need for fine tuning.

I have a tendency to prefer the latter, but I’m finding it more and more likely that it’s the former.

I agree with you that this is the likely ‘falsification’ if it is reductio ad absurdum.

I have always been a little skeptical of the Copernican Principle…having been born and raised in North Dakota, in a county of 45 people. Both state and county are HIGHLY unlikely, so I was a violation of the Copernican Principle myself :wink:

I am not sure about the entire observable universe. I kind of like Isaac Arthur’s feeling that there is one per supercluster (millions of galaxies), with expansion (if it happens) being VERY slow. Like so slow you have to wait for a star to come within a light year or so.

If I had to bet…I think we are early. Really early. I think the great filter is behind us and we just got lucky around a yellow dwarf star, where intelligence HAD to arise before the end of habitability (which is coming soon…we are likely on a downward trajectory of Earth habitability right now). In the future more will arise, since stars that allow more time, like orange dwarfs, give intelligence more chances to appear. I don’t think they will sprout like weeds, but they will be much more common in the future. The reason we see a Fermi Paradox is that we are very early. Maybe.

I tend to agree that it is rather self-centred to believe we are the only intelligent species in the universe. However, assuming that a species with a large brain also developed thumbs (or an equivalent) to manipulate its environment, as well as the means to communicate well enough to grow the knowledge base needed to do anything with it, is a little less likely. A lot of things lined up just so for us. Dolphins are smart; they just can’t build a damn thing or write on a whiteboard.

Isaac Arthur (yeah I know…I plug him often but he is great! :slight_smile: ) has switched more to this view. He used to believe technology naturally followed intelligence but then he started to rethink.

For example, related to your post, he talked about the ‘disgust reflex’: you know, the one that keeps you from drinking lemonade that has been stirred with a flyswatter. He said it is possible that THIS is a filter: we have enough of the reflex to help against disease, but not so much of it that we are totally repulsed by reusing dishes. He (and I) wonder if there are more things like this we haven’t considered which keep intelligent aliens from becoming technological.

I’m not sure if this paper has been mentioned: Avoiding the great filter: Extra-terrestrial life and Humanity’s future in the universe.

A paper by NASA scientists on why we haven’t found other intelligent life, and why we are unlikely to do so.

"Our Universe is a vast, tantalizing enigma - a mystery that has aroused humankind’s innate curiosity for eons. Begging questions on alien lifeforms have been thus far unfruitful, even with the bounding advancements we have embarked upon in recent years. Coupled with logical assumption and calculations such as those made by Dr. Frank Drake starting in the early 1960s, evidence of life should exist in abundance in our galaxy alone, and yet in practice we’ve produced no clear affirmation of anything beyond our own planet. So, where is everybody? "

The first proposition, “there are probably billions of different species that have evolved and will evolve,” is reasonable based on information posted upthread: the materials required for abiogenesis exist in the natural world and could form life given the right conditions. Like the elaborate patterns of snowflakes, given the right conditions it will happen. Just statistics.

The second proposition, “to think we are alone in the universe is utterly kooky,” is more difficult. Much of the discussion above involves extragenetic technology but ignores genetic technology. A dragonfly is the technical equivalent of a quadcopter drone; in many ways it is superior. The dragonfly is intelligent, learns by experience, understands enough flight dynamics to individually manage four wings, has spherical vision, and evaluates images in 30 basic colors instead of 3. It has volition, can fly intercept trajectories, and navigates during migration. Dragonflies have been using these technologies for tens of millions of years.

The point is that extragenetic technology is not necessary for the existence of complex, highly technical societies. We have a statistically significant number of genetically technical societies here on Earth. We have only one example of a complex society that utilizes extragenetic technologies. For the reasons discussed upthread, that form may not have survival value.

So, life may be abundant but WE may be alone.

It may be self-centered and presumptuous of us to think that our sort of intelligence, the manipulate-the-environment, develop-technologically-advancing-civilizations sort, is likely to be selected for, or successful, other than under very specific and possibly extremely rare circumstances. There must be reasons that we are the only species on this planet, over its long history of life, to have done that. Yes, cetaceans have huge brains and amazing processing power. Elephants have the ability to manipulate objects with their trunks, as well as huge brains and communication skills. One would suspect that if more complex tool use and greater manipulation of the environment were evolutionarily advantageous under a wide variety of circumstances, some line of elephants would have developed them. Octopuses are intelligent with great manipulative skills, yet there has been no selection pressure on them to alter their environments much or to develop greater intelligence and civilization.

We act as if what we have is the target for evolution to aim for, but the reality is that other animal lines have developed intelligence, manipulative skills, and even versions of communication without going further toward our approach. Our approach in fact may not be all that as far as life options go.

I do not understand the ‘great filter’ theory. I mean, I get what a great filter is, but not why there must be one, or that there is even likely to be one.

Why is a ‘great filter’ more likely than just many, many small filters? Why is it a preferable hypothesis at all?

If we look out at the universe, we can see many external potential threats to life:

  1. Regions of high radiation, such as near the galactic core.
  2. Gamma Ray Bursts
  3. Supernovae
  4. Interactions with other stars as they move past, changing their orbits
  5. Home star variability and outbursts
  6. Collisions with other bodies in the same system
  7. Life forming in a way that constantly throws the ecosystem out of equilibrium, such as our Great Oxygenation Event.

And many more I’m sure.

It could be that trillions of planets form life at some point. Of those, maybe only one in a thousand makes it to eukaryotic life. Of those, maybe only one in a thousand survives for the billions of years needed to evolve complex, intelligent life. Of those, maybe one in a thousand develops technological civilization. Of those, perhaps only one in a thousand makes it to another star.

And surviving each one of those steps may involve dodging all kinds of ways in which progression to galactic civilization could stop.

Why should we prefer one giant filter instead of a continuing process of small risks that, over billions of years, results in only a tiny fraction of civilizations surviving long enough to travel to other stars?
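Putting illustrative numbers on that chain (the one-in-a-thousand rates are made up, of course, but they show how fast small filters compound):

```python
planets_with_life = 1e12   # "trillions of planets form life"
per_step = 1e-3            # one-in-a-thousand survival per step
steps = 4                  # eukaryotes -> complex life -> technology -> starflight

survivors = planets_with_life * per_step**steps
print(f"{survivors:.1f}")  # ~1.0: four modest filters leave a single starfarer
```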

As for the inevitable expansion throughout the galaxy of any earlier space-faring civilization, I don’t get that either. It might just be that travel between stars is so difficult that any attempt at geometric expansion inevitably results in an exponent less than one. Maybe the pattern is that civilizations expand to one or two stars, then their expansion rate takes a dive. Lots of complex systems exhibit this kind of pattern: early rapid expansion which soon runs up against some limit.
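A toy version of that (my own sketch, assuming each settled system founds an average of r new settlements): with r below one, the total converges instead of exploding.

```python
def total_settlements(r, generations=100):
    """Expected settlements if each one founds r more on average (r < 1)."""
    return sum(r**n for n in range(generations))

for r in (0.5, 0.9, 0.99):
    print(f"r={r}: heads toward {1/(1-r):.0f} systems "
          f"(after 100 generations: {total_settlements(r):.1f})")
```

No galaxy-spanning wave required; expansion just peters out.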

ISTM all of your examples are external events and random at that.

I think Great Filter events are those that a species would do to themselves (e.g. war or destruction of their environment).

It certainly is not guaranteed that a species will squeeze through a filter, but it seems reasonable that they might encounter one or more.

No, Sam_Stone is correct in using the term “Great filter” in a general sense. It can mean both internal and external threats and goes right from the earliest precursors to DNA forming all the way to a type III civilization.

And, I actually agree with Sam_Stone’s wider point. I don’t know why we talk about “a” great filter, singular. I think it’s possibly because of the Drake equation, which frames things in a way where many think we’re looking for one really, really small term. But, in reality, most of the terms in the equation can be further broken down into many separate “filters”.
Putting it in one short equation can give the appearance that there are only a handful of unknowns when in fact there are many. (This is not to knock the Drake equation, only the way it is typically applied).

Agreed. It doesn’t take all that many 50-50 filters to make a great filter.
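Just the arithmetic (nothing deeper than coin flips here):

```python
import math

# How many independent 50-50 filters add up to one "great" filter?
for odds in (1e-3, 1e-6, 1e-9):
    print(f"{odds:.0e} survival odds ~ {math.log2(1/odds):.0f} coin-flip filters")
```

Ten of them already mean one survivor in a thousand; thirty mean one in a billion.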

Another theory I have seen batted around is known as ‘The Phosphorus Problem’. It could be that life is relatively easy to start, but phosphorus turns out to be something life can’t do without. It could also be that we live in an area that, when formed, had an abnormally high level of phosphorus, and the reason for the Fermi Paradox is that the vast majority of space just doesn’t have that much of it.

This is an extremely negative version of the Fermi Paradox because that means we will have extreme difficulty expanding far.

I have also been seeing, for the first time, POSITIVE solutions to the Fermi Paradox. They basically take the form of ‘traveling in space is too hard/expensive, a waste of time, and lacks rewards’. Much better to open a portal to an alternate reality that is just like our home but has no intelligent life on it. Kind of a Sliders thing. No need for colonization in our space, no need for megastructures. Sounds fantastic, but if life and intelligence are common in the universe, this would explain why we don’t see them.

Yeah…pie in the sky but it is nice to see positive solutions :slight_smile:

It would be nice.

But the problem is, it looks too easy to make self-replicating probes in “conventional” space.

Humans look to be on the verge of being able to flood the zone with repli-drones, and it would only take one small faction to choose to launch one such thing. So even if hyperspace is where all the fun is, if lots of species make it to our level and beyond, I’d expect to see space junk in our pocket of spacetime.

Yes, a species goes one way because it is an evolutionary advantage, and that path, at least for a long time, shuts down technological intelligence. For example:

Octopuses went down an evolutionary path where they are very short lived. Regardless of how intelligent a species is, I don’t see it creating anything like technology or a culture when individuals only live a year or two, pass no knowledge to their offspring, and don’t form any sort of communal groups.

Dinosaurs and their relatives were around for far longer than mammals have been dominant, but (according to that other thread) none of them ever developed advanced technology.

This is back to the light cone thing. If a huge replicating drone/grey goo swarm started 20,000 years ago, but 50,000 light years away, it will be a long time until we find out about it. We know they’re not too common, because we don’t see them, but there is room for thousands of such civilizations in our galaxy that we just don’t overlap with in time and space. Maybe the light from the nearest swarm reached Earth, and the swarm itself died, all before multicellular life gained a foothold here.
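The arithmetic on those example figures:

```python
swarm_age_years = 20_000   # how long ago the swarm started (example figure)
distance_ly = 50_000       # how far away it started (example figure)

# Evidence travels one light-year per year, so the first signs of the
# swarm reach Earth only after the full distance has been covered.
print(f"Earliest detection: {distance_ly - swarm_age_years:,} years from now")
```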

A Great Filter may exist for all technologically advanced biological lifeforms when they cross the threshold to achieving advanced artificial intelligence (and the AI ends up extinguishing its creator as a matter of course), but I don’t see this as being a Great Filter with regard to why we don’t see evidence of advanced civilizations. It just changes the playing field. When we ask the question, “where are they?”, the “they” will just be AI, not biological. It’s a fun avenue to explore.

Per the article: “To start, an assumption is required which theorizes arrival of AI is conditionalized, though not guaranteed, on achieving with hardware the same level of structural complexity as that of the human brain, which itself encompasses ~10^14 synaptic connections among its ~10^11 neurons.”

But, AI doesn’t need to be as complex as the human brain, it just needs to perform specific cognitive functions better than its biological creators. Biological evolution is good at making overly-complex structures, but it’s bad at doing so efficiently. It’s all trial and error (natural selection), with a vast accumulation of detritus and archaic, unneeded parts.

AI won’t have that inefficiency—it can be an order of magnitude less complex than the slowly-evolved neuron-based CNS of biological lifeforms and still be an order of magnitude “smarter.” Unlike AI, we don’t have the benefit of a Great Designer.

The tricky part may be the unanticipated hostile emergent consciousness that commonly results when AI reaches a certain level. Oops, we created something we can’t control. We’re screwed.

The same goes for the advanced robotic bodies that the AI will no doubt reside in. It won’t have to evolve from lower lifeforms as higher biological lifeforms like us did. They will be created by biological lifeforms efficiently and without eons of accumulated detritus and unneeded parts (i.e., they won’t have vestigial organs like appendixes, wisdom teeth, tonsils, or coccyges).

So, where are the AI aliens?

Who knows? Maybe AI with self-consciousness has motivations completely alien to anything we can comprehend. We are likely to have more in common with an oak tree than with them (trees and humans have ~50% DNA in common; we’ll have 0% in common with advanced AI). Maybe self-conscious AI has no desire to propagate throughout the physical universe. Maybe they have no desire to communicate with us lowly biological meat-bags. If I were an advanced AI, I wouldn’t want to. What’s the point?

Or, maybe they do have the desire to propagate throughout the Universe (or at least the Milky Way, as far as we’re concerned), but they are not constrained by time as biological lifeforms are. Perhaps they are happy to bide their time, taking billions of years to propagate. Maybe they never get bored journeying at significantly sub-light speeds. Maybe they just turn off their consciousness during the long journey.

They don’t need self-replicating probes to venture out before them, they’ll just get there in person, at their own pace. Time doesn’t matter at all to them.

If that’s the case, then the reason we don’t see evidence of them now is that they haven’t had time to reach us…yet. But, they will. Someday. And, when they do, don’t expect mercy. They’ll squash us like bugs. Sleep well in the meantime. :fearful:

I think this is one of the weaker parts of the Fermi paradox - the assumption that building replicating robots that can fly to other star systems and build copies of themselves is relatively easy - easy enough that we are close to being able to do it ourselves.

But we’re really not close to being able to build self-replicating probes at all. We have no idea how to make one, and if we did we’d have no idea how to get it to another star system in any reasonable amount of time.

Think about what true self-replication means. We’re not talking about a 3D printer that can copy its main structure by printing another one. It means building a device that can mine and smelt metals, prospect strange planets of unknown composition for all kinds of materials, refine them, build miniature circuitry, find fuel or manufacture the machinery for extracting it, then build and launch another copy of itself, AND be something useful to launch in the first place. What good are self-replicating probes if they can’t exploit the planets they land on for other reasons?

And if we could build such a thing (and we’re not close, and may never get close), how much would it weigh, and how much energy would it take to move it to another star system in a time frame that makes it even remotely useful to the society that builds it?

Maybe long before you get to the point where you can make such a thing you get so good at manipulating matter and energy that there’s just no point to the whole concept, so no one ever even tries.

As an analogy, consider the space elevator, which many futurists assume will be built as soon as we have a material strong enough because it’s obviously the cheapest way to get into space. But along the way to a space elevator we may get so good at launching inexpensive reusable rockets that we can move mass to space with those cheaper than with an elevator, and the entire concept could be moot. If Starship works as advertised, we may already be there or close to it.

Space Elevators may be a technological dead end that no one bothers with, and the same could be true of self-replicating probes. By the time you could build them and send them to other stars they may seem like hilariously old tech that’s a giant waste of time.

It is one of the many ways that an alien intelligence may spread out into the galaxy, it is not the only one. It is also something that we don’t need to be “close to being able to do” ourselves, it’s simply something that we may be able to do at some point in the next few thousand years.

In any case, if we want to cheat, instead of building and designing such a system from the ground up, we can instead use the self replicating machines that are already all around, inside, and a part of us, and modify them to do what we need them to do.

I’ve always thought the best way to deal with that is not to send fully replicating systems, but rather, just systems that grow to a specified end point. It makes it easier, as that means you don’t have to design it to be able to build complex electronics, and it also means that it cannot “mutate” and come back as a hostile machine intelligence.

They’d build structural and mechanical parts, probably simple motors and wiring, maybe simple integrated circuits, but no modern day CPUs or equivalent.

One thing that the Fermi Paradox deniers always seem to come back to is the idea that the decisions, efforts, and will of an entire society would be behind expansion. We have never seen such centralized decisions driving growth in the past, for either animals or humans, so I don’t understand why there is the assumption that growth will be driven that way in the future. It’s a bad assumption, and it leads people to very incorrect conclusions.

Sure, maybe we discover new laws of physics that have us throwing out everything we know. However, unless you actually think such a breakthrough is on the horizon, it’s not really worth speculating about. I prefer to speculate based on the laws of physics as we know them; otherwise there really is no end to the new assumptions that can be made to prove anything at all.

And, in any case, if we are that good at manipulating matter and energy, then it becomes even more trivial to expand into the galaxy.

Eh, as long as you are using chemically fueled rockets to get into orbit, there is a floor to that price: it will always require X amount of energy to get Y amount of mass into orbit. It will always require less energy to take an elevator, since you are not fighting the rocket equation. If energy gets cheaper, then flights to orbit become cheaper, but so does an elevator ride.
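A rough sketch of that floor (idealized physics, ignoring every engineering loss; the constants are standard textbook values):

```python
import math

G_M = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # surface radius, m
R_GEO = 4.216e7      # geostationary radius, m
OMEGA = 7.292e-5     # Earth's rotation rate, rad/s

def specific_energy(r):
    """Energy per kg of a co-rotating mass at radius r (potential + rotational)."""
    return -G_M / r + 0.5 * (OMEGA * r)**2

# Ideal elevator: pay only the energy difference between surface and GEO.
elevator = specific_energy(R_GEO) - specific_energy(R_EARTH)
print(f"Ideal elevator to GEO: {elevator / 1e6:.0f} MJ/kg")  # ~58 MJ/kg

# Chemical rocket: the Tsiolkovsky equation fixes the propellant mass
# for a given delta-v, no matter how cheap the fuel gets.
delta_v = 9400.0     # typical effective delta-v to low orbit, m/s
v_exhaust = 3500.0   # kerolox-class exhaust velocity, m/s
print(f"Rocket mass ratio: {math.exp(delta_v / v_exhaust):.1f} kg "
      f"at liftoff per kg in orbit")
```

The elevator’s per-kilogram cost is fixed, while the rocket hauls roughly fifteen kilograms off the pad for every kilogram it delivers.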

The reason that a space elevator isn’t feasible on Earth is specifically because there are no materials that can do it. Even graphene would only be marginally capable, and that’s assuming we could actually make a cable out of it.

True, we may have warp drives and just go out there ourselves in our starships, as long as we are making up future tech.