Have we reached the point when it is kooky to not believe in massive amounts of intelligent life having evolved throughout the universe?

A distinction between the galaxy and the universe has to be taken into account. The observable universe is indeed vast - in fact, if the universe is “flat” it may be literally infinite in size - but barring the discovery of faster-than-light travel (a VERY long shot), distant galaxies might as well be in another universe. The more distant ones we see are literally unreachable: even a beam of light leaving Earth now would never catch up to them before the expansion of space sent them beyond the cosmic event horizon. At most, we might detect radio waves or evidence of galaxy-scale engineering from distant galaxies, but this would basically be archeology: proof that such intelligence existed once upon a time. And we see the most distant galaxies by light from so early in the universe that it’s doubtful intelligence could have had time to evolve then.

For nearly all intents and purposes we could only hope to encounter or communicate with intelligent life in our own galaxy. The chances of that depend on how common planets with any life at all are, how frequently intelligence arises, and whether that intelligence forms technological civilizations. If you start with pessimistic assumptions, it’s not hard to conclude that we might be the only technological civilization in our galaxy.

Quite a strong statement to make.

I don’t think we should handwave known limitations like the speed of light. But, in terms of engineering challenges, bear in mind that humans have gone from a horse being our fastest form of transport to landing a probe on an asteroid in less than 200 years.

If we’re at all indicative of intelligent species, then when we’re talking about what may be achievable given millions of years, at the very, very least I expect it to be easy to accelerate kiloton vehicles to relativistic speeds. In which case star-hopping is something that would take years, not thousands of years.
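As a rough illustration of the “years, not thousands of years” claim, here’s a quick time-dilation calculation in Python. The 10-light-year hop and the 0.9c cruise speed are assumptions for the sketch, and acceleration/deceleration time is ignored:

```python
import math

d_ly = 10.0        # assumption: a 10-light-year hop between stars
beta = 0.9         # assumption: cruising at 0.9c, ignoring acceleration phases

t_earth = d_ly / beta                 # trip time in years, Earth frame
gamma = 1 / math.sqrt(1 - beta ** 2)  # Lorentz factor at 0.9c (~2.29)
t_ship = t_earth / gamma              # trip time in years, as experienced on board

print(round(t_earth, 1))   # ~11.1 years in the Earth frame
print(round(t_ship, 1))    # ~4.8 years in the ship frame: years, not millennia
```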

…not that it helps if it takes thousands of years anyway – there has been time even for a slow-moving civilization to criss-cross the galaxy.

I didn’t.
As I say, aliens popping round to say “hi” is one scenario among many. They could also launch unmanned (for want of a better term) probes. Or beam signals around. Or build detectable megastructures.
Or something we haven’t even thought of but would suspect was artificial if we saw it.

The point is, we don’t see anything that’s even a candidate for evidence of an intelligent species. And that’s somewhat surprising, given our current understanding of the universe.
Of course, there are lots of potential explanations: maybe abiogenesis is just stupidly, ridiculously unlikely. In the meantime though, we have no idea, and that’s why the paradox is so interesting.

iswydt

Nothing really changed with the JWST. We already knew there were hundreds of billions of galaxies, each with hundreds of billions of stars.

At that scale, it’s likely that anything that can happen does happen, billions of times. I mean, look at it like this.

If life evolving is a one in 100 billion event, then we’re already looking at a couple of life-bearing planets in our own galaxy, and in pretty much every one of those hundreds of billions of galaxies. You can put zeroes on the end of that 100 billion above, and the numbers get smaller, but you have to put a LOT of zeroes on there before it becomes a one-time thing.
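A back-of-the-envelope version of that arithmetic in Python; the star and galaxy counts here are round-number assumptions, not measurements:

```python
stars_per_galaxy = 2e11   # assumption: ~200 billion stars per galaxy
galaxies = 2e11           # assumption: ~200 billion galaxies

p = 1 / 1e11              # "one in 100 billion" chance of life per star

per_galaxy = stars_per_galaxy * p   # expected life-bearing systems per galaxy
total = per_galaxy * galaxies       # expected total across the observable universe

print(per_galaxy)   # ~2 per galaxy
print(total)        # ~4e11 overall

# How many zeroes before it becomes a one-time thing? The expected total
# only drops to ~1 when p is around 1 / (stars_per_galaxy * galaxies):
print(stars_per_galaxy * galaxies)  # ~4e22, i.e. roughly 11 more zeroes
```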

As for why we seem alone; I think it’s likely that the incidence of life is such that it’s rare enough that nobody’s within radio range or relativistic travel distance of us, assuming in the first place that it’s advanced enough to communicate. But again, I feel like it’s highly likely that there are multiple civilizations out there somewhere of comparable development to our own. They’re just in some galaxy billions of light years away.

I don’t feel like it’s any sort of judgment on someone to NOT believe in that sort of thing. It’s one of those mental exercises that requires a certain level of both mathematical understanding AND imagination. Geologic time is another one; not to hijack, but I’ve long felt that a lot of the resistance to evolution is grounded in not actually understanding geologic time and just how LONG we’re talking about. I’m not surprised that when you start applying the same sorts of huge numbers to how many galaxies or how far away they are, that people start thinking it’s all nonsense, if they can’t really conceive of it in any meaningful way.

You say that, but then add

Which is also quite a strong statement to make. It is possible that there is a limit to the speed of a mass-bearing object (indeed, under our current understanding, there is a finite speed limit even for massless objects), and that there is no innovation that will manage to supersede this limit.

Sure, I might be wrong, but if I’m not then it’s a rather simple means of explaining the paradox of no signs of aliens in a universe full of life - we’re too far apart, and it’s not possible to go fast enough to make up for the distance.

The problem with exoplanet research is it’s extremely difficult. Stars and galaxies give off their own light, so we’ve been able to study them for centuries. Planets only reflect light from their star, so it was only recently with new techniques and the best telescopes we have that we’ve been able to detect planets orbiting other stars. Since then, we’ve found thousands of them, to the point where it seems increasingly likely that more stars have planets orbiting them than not. With our new space telescope, we can use spectroscopy to better detect what elements make up the atmospheres of these exoplanets.

Of course many other animals are very clever, for animals, but the difference in degree between humans and, well, anything else - chimpanzees, crows, octopuses or dolphins - is just unimaginably vast.

It is not at all obvious that technological intelligence is particularly important for survival. Of all the animals that have come and gone, we are the only one to evolve technology. Sharks are dumber than shit but have been around for over four hundred million years, and they are newbies compared to jellyfish. Jellyfish literally don’t have brains at all but they were around before dinosaurs, and I have no doubt they will still be around when we are long gone.

Of course, as @Sam_Stone also points out, intelligence doesn’t always mean technology. Octopuses could be unimaginably smart but they have many disadvantages to the development of technology; they live underwater (so they ain’t gonna start a fire anytime soon, and manipulation of fire is immensely important) and don’t live very long. Dolphins are very smart but are physically incapable of substantial manipulation of tools. Intelligence also doesn’t necessarily mean our kind of intelligence, either. Dogs are amazingly clever when it comes to social and emotional smarts and can comprehend things other animals cannot (the act of pointing being one - something dogs quickly grasp, but which no other animal on the planet besides us can easily learn, except maybe elephants) but my good boy Benny isn’t solving a Rubik’s Cube anytime soon.

Or I could be wrong and maybe the Vulcans are on their way. What is for sure is we simply cannot know. We just don’t have anywhere near enough information to even hazard a rough guess.

Well, maybe. It might also be that galactic downtown is so inhospitable to life that it never arises there. Stars are pretty but they’re also hot and radioactive and huge gravity wells. One of the coolest things that could possibly happen to us is if Betelgeuse goes supernova and blows up, it’d be such a neat sight - from 550 or so light years away. I wouldn’t think that if it was only 5 LY away, though, because then it would kill us all.

I always liked Neil DeGrasse Tyson’s analogy that it’s like scooping a bucket of water out of the ocean, looking in the bucket, and proclaiming “there aren’t any whales in here; I guess whales don’t exist.”

The sheer vastness of space is so mind bogglingly huge that it is the one thing that makes it near certain we will always be alone.

Right- it takes a certain amount of imagination to conceive of numbers in the hundreds of billions. Even millions are more or less abstract; you can conceive of say… 100,000 of something. Stadiums hold 100,000 people. You can make 100,000 dollars a year. Houses cost multiples of that.

But millions? That’s a whole other scale. There are things we deal with in the millions, but typically they’re aggregated into larger units. Nobody talks about millions of pints on their water bill, or millions of ounces of fertilizer, or millions of grains of rice, or whatever.

And billions is another order of magnitude above that, and it’s used to count unimaginably huge numbers of things- how many grains of sand on a beach, or how many raindrops in a thunderstorm, and other things of that essentially unknowable scale.

So when people say that there are hundreds of billions of stars in a galaxy, and hundreds of billions of galaxies in the universe, that’s on the order of ten billion trillion stars. And when they comment that something extremely unlikely is still likely to happen if you do it that many times, that sort of flies overhead at 30,000 feet, because they can’t really conceive of numbers that big and what that actually means.

No, it’s not a strong statement to make.
We have accelerated (tiny) masses to very high fractions of the speed of light. And in astronomy we can see (and even detect the gravitational waves of) huge masses being accelerated to relativistic speeds.

So there is no physical limitation to accelerating masses arbitrarily (or is asymptotic the better word?) close to c.
There are, of course, practical difficulties, but for the reasons already given, I don’t think practical difficulties are very important in the context of the Fermi paradox.

We shouldn’t dismiss the practical problems of relativistic flight. Even if we had fantastically powerful propulsion systems that could accelerate payloads to speeds near c without melting them in the process, these payloads would still need to pass through interstellar space at relativistic velocities - and this space isn’t empty. The cold hydrogen gas that fills the vacuum is very thin - but it would be converted into deadly radiation by the speed of the ship - and any dust or heavier particles would impact with the destructive power of a bomb.
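To put a number on “the destructive power of a bomb”, here’s a rough Python sketch of the relativistic kinetic energy of a single dust grain hitting the hull; the milligram mass and the 0.5c speed are assumptions for illustration:

```python
import math

c = 3.0e8     # speed of light, m/s
m = 1e-6      # assumption: a 1-milligram dust grain, in kg
v = 0.5 * c   # assumption: ship (and hence impact) speed of 0.5c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
ke = (gamma - 1) * m * c ** 2   # relativistic kinetic energy, joules

print(ke)             # ~1.4e10 J
print(ke / 4.184e9)   # ~3.3 tons of TNT, from one milligram of dust
```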

These practical problems are so great that they may have no solution - which really means that any interstellar expansion will need to happen at significantly slower speeds. Personally I think that slower and safer is the better option. Any civilisation that can’t build a thousand-year space mission doesn’t deserve to expand.

Let’s review though.

Sputnik 1 was within living memory. Our first venture beyond our atmosphere was mere decades ago. We are very, very new to space travel.
Meanwhile, if interstellar travel difficulty is to be a primary explanation for the quiet skies, then we need it to be a limitation across hundreds of millions of years of technological progress. To such an advanced ET, the James Webb Space Telescope and the OSIRIS-REx space probe would look thousands of times more primitive and basic than a sharpened rock looks to us.
Literally, not hyperbole.

For us to speculate about what things may be impossible to an advanced ET, then they need to appear absolutely and incontrovertibly impossible from our point of view. Stupidly, ludicrously difficult just isn’t going to cut it.
What kind of problem is the interstellar medium? It’s the latter kind. It’s a stupidly, ludicrously difficult engineering challenge to imagine getting a large object from one system to another. But we don’t know that it’s physically impossible to deflect particles, or (harmlessly) absorb particles, or build probes that can repair their own circuitry continually.
It falls far short of the kind of problem we need to work as a primary Fermi paradox solution.

And yes, I’m saying “primary” because, if there are other great filters that somehow mean that there are only a handful of intelligences within a galactic cluster, then sure, the difficulty of interstellar travel can be an additional factor in why we don’t see anything yet.

Great contributions from everybody. I have a lot to think about.

I think that is very well put. Designing an interstellar Von Neumann probe (I’ll call it a seed) to resist impact with tiny pieces of space rock at relativistic speeds could be a technological filter.

I am not sure whether we want to make our shell out of something resistant (diamond? tungsten?) or out of something “mushy” that self-heals, with enough redundancy to take tiny bullet holes from time to time like a champ.

A prior filter could be harnessing enough of the Sun’s energy to accelerate that seed to, say, 1% of the speed of light. How do you even do that? You can only do it all at once if you can handle the giant explosion involved. Using a giant accelerator to gradually accelerate it, using magnetic fields to guide the seed the way we do for particles nowadays? Doing it in space using the Sun’s gravity? That would likely require a really close orbit to the Sun to ensure gravity holds it until it reaches that 1% of light speed. Is that even possible without the seed melting before it leaves solar orbit? Does such a coating exist? I don’t know. Could that heat energy be mostly absorbed by the seed, killing two birds with one stone? I could see the seed rotating like a rotisserie chicken to evenly distribute the heat. But I digress.
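For a sense of the energy involved in that “giant explosion”, a quick sketch (the one-tonne seed mass is an assumption; at 1% of c the classical kinetic-energy formula is accurate enough):

```python
c = 3.0e8      # speed of light, m/s
m = 1000.0     # assumption: a one-tonne seed probe, kg
v = 0.01 * c   # target speed: 1% of c (non-relativistic, so 1/2 m v^2 suffices)

ke = 0.5 * m * v ** 2   # kinetic energy that must be delivered, joules

print(ke)              # 4.5e15 J
print(ke / 4.184e15)   # ~1.1 megatons of TNT if delivered all at once
```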

A prior filter would be designing the seed. Actually, multiple filters. To give a few human-centric examples:

We will have to either:

-Change ourselves, ditching our current bodies to become androids with artificial brains, capable of making android babies with our android (metaphorical?) genitalia and artificial gametes.

You would live basically forever with the necessary replacement materials.

You would spend most of your interstellar travel asleep or running at one-millionth speed during the journey between stars, so it only takes your group of colonists a couple of hours or minutes, subjectively. You would, of course, be severed from all those you love who remained behind, because communication would simply be too slow - even if the distances were small enough for signals not to be swallowed by cosmic background noise. But that has been done before on Earth, so it’s not a filter.

This requires the ability to transfer our consciousness from bio body to man-designed body. Probably a filter right there. We don’t know much about consciousness yet and transferring it might be an impossibility, leaving us with only option 2:

-The seed needs to be able to hold in stasis our natural gametes until it finds a suitable planet.

If it slows down to check things out, it has to be able to change its trajectory to start orbiting the candidate star while it observes its planets at potentially non-relativistic speeds, then build back speed using the star’s energy to hop to the next candidate star system.

And then once it finds suitable life conditions, it has to be able to build an artificial womb to bring multiple natural babies to life and a farm to feed them.

The seed then needs to be able to teach said babies, using AI teachers of some sort, why they exist (Hi! So your mission is to build a new human civ from scratch on this hellhole, produce more seeds and launch them into space.)

Even if we get the AI right, how can the seed prevent the growing society from deviating from the original plan? So plan 2 sounds pretty darn great-filtery too.

Well put. This was my view as well.

Good point. We are having issues with people blocking and denying the benefits of scientific and technological progress.

That is the best answer to the Argument from Incredulity fallacy IMO.

I wonder how you thread that needle.

The part about simulating trillions of new universes reminds me of the Land of Infinite Fun featured in Iain M. Banks’ “Culture” books. Highly recommended fiction.

He posits that if you are a highly evolved, benign intelligent life-form, your goal is to saturate the universe with intelligent life. Whether it looks like you or comes from you should be secondary, as long as said life shares your core values. If we accept that premise, we can venture some deductions on universal (literally) values:

I expect the most universal value to be efficiency. It is more efficient to use a needle than a sledgehammer to solve problems when possible. This likely means that destroying useful artifacts, exterminating life-forms, and otherwise making less efficient use of resources are universal no-nos. Genocide and destruction are generally inefficient. Killing an intelligent life-form, when its actions and genomic treasures could be harnessed for constructive purposes instead, seems bad.

I expect collaboration, whether from symbiosis or corporate synergy, to be highly valued. Collaboration generally increases efficiency (mitochondria = good, cats & dogs = good for humans).

Perhaps love is a universal value. Love & collaboration go hand in hand.

Helping other intelligent life-forms become able to use technology might be seen as highly valued. David Brin’s “Uplift” series explores this concept to some degree, with humans having uplifted dolphins and chimps to be our peers.

The prime directive idea is indeed a more cheerful one than hostile dark forests. I’ll have to think about the competition between benevolent and malevolent seeds, which I assume would be the result of a noxious exclusionary religion/philosophy.

Right, and what I’m saying is that being “agnostic,” if that is the term you prefer, is the reasonable position. I don’t think it’s reasonable to believe one way or the other on this question, because we simply don’t have sufficient evidence.

How about this for a filter:

Maybe as a species’ technology level goes up, the amount of power that can reside in a single average hand increases proportionally with it in a non-sustainable way.

For example: somebody in the USA today has the ability to use an assault rifle or a garbage truck to kill dozens of valuable tax-paying, society-building people before they stop or are stopped.

What if the filter is too many people having enough raw power at their disposal that a single person or a small group can obliterate a city or an entire planet/habitat in pursuit of some fringe ideology, or simply through accident?

Nuclear weapons falling into the wrong hands are something that we fear right now, and we’re not even a Type 1 civilization on the Kardashev scale. Maybe striving to become a Type 2 civilization is almost always fatal.

Or maybe Dark matter is just the weight of all the civilizations that have dyson spheres absorbing their starlight? :exploding_head:

Maybe we don’t.

Maybe we manage to evolve into some version that can survive. But maybe that’s a version that does think there’s such a thing as Enough, and doesn’t try to take over the universe, even if they explore what they can by other means.

??

Why?

And, if so: wouldn’t it be most “efficient” to do as little as possible to survive, and therefore not to do any of the things you’re positing?

This is the Kzinti Lesson (from Larry Niven). Paraphrased, it says that a means of propulsion capable of producing very high velocities is also a weapon that can produce extravagant amounts of damage. Kinetic energy can be a killer, as the World Trade Centre attack demonstrated. Spacecraft propelled by powerful sources of kinetic energy would be deadly, and so would the sources of energy by themselves.
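A rough sketch of the Kzinti Lesson arithmetic; the ship mass and impact speed are assumptions picked for illustration:

```python
m = 1.0e6          # assumption: a 1,000-tonne ship, in kg
v = 0.1 * 3.0e8    # assumption: impact at 10% of light speed, m/s

ke = 0.5 * m * v ** 2   # classical KE; accurate to within ~1% at 0.1c
tsar_bomba = 2.1e17     # yield of the largest nuclear device ever tested, J (~50 Mt)

print(ke)               # 4.5e20 J
print(ke / tsar_bomba)  # roughly 2000 Tsar Bombas of kinetic energy in one ship
```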

There may be a stage of technological development where the amount of energy available to an individual actor of ill will exceeds the amount of energy required to destroy that civilisation. We are nearly there, but not quite.

I don’t think the simulation hypothesis affects the Fermi paradox in any way. If you use the pre-Copernican hypothesis - that Earth is the only place that counts and the rest of the universe is just a holographic image - you could still simulate intelligence elsewhere that would contact us. If you are doing a full universe simulation, it is perfectly plausible for multiple simulated intelligences to show up.
I could see the simulation designers not allowing other intelligence to make the simulation simpler, but they don’t seem to have followed that policy anywhere else.

what does this mean?

I had to look it up. Apparently it’s a reference to the James Webb Space Telescope.