Fermi Paradox solved. (Well by me...so...)

Grain of salt and all that.

So I was thinking about our own civilization…and we’re doomed. It just takes one F**k up. And that’s the crux of my solution. It just takes ONE major EFF up and it’s game over man. Never mind minor ones that set the civilization back. One nanite mishap. One major nuclear exchange. etc…etc…there are plenty of “This is how the world could end” websites. And it just takes one time.

So a civilization surviving to invent radio or some sort of transmission? Not too crazy, but the window between that and self-destruction seems like it would be thinnish relative to the civ's rise. Not to mention the other problems with picking up a civilization's transmissions.

So, yeah my idea has holes…but I think the main crux is solid enough.

That’s a common Sci-Fi meme.

Leo Szilard explained to Fermi that there are aliens here; we call them Hungarians.

The lifetime of a civilization is one of the terms in the Drake Equation. If civilizations are short-lived, then there will be fewer of them anywhere.
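The lifetime term's effect is easy to see with a quick sketch of the Drake Equation. All the parameter values below are illustrative guesses, not measurements; only the formula itself is standard:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake Equation: expected number of detectable civilizations.

    N = R* (star formation rate) * fp (fraction with planets)
      * ne (habitable planets per system) * fl (fraction developing life)
      * fi (fraction becoming intelligent) * fc (fraction that communicate)
      * L  (years a civilization remains detectable)
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Made-up parameter values, chosen only to show how N scales with L:
params = dict(R_star=1.0, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1)

long_lived = drake(**params, L=10_000)  # ~50 civilizations out there
short_lived = drake(**params, L=200)    # ~1 -- maybe just us
```

With everything else held fixed, N is directly proportional to L: cut civilization lifetimes by a factor of fifty and you cut the expected number of detectable civilizations by the same factor.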

And…yeah. Technological advance increases the amount of sheer destructive power, and concentrates it in fewer and fewer hands. There’s very little to stop some creep from doing something incredibly harmful, that might end our civilization, or knock it back a few centuries.

On the other hand…it’s actually surprising to some observers how little callous destruction there is. People aren’t throwing gasoline bombs off railroad crossings, or even dropping large rocks off high places. We seem to have some instinctive prohibitions that keep nearly all of us from doing that kind of harm.

As with all other factors, we have no way of knowing how common such traits are. Maybe evolution has led most other intelligent species to be hugely and destructively competitive, while humans are remarkably cooperative.

(Scientific American had an article on how Humanity conquered the world, and noted we have the trait of “hyper-prosociality,” i.e., we instinctively work well together. This might be a very rare trait, cosmically speaking.)

I would dispute the premise that civilization-ending “fuck ups” are common. Terrorists might blow up a few cities with nuclear weapons but destroying civilization would take the combined efforts of hundreds of people. And the fact that we’ve avoided a general nuclear war for a couple of generations shows that civilization does resist destroying itself. We’re not lemmings; we avoid jumping off cliffs.

My personal theory is the game preserve theory. Sure, we can speculate about universal conditions, but we don't have any knowledge about what local conditions are like. Perhaps we're just lucky and Earth happens to be located in the midst of some organization like Star Trek's Federation: an organization that believes in the principle of leaving fledgling civilizations alone to develop at their own pace. So we appear to be alone in the universe only because they've banned other races from coming here.

I’ve always gone with the Ur-Fermi solution. There’s always a first technological civilization. And its Fermi is going to wonder where everyone else is.

Someone’s got to be first, why not us?

One reason is that the universe was already around 8 billion years old when our sun formed. Now there wouldn't have been any metals in the earliest stars and their planets, but I think Earth-like planets would have existed long before Earth did.

I think the aliens are waiting for us to grow up a bit before they make contact again. :wink:
Really, if they have anything resembling our morals they are staying away until we’re more ready as a collective. Most of us are still squabbling over things like ideas, resources and territory. Until we calm down and get our shit together, I don’t think they’re going to contact us.

Like if we were to find an unknown tribe in the Amazon today we wouldn't just waltz in there with TVs and hamburgers and… well, ok, we do exactly that, but if we're assuming that these guys are NOT complete jerks like us…

No particular reason, except that it’s so mindbogglingly unlikely, statistically speaking (if intelligent life should, indeed, be common), that it’s not a conclusion that you want to jump to until you’ve vacuumed the place for more likely scenarios.

To me the biggest solution to the problem that is not often discussed is time.

Aliens might be plentiful, but they might only stop by every couple of million years.

If there were one thousand different alien civilizations within reach of us and each one only came by every 10 million years, that's still on average only a visit every 10,000 years.
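The arithmetic above is just the visit period divided by the number of civilizations, assuming visits are spread evenly (both numbers are the poster's hypotheticals):

```python
civilizations = 1_000             # hypothetical civs within reach of us
visit_period_years = 10_000_000   # each one stops by every 10 million years

# If the visits are spread evenly over the period, the average gap
# between any two visits (from any civilization) is:
avg_gap_years = visit_period_years / civilizations
print(avg_gap_years)  # 10000.0 -> one visit every 10,000 years on average
```

So even a galaxy teeming with travelers could look empty on the timescale of recorded human history, which is itself only a few thousand years long.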

Maybe not enough importance has been placed on the rareness of a civilization like ours. Humans are a blip in the life-sustaining history of the Earth, and yet as far as we can tell…no other species has created an advanced civilization in all that time. Now maybe crows pass down stories from generation to generation and maybe dolphins and whales sing songs for generations…but none of those animals are going to be on the moon anytime soon.

So we have rareness of advanced civilizations. Personally I've seen no special morality accompanying the ability to create WMDs. There's the possibility of some disastrous technological event common to all civs as yet undiscovered. And then the possibility that space travel may simply be too difficult to overcome. Add to all that my theory that it just takes one Stanislav Petrov-less reality and it's game over for that civ.

At least for a very long time.

When we look up into the night sky we may think that we are seeing more stars than we actually are; we only see a couple thousand of the nearest ones. Most of these stars are in our local neighborhood, the Orion Spur off the Carina–Sagittarius arm of our Milky Way home galaxy.

The Orion Spur is kind of out in the boondocks. So it is entirely possible that we are it, we are the only intelligent life in our little part, of an average galaxy, in a great big space. And we will never find others like us or strange to us, we will never find any other intelligent life at all. Ever. Even if we become space-faring people ala Star Trek.

Given the apparent speed limit of c, the speed of light, no one is watching us, no one is keeping us in a protected zoo-like preserve; there just isn't anyone else within a distance who could possibly interact or communicate with us.

We are the Elder Race in the Orion Spur, there isn’t anyone else. In the larger view of the galaxy and the universe in general, sure, there are probably others. But we are it here and we will never find evidence of others.

The problem with the “we fuck up” theory is that it fails even casual analysis. Ever since the invention of the printing press, useful ideas have been copied many, many times with the technologies of the day. More and more copies were made, to the point that by the 1980s, while nuclear war might kill most of North America, Europe, and Asia, there would be enough copies of basically everything in every public library. There never were enough bombs to destroy all the copies.

At this point, you can fit all of Wikipedia and thousands of books onto a device the size of a small tablet. A hypothetical war would have to annihilate every human down to groups the size of 100 people and every last one of those devices. Not going to happen.

And the next phase up is one where we manage to cram the thousands of square miles of factories we depend on for high tech goods (I’m adding up all the factories that make all the things that make all the things down to the bottom of the production tree) into something about the size of a shipping container, with a case of solid state memory integrated into the machine that has the blueprints for thousands of products.

Once we have that…and this factory will be able to make computing cubes the size of a server rack that have the same capabilities as a human brain, probably a lot more, no event could wipe it all out. Can’t happen, too many copies, and the copies can copy themselves in a week.

Eliminating this possibility leaves only one possible explanation for the Fermi paradox: life getting to the phase we are at now is unbelievably rare.

One completely possible explanation is that getting off the planet, or at least out of one’s solar system, is really, really, really hard. Which it is.

The idea that intelligent life being relatively common = someone will have colonized the galaxy, or even be noticeable from here, is where it falls down for me. As far as engineering challenges go, our ideas of how to go about interstellar travel are entirely in the “what are you smoking” territory.

No. That’s not even remotely true. Relative to the challenges we’ve already solved, the next steps are actually pretty easy.

  1. Build bigger digital computers that emulate human minds
  2. Boost the intelligence of these machines with speedups and inbuilt software boosts
  3. Use #2 to make computers that are about the size of the human brain they are emulating

And that’s it. If you don’t die of old age and are a solid state cube that needs 10 watts of power and weighs a couple kilograms, interstellar travel is easy. Even if you can’t get going all that fast, you can wait thousands of years.

Sure, if you wanna travel fast, you do need some more exotic stuff, like antimatter, but if you can wait, a few thousand years is still no big deal.

Yeah. Don’t bogart that thing, OK? :wink:

Are you criticizing the idea? Wanna sit here and suggest that in, oh, 1000 years we won't have made progress from, say, Watson or Google's new neural net AIs that stomp humans at Atari games? Or do you wish to take the other track and suggest neither a Turing machine nor a quantum computer, both of which we have right now, can replicate the effects of the circuits in a human brain? Even though we've got high-level replication working right now?

You’re not fighting ignorance with your post here, you’re encouraging it. The reason AI has taken “this long” is because early estimates had little idea of the true scale of the problem.

Fair enough, enlighten me. How good is AI now? How good do you expect it to get in the foreseeable future? Or beyond?

It's good enough to beat humans at limited-domain tasks. It's a simple and logical progression to expect us to get “bottom up” simulations to scale up thousands of times eventually. We've done it before: the very search engine that Google uses used to be just a single computer with a small list of text it found crawling the known web. A “bottom up” simulation that uses a set of interconnected nodes to model a subsystem in the brain can scale, because every other simulation in the network uses the same source code, just a different set of data to govern how it operates.

I think that ultimately “it” will at a minimum allow us to run a system that is equal to human intelligence but at least 1 million times faster. This assumes an interconnected set of chips running at 5 GHz (for round numbers) and using hollow-core optical fibers for the interconnects.

I think a human being of above average intelligence - say a currently living aerospace engineer, doctor, etc - who thinks the same way but a million times faster would be a being we would commonly agree is “super-intelligent”. Ask it a question, and you get a 100 page pdf file with a detailed, researched answer instantly. Ask it to design a new wide body jetliner or jet aircraft, and it has a preliminary design in a few hours. (the limitation would become how fast you can construct physical prototypes - obviously, even a being that thinks a million times faster can only do so much without empirical testing)
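The “million times faster” figure can be sanity-checked with a back-of-envelope comparison of neural signaling rates against chip clock rates. Both numbers below are rough order-of-magnitude assumptions, not measurements:

```python
# Rough orders of magnitude, not precise neuroscience:
neuron_rate_hz = 5_000          # kHz-scale upper bound for neural signaling
chip_clock_hz = 5_000_000_000   # 5 GHz, the post's round number

speedup = chip_clock_hz / neuron_rate_hz
print(f"{speedup:,.0f}x")  # 1,000,000x

# At that ratio, one subjective year of thinking passes in about half a
# minute of wall-clock time:
seconds_per_subjective_year = 365.25 * 24 * 3600 / speedup  # ~31.6 seconds
```

This ignores everything that actually makes emulation hard (memory bandwidth, interconnect latency, the fidelity of the simulation itself), so it's an upper bound on the speedup, not a prediction.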

And yeah, for such a being, putting together a starship, if there is any way at all to do so, would be a straightforward sequence of steps. Personally, I think the obvious, non crackpot, albeit tricky to execute way is to produce anti-protons and anti-electrons via spontaneous pair production. (a big honking laser in space crosses some beams).

You then fuse the anti-hydrogen together in a series of fusion steps until you reach anti-beryllium or some other solid, superconducting element.

Without ever touching it, you cool it down with lasers (this is also how you manipulate it) and compress it into fuel pellets with magnetic fields. A solid superconductor is trivial to contain, as it will reject other magnetic fields and just levitate there.

So, your starship is a bunch of fuel canisters with magnets in the walls. It reacts the antimatter in a big honking engine that gives very low thrust, but absurd specific impulse (Isp). Easy.
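The low-thrust/high-Isp tradeoff follows from the Tsiolkovsky rocket equation. The antimatter Isp below is a speculative placeholder just to show the scaling; the chemical figure is typical of real engines:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds, mass_ratio):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_seconds * G0 * math.log(mass_ratio)

# Same mass ratio (5:1 fuel-heavy ship), wildly different exhaust energy:
chemical = delta_v(450, 5)            # ~7,100 m/s -- barely to orbit
antimatter = delta_v(1_000_000, 5)    # ~15,800 km/s -- roughly 5% of c
```

The point: delta-v scales linearly with Isp, so an engine with a million-second Isp turns the same fuel fraction that barely reaches orbit today into a meaningful fraction of lightspeed, even if the thrust is tiny and the burn takes years.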

Or, we could just do fission fragment. This is a solid, almost certain to work engine design that current day humans could construct.

Either way, the limiting factor behind starships is the fragile apes who have to ride them. This is why a form of AI is the only practical way to do it. A centuries long journey is not a big deal if the crew is solid state and can go into low power mode for the boring trip. Not to mention, the risks of interstellar travel are a lot easier to mitigate if the beings are digital. You would just launch a fleet of ships, and they would beam memory state changes to each other via radios. As ships are destroyed by hitting interstellar dust, the surviving vehicles continue the mission. Doing it this way, if only 1% of the ships on average survived the trip, it’s no more than a minor inconvenience because the beings riding them are all crowded onto the surviving vehicles.
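The fleet-redundancy argument can be quantified: even with a very low per-ship survival chance, a modest fleet makes total mission failure vanishingly unlikely. The 1% survival figure comes from the post; the fleet size is a made-up example, and ships are assumed to fail independently:

```python
# Probability that at least one ship in the fleet completes the crossing,
# assuming independent per-ship failures.
per_ship_survival = 0.01   # the post's pessimistic 1% figure
fleet_size = 1_000         # hypothetical fleet size

p_all_lost = (1 - per_ship_survival) ** fleet_size
p_at_least_one = 1 - p_all_lost
print(p_at_least_one)  # ~0.99996 -- near-certain that someone arrives
```

Since the crews are digital and continuously synced by radio, losing 99% of the hulls loses hardware, not people, which is what makes this failure rate tolerable in the first place.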

You assume a nuclear bomb is the most destructive thing that can exist. What if one of those computers that’s a lot more capable than a human brain somehow finds the will to wipe us out? Especially if it’s already hooked up to a little super-capable factory.
