E.T.: Friend or Foe?

It’s entirely possible that we’re not currently an interstellar species because we’re not cruel or warlike enough, and that it takes things like a Cold War to drive tech to the point where we discover stargates or warp drive. i.e. the theory that technology develops much faster in wartime than otherwise. Not sure I buy it, but it is a possibility.

Or there’s always Turtledove’s “Road Not Taken” scenario - where the aliens aren’t more advanced than us socially i.e. they’re only at a pre-Enlightenment social level and in most technology, but they completely accidentally lucked into space travel. So any arguments about how we, as a society, are better and better all the time are moot, because it would be as if Mongols or Vikings had space travel…

Or, you know, they may just need Blood for the Blood God!

Agree with the OP.

I think it’s likely whatever intelligence we meet will be either AI or genetically engineered. So a synthetic intelligence, to one extent or another.

Would an intelligence synthesize aggressive characteristics? I don’t believe so – if you’re trying to build interstellar spaceships you’re going to need a lot of cooperation, not combat. And a species with the ability to build such spaceships would quickly realize the main threat to themselves is themselves.

So what I’m saying is, this takes out the idea that “they’ll just be instinctively aggressive, like us”.

The OP left out a major point they made in the show.

Sure, we have no resources on our planet that couldn’t be found elsewhere. But for the right species we might have something that is not unique, but extremely rare: the Goldilocks feature. Just the right mix of atmosphere, just the right gravity, water, etc…

They went on to make the point: if that were the case, we humans would just be in their way. Which would be very bad for us.

Obviously there are always exceptions. Fortunately for us, the Nazis aren’t the norm for advanced civilizations on Earth.

I just don’t buy that a super-intelligent species, one capable of both advanced AI and advanced space travel, would be stupid enough to create something with the potential to “get out of control and kill off its creators.” That’s like something we’d do. But, while we may be the Albert Einstein species on Earth, we’re no doubt the Gomer Pyle species in the Universe.

Conversely, I don’t believe a non-aggressive super intelligent species would create AI that would be safe for them, but threatening to others—that would be like the Hare Krishnas handing out exploding pencils at the airport; it would be against their nature.

I’d go so far as to surmise that virtually all super-intelligent species would have a violent ancestry. It’s hard enough to imagine even a minimally complex life-form evolving from a single cell (or whatever constituent unit they come from) without passing through stages that involve eating other life-forms. And, with no predation ancestry, there would be few advantages to evolve intelligence in the first place (I suppose you could place blue-green algae on an obstacle course planet (bits of food at the end of mazes, etc.) and see what happens, but other than that…).

And I agree that curiosity would be a prime reason for a super-intelligent species to travel far. I put this in the same category as those who would come here on a mission of good will. Scientists and good will ambassadors alike are typically not the warmongers. Worst case scenario is they may want to pin a few samples onto their collection board, in which case you simply take a step backwards and point to a more attractive human when they bring out their net.

The Goldilocks feature is equivalent to a desirable habitable planet:

Sorry about that. I don’t know how I missed it.

As for terraforming: that’s a big “I don’t know.” Too many variables come into play, depending on how their ship works and the nature of their technology. I don’t think it’s at all safe to assume it would take fewer resources to terraform a new planet.

A few factors off the top of my head:

They may already know beforehand where GLPs (Goldilocks planets) are. In fact, I think it’s likely that they would. Look at us with our limited technology: I just recently read an article where scientists think they have found a planet with massive amounts of water, several light years away.

There may not be any terraformable planets in their local system.

How does their ship work? Wormhole technology? If so, would it matter to them whether they needed to travel 20 light years or 200? The expense to them may be the same.

Terraforming can take hundreds of years. Do they want to wait that long?

There are “ETs” like this living among us right now!!! But we don’t call them ETs, we call them autistic.
There are some autistic adults who are very intelligent and very rational. Their brains can do incredible feats of mathematics. But they are totally devoid of emotions.
Why is it so difficult to imagine a whole planet of such people?
They could do great engineering, and could explore the galaxy.

But I would not want to meet them.

It’s not difficult to imagine any number of such individuals. It’s difficult to imagine a functional society composed solely of such people.

E.T. the Retard.

You forget that they could decide to exterminate us (and any other sentient species they come across) purely from a logical and pragmatic position, in that they might see us as long-term competitors if not threats. Without compassion to stay their trigger tentacles, it would probably be a trivial decision for them (cf. the Vogons in Hitchhiker’s Guide).

I think the OP is missing an important scenario: the ETs are expanding through the galaxy and we just happen to be in the way. Sucks to be us.

I don’t understand why the OP keeps using the term ‘super-intelligence’.

I don’t think we as a species are more intelligent than we were two hundred years ago. And I don’t think that we will be significantly more intelligent in another two hundred years.

Some species might acquire interstellar travel sooner than us for a number of reasons. One super-genius who developed the right physical theories to make it possible could do it. Or perhaps their industrial revolution simply happened a few centuries earlier in their development than it did in ours. Or some other species showed up at their door and gave it to them. Who knows?

Others have already suggested there is no necessary correlation between intelligence and morality or ethics. I agree with this, and I also claim there is no necessary correlation between intelligence and technology.

Ok, let’s expand on this.

First, let’s remember the goal post: when we are first visited by an extraterrestrial species, are they more likely to be aggressive or passive toward us? It’s a question of odds.

For the sake of simplicity let’s divide the possible evolutionary pathways to super-intelligent beings into just two camps: 1) Super-intelligent beings who are logical and compassionate; 2) Super-intelligent beings who are logical and compassionless. Can we agree that those are the only two possibilities? I can’t imagine beings with or without compassion becoming super-intelligent with no grasp of logic—Elsie would have to be pretty lucky to simply stumble upon the mechanics of inter-galactic space travel while chewing her cud.

Ok, we know the pathway to middling intelligence coupled with compassion is possible for a planet’s apex-intelligent species, because we’re a data point. And we know that this compassionate apex species (and a few others) evolved from simpler species that had no compassion—the arrow seems to point in the direction of more compassion over time for our species (and orcas ;)). We have no examples of a logical, non-compassionate apex-intelligent species. They may or may not exist, but let’s assume they do. So it’s logical to assume the percentages of each type of being (logical-compassionate and logical-non-compassionate) are both non-zero, and most likely not tipped too far toward either extreme.

So, of the percentage of logical, non-compassionate beings, what percentage of them would find it logical to harm or destroy another intelligent species? They could certainly evolve lacking compassion, yet still find it illogical to destroy other planets, or the apex species on them. Lack of compassion isn’t equal to aggression; in fact, a case can be made that wanton aggression is illogical. Smart beings can deduce that there may well be deleterious consequences to aggressive behavior (e.g. counter-attack from a more powerful compassionate civilization that may witness the carnage inflicted on a weaker species).

Non-aggressive behavior could even be a favored heritable trait beyond a certain point of an intelligent being’s evolutionary pathway, compassionate or not. While aggressive behavior appears to be favored for the viability of lower life-forms, when those species evolve to acquire the game-changing ability to kill or destroy in great numbers, aggressive behavior most probably flips into a liability. Spiders do no harm to their species killing a few insects or even a few members of their own kind, but give one a weapon of mass insect-destruction and he may logically deploy it, with no remorse, in order to stockpile easy food for the rest of his life, to the extreme detriment of his species. First you learn to not harm yourself, then your family, then your species, then native species, then foreign species, then alien species—that could be the pathway favored by logic.

So, of this remaining percentage of logical, non-compassionate beings that are aggressive, how many would bother to make the trip to visit us aggressively? It’s a long way to travel just to bully someone or to take something that they could almost certainly find closer to home.

I maintain that there is a not-small percentage of super-intelligent species with compassion, and of those a large percentage would be motivated to visit us out of curiosity or on a mission of good will, if they could. I also maintain that while there may be a not-small percentage of super-intelligent species with logic and no compassion, a smaller percentage of them will be aggressive, and a smaller percentage still would bother to bother us, even if they could. Ipso facto, when the spaceship lands in my backyard, I’ll put on the tea, crumpets and prophylactic (you don’t know what kind of bugs those space chicks may have).
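The chain of “smaller percentage of a smaller percentage” above is essentially a Drake-style multiplication of conditional odds. A toy sketch of that arithmetic, where every number is a made-up placeholder (not a claim about actual aliens):

```python
# Drake-style decomposition of the "hostile visitor" argument.
# All values are purely illustrative placeholders.
p_compassionless = 0.5    # share of super-intelligent species lacking compassion
p_aggressive = 0.2        # of those, share that find aggression logical anyway
p_bothers_to_come = 0.1   # of those, share that would actually travel here to be hostile

# The chance a given visiting species is hostile is the product of the chain.
p_hostile_visitor = p_compassionless * p_aggressive * p_bothers_to_come
print(f"chance a visiting species is hostile: {p_hostile_visitor:.3f}")  # 0.010
```

The point of the structure, independent of the invented numbers, is that each filter multiplies the odds down, so even generous estimates at each step can leave a small final probability.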

I have never met an unfriendly extraterrestrial.

Perhaps. But progress appears to accelerate over time. Seems to me that exterminating inferior lifeforms would be as rational for them as it is for us (e.g. exterminate chimps lest they overtake us).
But of course, post-singularity all bets are off.

Two hundred years might be borderline, but certainly, long term, we will become more intelligent: not by evolution but artificially.

We wouldn’t want to rely on these paleolithic grey blobs forever. As soon as we find a way to improve our cognitive abilities, we will. OK, there might be a few years lag while society pretends to wrestle with the moral issues. But this will be a trivial pause in the grand scheme of things.

Or failing that, AI will eventually reach and surpass (current) human cognitive levels.

This could be the case with mean, stupid super-intelligent beings, but those with compassion, or those smart enough to fear deleterious consequences, will use the world-reaping equivalent of a dolphin-safe tuna net, leaving us a bit ruffled, but unscathed.

You are correct, sir. John was the cynic of the group. But, “it can’t get no worse” is also applicable to our current stage of moral development.

I think that it’s overwhelmingly likely that when First Contact comes, it won’t be face-to-face. We’ll communicate with aliens long before we actually meet any in person. And while it’s fairly straightforward to help others through communication, it’s very difficult to hurt through communication.

What’s the story where the aliens send us the plans for a mysterious machine and we build it and it makes the sun explode? Or am I mixing together multiple plots here?

Well, we are made of meat.

I’m not very worried about ETs attacking us, since there is precious little we have that isn’t more easily available elsewhere. I’d be a lot more worried about them killing us with kindness. On Earth, whether colonists were brutal or gentle with the colonized, it didn’t work out very well for the native culture.

One thing that always bugged me about ST:TOS is that no attention was paid to how disruptive it would be for a native culture to discover they were not alone. Imagine the impact if the aliens were to say “God? Oh, we gave up that idea thousands of our years ago” or even worse “Yes, the Buddha visited us also.” Not to mention that all work on science would pretty much stop until we caught up to where the ETs were.
The only non-aggressive thing the aliens could do is to just not come until we were ready. In my book I use the first interstellar flight as the trigger. That way first contact happens as a result of a great achievement by the contacted race, not just randomly. That’s my solution to the Fermi Paradox, anyway.