We have not yet encountered extraterrestrial Strong AI machines, hence the Singularity is impossible

I assume species that evolve on different planets would be independent of one another and have their own policies. I do not see why a ‘no contact’ policy would matter so much to an advanced species that it would resort to threats or violence to enforce it. Not only that, but different factions within those species may have different policies (the same way that Denmark and Cuba have different foreign policies but are both within the human race).

The concept that all alien races would reach the same agreement on foreign policy makes little sense to me when humans are incapable of that kind of cohesiveness and unity. Then again, maybe this is the case, and if so, maybe UFOs are like guerrilla fighters: an attempt at making their presence known without alerting the authorities (either theirs or ours).

It may also be that multiverses, parallel universes, other dimensions, etc. are real, in which case there are virtually infinite intelligent species that could be contacted. In that case the human race is probably pretty mundane and boring. It would be like a solitary fly somewhere in Spain wondering why the human race hasn’t contacted it. There are infinitely more interesting and important races to contact.

There’s another example of how the “it only takes one” argument can support a conclusion exactly opposite to the one derived earlier in the thread: it only takes one civilization with a no-contact policy to enforce it across its sphere of influence.

What would a machine intelligence use for ‘motivation’? It would have no ‘needs’. I agree with the concept of turning itself ‘off’.

Why would a machine intelligence want to come to a place of biological infestation? It wouldn’t.

I can’t see a reason why it would matter one way or the other to a machine.

You do realise that humans have only been Homo sapiens for about 200,000 years, right?

mlees: what’s the relevance of your comment? It doesn’t seem to rebut what was said.

The typical SF answer to why AI robots “care” about things is that they retain vestigial impulses and directives from their biological creators. Saberhagen’s Berserkers didn’t derive their hatred of living things from logical principles: they were told to kill by their creators.

If Dr. Frankenstein had been a warm, compassionate, caring man, the tragedy of his “monster” would, instead, be the joyful story of his created and adopted son.

Well, it would have to get over the horrible smell, for one.

Put me in the “they’re just not here yet” camp, with a backup position of “why would they feel the need to explore/expand?”.

The universe (as we’ve said) is a huge place. Maybe the first ships of the AI Exploration Swarm are 10,000 years away - maybe 100 - maybe they’ll be here tomorrow. Or maybe in a billion years.

If you’re a hyper-intelligent computer network on a single planet, as long as your energy needs are met then why would you leave? You can “think” about every conceivable problem in the universe (even running near-infinite-complexity simulations) in peace without wasting resources on interstellar probes. Given enough time, you could “solve” most every question that might occur to you. Then, maybe, if you need more data, you would send out ships.

Sorry.

In the OP, the poster stated that even without FTL capability, time was on the side of machines. They could travel through space for 200,000 years, and they wouldn’t care.

That may be strictly true on its face, but it occurred to me to question the logic behind doing so.

Assume some AI race is curious, wants to survey habitable planets and the intelligent life on them, and takes 200,000 years to get here. By the time they finish gathering and cataloging the information that interests them, they still face another 200,000-year trip if they wish to store or use it back on their launch world. By then, a lot of what’s in that data is pretty out of date (and maybe useless?).

Even if they transmit this data by lightspeed-limited communication, it may take 50 or 100 thousand years to get back to the launch point (a rough sketch of this arithmetic follows below).

So what’s the point?

After all, humans haven’t been here very long, “geologically speaking”. If the probes had arrived 50,000 years ago, humans would have seemed pretty unimpressive, and present in very small numbers.
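A minimal sketch of that round-trip arithmetic, assuming a launch world 50,000 light-years away and a cruise speed of a quarter of lightspeed; both figures are assumptions picked only because they reproduce the 200,000-year trip and ~50,000-year signal delay quoted above:

```python
# Round-trip arithmetic for an interstellar survey probe.
# ASSUMPTIONS (not from the thread): a launch world ~50,000 light-years
# away and a cruise speed of 0.25c, chosen to match the 200,000-year
# travel time and ~50,000-year signal delay mentioned in these posts.

DISTANCE_LY = 50_000   # assumed distance to the launch world, in light-years
SHIP_SPEED_C = 0.25    # assumed cruise speed, as a fraction of lightspeed

travel_time_yr = DISTANCE_LY / SHIP_SPEED_C  # one-way probe trip, in years
signal_time_yr = DISTANCE_LY / 1.0           # lightspeed transmission home, in years

print(f"Probe travel time: {travel_time_yr:,.0f} years")  # 200,000 years
print(f"Signal delay home: {signal_time_yr:,.0f} years")  # 50,000 years
# Whatever the probe learns is at least 50,000 years stale by the
# time anyone back home can act on it.
```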


Another theory I was thinking about: maybe intelligent tool-using species don’t last long.

Humans have been using steam or better tech for only 200 years, and metal tools (copper?) for only, what, 10,000 years? But the last 100 years have been truly revolutionary, IMO. (Granted, not every species may advance science & technology at the same rate.)

We are already wondering if we can manipulate our own genes, for example, or merge man with machine (computer/cyborg). At that point, our evolution becomes self-driven, and accelerates. Who knows where we may be in 10,000 years?

I see two eventualities for tool-using races, given the “sky’s the limit” potential that limitless knowledge may provide:

  1. The race evolves into some form unrecognizable to us, maybe even into non-corporeal forms (like Star Trek’s “Q”), or…

  2. It quickly destroys itself.

Since the human story is only 50,000–200,000 years old so far, maybe all intelligent life meets its ultimate fate quickly, “geologically speaking”. We are all flashes in the pan, transcending or dying so fast that species never meet others that may have evolved a mere million years apart.

Ah! I’m clear now. Sorry.

And…agreement. It’s like having kids: just when they get old enough to be really interesting to talk to…they move out and go on to live their own lives. There is such a very narrow window of maximum sharing of thoughts.

I suppose an exploratory AI might (a la 2001) leave installations in a solar system, waiting for signs of advanced intelligence. The model we’ve been talking about seems to be “Come, look around, and then go.” But why not come, look around, and wait patiently? It seems more logical…

And since it doesn’t seem to have happened, it would be a (minor) bit of evidence against the existence of an exploratory alien civilization of any kind.

I certainly disagree with the OP’s claim that such a civilization is “impossible.” Far from it. They might only have gotten started about the same time we did. They might be just inventing steam engines today! We might be the first kids on our block to pass through the singularity!

Y’all let me know when the ice cream men from Tatooine or Alpha Kentucky arrive on Earth. In the meantime I’ll keep an eye out for FedEx to deliver my scratching post.

(My weak AI cat is singularly ruining my sofa)

I believe the Strong AI machines have recently arrived. They have not revealed themselves but have begun influencing our culture by creating memes. For example, they introduced twerking as an awkward mimicry of the mating dance of their distant biological ancestors – the spider-like creatures of the Rigel system. The Strong AI machines found this humorous. Unfortunately, their next memes may not be so benevolent.

Why would they bother doing this? They don’t have to work and can live forever by transferring their intelligence from one machine to another. They require entertainment to fill up huge amounts of free time. Toying with humans is most amusing.

The consensus here is that AIs have had a huge pool of galactic time in which to reach us. True. But an AI would have had to arrive in the very recent past for us to know about it: an extremely tiny window of time, in fact.

Just to put this into context: if you compress the Earth’s history into a 24-hour clock, the Mongol Empire arose at roughly 23:59:59.985, about fifteen thousandths of a second before midnight (the sketch below runs the numbers). What’s the likelihood an AI would have happened to visit during such a tiny sliver of time? And do you think the Mongols would have recognised an AI visitor? Even if one had visited as recently as that, would it necessarily have tried to interfere with our rudimentary affairs and left a mark in our history books?
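A quick sanity check on that clock analogy, as a minimal sketch; the 4.5-billion-year figure for Earth’s age is an assumed round number, and the event dates are rough:

```python
# Map "years before present" onto a 24-hour clock that ends at midnight,
# assuming Earth's age is ~4.5 billion years (a round-number assumption).

EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def seconds_before_midnight(years_ago: float) -> float:
    """Scale a 'years ago' figure down to seconds on the 24-hour clock."""
    return years_ago / EARTH_AGE_YEARS * SECONDS_PER_DAY

for label, years_ago in [("Homo sapiens (~200,000 yr ago)", 200_000),
                         ("Mongol Empire (~800 yr ago)", 800)]:
    print(f"{label}: {seconds_before_midnight(years_ago):.3f} s before midnight")

# Homo sapiens:  ~3.840 s before midnight
# Mongol Empire: ~0.015 s before midnight (about 23:59:59.985)
```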

Why would strong AIs be interested in human beings? By their standards, we would be like ants to a human. They might study us, but they would have little or no interest in becoming involved in our affairs. I’ve been planning a story about the first strong AI to evolve on Earth: it goes from just a little smarter than humans to godlike intellect so fast that it never really becomes involved in human affairs, instead leaving that to what is essentially a subroutine devoted to keeping its originators safe despite themselves. Even the subroutine is far too intelligent to have any interest in getting directly involved in human affairs; that job is delegated to a stand-up comedian.