We have not yet encountered extraterrestrial Strong AI machines, hence the Singularity is impossible

The way they think may be entirely unlike the way WE think. But I would go more along the lines of

Why bother showing up and leaving evidence if you can just quietly observe and note?

I refer you to Moore’s Law, which is far better known and probably older than you are. ROFL

The only question is, how far will Moore’s Law go? When will it stop?

All engineering problems become insignificant issues after a certain number of generations of Moore’s Law. Every engineering problem can be solved given a specific speed and number of calculations; it’s all math. It’s just a question of how MANY engineering problems get solved BEFORE Moore’s Law stops. Once it stops, scientific progress will most likely slow down a lot.
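
To put rough numbers on how quickly that compounding adds up, here is a minimal back-of-envelope sketch. The 18-month doubling period, and the idea that raw compute maps straight onto problem-solving power, are assumptions made purely for illustration, not claims about real hardware roadmaps.

```python
# Back-of-envelope sketch: how capability compounds under a fixed doubling
# period. The 18-month figure and the "compute = problem-solving power"
# framing are assumptions for illustration only.

DOUBLING_PERIOD_YEARS = 1.5  # assumed Moore's-Law-style doubling time

def growth_factor(years: float) -> float:
    """Multiplicative increase in compute after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 50, 100):
    print(f"{years:>3} years -> roughly {growth_factor(years):,.0f}x the compute")
```

Fifty years of steady doubling works out to roughly a ten-billion-fold increase, which is why the “when will it stop?” question matters so much.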

Logic dictates it was not feasible for Strong AI to have been developed in the time of the dinosaurs. Tyrannosaurs can’t reach the keyboards with their tiny little arms. Duh.

Maybe it’s a licensing issue. The Galactic Ouroboros Device might indeed be looking out for us, but will only make itself known to those who sign the End User License Agreement, hereinafter known as “faith.”

More seriously – it is possible that we have simply failed to observe existing Strong AI due to some quirk of our own. Scientists studied lab mice intensively under controlled conditions for decades without being aware that they sing. We can be surprisingly blind sometimes.

Maybe it is arrogant to expect that we would detect, recognize or understand Strong AI at all.

There are AI bots on planets all over the universe. They are all waiting to go and explore the universe, but one of them has not yet said “I see an AI bot with blue optical sensors”.

I think the OP handwaves away a lot of potential laws-of-physics problems. At this point, we have no idea how to accelerate anything of size to anywhere near half-light speed, even with no constraints on g-forces. Under the laws of physics as we currently understand them, you would just have to expel an awful lot of matter to get up to speed.
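
To put a number on “an awful lot of matter”, here is a minimal sketch of the relativistic rocket equation. The exhaust velocities are illustrative guesses (the “fusion drive” figure in particular is an assumption), not engineering data.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def log10_mass_ratio(beta: float, exhaust_velocity: float) -> float:
    """log10 of (initial mass / final mass) from the relativistic rocket
    equation, for a single burn up to a final speed of beta * c."""
    return (C / (2 * exhaust_velocity)) * math.log10((1 + beta) / (1 - beta))

# Assumed effective exhaust velocities -- purely illustrative.
engines = {
    "chemical rocket (~4.5 km/s)": 4_500.0,
    "hypothetical fusion drive (~5% of c)": 0.05 * C,
    "ideal photon rocket (c)": C,
}

for name, v_e in engines.items():
    exponent = log10_mass_ratio(0.5, v_e)  # accelerate to half light speed
    print(f"{name}: mass ratio of about 10^{exponent:.3g}")
```

Even the optimistic fusion figure gives a mass ratio of roughly 60,000 to 1 just to reach half light speed, and braking at the destination squares that; with chemical rockets the mass ratio has thousands of digits.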

Also, the OP assumes a near-infinite intelligence – there must be limits on that as well. At some point, assuming light-speed limits hold, the delays in getting signals from one end of a mechanical brain to the other will enforce some limits on processing speed.
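
As a rough illustration of that limit (the sizes, and the framing of one globally synchronised “step” per light-crossing time, are my assumptions, not the OP’s):

```python
C = 299_792_458.0  # speed of light, m/s

def max_coherent_steps_per_second(diameter_m: float) -> float:
    """Upper bound on how often a 'brain' of the given diameter can complete
    a globally synchronised step, if its internal signals travel at c."""
    return C / diameter_m

brains = [
    ("human-brain sized (0.15 m)", 0.15),
    ("building sized (100 m)", 100.0),
    ("planet sized (~1.3e7 m)", 1.3e7),
]

for label, diameter in brains:
    rate = max_coherent_steps_per_second(diameter)
    print(f"{label}: at most ~{rate:,.0f} coherent steps per second")
```

A planet-sized brain gets maybe twenty fully coherent “thoughts” per second no matter how fast its individual components are; past that it has to work as loosely coupled regions.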

Even if we could develop chips that were as compact as neurons, you would still need an awful lot of them to have an A.I. a million times or a billion times more intelligent than a human, and that will be a big, hungry device that will need a big ship with a lot of support to keep it going. Maybe a super-AI will very quickly work out the rest of the laws of physics and conclude it’s just too expensive, energy- and materials-wise, to venture more than a few hundred light years away.
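
A crude scaling sketch of the “big, hungry device” point, using the commonly quoted ~86 billion neurons and ~20 watts for a human brain, plus the very naive assumption that intelligence scales linearly with hardware:

```python
HUMAN_NEURONS = 8.6e10    # commonly cited rough count for a human brain
HUMAN_BRAIN_WATTS = 20.0  # rough metabolic power of a human brain

def scaled_requirements(intelligence_multiple: float):
    """Naive linear scaling: N times the 'intelligence' means N times the
    neuron-equivalent units and N times the power. A big assumption."""
    units = HUMAN_NEURONS * intelligence_multiple
    watts = HUMAN_BRAIN_WATTS * intelligence_multiple
    return units, watts

for mult in (1e6, 1e9):
    units, watts = scaled_requirements(mult)
    print(f"{mult:.0e}x human: ~{units:.1e} neuron-equivalents, ~{watts / 1e6:,.0f} MW")
```

Even at brain-like efficiency, the million-fold version draws about 20 megawatts and the billion-fold version about 20 gigawatts, the output of several large power stations, before you add radiators, shielding and propulsion for the ship carrying it.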

In just a very short time we’ve gone from being a noisy planet to one that has almost stopped transmitting. If there are aliens or probes, I imagine they are quietly observing until we develop robust space travel.

I think it’s clear that you’re arguing against an ill-defined strawman. Every objection that’s been raised is either not the argument you imagined you’d argue against, or insufficiently interesting to qualify for that same imaginary argument (lousy t-shirts galore).

Find someone or something that states the position you’re disputing (“The singularity will happen and have an interesting effect on us?”), and then you can tear it apart.

Personally I think you’re overstating the effects of “the singularity” and the capabilities of strong AI.

The cynic in me says that “the singularity” is just a device to keep people from asking Ray Kurzweil the embarrassing question of why his predictions keep being wrong. If a singularity (in the cosmic sense) is a place where the normal laws of physics are suspended and we have no idea what may happen there, then a technological singularity is a get-out-of-jail-free card for a futurist. You posit this concept of a singularity so you can stop making predictions that don’t happen (Robots in every house by 1980! Flying cars!) by simply handwaving and saying “predictions post-singularity are impossible by definition.”

The pragmatist in me says “the singularity isn’t all that special.” We do have robots in every house, they’re just not in the android-style multi-purpose form imagined in the 50’s. We do have flying cars, they’re just not the individual, single passenger style imagined in the 50’s. Rosie the Robot and the Jetsons’ flying cars are technologies that never materialized because their novelty was outweighed by their lack of necessity. It could be that some super-intelligent extraterrestrial species COULD do all the things you believe they WOULD, but they just don’t because, as alluded to upthread, close examination of every anthill in the cosmos is mind-numbingly boring and overkill. Better to invest their energy and technology into creating a virtual world where every day is steak and bj day and put further innovation into virtual boob jobs and virtual improvements in bj and steak preparation techniques.

The logician in me says your argument is full of holes.

P1 The singularity is a point in time when innovation becomes so rapid and varied that predictions of the type of technology and uses of technology which will be developed are by definition impossible.
P2 Any post-singularity civilization will use their advanced probes and space travel to scour the galaxy on a continual basis looking for other signs of intelligence using techniques detectable by us with our current technological capabilities.

P2 is contradicted by P1. You’ve accepted part of the premise of the singularity, the concept of Strong AI leading to innovation, but you’re imposing lots of your own rules on it beyond that. That your Strong-AI-based entities will hold galactic, or intergalactic, exploration as important is non sequitur number 1. Your assertions about their timelines, duration of engagement with each planet, and many other things which have to fall into place to support your conclusion are also non sequiturs.

The argument boils down to “The singularity is a point beyond which we can’t predict things, but if someone were beyond the singularity I predict they’d do X, Y, and Z, and since those things haven’t happened, then no entities in the universe have passed the technological singularity.”

Enjoy,
Steven

Well, I suppose I could fill the role of a proponent, if you really want one; there seems to be no physical law that could prevent the development of Strong AI, although I wouldn’t like to commit myself to a deadline like Kurzweil does. As **Aeschines** has pointed out, even if it takes ten thousand years that would only be an eyeblink in the history of the Galaxy.

So why aren’t the AI replicators here?

Some possibilities:

1/ Intelligent life is really, really rare (the Rare Earth hypothesis). Even with billions of terrestrial planets in our galaxy, there might not be enough worlds that can support intelligent life. There are a very large number of parameters that define a terrestrial world; atmospheric composition and density, average temperature, diurnal temperature range, day length, obliquity, level of volcanic and earthquake activity and plate characteristics, ocean depth and land fraction, eccentricity of orbit, variability and flaring characteristics of the local star and the nature of the local galactic environment. Abiogenesis might be a very rare event indeed. We could easily be one of only a very small (<10) number of civilisations in our galaxy; if so then there is a reasonable chance that none of those civilisations has developed AI replicators. (A toy version of this filter arithmetic is sketched just after this list.)

2/ AI replicators are banned by interstellar law. If, instead, there are lots of different civilisations, they may have established a strong prohibition against replicators simply because they are annoying (get your AI replicators orf of my land), or because they are bad manners (like deliberately spreading a disease among the stars) or because they are dangerous (an AI replicator that wants to use your resources might decide to eliminate your civilisation first). Every few million years a new civilisation develops AI replicators and is soundly chastised for its presumption.

3/ There’s no point to expansion into the galaxy. An advanced AI civilisation could build a vast network of minds within a single solar system, but even in something as comparatively small as a single system the time it takes for a signal to pass from one side of the system to the other would be very long. If advanced AIs process information significantly faster than humans, then the wait for an answer from a planetary brain a light hour away would be a very long wait indeed. When you start to expand into other systems the signal latency is measured in years, decades, thousands of years; no civilisation could remain coherent under such circumstances, and paranoia would result.

Note that there is almost certainly no possibility of faster-than-light transmission of usable information, let alone travel, and even relativistic travel requires extravagantly vast amounts of energy compared to interplanetary travel. It may be the case that every AI civilisation makes the same calculations, and realises that the benefits of exploration and expansion are minimal, while rogue colonies that are too far away to have meaningful dialogue with pose a very real threat. Every AI civilisation might be confined to a single system for very good reasons.

4/ Sailboat makes a very good point. We don’t know what AI replicators would look like, so we might be unable to recognise them. They could be spread quite thickly throughout our asteroid belt or in the rings of Saturn, or in the cloudtops of Venus, and we would not have observed them yet. They might even take physical forms that we have not yet considered, using processors that resemble bacteria or dust or solar protons. They might be everywhere, and have no impact on our lives whatsoever. So far.
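
Here is the toy filter arithmetic promised under possibility 1/. Every fraction in it is invented purely for illustration; the point is only how quickly a chain of Drake-equation-style filters multiplies out.

```python
# Toy "Rare Earth" filter arithmetic for possibility 1/ above.
# Every number here is invented for illustration only.

TERRESTRIAL_PLANETS_IN_GALAXY = 1e10  # assumed order of magnitude

filters = {
    "right atmosphere, temperature and orbit": 0.1,
    "quiet star and benign galactic neighbourhood": 0.1,
    "abiogenesis actually happens": 0.001,
    "multicellular life evolves": 0.01,
    "technological intelligence evolves": 0.01,
}

survivors = TERRESTRIAL_PLANETS_IN_GALAXY
for name, fraction in filters.items():
    survivors *= fraction
    print(f"after '{name}': ~{survivors:,.0f} worlds left")
```

With those made-up fractions, ten billion terrestrial worlds shrink to about ten civilisations, which is comfortably inside “maybe nobody ever got around to building replicators” territory.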

Basically the Fermi Paradox has a large number of possible solutions, and we haven’t got enough information to discriminate between them yet.

Science fiction writer (and, yeah, real scientist) David Brin has a charming variant on this: it might be that the vast majority of worlds that can support life are water-worlds, and that Earth might be very, very rare for being a life-supporting world with a significant land surface. This gives us an advantage over all those whales and dugongs and such: we can more easily harness fire and start smelting metal. Boom! Next thing ya know we’re in space.

He likes to take the idea to a higher level of speculation: once we get out in space, we’ll contact all of these intelligent sea-life, and become their “internet.” We’ll carry their mail back and forth. We’ll be the interstellar infrastructure. Nice!

(Anyway, it beats the “ravening xenophobic genocides” model!)

Nice idea, and it may be correct that waterworlds are far more common than planets with continents and oceans. Our own world might have been a waterworld if not for the event that formed the Moon (the Big Splash), although such events might not be all that rare in a big galaxy like ours.

However waterworlds might not be capable of supporting abiogenesis; if abiogenesis needs rockpools or surface mud or some other environment that can only be found on land or in a tidal environment, a waterworld would never produce life (as we recognise it) at all.

Not even remotely. Just because you think it would be cool to spend your time cataloging planets doesn’t mean something far more intelligent than you would want to spend its time doing that.

In fact I’m not sure where the idea that super-smart beings would want to spend their time conquering or spreading out or producing better copies of themselves comes from.

That WE might think it’s cool conquering and spreading out and reproducing is a product of our hominid ancestry; a smarter being who didn’t share our Darwinist history might not think there’s anything cool about those things at all.

A smarter being might think the coolest thing to do was to spend its time in meditation. Or easing suffering. Or getting stoned. Whatever.

What if, in addition to trans-light travel being impossible, travel at even appreciable fractions of light speed is also practically impossible?

How about the possibility that alien Strong AI machines are here, but for reasons they’re not prepared to discuss with us, they wish us to remain ignorant of their presence? With, for practical purposes, something like infinite intelligence to devise suitable means, they could quietly eliminate any witnesses to their presence; and, having done this, move on to eliminating those who’d so much as imagined what they might be doing.

Wasn’t that pretty much what Scott Adams figured, in one of the DILBERT books? Assume a group of super-smart entities who study mankind by simply living among us as neutral observers, casually producing the world’s most accurate clocks (and excellent chocolate, and extraordinary pocketknives) during seemingly-mundane commerce; move along now, nothing to see here, just a bunch of stern-looking folks, it’s not like this is all part of some ongoing disguise effort in Switzerland.

Another important piece of evidence is the 2 billion plus year interregnum between the development of the first bacterial life on Earth (evidence is that it occurred about as soon as life could physically exist on Earth) and the first occurrence of multicellular life forms on Earth (the Cambrian explosion, about half a billion years ago). This argues that life will occur almost anywhere planets are habitable, but that the development of multicellular life forms is a very rare and difficult thing. If it isn’t, why did the Cambrian Explosion take two billion freaking years to happen? Perhaps for some reason Earth was incredibly slow to develop multicellular life, perhaps it was incredibly fast … but a two billion year interregnum argues that there are huge obstacles to the development of multicellular life forms.
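
For what it’s worth, here is that timeline arithmetic with commonly cited round figures; the exact dates are debated, so treat it as a sketch rather than data.

```python
# Rough timeline arithmetic for the "interregnum" argument above.
# Dates are commonly cited round figures, not precise values.

EARTH_AGE_GY = 4.54            # age of the Earth, billions of years
FIRST_LIFE_GYA = 3.8           # earliest widely cited evidence of life
CAMBRIAN_EXPLOSION_GYA = 0.54  # Cambrian explosion

gap_gy = FIRST_LIFE_GYA - CAMBRIAN_EXPLOSION_GYA
fraction = gap_gy / EARTH_AGE_GY

print(f"pre-Cambrian stretch: ~{gap_gy:.1f} billion years")
print(f"that is ~{fraction:.0%} of Earth's entire history")
```

With these figures the single-celled stretch is actually more than three billion years, roughly 70 percent of Earth’s entire history, which if anything strengthens the point.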

Steve Gould brings this up in his book “Wonderful Life” and postulates a galaxy full of planets with very well developed algal mats and nothing more complex on them. I’m surprised this point does not come up whenever the topic of extraterrestrial life comes up.

The problem with the “it only takes one” argument is that it proves too many things that are clearly not so. For instance, most people don’t go around murdering, but it only takes one to kill you, so why are you still here to write this post? :dubious:

I’ve always favored the “lack of motivation” argument. Why exactly would an AI want to build more AIs? Humans reproduce because we have a genetic drive to do so. But a computer wouldn’t have any such motivation. It would analyze the situation rationally and see that producing more AI systems like itself would just create competition for the resources it uses. And creating improved AIs would just create superior competition.

One explanation that I like, because I made it up myself for a book I’m working on, is a universe populated with many strong AIs that are godlike by human standards. They value planets with intelligent life on them because such planets can create new strains of strong AI. Of course, so can the strong AIs themselves, but the new strains can have qualities the existing strong AIs never anticipated, which they value for pretty much the same reason that geneticists and big agriculture value wild variants of commercial crops. (Wild variants have a much wider gene pool than commercial crops, which are much more vulnerable to some kinds of diseases than native strains. See the Master’s piece on bananas for more info.)

This explanation is as consistent with the facts, as far as we know them, as any of the others. Plus, it makes my book more believable!

You always hear the line that aliens will be certain to be peaceful, but it just occurred to me that there’s another twist to the same Fermi issue you could argue, i.e.: any alien civilization is more likely to have an aggressive, non-peaceful agenda than a peaceful one, since a warlike civilization is more likely to prevail in a conflict with a peaceful civilization, and it only takes one aggressive, expansionist civilization to conquer the Milky Way.