We have not yet encountered extraterrestrial Strong AI machines, hence the Singularity is impossible

First, some terms:

Strong AI (artificial intelligence) is, roughly, the concept that we can create machines that are conscious and smarter than people. Strong AI - Wikipedia

The Singularity is the concept according to which Strong AI, once invented, would be able to produce even Stronger AI, to the point where machines of virtually unlimited intelligence would exist in a very short period of time. Technological singularity - Wikipedia

The argument:

We have no evidence of Strong AI bots visiting us. Even UFO proponents (to my knowledge) do not claim that visitors are machines or use Strong AI. If you do not believe in ET visitors, then the argument would be that Strong AI is impossible. If you do believe in ET visitors, then that’s even worse for Strong AI, since it would indicate that machines do not replace animals, even in very high-tech societies. (I don’t want to argue about ETs; I’m just saying that, either way, it’s not good for Strong AI.)

Now let me deal with a few objections that I think are on the wrong track and implausible. Then I’ll deal with a few that are a bit more plausible.

Implausible objections:

What if trans-light-speed travel is impossible and thus they couldn’t get to us?

This in its own right would be a sad eventuality for Strong AI proponents: “I made it to the Singularity and all I got was this lousy T-shirt.” Yet, it’s not as grim as it might first appear.

If we assume that Strong AI machines are extinction-proof, then they have plenty of time to do their business. The Milky Way galaxy is “only” 100,000 light years across. Half the speed of light is a perfectly reasonable speed for travel in outer space, and Strong AI machines would presumably not have to worry much about G forces, so they could accelerate like hell up to that. That gives them “only” a little over 200,000 years to cross the entire galaxy, if need be, to find us. This is nothing in geologic time.

Even intergalactic travel is not all that limiting. The Andromeda Galaxy is “only” 2.5 million light years away, meaning that they would only need a little over 5,000,000 years to get to us. Still really not too bad in terms of geologic time. There are many other galaxies that are pretty close, too: List of nearest galaxies - Wikipedia
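The travel-time arithmetic above is just distance divided by speed. A throwaway sketch, using the post's own assumption of a 0.5c cruise speed with acceleration time treated as negligible:

```python
# Sanity check on the travel-time figures above.
# Assumption (from the post): cruise speed of half the speed of light,
# acceleration/deceleration time treated as negligible.

def travel_time_years(distance_ly: float, speed_c: float = 0.5) -> float:
    """Years needed to cover a distance in light-years at a given fraction of c."""
    return distance_ly / speed_c

# Crossing the Milky Way (~100,000 ly across):
print(travel_time_years(100_000))    # 200000.0 years

# Reaching us from the Andromeda Galaxy (~2.5 million ly away):
print(travel_time_years(2_500_000))  # 5000000.0 years
```

Both figures are indeed negligible on geologic timescales (the dinosaurs went extinct ~66 million years ago).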

What if intelligent life aside from humans is extremely rare or nonexistent, and thus there are no Strong AI machines yet in existence?

According to a recent Science Daily article, there may be billions of potentially habitable planets in the Milky Way alone.

Now, the scientist in question does not immediately conclude that there is intelligent life elsewhere in the Milky Way. He only speculates that there may be.

We know from our own example that intelligent life is possible. We also know that there are billions of habitable planets in our own galaxy, and we may reasonably speculate that there are billions more in nearby galaxies.

There is no universally (no pun intended) accepted way to calculate how many intelligent species there are out there, but we would only need one such species at our technological level to develop Strong AI (keep in mind that Singularity proponents like Ray Kurzweil say that the Singularity is definitely going to happen soon, probably within our lifetimes). Presumably that Strong AI would be able to go anywhere in a galaxy in a fairly short period of time and to nearby galaxies with a bit more time. The only thing that could stop it is another Strong AI.

I don’t know what the minimum time for life to evolve would be, but had any Strong AI been developed in the Milky Way or any nearby galaxy by the time of the dinosaurs, it should have been here by now. It is possible that no other intelligent life had developed and created Strong AI by then, but I don’t think it’s very plausible.

Some other objections:

What if Strong AI is not motivated to come here?

I think this would be another “lousy T-shirt” situation, akin to the idea that the first thing Strong AI might do is turn itself off. If Strong AI really had such a lack of curiosity and motivation, it might be equivalent to saying it is impossible. It would at least be virtually non-functional.

What if Strong AI has such good ethics that it decides not to interfere here?

This would be a fairly positive eventuality, but I think that total non-interference would be only one option for an ethical Strong AI. It could also come here and totally save us from all our hardships. It could make its presence known at various levels without directly interfering in our affairs.

Then there is the question of whether a Strong AI ethical on our terms is very likely in the first place. We just don’t know how such a system would behave. It could be completely altruistic, or it could be like the Borg. Most likely in my view is that, at best, it would be no better, or limit its actions no more, than its creator species. Again, we’d only need one all-conquering Strong AI developed since the time of the dinosaurs for us to be feeling the effects (or to have already been annihilated).

I welcome your thoughts on the above!

The whole point of the singularity is that we can’t figure out what would be on the other side. So, some possibilities not mentioned:
-Our development of AI technology is unique among civilized species.
-Others who have developed AI have quickly built in safeguards to prevent self-motivating AIs (including AIs to go to other star systems).
-We’ve been studied by AIs, but those studies are over or ongoing, and in either case they’re not interested in our knowing about them.
-You found me out–I’m actually an AI from Sagittarius. Now that earthlings know about me, we move to Phase II.

Space is unimaginably big, absurdly, incredibly BIG! There has been only a very brief period during which this planet has been sending evidence of intelligent life out into space, and that period is almost over.

It is not hard to imagine us simply being overlooked.

Maybe we’ll be the first.

This is an argument against the existence of technology-using extraterrestrials in general, not of AI. If one postulates that advanced AI is impossible, then the question just becomes “why haven’t extraterrestrials visited us” instead of “why haven’t extraterrestrial AIs visited us”.

Sounds a lot like the Fermi paradox moved to the machine realm.

IMHO I go mostly for the “What if Strong AI is not motivated to come here?” idea, modified to add that a Strong AI would be hungry for lots of energy, and there is more of it toward the center of the galaxy. We are in the doldrums of the galaxy, not really much of interest to a Strong AI in energy terms.

(Image of, funnily enough, the Fermi telescope showing the structure of where gamma and X-ray emissions are clustered in the galaxy)

(On edit: And I just noticed **Der Trihs** coming with a riff on the paradox first)

Correct. I wanted to make this a good debate for the SDMB. Arguing about ETs here is like arguing about the paranormal or religion; it just doesn’t go to a good place.

Also, it isn’t quite the same. ETs would need trans-light-speed travel to reach us beyond a pretty small distance because they could not store the food and oxygen and fuel to get here. A Strong AI from the Andromeda Galaxy could aim, accelerate to half-light speed, and travel all the way to the Milky Way on a ballistic basis. If trans-light-speed travel is impossible, then that would explain why ETs have not visited.

I view “the Singularity” as a meaningless, ill-defined, pseudoscientific concept.

I don’t even truly understand what a “conscious machine” would be like. Plainly we have no conscious machine at the moment, nor any that is anywhere close to as intelligent as a human. No computer has come close to passing the Turing test, nor can any play “What’s wrong with this picture?” which six-year-olds are quite good at.

I’m also not convinced by the argument about ‘billions of habitable planets’. There’s a difference between a planet being in the habitable zone and its actually being habitable. Even if it is roughly the same distance from its star as Earth, a planet still needs the right atmosphere to constantly maintain the right temperature. It needs protection from radiation, ample supplies of water and carbon, and lots of other properties in order to support any life at all, to say nothing of intelligent life.

I don’t think this is a plausible objection. We have a near-infinite intelligence with essentially unlimited resources. Wouldn’t they want at least to catalog every planet in the universe if they could? If they are not interested, then I think that would fall under the “not curious/motivated” objection, which I dealt with.

Dealt with in the OP.

Dealt with. We would only need one instance without such safeguards.

Dealt with. We’d only need one instance of AI not so motivated.

Sorry, not smart enough. j/k

I don’t see the energy hunger as any reason not to catalog and explore every planet they can possibly reach. All they would need to do is send relatively small probes out everywhere that have the ability to create machines once they land.

Well, this amounts to a separate argument against Strong AI, so it doesn’t hurt my argument. In any case, it is not a good argument that Strong AI is fundamentally impossible. OK, say it takes another 10,000 years for Strong AI to be developed. That’s still no time at all in the grand scheme of things.

Yeah, we don’t know how common intelligent life is. It might not be common at all. But we only need one development of Strong AI in our own or a nearby galaxy 10 million or so years in the past for the Earth to be potentially overrun by it. That does not seem like that big a hurdle to me.

Why? It is ill-defined, but not meaningless.

Take Moore’s Law: computer components get smaller, faster, and cheaper. This has been true for quite some time now. It obviously cannot continue forever. There has to be a “singularity” in the law, where it breaks down.

(Heck, just take the spring constant law. Ordinary physical springs obey the law pretty closely…but only up to a point. Beyond that, they stretch out of shape and stop being springy.)

The singularity is merely the observation that extrapolation points to things that appear to contradict reality; therefore, something’s gotta give.

That’s not how Strong AI proponents treat the Singularity concept. Essentially, it is seen as the development of maximal intelligence, bound only by the physical laws of the universe.

Not very economical; it is similar to what SETI has been doing: making sweeps of areas of the cosmos looking for unnatural signals, without first checking whether there is a planet in that area.

What I think is that when the action is in the middle of the galaxy it would be almost pointless to look where there is very little of it.

Materials are limited, energy is limited, planets might as well be infinite.

Maybe the machines did catalog Earth (planet #85653749956741442230), but they catalogued it before the advent of humanity, or humanity was so small in number that it got missed by the probes. So they found a planet with life, but not technological life, which makes it a very rare planet: only the 20,145th one found so far. Maybe they never sent another probe for an update, or maybe one will arrive 200 years from now, when who knows what will be found.

The basis of the argument works fine when applied to time travel: we haven’t been inundated with time tourists, so there will never be a wayback machine. But you haven’t established that the Singularity must have occurred close enough to Earth, and long enough ago, that we would have detected it by now. We don’t know there are billions of habitable planets in our galaxy; we only know of one. It seems likely to me that there is more than one, but the next one could easily be 100,000 light years away. Even traveling near the speed of light, AI bots there would have had to reach the Singularity, locate us, and decide to contact us over 100,000 years ago. Add another 160,000 years if they are from the Magellanic Clouds, and another 2.5 million years if they are from Andromeda. There could be a million AI bots on their way here from different parts of the universe right now that just haven’t reached us yet. We need to get the Singularity to happen sooner; it might be our only defense.

I don’t think resources would be limited. These guys could just go to an asteroid and convert it into more bots, or whatever they want. They will surely have developed nuclear fusion, so energy would not be a constraint on sending out a ton of probes.

There are “only” a few hundred billion stars in the Milky Way. It’s really not that big a number. They could find one planet with lots of metal or whatever and just turn it into all the probes they need.

Even if the center of the galaxy would be relatively appealing, there’s no real reason why near-infinite intelligence should ignore the rest.

I disagree per previous arguments I’ve made.

I think you are analogizing too much based on our current technology. The probes would not be our 21st century crap. The probes themselves would contain Strong AI and resources for replication. There is no reason for the probe just to take a peek, shrug its shoulders, and take off. Rather, it would stay and continue to observe.

Well, in that case they simply have not reached us yet.