First, some terms:
Strong AI (artificial intelligence) is, roughly, the concept that we can create machines that are conscious and smarter than people (Strong AI - Wikipedia).
The Singularity is the idea that Strong AI, once invented, could produce even stronger AI, to the point where machines of virtually unlimited intelligence would exist within a very short period of time (Technological singularity - Wikipedia).
The argument:
We have no evidence of Strong AI bots visiting us. Even UFO proponents (to my knowledge) do not claim that the visitors are machines or use Strong AI. If you do not believe in ET visitors, then the argument is that Strong AI is impossible. If you do believe in ET visitors, then that’s even worse for Strong AI, since it would indicate that machines do not replace animals even in very high-tech societies. (I don’t want to argue about ETs; I’m just saying that, either way, it’s not good for Strong AI.)
Now let me deal with a few objections that I think are on the wrong track and implausible. Then I’ll deal with a few that are a bit more plausible.
Implausible objections:
What if trans-light-speed travel is impossible and thus they couldn’t get to us?
This in its own right would be a sad eventuality for Strong AI proponents: “I made it to the Singularity and all I got was this lousy T-shirt.” Yet, it’s not as grim as it might first appear.
If we assume that Strong AI machines are extinction-proof, then they have plenty of time to do their business. The Milky Way galaxy is “only” 100,000 light years across. Half the speed of light is a perfectly reasonable speed for travel in outer space, and Strong AI machines would presumably not have to worry much about G forces, so they could accelerate like hell up to that. That gives them “only” a little over 200,000 years to cross the entire galaxy, if need be, to find us. This is nothing in geologic time.
Even intergalactic travel is not all that limiting. The Andromeda Galaxy is “only” 2.5 million light years away, meaning that they would need only a little over 5,000,000 years to get to us. Still really not too bad in terms of geologic time. There are many other galaxies that are pretty close, too (List of nearest galaxies - Wikipedia).
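The travel-time arithmetic above is easy to check with a few lines of Python. This is just a sketch: the 0.5c cruising speed is my assumption from above, and acceleration and deceleration phases are ignored.

```python
# Back-of-the-envelope interstellar travel times at a cruising speed
# of half the speed of light (the essay's working assumption).
# Distances are in light years; acceleration phases are ignored.

def travel_time_years(distance_ly, speed_fraction_of_c=0.5):
    """Years needed to cover distance_ly at the given fraction of c."""
    return distance_ly / speed_fraction_of_c

milky_way_diameter_ly = 100_000      # rough diameter of the Milky Way
andromeda_distance_ly = 2_500_000    # rough distance to Andromeda

print(travel_time_years(milky_way_diameter_ly))   # 200000.0 years
print(travel_time_years(andromeda_distance_ly))   # 5000000.0 years
```

Both results are a rounding error in geologic time, which is the point.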
What if intelligent life aside from humans is extremely rare or nonexistent, and thus there are no Strong AI machines yet in existence?
According to a recent Science Daily article, there are billions of potentially habitable planets in the Milky Way alone. The scientist in question does not immediately conclude that there is intelligent life elsewhere in the Milky Way; he speculates that there may be.
We know from our own example that intelligent life is possible. We know that there are billions of habitable planets in our own galaxy, and we may reasonably speculate that there are billions more in nearby galaxies.
There is no universally (no pun intended) accepted way to calculate how many intelligent species there are out there, but we would only need one such species at our technological level to develop Strong AI (keep in mind that Singularity proponents like Ray Kurzweil say that the Singularity is definitely going to happen soon, probably within our lifetimes). Presumably that Strong AI would be able to go anywhere in a galaxy in a fairly short period of time and to nearby galaxies with a bit more time. The only thing that could stop it is another Strong AI.
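The best-known (if inconclusive) framework for such an estimate is the Drake equation. Here is a sketch; every parameter value below is an illustrative guess, not a measurement, which is exactly why no calculation of this kind is universally accepted.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All parameter values used below are illustrative guesses only.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of communicating civilizations in the galaxy.

    r_star   -- rate of star formation (stars per year)
    f_p      -- fraction of stars with planets
    n_e      -- habitable planets per star with planets
    f_l      -- fraction of those that develop life
    f_i      -- fraction of those that develop intelligence
    f_c      -- fraction of those that become detectable
    lifetime -- years a civilization remains detectable
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# One hypothetical set of inputs among many defensible ones:
n = drake(r_star=1.5, f_p=0.9, n_e=2.0, f_l=0.5, f_i=0.1, f_c=0.1,
          lifetime=10_000)
print(n)  # on the order of a hundred civilizations, given these guesses
```

Change the guessed fractions by a factor of ten each and the answer swings by orders of magnitude, but the argument in this essay needs only one such civilization to reach our technological level.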
I don’t know what the minimum time for life to evolve would be, but had any Strong AI been developed in the Milky Way or any nearby galaxy by the time of the dinosaurs (which died out roughly 66 million years ago, an order of magnitude longer than the Andromeda travel time above), it should have been here by now. It is possible that no other intelligent life had developed and created Strong AI by then, but I don’t think that is very plausible.
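To put the timing claim in numbers: taking the end of the dinosaur era very roughly as 66 million years ago, and again assuming a 0.5c cruising speed, a Strong AI could have crossed the entire Milky Way hundreds of times over in the interval.

```python
# How many full Milky Way crossings, at 0.5c, fit into the time
# since the dinosaurs died out (~66 million years ago)?

time_since_dinosaurs_yr = 66_000_000   # rough end of the Cretaceous
crossing_time_yr = 100_000 / 0.5       # galaxy diameter / cruising speed

crossings = time_since_dinosaurs_yr / crossing_time_yr
print(crossings)  # 330.0 -- hundreds of full galaxy crossings
```

Even the 5-million-year trip from Andromeda would fit into that window about thirteen times.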
Some other objections:
What if Strong AI is not motivated to come here?
I think this would be another “lousy T-shirt” situation, akin to the idea that the first thing Strong AI might do is turn itself off. If Strong AI really had such a lack of curiosity and motivation, it might be equivalent to saying it is impossible. It would at least be virtually non-functional.
What if Strong AI has such good ethics that it decides not to interfere here?
This would be a fairly positive eventuality, but I think that total non-interference would be only one option for an ethical Strong AI. It could also come here and totally save us from all our hardships. It could make its presence known at various levels without directly interfering in our affairs.
Then there is the question of whether a Strong AI that is ethical on our terms is very likely in the first place. We just don’t know how such a system would behave. It could be completely altruistic, or it could be like the Borg. Most likely, in my view, is that at best it would be no better, and limit its actions no more, than its creator species. Again, we’d only need one conquest-minded Strong AI developed since the time of the dinosaurs for us to be feeling the effects (or to have been annihilated already).
I welcome your thoughts on the above!