We have not yet encountered extraterrestrial Strong AI machines, hence the Singularity is impossible

Disregarding the possibility that we are the first or only, I would argue the most likely scenarios do not involve strong AI being impossible. The mere fact that our brains exist points quite clearly to it being possible to have at least human-level intelligence in a tiny package, made out of readily available resources.

The most plausible scenarios, in my opinion, are these two:

First, FTL travel is impossible, and intelligent, tool-using life is decently rare, such that it is unlikely to arise more than once or twice per galaxy. This would put most life/AI at quite literally an unreachable distance from us, since travel could not outpace the expansion of the universe. And considering the difficulty and energy required to cross space, only a small fraction of space would be subject to exploration by such an intelligence. Granted that a machine intelligence would have a lifespan quite a bit longer than a human's, it is still a stretch to imagine it is easy or viable to create a machine that can last the millions of years necessary to cross the gulf between galaxies. It would certainly not be a task undertaken trivially, or on a whim, as such a craft would require a gross expenditure of resources and time.
Second, if FTL travel is possible, then they have come, and perhaps are even monitoring us now, but are simply not interfering. Who can say why. Some robotic Three Laws/Prime Directive sort of thing, perhaps.
Actually, the non-interference would apply to any AI of the sort, simply because they have no reason to interfere. We are not a threat, nor a competitor, nor do we have anything to offer them. It would take only the vaguest whim to tip the equation towards leaving us alone.

Yeah, but if the goal isn’t conquest/expansion, what’s the point? What are these hypothetical robot monitors monitoring us for? It’s not like they’re going to report back to the biological units that created them, who might be extinct in any case.

My problem with both of those points is that they assume all AI would act the same. Even if you had five different highly evolved civilizations, it is unlikely all five would agree not to interfere or make their presence known.

My view is that if there are five or more intelligent species with access to us, it is unlikely that all of them would independently agree not to come here and/or not to interfere here. Besides, non-interference is a sign of bad ethics, since strong AI would be capable of helping us solve a variety of problems, not just for ourselves but for other life forms (e.g., how to industrialize w/o destroying the environment, how to grow meat w/o animal suffering, etc.). I tend to assume any species which develops strong AI would have to be compassionate (or something similar), since developing an advanced society should require millions and probably billions of organisms living together in relative peace and harmony the way humans do (I said relative). Tit for tat is the best survival strategy: cooperate and show compassion until the other side takes advantage of you, then retaliate in kind. So being cooperative and empathetic has a survival advantage you may need in order to have a society (then again, maybe there are other methods to keep a large society running w/o empathy). So any human-made strong AI would probably make contact and try to help other civilizations. I don't see why aliens would be different.
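The tit-for-tat strategy mentioned above comes from Axelrod's iterated prisoner's dilemma tournaments. A minimal sketch, assuming the standard payoff matrix (the function and strategy names here are illustrative, not from any particular library):

```python
# Iterated prisoner's dilemma with the standard payoffs:
# mutual cooperation pays 3 each, mutual defection 1 each,
# and a lone defector gets 5 while the cooperator gets 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        # Each strategy sees only the opponent's past moves.
        ma = strategy_a(moves_b)
        mb = strategy_b(moves_a)
        moves_a.append(ma)
        moves_b.append(mb)
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa
        score_b += pb
    return score_a, score_b
```

Against a fellow cooperator, tit-for-tat cooperates forever; against a pure defector it loses only the first round and then matches defection, which is why it is robust without being a pushover.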

The concept of being limited by the speed of light isn't something I am totally sure about either. There are proposed designs for propelling ships faster than light by expanding or contracting space in front of and behind the ship. It seems like there might be a way around it.

I tend to assume we are one of the first intelligent species who also have advanced tools. The universe is not that old; we live in a second- or third-generation star system, and no intelligence could evolve in a first-generation system since there weren't enough heavy atoms. I have no idea how many generations of stars will continue to support biological life. I think stars are supposed to keep burning, decaying and burning for tens of trillions of years. If so, maybe stars that are thousandth-generation when the universe is 2 trillion years old will still be creating life spontaneously.

Being intelligent isn’t enough though. Pigs and dolphins are intelligent too. You also need culture to pass on knowledge generation to generation as well as a physical body that can make tools. Not only that but you have to have a society that values those things. Human society was capable of industrializing but didn’t until the 1800s.

I honestly thought you were going to dovetail these by suggesting we are the self-replicating AIs sent here as sleeper agents for a who-can-say-why reason.

Close. Not all of us are Cylons. Most are merely human. :wink:

If I were a member of an elder species who discovered the secret to FTL after achieving a stable and benevolent culture, then I would recommend searching for other intelligent life. Why? Because perspective is everything. All cultures eventually stall out, become decadent, and wither unless they are infused with new ideas and insights. Also, if it appears that life is ubiquitous and intelligence isn't, then succoring younger cultures VERY carefully would be a reasonable goal. This could be done by introducing agents who might subtly change events to lead a culture away from self-destructive behaviors. Or, at the very least, some mechanism to record the immolation that occurred when passions outran intellects.

The interactions of several thousand intelligent beings are more complex than the fluid dynamics of a nebula collapsing into an event horizon. I posit that there would be nothing more interesting than the deep study of any culture alien to our own. This would be useful even if we never interacted with the studied culture, because useful knowledge is gained even if it is only that which teaches us more about our own culture.

Maybe we are the first. The first sentient species in the Universe, that is, or one of the first. That’s one of the stock answers to the Fermi Paradox.

Imagine us being the ‘elder race’.
It boggles the imagination.
But, that is a valid solution to The Fermi Paradox.

I don’t understand what they mean by ‘currently non-intersecting light cones’. Are they talking about how other civilizations may be sending out signals but they are traveling in a different direction of 3D space so we never see them?

According to Steven Pinker, intelligence is an adaptation to help people survive in social units (smarter people can build coalitions, and they can take more resources out of society than they put in while preventing others from freeloading the way they themselves are). He claims it creates an arms race of intelligence. But even so, intelligence is meaningless if you don't have the physical resources (our planet is full of minerals and fossil fuels) to build a civilization, or a working hand to build tools, or the capacity for culture and language, or the capacity to live in large groups peacefully, etc. And despite the fact that we have all these things, we didn't even start industrializing until a few hundred years ago. We spent the first 200k years of our existence as nomads or in pre-tech civilizations.

I tend to believe we are the first. I disagree with the concept on that webpage of ‘Intelligent beings are vulnerable to (self)-destruction’. Our powers to destroy ourselves grow as our tech grows, but so do our abilities to survive catastrophe. We already have models, at least on paper, on how to avert asteroid collisions. We know tons about growing agriculture (even w/o the sun we can use artificial light powered by nuclear power if need be to grow crops indoors) and about public health to fight pandemics. The permanent threats to our survival become smaller and smaller the smarter we get.

Wesley Clark, as far as I can tell, a light cone uses time as its primary axis, with the base circle describing the extent of a hypersurface that in turn represents the three normal dimensions of space.

It is a way of describing the movement of light through spacetime.

There is a fairly coherent description at Light cone - Wikipedia
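The underlying test is simple arithmetic: event B lies in event A's future light cone exactly when light could have covered the spatial separation in the elapsed time. A minimal sketch in flat spacetime (the function name and event layout are illustrative assumptions):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def in_future_light_cone(event_a, event_b):
    """True if event_b lies inside or on the future light cone of
    event_a, i.e. a light-speed signal from A could reach B.
    Events are (t, x, y, z) tuples in seconds and meters."""
    ta, xa, ya, za = event_a
    tb, xb, yb, zb = event_b
    dt = tb - ta
    if dt < 0:
        return False  # B happens before A; causation is impossible
    # Spatial separation must not exceed the distance light travels in dt.
    distance = math.dist((xa, ya, za), (xb, yb, zb))
    return distance <= C * dt
```

For example, an event one second later and 200,000 km away is reachable (light covers ~300,000 km in a second), while one 400,000 km away is not; "non-intersecting light cones" just means no pair of events in the two regions ever passes this test.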

No civilization bothers making self-replicating hyper-intelligent probes that can survive millions of years in space* because they invent the Matrix first.

*that’s probably the most unbelievable quality

Well, it’s not much of a Singularity, is it, if it leaves our daily lives unchanged?

If it can’t be described, that does not mean it can’t be duplicated.

If they’re strong AIs, they might simply be scientifically curious on their own account.

No, it means the signals are limited by the speed of light so they haven’t gotten here yet.

And with all of space and the matter/energy within accelerating away from each other, there are certainly domains that will never share a light cone.

I don’t really believe there can be anything that’s a physical thing (or an attribute or function of a physical thing) that can’t be described.

Well, you can use words to “describe” anything, but that does not mean your description will have meaning. For example, how would you describe color to someone who has been blind from birth? Granted, you could talk about different wavelengths of light, but how could you make a blind man understand the experience of seeing yellow as the sighted experience it?

That’s quite far away from anything I meant though. Is there anything that we know exists, in the context of the human mind, but can reasonably label as defying description (i.e. not just beyond our current understanding, but actually beyond the possibility of grasp)?

I don’t buy the arguments that strong AI are necessarily so durable that they can tolerate high acceleration and the indignities of interstellar travel. They’re going to be prone to breakage from collision and acceleration, other forms of deterioration, and of course freezing, same as biologicals.

Just because something is artificial doesn’t mean it is ageless and magic.

Why would that number be any higher than one (or, alternatively, why would the decisions be “independent”)? If Civilization A has the idea that X is How Things Are Done, and Civilization B shows up and starts doing Y, the inevitable result is that they will either 1) agree on X, Y, or perhaps Z, or 2) one of them will convince (maybe literally, maybe euphemistically) the other to cease and desist from its unacceptable activities.