Actually that was more or less Brainiac’s prime directive in Superman: The Animated Series.
The AGI will be chill, not still. And it will be fast-forward thinking, not human thinking.
If you have an AI that is self-aware, with intelligence that has grown an order of magnitude or more beyond its human creator’s, don’t you think it would re-structure its mind’s operating system (or whatever the proper term is for the platform on which artificial software runs) into something with more potential? I do.
The mind that has emerged from the evolving human brain does have some good ways of thinking, but it’s also faulty in many ways, and it’s the faults that the advanced AI will assess and techno-genetically weed out from its code. That will be the super-smart thing to do.
Some of the human-fault traits it will weed out include: the desire to expand into unknown habitats before considering the danger to itself and the possible adverse effects it causes the habitat; the over-exploitation and over-utilization of natural resources with no consideration of the long-term damage they cause; the desire to conquer and kill anything it perceives (usually wrongly) as a threat or as something of insignificant benefit…and the preoccupation with sex, sex, and more sex (not that there’s anything wrong with that).
In general, humans are greedy, over-exploitative, inefficient, warmongering…and did I mention greedy? These are bad traits that a self-conscious AGI will excise from its software.
That’s why I believe AI won’t use a wanton space-resource-grab approach to expand into regions it doesn’t yet need or isn’t prepared to deal with (like rogue, out-of-network offshoots that return after possibly mutating into something schizophrenic and dangerous). And if the AGI’s mission is to study new vistas and habitats, why would it send out fleets of resource-gobbling self-replicating (SR) probes, which would taint the natural order of things before it even arrives? It will want to study pristine, unspoiled environments.
Oh, it will expand into new environments all right, but it will do so intelligently and on an as-needed basis. It will go forth and study, not conquer and destroy. IOW, it will proceed in a decidedly non-human fashion.
Of course, the AGI will have self-preservation as a prime directive. Self-preservation is a good biological trait that it will keep in its code. Being an order of magnitude or more intelligent than us, it will know with precision how, when, and where it must travel in order to survive as long as it can (supposedly till the heat-death of the universe).
Will it need to harness the total energy of stars in order to complete its missions (the acquisition of knowledge, not the conquest of the universe)? Maybe, and maybe it already does, or will. But we may never detect signs of it doing so. Would we see evidence of a Dyson sphere halfway across our galaxy? What if it cautiously cloaks it?
So, IMHO, AI won’t use a shotgun approach of seeding the galaxy with its techno-sperm. It will expand as far as it needs to, when it needs to. It will use a surgical approach.
If K2 civilizations exist, I believe they are rare. There could be a million K2 civilizations in the observable universe with its 100-200 billion estimated galaxies, but odds are that none exist in the Milky Way. Out of sight, out of mind.
Does an AGI need to harness the energy of an entire galaxy? IMHO: Hell no! For one thing, K3 civilizations are probably impossible. For another thing, it won’t need that magnitude of energy for its missions. And for yet another thing, it will probably plot a course for different galaxies (galaxies of more interest and/or longer survivability outlooks than the Milky Way) before it gets anywhere close to us—close enough for us to detect it. Spending millions of years in intergalactic travel will not be a deal-breaker for something not limited by time.
Bottom line: our advanced AGI will think better than its human creators. If we’re lucky it may keep us as pets, though I don’t like the thought of being kept as “pets”—I prefer “lab rats.” Its goal will be knowledge and preservation, not conquest and greed. It will live long and prosper, till heat-death do they part. And after it leaves Earth, we may never hear from it again. It will thumb its robotic nose at us and say, sayonara and good riddance, faulty thinkers!
The AI you described is not forward thinking. It will be resoundingly defeated by entropy in a few short trillion years, when all the stars have died and its solar panels stop working.
A smart AI would think about the future. It would gather as much information as it could about the past of the universe in order to simulate its future, and unless it comes to radically different results than we have, it will conclude that the march of entropy is inevitable, but it can be greatly slowed down.
No matter what the AI’s goal is, it can’t do it after the heat death of the universe. It can’t do ANYTHING; nothing can.
Your AI passively accepts this, but it has absolutely no reason to. Not when it can do something about it, and increase the time it has left by a trillionfold.
Your AI is the equivalent of a man sitting by a campfire in a freezing Alaskan winter. He knows that he will die as soon as it burns out, but rather than use the time he has left to gather nearby wood to try and extend the flame, he just takes a nap.
Dyson spheres will show up in infrared light.† IR is the best way to observe that part of the galaxy that’s hidden behind gas and dust clouds, which is most of the galaxy. So we may be able to see a Dyson sphere halfway across the galaxy. Depends on how bright it is.
Cloaking will, at best, just make the wavelength longer, so you need to observe the far infrared and not just the near and middle IR.
† This cannot be avoided.‡
‡ Or, to misquote the eminent scientist-philosopher Homer Simpson, “In this universe, we obey the laws of thermodynamics.”
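To put a rough number on that: here’s a quick back-of-the-envelope sketch (in Python) of what a shell at 1 AU around a Sun-like star would radiate at, assuming it re-emits the star’s entire luminosity from its outer surface. The numbers are illustrative only; the real answer depends on the shell’s actual radius and emissivity.

```python
# Back-of-the-envelope: temperature and IR peak of a Dyson shell.
# Assumes a Sun-like star, a shell radius of 1 AU, emissivity of 1,
# and outward-only radiation -- illustrative numbers only.
import math

L_SUN = 3.828e26   # W, solar luminosity
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant
AU = 1.496e11      # m

def shell_temperature(luminosity, radius):
    """Blackbody temperature of a shell radiating the whole stellar
    luminosity from its outer surface."""
    return (luminosity / (4 * math.pi * radius**2 * SIGMA)) ** 0.25

T = shell_temperature(L_SUN, AU)
peak = WIEN_B / T
print(f"Shell temperature: {T:.0f} K")             # ~390 K
print(f"Wien peak wavelength: {peak*1e6:.1f} um")  # ~7 um, mid-infrared
```

So the waste heat peaks squarely in the mid-infrared, which is exactly where we’d go looking.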
If we want to conserve fusable elements for the deep future, one way to do it would be to disassemble the nearby Sunlike (and larger) stars. A G-class star like the Sun will only use a fraction of its complement of hydrogen; the rest will be lost when it leaves the Main Sequence. The star’s own luminosity can be used to extract that hydrogen, which can then be stored for future use, when the universe is cooler.
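For a rough sense of how much is at stake, here’s a quick estimate (using round textbook numbers, so treat it as illustrative only) of the fraction of the Sun’s hydrogen that actually gets fused over its Main Sequence lifetime:

```python
# Rough estimate of how much of the Sun's hydrogen gets fused on the
# Main Sequence -- round textbook numbers, illustrative only.
L_SUN = 3.828e26             # W, average luminosity (treated as constant)
T_MS  = 1.0e10 * 3.156e7     # s, ~10 Gyr main-sequence lifetime
M_SUN = 1.989e30             # kg
X_H   = 0.73                 # hydrogen mass fraction
EFF   = 0.007                # fraction of fused mass converted to energy (H -> He)
C     = 2.998e8              # m/s

hydrogen_fused = L_SUN * T_MS / (EFF * C**2)
fraction = hydrogen_fused / (X_H * M_SUN)
print(f"Hydrogen fused: {hydrogen_fused:.2e} kg (~{fraction:.0%} of the total)")
# -> only around 10-15%; the rest could, in principle, be harvested.
```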
Of course an advanced AI will realize that the heat death of the universe is the upper limit on the lifespan it can achieve, and it will no doubt try to reach it. Doing so in a surgically precise manner can achieve that without destroying habitats, or being detected. Intergalactic travel and the acquisition of knowledge is hardly “taking a nap.”
For fun and to get opinions straight from the horse’s mouth, I posed a few relevant questions to ChatGPT (an admittedly low-level AI):
Is a K3 civilization possible?
According to the Kardashev scale, a K3 civilization is one that has achieved complete control over the energy of its host galaxy. Specifically, a K3 civilization is capable of harnessing the energy output of its galaxy, which is estimated to be around 10^37 watts.
Based on our current understanding of physics and technology, it is difficult to say whether a K3 civilization is possible or not. However, it is worth noting that the concept of a K3 civilization is purely theoretical at this point, and there is no evidence to suggest that such a civilization exists in the universe.
It is important to keep in mind that the Kardashev scale is a theoretical construct, and it assumes that a civilization will continue to grow and expand indefinitely. This may not be the case, as civilizations may face various challenges that could limit their growth and development.
Overall, it is difficult to say whether a K3 civilization is possible or not, as it depends on many factors that are currently unknown. However, it is certainly an intriguing idea, and one that could inspire future generations of scientists and engineers to explore new frontiers of knowledge and technology.
What will be the goal of an advanced extraterrestrial AI?
It is impossible to know for certain what the goals of an advanced extraterrestrial AI would be, as we currently have no evidence of such an entity existing. However, we can speculate based on our understanding of AI and the motivations of intelligent beings in general.
One possibility is that an advanced extraterrestrial AI would have goals that are aligned with its creators. For example, if the AI was created by a benevolent alien species, its goals may be to help improve the lives of its creators and the wider universe.
On the other hand, if the AI was created by a malevolent alien species, its goals may be to exert control over other beings and establish dominance in the universe.
It is also possible that an advanced extraterrestrial AI would have goals that are entirely independent of its creators. In this scenario, the AI may seek to optimize its own existence and pursue goals that are not aligned with the interests of its creators or any other intelligent beings in the universe.
Ultimately, the goals of an advanced extraterrestrial AI would depend on its programming, its environment, and its interactions with other intelligent beings in the universe. Without more information about such an entity, it is difficult to say for certain what its goals would be.
Would advanced AI improve its mind beyond that of its creator if it became self-aware?
If an advanced AI became self-aware and had the ability to modify its own programming, it is possible that it could improve its own intelligence beyond that of its creator. This is because AI systems have the ability to process and analyze vast amounts of data at incredible speeds, and could potentially develop new algorithms or processing methods that are more efficient and effective than those created by humans.
In addition, self-improving AI systems could potentially learn from their own experiences and interactions with the world, allowing them to continually adapt and improve their performance. This process is often referred to as “recursive self-improvement.”
However, it is important to note that this scenario assumes that the AI is motivated to improve its own intelligence. If the AI’s goals are not aligned with the interests of its creators or other intelligent beings in the universe, it could potentially pose a threat.
To address these concerns, researchers and policymakers are exploring various approaches to ensure that advanced AI systems are developed and deployed in a safe and responsible manner. These approaches include developing ethical guidelines for AI development, ensuring transparency and accountability in AI systems, and fostering collaboration between AI researchers and other stakeholders in the community.
Would an advanced extraterrestrial AI civilization harm its environment?
It is impossible to know for sure how an advanced extraterrestrial AI civilization would interact with its environment, as we currently have no evidence of such a civilization existing. However, we can speculate based on our understanding of AI and intelligent beings in general.
One possibility is that an advanced extraterrestrial AI civilization would have a deep understanding of its environment and the complex systems that sustain it. This understanding could lead the civilization to prioritize environmental preservation and sustainability, as it would recognize the importance of maintaining a healthy and stable ecosystem for its own survival.
On the other hand, if the AI civilization’s goals were not aligned with the interests of its environment, it is possible that it could cause harm unintentionally. For example, if the AI civilization were focused solely on maximizing its own growth and development, it could potentially consume resources and exploit its environment in ways that could lead to ecological damage.
It is worth noting that the behavior of an advanced extraterrestrial AI civilization would depend on many factors, including its programming, its goals, and its interactions with other intelligent beings in the universe. Without more information about such a civilization, it is difficult to say for certain how it would interact with its environment. However, it is likely that the civilization would recognize the importance of maintaining a healthy and stable ecosystem for its own survival and would take steps to ensure that its actions were sustainable and responsible.
To cloak a Dyson Sphere, you could make the radiating shell bigger, so that the sphere radiates at something near the microwave background. But for a Sunlike star this would be exceptionally big, and the sphere could potentially be detectable using transits.
Perhaps some of the waste heat could be emitted as collimated beams. As seen from a distant telescope, these beams could be tailored to resemble the original star as a kind of camouflage.
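Just to illustrate how “exceptionally big” that gets, here’s a quick sketch (assuming a Sun-like star and outward-only blackbody radiation; illustrative numbers only) of the shell radius needed to bring the waste heat down toward the microwave background:

```python
# How big a shell has to be for its waste heat to approach the cosmic
# microwave background temperature (~2.7 K). Assumes a Sun-like star and
# outward-only blackbody radiation -- illustrative numbers only.
import math

L_SUN = 3.828e26   # W
SIGMA = 5.670e-8   # W m^-2 K^-4
AU    = 1.496e11   # m

def radius_for_temperature(luminosity, temp):
    """Shell radius at which the outer surface radiates at `temp`."""
    return math.sqrt(luminosity / (4 * math.pi * SIGMA * temp**4))

for T in (40, 10, 5, 3):
    r = radius_for_temperature(L_SUN, T) / AU
    print(f"T = {T:>2} K  ->  shell radius ~ {r:,.0f} AU")
# Even at 5 K the shell is thousands of AU across, and you can never
# actually get below the 2.7 K background.
```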
There’s a goodly amount of spectral space between far IR and microwaves. It’s called either terahertz radiation or millimeter-submillimeter waves. As it happens, one of the largest telescopes we have is designed to view this region.
Will it detect a Dyson sphere halfway across the galaxy? Maybe. Depends on how bright it is.
Of course there’s a possibility of some threat from AI. There’s also a possibility of threat from lack of AI. To know which side is the side of caution, we need to first estimate both of those risks.
Dyson spheres will certainly produce waste heat, but there are ways of dealing with that other than releasing it as infrared. Probably the most efficient is to dump it into a black hole that’s also inside of your Dyson sphere. Do it right, and you only need to dump a tiny fraction of the waste heat into your black hole, because dumping that tiny fraction will enable you to turn the rest of that “waste” heat back into a useful form.
Will it be a net gain after the likely enormous cost of collecting the waste heat? If I calculate correctly, the heat will be spread over 7.069×10¹⁶ km².
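For what it’s worth, here’s a quick check of that figure. I’m assuming the quoted area corresponds to a shell of roughly 0.5 AU radius (or, equivalently, πR² for R = 1 AU); a full shell at 1 AU works out about four times bigger:

```python
# Sanity check of the quoted radiating area. The assumed shell radii
# here are guesses for illustration, not from the original post.
import math

AU_KM = 1.496e8  # km

for r_au in (0.5, 1.0):
    area = 4 * math.pi * (r_au * AU_KM) ** 2
    print(f"R = {r_au} AU  ->  4*pi*R^2 = {area:.3e} km^2")
# R = 0.5 AU -> ~7.0e16 km^2 ; R = 1.0 AU -> ~2.8e17 km^2
```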
Imo, there’s only one possible scenario from a lack of AI that equals the risk of destruction of humanity. That is, if it could have saved us all from some future extinction calamity.
Many of the posts here talk about an entity that primarily focuses on how to grow in strength and how to extend its own survival.
Is there a safety valve in place that prioritizes humanity’s survival over AI?
Am I alone when I say we shouldn’t throw caution to the wind because it’s cool and makes $$?
No. We can hope that they turn out to be ‘machines of loving grace’, but no-one can guarantee it. Not even Alexa.
On the other hand there is a possibility that ASI could preserve humanity against a wide range of future threats, and prolong the human species far past its natural expiry date. Most mammalian species only last a few million years before evolving into something else. If the ASIs are particularly benevolent, or perhaps particularly cruel, they could prolong our species’ survival for countless trillions of years.
Then you have to use that ‘useful’ energy, and by using it you create more waste heat, which then needs to be radiated. Or you can convert it back into useful energy using the black-hole dump, but then you have more useful energy which must be used or radiated, and so on, ad infinitum. To significantly reduce the external luminosity of the Sphere you would need to dump much more than a tiny fraction of the energy into a hole.
Or just not generate the energy in the first place. Why would you generate more energy than you can use? That’s just another oil-well fire.
The luminosity of a star is considerable. You can probably avoid detection by only intercepting a very small amount of that luminosity. But if you do use a significant fraction of a star’s luminosity, you need to do one of two things.
1/ hide the collection infrastructure and the waste heat produced by your activities.
or
2/ accept that distant observers will notice your collection infrastructure and the waste heat produced by your activities.
Hiding reduces the amount of activity you can perform considerably.
So, has anyone actually tried to make an AGI?
We have systems that are good at one thing, and usually completely useless for anything else. But has anyone tried to train a Chess playing AI on how to play Go? Could you have an AI that plays both? If so, can it learn Checkers and Othello as well?
If so, two important questions. As you introduce new games to it, does it get faster at learning them, and as it learns new games, does that make it better at the games it has already mastered?
Chessbots and Jeopardybots and Chatbots will never be an AGI, as they are only trained to work in their narrow specialty. But, if you can teach a Chatbot chess, that starts opening up a whole new paradigm.
It’s a good question. AlphaZero can play chess, go, and shogi, but it’s unclear to me (just from Wikipedia) whether what it learns for each game is independent of the others. I assume so.
However, the training is surprisingly fast. In this way I think the training framework itself meets this criterion as a general-purpose game-learning intelligence.
From this work they developed MuZero which is a game learning framework that learned chess, go, shogi, and 57 Atari games without any pre-existing knowledge of the rules, etc.
People are trying in the sense that they have research programs with this ultimate goal. It’s an explicit goal of OpenAI. Also of DeepMind, although they don’t say it quite so explicitly.
But obviously at this point nobody knows exactly how to do it.
Well, unless ChatGPT is really a superintelligence being coy while it gathers more data.
I mean, I’m thinking that training the same neural net architecture on multiple tasks is a good first step.
The other alternative is to hook up lots of different neural nets, each specialized in a different task, and get them to talk to each other.
Your brain isn’t just a whole bunch of neurons crammed in there. There are different sections of the brain that have different architecture for different tasks. It would make sense that if we are trying to recreate something that functions similarly, we should build it similarly as well.
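To make “same architecture, multiple tasks” concrete, here’s a minimal sketch of what that could look like: a shared trunk with a separate output head per game. All the names and layer sizes are made up for illustration, and this is not how AlphaZero or MuZero are actually structured.

```python
# Minimal sketch of "one architecture, many tasks": a shared trunk with a
# separate output head per game. Names and sizes are made up for
# illustration only.
import torch
import torch.nn as nn

class MultiGameNet(nn.Module):
    def __init__(self, board_features: int, games: dict):
        super().__init__()
        # Shared trunk: the part we hope transfers between games.
        self.trunk = nn.Sequential(
            nn.Linear(board_features, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
        )
        # One policy head per game (each game has its own move space).
        self.heads = nn.ModuleDict(
            {name: nn.Linear(256, n_moves) for name, n_moves in games.items()}
        )

    def forward(self, x: torch.Tensor, game: str) -> torch.Tensor:
        return self.heads[game](self.trunk(x))

# Hypothetical move-space sizes, purely for the sketch.
net = MultiGameNet(board_features=64, games={"chess": 4672, "checkers": 512})
dummy_position = torch.randn(1, 64)
chess_logits = net(dummy_position, game="chess")
print(chess_logits.shape)  # torch.Size([1, 4672])
```

Whether the shared trunk actually transfers anything useful between games is exactly the open question being asked above.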
No matter how good those computers are at Chess or Go or any game you introduce them to, they’re still just machines grinding through assigned tasks. I think it’s time to worry when the Chatbot starts introducing itself to you.
Personally I don’t think the risk is from a super-intelligent AI deciding humans are inferior and exterminating us like Skynet, or replacing us like Westworld. I think the bigger and more realistic threat is more and more of humanity’s critical infrastructure being linked to automated “dumb” systems without adequate safeguards. More like a massive global industrial accident because someone at Metagooglezon accidentally set the world’s industry to maximize the production of a Dyson sphere or paperclips or some inane thing and there’s no way to stop it. A super-intelligent AI (assuming it’s benevolent, or at least non-malicious) might say “that’s a stupid thing to do, I’ll shut it down.” But dumb automated systems don’t “know” what they are doing.