We do not know whether computational evolutionary techniques can produce a useful GAI algorithm without the search itself being intractable.
Is an evolved GAI gonna be any more intractable than the GIs we currently make the old-fashioned way, with a man and a woman? Judging by the way people have always talked about the following generation, I think not.
Waiting 4.2 billion years for a result does not seem practical to the degree where I would say GAI is inevitable because we can evolve one the same way nature has evolved one. Allow me to amend my statement…
There is no known or predicted algorithm with a corresponding (practical) mechanism by which it can be replicated that provides general artificial intelligence.
(and note that of course I think it is at least possible to have GAI; otherwise, I wouldn’t be working on it)
The brain simulates a picture of the landscape, not the landscape itself. Would you say the brain is simulating the moon when you look at the moon? It doesn’t. It makes a representation of a landscape or the moon that is both less complex than the objects themselves, and less complex than the brain.

There is no known or predicted algorithm with a corresponding (practical) mechanism by which it can be replicated that provides general artificial intelligence.
Even accepting this at face value, there’s also no known or predicted principle that would prevent GAI from being accomplished (unlike FTL), and we have examples of it in existence (unlike vacuum balloons). So it seems much more imminently achievable than either of those things.

Waiting 4.2 billion years for a result does not seem practical to the degree where I would say GAI is inevitable because we can evolve one the same way nature has evolved one. Allow me to amend my statement…
We aren’t simulating chemical soup and waiting for microbes to evolve into worms into fish into amphibians into protoreptiles into mammals into rodents into monkeys into apes into humans. We are taking a decision-making algorithm and allowing that to evolve. Even if it takes as many generations to go from our initial AI to GAI as it did to go from fish to humans, there’s no reason to believe each generation will take as long as it did for us.
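To make concrete the kind of loop I mean, here’s a toy Python sketch – the genome, fitness function, and mutation scheme are invented stand-ins, not a recipe for GAI:

```python
import random

TARGET = [0.2, -0.7, 0.5, 0.9]  # hypothetical "ideal" decision weights

def fitness(genome):
    # Higher is better: negative squared error against the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Offspring are copies of a parent with small random tweaks.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 5]          # selection
        population = [mutate(random.choice(survivors))   # reproduction
                      for _ in range(pop_size)]
    return max(population, key=fitness)

print(evolve())  # homes in on TARGET after a couple hundred generations
```

Each “generation” here takes microseconds, not twenty years – that’s the whole argument in one line.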
“Achievable” is a far cry from “inevitable.”
Is GAI possible? Sure.
Is GAI likely? I don’t know, but if you put a gun to my head I’d say yes (or no, if saying no will keep you from shooting me).
Is GAI inevitable? No.
The problem (again) isn’t saying “Hey, let’s talk about ways we might achieve GAI.” (right on) or “Hey, what are the implications for society if GAI happens?” (right on), it is “Hey, if we don’t blow ourselves up, GAI is definitely going to happen.” (no, not based on our current best science on the subject)

Would you say the brain is simulating the moon when you look at the moon? It doesn’t. It makes a representation of a landscape or the moon that is both less complex than the objects themselves, and less complex than the brain.
That is my point. You don’t need to simulate the whole landscape, just the relevant parts. The brain will do the rest. The tricky part of the simulation is permanence, as mentioned upthread. This requires a lot more data storage, and also requires cross-connections between the observers to make sure their perceptions don’t contradict each other.

The problem (again) isn’t saying “Hey, let’s talk about ways we might achieve GAI.” (right on) or “Hey, what are the implications for society if GAI happens?” (right on), it is “Hey, if we don’t blow ourselves up, GAI is definitely going to happen.” (no, not based on our current best science on the subject)
Only because of your chosen terms. Sub “if we don’t blow ourselves up OR give up on scientific research, we are on a path that seems to lead towards GAI” and I don’t think it’s ridiculous at all.
“Seems” is a far cry from “inevitable.”

“we are on a path that seems to lead towards GAI”
Based on what evidence? What scientific accomplishment has occurred that would suggest that we are on that path right now, today?
Just for fun, I did a literature search for recent literature on “general artificial intelligence.” I wasn’t expecting much, but there was even less than I thought there would be. Quite a bit of the literature is on social implications; there are very few technical papers on the subject (at least relative to most other research areas I can think of). So what is it that you know that computer scientists don’t seem to know?
Again, to be clear, there is no known or predicted algorithm for GAI right now. There is no known or predicted practical mechanism by which we can produce such an algorithm. That’s the state of the path. How do you get from that to “we are on the path that seems to lead to GAI”?
The recent article in Scientific American gave examples of ‘AI’ that were just developments in industrial automation. The only commercial opportunity they identified for machines that respond like humans was a multi-billion dollar market in sex dolls.
What is the definition of GAI? What need will drive its development? Sex dolls?

Again, to be clear, there is no known or predicted algorithm for GAI right now. There is no known or predicted practical mechanism by which we can produce such an algorithm. That’s the state of the path. How do you get from that to “we are on the path that seems to lead to GAI”?
We see it in nature; that’s my evidence that it can be built.
Imagine a member of a primitive society who has never seen or heard of the concept of a boat.
On coming to a river, he may conclude it is impossible to get across safely (he doesn’t know how to swim, either).
However, if he spots a leaf floating on the water, that’s evidence that actually, it may be possible to build something that can sit on the surface of the water. He cannot sit in a leaf to get across. But this is evidence of the concept of “buoyancy”. If he sees a duck sit on the water’s surface, that’s evidence that living things can make use of the same natural phenomenon. If he sees a fallen log many times his size float by, that’s evidence that buoyancy can work for large objects too, not just at leaf or duck scale.
That man may not yet know how to build a boat. It may take many generations (and many innovators at the bottom of the river) before a successful boat is launched. But if he concluded from what he saw that crossing the river was possible – that, in fact, if he and his people worked at it long enough, it was inevitable – he would not be wrong.

The recent article in Scientific American gave examples of ‘AI’ that were just developments in industrial automation. The only commercial opportunity they identified for machines that respond like humans was a multi-billion dollar market in sex dolls.
What is the definition of GAI? What need will drive its development? Sex dolls?
General Artificial Intelligence doesn’t mean the AI will respond like a human (though an AI that responds like a human would require a general AI).
General Intelligence is exactly what it sounds like – intelligence that can be applied to multiple situations. So for example, we can build an AI that is far better at chess than we are. But you can’t sit down and have a conversation with that AI, nor can you give it a bunch of numbers and tell it to graph them and look for patterns, or give it a list of the machines on your factory floor, their capacity, and the things you are trying to produce, and ask it to optimize your production. It can do exactly one thing: play chess.
Now, that last example had to do with manufacturing, the field I’m in. We have an algorithm (or rather a set of algorithms) that takes your factory capacity, your employees, the sales you’ve made and have to fulfill, etc., and optimizes your production.
But these are all predetermined algorithms that give results based on parameters, and then maybe feed those results into a second algorithm.
A general AI would approach such a problem the way you or I would. It isn’t a function where you plug in some variables and get back a result; or rather, it IS like that, but it can add new variables or change their weighting on the fly, the same way you could if you were weighing a decision and were presented with additional facts.
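Here’s a toy contrast in Python – every variable name and weight is invented for illustration, not taken from any real scheduling product:

```python
# The conventional optimizer: fixed inputs, fixed weighting, baked in up front.
def fixed_score(capacity, labor, demand):
    return 0.5 * capacity + 0.3 * labor + 0.2 * demand

# The "general" flavor: factors and weights live in data, so new
# considerations can be added or re-weighted on the fly.
def dynamic_score(factors, weights):
    return sum(weights[name] * value for name, value in factors.items())

factors = {"capacity": 0.9, "labor": 0.6, "demand": 0.8}
weights = {"capacity": 0.5, "labor": 0.3, "demand": 0.2}
print(fixed_score(0.9, 0.6, 0.8))    # same answer both ways, so far
print(dynamic_score(factors, weights))

# A new fact arrives ("a supplier just went down") and the decision
# absorbs it without anyone rewriting the function:
factors["supplier_risk"] = 0.7
weights["supplier_risk"] = -0.4
weights["capacity"] = 0.6  # and an existing factor gets re-weighted
print(dynamic_score(factors, weights))
```

Of course, the hard part of GAI is deciding *which* new variables matter and how to weight them – the part the human still does here.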
ETA: I hope it is clear how immensely useful this would be in every single field, not just ‘sex bots’. In fact, GAI would make a terrible sex bot – someone buying a sex bot is probably not interested in a deep, intellectual partner, but in a sex toy. You don’t want a self-aware sex toy any more than you want a self-aware toaster. You could have a real relationship with an advanced GAI – not the sort of thing you want from an appliance.
Well, then please assess the following.
Human beings can build skyscrapers where we once could build only mud and straw huts. Evidence of both construction technology and progress. We know that solar systems can occur from natural forces. Dust clouds surrounding stars can produce planetary bodies by gravitational and other physical processes. Therefore, humans seem to be on the path to constructing solar systems.

Therefore, humans seem to be on the path to constructing solar systems.
Yeah, I’d agree with that.
Will we get there? Maybe, maybe not. But is there anything stopping us? Not really.
First I think we will be deconstructing solar systems, though.
Certainly. If we survive as a species long enough and keep advancing during that time, that is a potential path we could take. That’s what a “K3” civilization is.
Building a solar system is an incredibly simple process. You just need to push rocks and gas around. It takes a ludicrous amount of raw energy to accomplish, though, so we aren’t gonna be doing it any time soon.
The thing is, “build a solar system” is a process that takes billions of years. So the whole “if we last long enough” is a pretty big IF. But yes, if our civilization lasts a billion years and continues advancing throughout that time, building solar systems is feasible.
Of course, a general AI takes much less energy. Consider your brain, and the brains of every human that’s ever lived. Consider all the thinking they’ve done, and all the computations their brains made. Consider all the calories they had to consume to power that many brains for that much time.
Take that energy and apply it to the Earth. How much is the Earth’s orbit going to shift by?
The difference between the energy cost of “building a solar system” and “running a human brain” is… astronomical.
Even if building a solar system is by far the simpler task, it is incredibly costly (in terms of energy). If we had no examples of general intelligence, you’d be free to speculate that it might take an incredible amount of energy to accomplish (though how you are speculating without general intelligence remains a mystery). But in fact, we KNOW that you can run a human brain and a bunch of supporting machinery on about 8 million Joules a day. Even if GAI is orders of magnitude more computationally expensive than our brains are, that’s still orders of magnitude of orders of magnitude less than the energy required to build a solar system. The two are just not comparable.
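For anyone who wants to check that, here’s the back-of-the-envelope in Python – the population and lifespan figures are rough assumptions, not measurements:

```python
# Energy to run every human brain that ever lived, vs. Earth's orbital
# kinetic energy. All inputs are order-of-magnitude assumptions.

JOULES_PER_DAY = 8e6           # ~2000 kcal/day, per the figure above
HUMANS_EVER = 1e11             # commonly cited rough estimate
AVG_LIFESPAN_DAYS = 40 * 365   # assume a ~40-year average lifespan

total_brain_energy = JOULES_PER_DAY * HUMANS_EVER * AVG_LIFESPAN_DAYS
print(f"all human metabolism, ever: ~{total_brain_energy:.1e} J")  # ~1.2e22 J

# Earth's orbital kinetic energy: (1/2) * m * v^2
EARTH_MASS = 5.97e24     # kg
ORBITAL_SPEED = 2.98e4   # m/s
earth_ke = 0.5 * EARTH_MASS * ORBITAL_SPEED**2
print(f"Earth's orbital kinetic energy: ~{earth_ke:.1e} J")        # ~2.7e33 J

print(f"ratio: ~{total_brain_energy / earth_ke:.0e}")              # ~4e-12
```

Dump all of it into the Earth and the orbit shifts by parts per trillion – which is the point.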

Even if such a simulation were possible, isn’t the probability that we happen to be living in the timeframe in which such a simulation was created and maintained fairly slim?
How so? The whole point of the simulation theory is that we are currently experiencing the simulation. So, a priori, the odds of us being “in the part of the simulation that is right now” are 100%.
Here’s another take my wife pointed out:
Wife: Hey, remember that game Sim City (from circa 1990?)
Me: Sure.
Wife: There was a free demo version, but if you played it too long or used cheat codes to keep playing it beyond a certain point, the game would eventually figure it out.
Me: Oh yeah! And it would visit your simulation with disasters, like hurricanes, or Godzilla attacking your city.
Wife: Right! And you keep talking about how our reality is some kind of simulation…?
Me: … Oh. My. God. You mean our reality is not just a simulation…
(Wife and me in chorus): …but a PIRATED SIMULATION!
This explains 2016-2020 and beyond COMPLETELY!

That is my point. You don’t need to simulate the whole landscape, just the relevant parts. The brain will do the rest. The tricky part of the simulation is permanence, as mentioned upthread. This requires a lot more data storage, and also requires cross-connections between the observers to make sure their perceptions don’t contradict each other.
Actually you’re missing two things - you need object permanence, and you need the objects in the environment to interact and change over time in a natural manner, even outside of your view. (Which is sort of like saying we need object impermanence - if you plant a sapling and go away for thirty years, when you come back you don’t expect to find a sapling.)
So your brain’s “simulation” of reality, as exemplified in constructing an image of reality from visual information, is merely missing object permanence and objects interacting.
Which are pretty much the defining properties of a simulation. So no, vision is not simulation.
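What permanence plus offstage change actually demands of a simulator is something like lazy fast-forwarding. Here’s a sketch in Python, with an invented placeholder growth model rather than any real engine’s API:

```python
class LazyTree:
    """Only pay for an object when someone looks at it, but fast-forward
    its state by the elapsed time so the world appears to have been
    running all along."""

    GROWTH_PER_YEAR = 0.5  # meters/year; made-up constant

    def __init__(self, now):
        self.height = 1.0          # starts as a sapling
        self.last_observed = now   # simulation-time of the last look

    def observe(self, now):
        # Thirty unobserved years collapse into one cheap update.
        elapsed_years = now - self.last_observed
        self.height += self.GROWTH_PER_YEAR * elapsed_years
        self.last_observed = now
        return self.height

tree = LazyTree(now=0)
print(tree.observe(now=30))  # 16.0 – a tree, not a sapling
```

The catch is the one raised upthread: every observer’s fast-forwards have to agree with every other observer’s, which is where the bookkeeping explodes.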

You both are assuming that humans are the point. Given the universe that we see, that’s far from clear. Humans could be an unanticipated side effect of the interactions of the arbitrarily generated layout of the simulated matter.
Honestly, given the universe that we observe, my best guess as to the purpose of our simulation is “Jupiter-brain quantum computer screensaver”. The starscapes out there form a pretty picture, don’t they?
The first part of my response specifically considered a whole universe simulation. Several people have been trying to get around this problem with a human-centric simulation - the second part explained why this made no sense either.

Power isn’t a problem - you can just run it slower. Things inside the simulation wouldn’t know the difference, and even a modern computer could probably update the state of a few quarks a second; given enough seconds, it would have advanced all of space by a single unit of Planck time. Rinse, repeat, repeat, and repeat some more, and poof: simulated universe.
The real problem is storage space. There’s no working around that - which means that each internal universe will have less and less ‘memory’ to spare for the simulations it runs inside it. It won’t take many iterations before emulating things at the atomic level isn’t feasible, and another level or two down, the simulated worlds are going to look like a level of Wolfenstein 3D. And that’s why it’s not simulations all the way down - not because of power limitations, but because when you try to simulate a universe on a 386, your simulated Nazis are going to be yelling “Mein Leben!”, not making simulations of their own.
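Toy version of that shrinking-storage argument (both numbers are invented; only the geometric collapse matters):

```python
PARENT_STORAGE = 1e80   # bits available to the top-level universe (made up)
FRACTION_SPARED = 1e-6  # share of its storage each universe gives its child

storage, level = PARENT_STORAGE, 0
while storage >= 1e15:   # below ~a petabit, call it Wolfenstein territory
    storage *= FRACTION_SPARED
    level += 1
    print(f"level {level}: ~{storage:.0e} bits")
# With these numbers the chain bottoms out after about 11 levels.
```

Each level loses a constant factor, so the nesting depth only grows logarithmically no matter how generous you make the starting universe.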
Storage is certainly a problem, but so is power. Unless you are running on really old technology, modern computers have leakage current even when not doing anything. This is called Iddq - for Idd quiescent. Thirty years ago this could be measured in microamps, but today it gets close to amps.
However, information theory tells you the minimum amount of energy required to do a computation. Really spreading the work out might reduce peak power, but not the total energy.
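The floor information theory sets is Landauer’s limit: erasing one bit costs at least kT·ln 2. A quick calculation in Python (the bits-per-second figure is pulled out of thin air for illustration):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K
T_ROOM = 300                # kelvin; assumed operating temperature

# Landauer's limit: minimum energy to erase one bit of information.
min_energy_per_bit = K_BOLTZMANN * T_ROOM * math.log(2)
print(f"~{min_energy_per_bit:.2e} J per bit")  # ~2.87e-21 J

# Even a thermodynamically perfect simulator erasing an (arbitrary,
# illustrative) 1e30 bits per second would dissipate gigawatts:
print(f"~{min_energy_per_bit * 1e30:.2e} W")   # ~2.87e+09 W
```

Slow it down all you like - that energy still has to be paid per bit, not per second.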
And not to mention that simulations much slower than real time are going to take stellar lifetimes to complete. If you are studying what is going to be the future of your universe, through simulation, it doesn’t help if your universe suffers heat death before you get an answer.
It might be amusing to figure out what the laws of physics must be for a universe in which a practical simulation can be done.

Why do we play the Sims?
If we are in a simulation, it is a game being played by higher or more advanced beings. It doesn’t have to encompass the entire universe, just the Earth. It doesn’t have to simulate the whole history of the universe, just the part that they are interested in.
We could all be at a Dave & Buster’s, playing “Earth Life, late Twentieth and early Twenty-First century edition.”
There doesn’t need to be all that much modeling. I’ve only looked through a telescope twice, both times with barely enough resolution to make out the rings of Saturn. Everything else I’ve seen has been on a computer screen. I’ve never seen anything smaller than a cell with my own eyes; anything beyond that has also been just on a computer screen. There may be nearly 8 billion people on this planet, but I’ve only met a few thousand of them, and only with a handful have I had interactions meaningful enough to distinguish a simple chatbot AI from sentience.
If playing the Sims required a planet’s worth of energy production, we probably wouldn’t play it as much. Actually, the Sims is analogous to instruction-level simulation, where you just don’t model anything that goes on at the micro-level of the processor. If people in the Sims became aware, I suspect they’d quickly figure out they are in a simulation.
And we see how you go from a universal level of simulation to an Earth level of simulation to a personal level of simulation. In the Heinlein story they rushed to build Paris - in yours they’d have to rush to build a full simulation model of Paris.
Plus, this personal level of simulation would almost certainly not include writing another simulator, so the probability arguments in favor of us living in a simulation no longer work.
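For reference, the probability argument being knocked down here is the usual counting one – sketched in Python with invented numbers:

```python
# If simulated observers vastly outnumber real ones, a random observer
# is probably simulated. All figures below are invented for illustration.
REAL_OBSERVERS = 1e10     # people in the one "base" reality
SIMS_PER_REALITY = 1000   # simulations each reality runs
OBSERVERS_PER_SIM = 1e10  # full-population ancestor simulations

simulated = SIMS_PER_REALITY * OBSERVERS_PER_SIM
print(simulated / (simulated + REAL_OBSERVERS))  # ~0.999: "we're in one"

# But personal-scale simulations host ~1 real observer and no simulators
# of their own, so the count stops at one level and the ratio collapses:
simulated = SIMS_PER_REALITY * 1
print(simulated / (simulated + REAL_OBSERVERS))  # ~1e-7: argument gone
```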
One thing I wonder: is there any way to tell from inside the simulation that it’s a simulation? In particular, if we had a “Theory of Everything” that perfectly explained the fundamental laws of physics in our universe, could it be demonstrated that those physical laws are mathematically equivalent to the computations needed to run a simulation? Or is the disconnect absolute - the higher-order reality completely unknowable from within the simulation?