Wait. DON’T pee in elevators. No wonder I keep getting dirty looks.
If I could get one message out to all of the futurists out there, then it would be point #3. AI is dumb, useful, oh yes, very useful as a tool. But dumb.
Now if you’ll excuse me, my AIs are currently threatening to wipe out my bank account, so I need to go quell this little rebellion.
Not at all. Look out of the window at a detailed landscape. You can see a remarkable level of detail in that landscape, using your eyes - or so you think. What you really see is a tiny region of the visible landscape, using a small part of your optic system called the fovea. All the rest is simulated inside your brain. You can’t even perceive colours with the rest of your eye, yet you perceive this landscape in full (fake) colour. This proves that a suitably advanced system can simulate a much larger system convincingly, without being larger than that system.
Interestingly enough, some bible scholars do believe this is exactly what is happening. There are references in the Bible which appear to be coming from an author who exists in greater than 3 dimensions. For example, Revelation 6 talks about ‘the heavens being rolled up as a scroll.’ This sounds like absolute nonsense until you realize it actually would be possible to manipulate space in this way from a higher dimension.
The theory proposes that our souls are essentially ‘software’ running on 3 dimensional ‘hardware’ (our universe). After our lives are over the software is imported into the true reality, likely 10 dimensional.
Given that our reality behaves exactly like I would expect a simulated reality to behave, I don’t see how there’s an argument to be made against simulated entities creating their own simulations. Of course one could presume that if you go down enough levels the ‘computational complexity’ of the simulations would be too low to generate sentient entities within - if their physics can’t support computers more powerful than Commodore 64s then there you go. (Though on the other hand, the blocks in Minecraft are pretty dang big and chunky and people have still built some absurdly detailed things in there just by making them huge.)
Not only that, but there are ways to compress data, using fractal modeling for instance, so you only need to create the rule and the fractal “seed.” There was an actual computer game that worked that way: it could simulate more stuff than it would actually have stored in memory.
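To make that concrete, here’s a minimal Python sketch of the seed-plus-rule idea (the function names and numbers are mine, purely for illustration): nothing about the world is stored, and any chunk can be regenerated identically on demand from the seed.

```python
import hashlib

def chunk_terrain(seed: int, cx: int, cy: int, size: int = 4):
    """Deterministically 'generate' a terrain chunk from a seed.

    Nothing is stored: the same (seed, cx, cy) always yields the same
    chunk, so a huge world costs only the seed plus this rule.
    """
    rows = []
    for y in range(size):
        row = []
        for x in range(size):
            # Hash the coordinates together with the seed to get a
            # repeatable pseudo-random value for this cell.
            h = hashlib.sha256(f"{seed}:{cx}:{cy}:{x}:{y}".encode()).digest()
            row.append(h[0] % 10)  # terrain height in 0..9
        rows.append(row)
    return rows

# The 'universe' is never stored; any chunk regenerates on demand.
assert chunk_terrain(42, 0, 0) == chunk_terrain(42, 0, 0)
print(chunk_terrain(42, 1000000, -73))
```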
The simulation theory suffers from the same faulty thinking as every religion ever invented: that humans are somehow so special that the entire universe was created just for us.
Yet another problem with simulation theory is why would they do it? To simulate a whole universe and see what happens? Then we get the problem of the complexity of the universe. Or a solipsistic simulation only for us? Then why put us 14 billion years or so after t = 0? Why make the universe as big as it is? That wastes processing power and resources.
You simulate at the level which gives you the answer you want. You might be able to do a SPICE simulation of an entire microprocessor, but the processor would not only be released but would probably be at end of life before the simulation finished. If we live in a simulation, it is a SPICE simulation of the universe. That makes no sense.
Yes there is - the power requirements problem I mentioned earlier. The power required to run a simulation within a simulation propagates back up to the top level, which gets rid of the probability argument, the one that seems to say we must be living in a simulation since it is simulations all the way down.
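As a toy bit of accounting (all numbers invented for illustration): if every instruction a child universe executes must ultimately run on the top-level hardware, then each nested layer multiplies the real cost, so the stack can’t be free no matter how deep it goes.

```python
def top_level_cost(base_work: float, overhead: float, depth: int) -> float:
    """Total work the real, top-level computer performs to host `depth`
    nested simulations, each costing `overhead` times what it simulates.
    Illustrative model only: it just compounds the overhead per layer."""
    cost = base_work
    for _ in range(depth):
        cost *= overhead  # each layer's work is paid for at the top
    return cost

# Even a modest 10x emulation overhead per layer means five layers
# deep costs 100,000x the base workload at the top level.
print(top_level_cost(1.0, 10.0, 5))
```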
Just as dumb as turtles, if you ask me.
You both are assuming that humans are the point. Given the universe that we see, that’s far from clear. Humans could be an unanticipated side effect of the interactions of the arbitrarily generated layout of the simulated matter.
Honestly, given the universe that we observe, my best guess as to the purpose of our simulation is “Jupiter-brain quantum computer screensaver”. The starscapes out there form a pretty picture, don’t they?
Power isn’t a problem - you can just run it slower. Things inside the simulation wouldn’t know the difference, and even a modern computer could probably update the state of a few quarks a second; given enough seconds, it would have advanced all of space a single unit of Planck time. Rinse, repeat, repeat, and repeat some more and poof: simulated universe.
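Some quick back-of-the-envelope arithmetic on that, with deliberately made-up rates, just to show how extreme the slowdown can be while staying invisible from the inside:

```python
# Both rates below are invented round numbers, purely for illustration.
PLANCK_TIME = 5.4e-44        # simulated seconds advanced per tick
TICKS_PER_REAL_SECOND = 1.0  # suppose one full-universe tick per real second

sim_per_real = PLANCK_TIME * TICKS_PER_REAL_SECOND
real_per_sim = 1 / sim_per_real
print(f"{real_per_sim:.1e} real seconds per simulated second")
# ~1.9e43 real seconds per simulated second: glacial, but no one
# inside the simulation could ever tell.
```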
The real problem is storage space. There’s no working around that - which means that each internal universe will have less and less ‘memory’ to spare for the simulations running inside it. It won’t take many iterations before emulating things at the atomic level isn’t feasible, and another level or two down the simulated worlds are going to look like a level of Wolfenstein 3D. And that’s why it’s not simulations all the way down - not because of power limitations, but because when you try to simulate a universe on a 386, your simulated nazis are going to be yelling “Mein Leben!”, not making simulations of their own.
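Here’s a tiny sketch of that memory argument, with numbers I’m choosing arbitrarily: even under generous assumptions, the nesting runs out of room after a few dozen levels.

```python
def levels_until_too_small(total_bits: float, fraction: float,
                           min_bits: float) -> int:
    """Count nesting levels before a child simulation's memory budget
    drops below the minimum needed for 'interesting' physics.

    fraction: share of its own storage each universe can spare
    for the child simulation it hosts.
    """
    level = 0
    bits = total_bits
    while bits >= min_bits:
        bits *= fraction
        level += 1
    return level

# Illustrative numbers only: give each child 1% of its parent's storage,
# demand a floor of 1e60 bits, start with a 1e120-bit top level, and
# the stack bottoms out after roughly 30 levels.
print(levels_until_too_small(1e120, 0.01, 1e60))
```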
The important thing is that the brain is a much smaller object than the landscape, and it can simulate the landscape, so it can be done.
Of course there are lots of details in the landscape that don’t get modelled - the brain of a fox two miles away, for instance. A simulation that included all the complex predator/prey relationships as well as the pretty pictures would be much more complex.
If we are in a simulation, it is a game being played by higher or more advanced beings. It doesn’t have to encompass the entire universe, just the Earth. It doesn’t have to simulate the whole history of the universe, just the part that they are interested in.
We could all be at a Dave and Busters, playing “Earth Life, late Twentieth and early Twenty-First century edition.”
There doesn’t need to be all that much modeling. I’ve only looked through a telescope twice, both times with barely enough resolution to make out the rings of Saturn. Everything else I’ve seen has been on a computer screen. I’ve never seen anything smaller than a cell with my own eyes; anything more detailed than that was also just on a computer screen. There may be nearly 8 billion people on this planet, but I’ve only met a few thousand of them, and only had interactions meaningful enough to distinguish a simple chatbot AI from sentience with a handful of them.
Yeah, that’s the story I always think of when these discussions come up. Very good story, and he’s an amazing author in many of his other works as well (especially his SCP contributions).
But, it does have the handwavium of “infinite computing power”.
I have serious doubts about that being a feasible thing. The idea that we could create a simulated world, or that we are in one, is credible. The infinite recursive stack of simulated universes is what I find extremely implausible.
The landscape of an MRI is at the level of parts of atoms.
If humans do not perceive the world as it is (they fake it), then they would be unable to define the problem of simulation - no matter how big their adding machine.
My main problem with the Simulation Hypothesis isn’t the idea that this could be a simulation (it is possible after all), it is the idea that it is statistically likely to be a simulation because of inevitability. It is the application of inevitability that irks me in futurism.
Humans discovered powered flight in 1903, and landed on the moon in 1969; therefore, FTL is inevitable.
Humans have made tremendous gains in artificial intelligence in the last 50 years; therefore, superhuman general artificial intelligence is inevitable.
Humans have been making CPUs with ever more transistors; therefore, Jupiter-sized CPUs are inevitable.
Humans have made tremendous gains in computer power in the last 50 years; therefore, simulations are inevitable.
Marvel made a movie; therefore, Thanos is inevitable.
All of these have the same flaw. As philosophical timewasters (“oooo, what if this was all a simulation?”), these kinds of questions can be fun to hash out, but like a lot of philosophical questions, they should not be taken too seriously.
I’m not sure why you are irked, since most of your examples have nothing to do with futurism.
Serious futurist thinkers generally assume known science in their speculation. Under known science, FTL is impossible. Not in a “we don’t know how to do it yet” sense, like a medieval person thinking about flying machines; in a “the physics straight up says this can’t happen” sense. Does that mean that FTL is entirely impossible? No, we might figure out some crazy simple principle we have somehow been blind to for all these years and suddenly unlock warp drives. But as far as we know, FTL is impossible, not inevitable, and all serious scientists/futurists will tell you the same.
This is a more interesting question. Unlike FTL, superhuman intelligence is definitely possible, and in fact already exists, when it comes to specific tasks. Is general AI possible? Well, there are a few things to consider.
A) Humans have general intelligence. I think this is undoubtedly true, because when we think about making an AI with general intelligence, we are thinking about creating an AI with the same capabilities a human has (at a minimum). So, we know that in our universe, based on its physical laws, general intelligence is possible.
B) Our intelligence comes from the brain. It arises from the electrical signals that flow between our neurons as a physical process. The only other alternative is that there’s some metaphysical property to human intelligence; a “soul”. If you’ve got any evidence for this option whatsoever, go right ahead and present it. Otherwise, it is you (the general “you”) who is arguing without evidence, not the futurists.
C) Given A and B, if you were to accurately recreate these neural patterns, either on a physical substrate (an artificial brain with artificial neurons) or digitally, then you would have an artificial intelligence.
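As a toy illustration of what “recreating these neural patterns digitally” means at the smallest possible scale, here is a leaky integrate-and-fire neuron, a textbook simplification; every constant is illustrative, and a real brain has on the order of 86 billion far messier neurons:

```python
def simulate_lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                        v_thresh=-50.0, v_reset=-65.0, r=10.0):
    """Leaky integrate-and-fire neuron: membrane voltage leaks toward
    rest, is driven by input current, and fires when it crosses a
    threshold. Returns spike times in seconds; units are nominal."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Voltage decays toward rest and is pushed up by the input.
        dv = (-(v - v_rest) + r * i_in) * (dt / tau)
        v += dv
        if v >= v_thresh:        # threshold crossed: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A steady drive produces a regular spike train, as expected.
print(simulate_lif_neuron([2.0] * 1000))
```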
Now, that doesn’t mean AI is “inevitable” and I don’t think futurists say that. It is entirely possible we will wipe ourselves out in nuclear war first, for example. But is there any reason to believe a general AI is not POSSIBLE? I can’t think of any.
There is a tremendous difference between not possible and inevitable. I work in artificial intelligence; I’ve even posited a hypothetical method for achieving (non-superhuman) general artificial intelligence, which I’m nibbling away at while working on other things. There are physical, engineering, and computational reasons why it might not be so easy to just duplicate the human brain in silicon, which is why there’s a whole field of computer science around growing computers (or using unconventional media). And that’s the problem with taking a simplistic “well, X => Y” approach to looking at things. The devil is in the details. Now, do all futurists talk about things in an “inevitable” way? No. But quite a few do, or at least the ones I keep running into, it seems.
The only counterargument that gets thrown around to the modeling of human neurons on a computer is “well, we don’t know if it’s all that easy.” Well, first, I’m not saying it IS easy, or even the easiest means to reach a general AI. I would guess that it won’t be the way we create our first true GAI. The point of that thought exercise is to demonstrate that, unless you resort to woo-y concepts like a non-material soul, our brains are physical constructs; our mind is formed by our brains; therefore an intelligent mind can be constructed by purely physical means. No magic soul mumbo jumbo to worry about.
That doesn’t mean it is INEVITABLE - we could all die first, or our civilization could collapse and the survivors revert to barbarism, and eventually their descendants evolve down a different, non-intelligent path.
Would you say someone like Leonardo Da Vinci was wrong to think about flying machines? He knew flying was possible because he saw it in nature. He understood some of the mechanisms involved, enough to design some prototypes.
Now, material techniques at the time just weren’t sophisticated enough to actually build his machines; they might rely on springs or screws that even the best metallurgists of the time couldn’t construct. It’s just like today: our best AI thinkers have some ideas about what a GAI might take, but the processing power for some of these proposals is just not there yet. Doesn’t mean they aren’t valid paths to think down.
Even today, when we actually build Leonardo’s designs, they are flawed, and do not fly. That’s because our understanding of aerodynamics wasn’t complete in Leonardo’s time, and we have learned much since then. It doesn’t make Leonardo stupid, or his designs worthless. They (and the work of lots of other scientists, engineers, etc) got people thinking about aerodynamics, inspired later designs, etc.
If Leonardo Da Vinci said something like, “powered human flight is possible, with further research and development” he would be correct. If a decade later an asteroid smashed the earth and killed off humanity, then humans would never have invented powered flight. That doesn’t make Da Vinci wrong; he looked at the data available to him and said “this should be possible, even if I can’t do it yet”. That’s what futurists are saying when they say something like “general AI is possible under known science”. Of course it is! We are a general intelligence, so unless our consciousness violates the laws of physics (see also, “souls”) it IS possible to replicate through purely physical processes. Even if our civilization goes extinct before ever doing so, that doesn’t change that fact; no more than a cataclysmic extinction a week before the Wright Brothers invented their plane would be evidence that human powered flight is “impossible”.
Or, if there were an unknown law of physics that prevented heavier-than-air powered flight from working, one that went undiscovered until serious attempts were made, that would still not make him wrong, given his knowledge at the time, and it would still not have made the effort to try worthless.
As you say, there is no law of physics that should prevent us from being able to create some sort of simulacrum of human intelligence, given that it currently exists right now between our ears on a physical medium. If there is such a law, then we would only find out by trying, and the discovery of why we can’t make a general AI would be almost as satisfying (and possibly less horrifying) as actually doing so.