What do you think about the simulation theory?

You could have just said “I agree with you” and saved yourself a lot of typing. Although, your first sentence isn’t quite right: the counterargument is not about whether it is easy, but about whether it is possible at all. I don’t want to make this too much of a tangent; however,

  • it is unclear whether silicon-based computing elements can reliably reproduce the signalling that occurs in biological neurons in a manner that allows neural activity to be replicated (physical);
  • it is unclear whether silicon-based CPUs can be scaled up appropriately (engineering); and
  • it is unclear what algorithm is being executed in the human brain, and there is no clear way to determine how to reproduce it (computational).

Any one of these could be a limitation to building a silicon-based replica of the human brain. This is not to say, of course, that replicating the human brain is the only means by which to produce general artificial intelligence (the approach being worked on does not rely on this technique).

And that is also assuming that we use silicon to create a superhuman AI, rather than just learning more about the wetware we carry in our heads and learning to program and improve that.

An interface that can take care of the things that a computer can do better, like complex math and memory, combined with the human brain’s sentience, could result in some pretty powerful computing.

Yeah, exactly. There’s a whole field of research (unconventional computation) that’s examining this. And in that area, it is unclear whether it can be done in a practical way. Of course, we know it can be done; pairs of humans do it all the time (oh my!). But can we capture that process in a way that is distinct from human biology? (Growing a human brain in the non-traditional manner would be scientifically interesting, but probably not that useful, since we can already do so; having an “additional” human brain lying around doesn’t really help.)

Well, why do we have to build it out of silicon? We know it can be built out of carbon-based nanomachinery, because that’s what cells are: naturally occurring carbon-based nanomachines. So silicon may be easier to work with; but if we find a real roadblock down the silicon CPU route, we already know a mind can be built using carbon-based machinery (of course, the brain is about 75% H2O, and there’s a lot more than just carbon there).

Again, to be clear, my objection isn’t to pondering the possible. That’s what scientists do. My objection is when pondering transcends into treating it as inevitable. That’s a problem with simulation theory in particular, since it says our reality is unlikely to be the base reality because it is far more likely that there are many simulations (S). With S >> 1, and the probability of being the one base reality roughly 1/(S+1) ≈ 1/S, we should assume our reality is a simulation. The root problem is the assumption that there are many, many simulations, without a realistic way to get there.
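
To spell out the counting argument (the load-bearing assumption, as noted, is that there really are many simulations and that, from the inside, all realities look alike and are a priori equally likely):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% One base reality plus $S$ simulated ones, assumed
% indistinguishable from the inside and a priori equally likely:
\[
  P(\text{base}) = \frac{1}{S+1}, \qquad
  P(\text{simulated}) = \frac{S}{S+1} \longrightarrow 1
  \quad \text{as } S \to \infty .
\]
\end{document}
```

The conclusion only bites if S really is large; if building even one ancestor simulation turns out to be impractical, S ≈ 0 and the argument evaporates.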

Is it possible? Sure.
Is it likely? I don’t know. I don’t think there’s any way to compute such a probability right now.
Is it certain? Ha ha ha ha ha. No.

Certainly, and the term ‘artificial’ has a lot of wiggle room too. If we recreate a human brain atom by atom, is that an AI? What if we create a human using a regular sperm and egg, with an artificial womb? What if we use a regular egg and sperm but only after scooping out the genetic material and replacing it with our own?

Under the current definition of AI, it probably wouldn’t be.

You are right, we don’t necessarily disagree. I think GAI is very possible, and I’m relatively confident I will see examples of it in my lifetime. But I don’t think that’s necessarily evidence for Simulation Theory.

Ok, but the further you get away from how we do computing right now, the further you get into science fiction. And again (and again and again), I have absolutely no objection to speculation. Curiosity is a big part of why humans have scientifically progressed to where we are today (for good or ill). My objection is when futurists speak in terms that state (or heavily imply) inevitability.

What if those genetic changes we made to the sperm and egg are specifically designed to adjust brain development in a predetermined way, though?

The whole idea of “artificial” vs “natural” as a dichotomy is pretty silly to begin with. If cities are artificial, are ant hills and termite mounds?

How are “organic” plants “natural” when the changes we have made to them through artificial selection are vastly greater than the changes most GMO produce undergoes?

How can we call any environment on Earth “natural” when humans have completely reshaped almost every biome on the planet starting tens of thousands of years ago, before any semblance of civilization even existed?

That’s above my pay grade. I just work here, I don’t make the definitions. :wink:

I would be very surprised if we ARE still doing computing the same way a century from now. Mechanical aids like the abacus or the 3-4-5 twelve-knot rope were used at first, then replaced by punch cards and vacuum tubes, then transistors and CPUs. I find it very likely that our substrate will change as well.

Fair enough! :wink:

I honestly have no idea. The big one people are looking at, of course, is quantum computing. The professor who used to be right next door to me worked in that field, so I used to talk to him about it from time to time. He would always say it was “97% promising, but the last 3% is the hard part.”

Of course, the other main approaches have been more cores and moving software processes to hardware. I hate to make predictions, but that approach seems unsustainable to me. I think there are some serious engineering challenges to overcome to make this the way forward, and no obvious way to overcome them.

So, I’m very certain that current and future scientists will continue to look at the problem, but whether a solution can be found is unclear.

The simulation hypothesis is one of many, many, possible solutions to the Fermi Paradox. Dyson suggested in 1960 that we should look for infra-red sources consistent with his concept of a Dyson Sphere; in 1964 Kardashev noted that a signal transmitted by a civilisation that could utilise all the power of a star would be detectable at an extremely large distance (depending on the size of the receiver, as well). Since that time, both of these signatures have been sought by various means, with no luck so far.

A suitably advanced interstellar civilisation might be expected to build a Dyson swarm around every star it colonises; that could mean more than two thousand Dyson swarms within 100 light years of the Earth. But we do not see this. One possible reason why we don’t see all these structures is that we exist inside one, in a simulation that has had all the Dyson swarms erased from the sky. Do I believe this? Not really - I’d give it about 1% chance of being the answer. But it may be possible.
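
For scale, a quick back-of-envelope count of candidate stars (the stellar density figure here is an assumption, roughly the commonly quoted solar-neighbourhood value):

```python
import math

# Rough count of stars (candidate Dyson-swarm sites) within 100 ly.
# Assumed local stellar density: ~0.004 stars per cubic light-year,
# a commonly quoted solar-neighbourhood estimate.
DENSITY = 0.004   # stars / ly^3 (assumption)
RADIUS = 100.0    # light years

volume = 4.0 / 3.0 * math.pi * RADIUS**3   # ~4.2 million ly^3
print(f"~{DENSITY * volume:,.0f} stars within {RADIUS:.0f} ly")
# -> roughly 17,000, so "more than two thousand" is conservative.
```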

Incidentally, the computational substrate of a Dyson swarm need not be silicon; carbon is more abundant than silicon in our local mix of elements, so I’d expect carbon to be used instead of, or as well as, silicon.

Quantum computing is one possible avenue. Another is to use a biological (or at least bio-mimicking) substrate, i.e., running a computer on neurons. The key could be an entirely new application of known science, taking known methods and adapting them to achieve new results. Or it could be that, as noted, all of the above is physically possible, but while we’re working out the practical considerations, we find some entirely new branch of physics that lets us achieve GAI with much less cost/effort.

I think when futurists talk as if GAI is inevitable, there are a couple of hidden assumptions. Assuming we don’t wipe ourselves out first, and assuming our culture doesn’t change to such an extent that we no longer value innovation, then GAI is on the trajectory we are headed down. So from that perspective, it seems “inevitable”. It’s always possible we will all die first; but futurists tend to be an optimistic lot, and adding that caveat every time gets pretty old :wink:

But see, those are the assumptions that are not actually that important. Yes, of course, if a nuclear war happens, then we are not developing GAI anytime soon. We’re also probably not growing crops next year either. The limitations I wish futurists would address more often are the scientific ones, as these are the ones that are really the most interesting. I find there’s a lot of handwavium involved with addressing the existing scientific limitations when discussing futurism.

What scientific roadblock is preventing GAI?

I think futurists talk about such roadblocks all the time, when we have reason to believe that they exist. For example, futurists will point out that FTL is probably impossible, because the rules of the universe as we understand them forbid it.

Let’s compare three possible ideas and look at how a futurist would think about them: general AI, FTL travel, and a lighter-than-air vacuum balloon.

FTL is impossible by known science. By definition, any FTL will cause time-travel paradoxes (the full explanation is above my pay grade to reproduce – I’ve watched a few lectures that explain it, and it makes sense in my head, but I wouldn’t presume to try to give a coherent summary). So most futurists will say something like, “FTL is impossible under known science. However, we do not have a complete understanding of the laws of physics, especially when the numbers for energy, velocity, mass, density, etc. get extremely high. It is possible that there is unknown physics we do not yet understand that would allow FTL. But this would turn our understanding of the universe on its head.”

A vacuum balloon doesn’t violate any physical laws. However, when we look at the specific case of building a vacuum balloon that would float in Earth’s atmosphere, we run into a problem. Given the pressures involved, the material strength of known substances (INCLUDING substances we have theorized but can’t mass-manufacture yet, like carbon nanotubes) is just not high enough to allow a balloon that both resists the atmosphere’s pressure AND is light enough to float. HOWEVER, a vacuum balloon wouldn’t require any new physics; just materials (or metamaterials) that are stronger than anything we’ve come up with so far. We can’t know whether or not such a material is possible; we haven’t found one yet, but the laws of physics don’t forbid it like they forbid FTL. So a vacuum balloon doesn’t require new physics, just the ability to make stronger materials, which may or may not be possible.
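
A rough sketch of that constraint, under the simplest assumptions (uniform thin spherical shell in pure compression; elastic buckling is ignored, and it only makes matters worse):

```python
# Thin-shell vacuum balloon at sea level, simplest possible model.
# Neutral buoyancy: shell mass <= displaced air mass
#   4*pi*r^2*t*rho <= (4/3)*pi*r^3*rho_air  =>  t/r <= rho_air/(3*rho)
# Membrane stress under external pressure: sigma = P*r/(2*t),
# so the material must withstand sigma >= 3*P*rho/(2*rho_air).
P_ATM, RHO_AIR = 101_325.0, 1.225   # Pa and kg/m^3 at sea level

def required_strength(rho_shell: float) -> float:
    """Minimum compressive strength (Pa) for a shell of this density."""
    return 3.0 * P_ATM * rho_shell / (2.0 * RHO_AIR)

# Densities and strengths below are rough, assumed figures.
for name, rho, sigma in [("steel", 7850.0, 0.5e9),
                         ("aluminium alloy", 2700.0, 0.5e9),
                         ("CNT composite (theorized)", 1300.0, 5.0e9)]:
    print(f"{name}: needs {required_strength(rho) / 1e9:.2f} GPa, "
          f"has ~{sigma / 1e9:.1f} GPa")
# Some entries pass this naive stress test, but buckling -- the real
# failure mode for thin shells under external pressure -- pushes the
# requirement far beyond any material we can make today.
```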

Now let’s compare that to artificial intelligence or nanomachinery. Over the last 3 billion years or so, random chemical processes came together, became self-sustaining, created nanomachines capable of adaptation and self-replication, and then, after trillions of generations, arrived at a complex machine, built of many, many tiny component nanomachines, that – through random mutation and natural selection – ended up as us. Self-aware, Generally Intelligent beings, built entirely out of nanomachines.

So not only are nanomachines and General Intelligences possible under known science; they ALREADY EXIST. That’s why futurists are so certain that a GAI is possible under known science. If you could go outside and see vacuum-filled balloons made of some super-strong metamaterial float by, and these balloons were created by a natural process, or were part of some animal, but relied on materials we couldn’t synthesize yet, I would confidently state that a vacuum balloon is possible under known science.

If there were space whales that fed by latching onto comets and harvesting all their resources, then once they were full they created a wormhole and popped over to the next solar system – well, I’d be much more optimistic about our ability to create machines that go faster than light, too.

So, in summary: FTL - impossible under known science; no examples in nature; hence there is no reason to believe we will ever achieve it. If we ever DO find evidence of FTL travel, it would require us to rethink physics from the ground up.

Vacuum balloon floating in our atmosphere - not prohibited by known science, simply requires more advanced metamaterials than we have. However, no evidence one way or the other as to whether such metamaterials will ever be possible. Hence, no reason to believe this will be something we achieve; but if we do, it doesn’t turn our understanding of the universe upside down.

General Intelligence/Nanomachines - not prohibited by known science; plenty of examples of both in nature; we are a GI made of nanomachines (cells).

What is CRISPR but a nanomachine? Just because it is built on an existing virus doesn’t mean it’s not a man-made tool. Are sharpened rocks man-made tools, or a bow made of wood and tendon?

I don’t agree that this is “handwavium”. If anything, the idea that a GAI is NOT possible requires handwavium – it requires there to be something non-physical to our minds, beyond the interactions of the neurons in our brains. Some magical handwavium that takes a dumb mound of flesh and turns it into a thinking brain. In other words, a soul. And there’s no scientific evidence for this.

Well, let’s take the simplest. There is no known or predicted algorithm that provides general artificial intelligence, nor a corresponding mechanism by which such an algorithm could be replicated. And there is no known or predicted technique or method for producing such an algorithm.

That’s kind of a big one right? We have no idea whether GAI can be accomplished algorithmically in a replicable way, and we don’t even know if there is a way to find such an algorithm.

We DO know of a way to arrive at GI - natural selection. For this reason, machine learning seems like a promising avenue.
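
To illustrate the mechanism being pointed at (variation plus selection, nothing more), here is a toy sketch; the bit-string target and all the parameters are arbitrary choices, and nothing about it is brain-like:

```python
import random

# Toy natural-selection loop: evolve random bit-strings toward a
# fixed target by mutation and survival of the fittest.
TARGET = [1] * 32          # arbitrary 32-bit target
POP, MUT = 50, 0.02        # population size, per-bit mutation rate

def fitness(genome):
    """Number of bits matching the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit independently with probability MUT."""
    return [bit ^ (random.random() < MUT) for bit in genome]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
generation = 0
while max(fitness(g) for g in pop) < len(TARGET):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                    # selection
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]
    generation += 1
print(f"target matched after {generation} generations")
```

Nobody specifies the solution; you set up the environment and the selection pressure and let it run – which is exactly the rabbit hole the next paragraph goes down.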

Of course, that leads you down some strange rabbit holes. If machine learning creates a program that can solve a variety of problems and interact with humans in a way that’s indistinguishable from a sentient being, did you create sentience? How can we ever know if a being that reacts like a sentient one has an internal subjective experience the same way we do? Did you really “create” anything, if all you did was set up an environment and selection pressures and then allowed intelligence to evolve?