Do you think the singularity is real? And will it be the end of the world?

Seriously? You think it’s likely? I mean, I’ve heard the arguments for this, I just didn’t think anyone took them seriously.

At least not since The Matrix came out…

Some people have suggested that it might. I have yet to see any good reason to believe that it would, though.

I’ve been looking for just this metaphor. Thank you!

Cracka links to a philosophy prof’s paper that aims to demonstrate that, logically, the possibility that we exist within a simulation is, IIRC, impossible to dismiss, and not just trivially possible. It’s a long, long way from arguing that we do, in fact, exist in a simulation. More of a Pascal’s wager, with The Matrix in place of God.

It’s incredibly likely. Look at it this way.

How many “real” original universes are there? Tossing aside the “many worlds” hypothesis as untestable, the answer is one.

How many fine-grained simulations are likely to be created within that one universe over the billions of years of its existence? How many of those simulations will be detailed enough that the simulated intelligences within those simulations are able to make their own simulations?

It seems very likely that the answer to the second is a number greater than 1, and not unlikely that it numbers in the billions.
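Just to make that counting explicit - my numbers, purely illustrative, since the argument only needs “many” - if you have no way to tell from the inside which kind of world you’re in, the odds look like this:

```python
# Toy version of the counting argument; the figures are assumptions, not data.
real_universes = 1
simulated_worlds = 10**6  # the argument only needs this to be "many"

# If simulated observers can't distinguish their world from the real one,
# a randomly chosen observer is almost certainly inside a simulation.
p_simulated = simulated_worlds / (real_universes + simulated_worlds)
print(f"P(we are simulated) = {p_simulated:.6f}")  # -> 0.999999
```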

Note that these simulations do not have to model everything in the universe. They can simply have an abstract version of one planet inhabited by a few billion simulated intelligences that do not realize they are simulated. The simulation can react to the behavior of these intelligences - if they build telescopes and look deep into the sky, you can add a simulation of the patch of sky they are looking at, based on your real-world observations. If they build microscopes or particle colliders to look at the minuscule, you can do the same thing. You don’t even have to react very fast, as you can simply pause the simulation whenever that happens and nobody within it will detect it. If they look at something that was never looked at in your “real” universe, you can either make something up, or pause it and go find out what they would see in the real universe. If they catch an inconsistency, you could even edit their reality and memories to cover it up (or let them blame it on the chaos of quantum physics).
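Here’s a toy sketch of that “only fill in what they look at” idea - the names and structure are my own invention, nothing more than an illustration of lazy, on-demand detail:

```python
# Illustrative sketch: generate detail only when a simulated observer looks,
# then cache it so later observations stay consistent.
class LazySky:
    def __init__(self, fill_in_patch):
        self._fill_in_patch = fill_in_patch  # expensive lookup against the "real" universe
        self._rendered = {}                  # patches already simulated

    def observe(self, patch_id):
        if patch_id not in self._rendered:
            # From inside the simulation this pause is undetectable:
            # simulated clocks only advance while the simulation runs.
            self._rendered[patch_id] = self._fill_in_patch(patch_id)
        return self._rendered[patch_id]

sky = LazySky(fill_in_patch=lambda patch_id: f"stars at {patch_id}")
print(sky.observe((12, 34)))  # computed the first time a telescope points there
print(sky.observe((12, 34)))  # cached thereafter, so no inconsistency to catch
```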

I think that if you admit it is possible that we exist in a simulation, then logically you have to accept that it is overwhelmingly likely that we do. If you accept the premise that it’s possible to create a simulation containing entities that believe they are thinking beings in a real universe, then it is almost certain to happen over the course of the universe (unless you believe we are the only intelligences that will ever exist in the universe, and that we will become extinct before we have a chance to do so).

No, it’s not inconceivable, but what you’re suggesting is an opaque possibility. We can formulate the idea that a superior A.I. ‘emerges’, but not with any real idea how, or what it would be like… your suggestion is akin to saying “Is it inconceivable that people spontaneously combust?” No, it’s not inconceivable, but there’s a big difference between a possibility we can conceive, and what goes on in the real world.

Well, this was the interesting question Frank Herbert raised in several books: namely, whether the way in which we’re conscious is perfect, or complete, or the only way in which one can be conscious.

But again, your question wanders off into the territory of opaque conceivability. We can formulate the idea that another consciousness would occur, and that it would be distinctly inhuman, but what you’re doing rhetorically is saying “okay, imagine something that is 1) conscious, but 2) not in a way that we are conscious.” It has a bit of a feel of “X and not X” to it, doesn’t it?

I’m floundering a bit here. What I’m after is observing that we can formulate thoughts that are superficially plausible, but fail on closer examination to be relevant, interesting, or even possible. An immovable object can’t exist in the same possible universe as an unstoppable force, by definition; yet the question involving the two superficially makes sense (my brother likes to say that the force is deflected). If a consciousness existed that was not like a human consciousness, how could you recognize or interact with it? In what sense could you consider it a consciousness if it wasn’t a human consciousness?

This is, I think, a deep problem with the field of A.I. We don’t even have a full-blooded idea of what human consciousness is, just a functional “I know it when I see it” idea–that’s what gives rise to the Turing test, which says that if you know it when you see it, and you see it, then what you see is consciousness, regardless of whether or not it exists in neurons or silicon.

This feels a lot like the ontological argument for the existence of God, with “simulations” in place of God, and statistics in place of the necessity operator.

Or better still . . .

I don’t really see the correlation, sorry.

Now, if the God hypothesis was “If you accept that it is possible for a human to become/create a God, then it is exceedingly likely that a God will exist”, that would be closer. If you do accept that original premise, then yes, it does seem very likely that there will be a God…but it does not suppose that a God has already existed, just that, at some point, there will be one.

I don’t think it’s possible that a man could become or create an entity that fits the common description of God - an omnipotent, omniscient being that created the universe. For one thing, the universe is already here and has to have existed for humans to exist. Secondly, I don’t believe omniscience is possible, because to know everything you would have to have a mind bigger than the universe to fit everything into it.

I do think it’s possible that sentient beings (and not necessarily mankind) have the ability to create simulations of reality that contain entities that believe that they are in the real universe. It may be hundreds or even thousands of years beyond what we are capable of now, but it does not seem to be beyond the realms of possibility. In the billions of years before the end of the universe, I’m almost certain that something, somewhere, will start making simulations of parts of the universe that include intelligent beings.

The similarity I had in mind is that existence is entailed by the concept: in the case of the ontological argument for God, the concept of God necessitated its existence.

In the case of the simulation, the argument seems to be that, if it’s possible to occur, then the odds are overwhelming that it will. The rest of the argument, if I understand correctly, is that if it will exist, then we are overwhelmingly likely to be inside it, rather than outside. Is that a fair description?

In both cases there’s a hidden premise, which is that possibility in conception equals possibility in reality.

I’m not saying that possibility in conception means possibility in reality. I’m jumping straight to “possible in reality”. I can’t think of anything in our knowledge of the rules of the universe that makes simulations impossible. We are already making simulations that are getting more and more complicated as computing power increases.

If you think it’s possible that we are in a simulation, then you think it’s possible to make such a simulation. I agree, and if it’s possible to make something and there is a benefit to doing so, I think it’s highly likely that someone, sometime, will make it.

An analogy is the statement that “Someday a man will walk on Mars”. I’m not saying that because I can conceive of a man walking on Mars, it will happen. I’m saying that “It’s possible for a man to walk on Mars, so eventually someone will do it”.

I know there are a lot of technological hurdles between our current space travel capabilities and putting a man on Mars, but it is physically possible, and people will want to do it, and someday, it will be done.

Well, for one, the AI wouldn’t be legally human; it would be a slave, although we wouldn’t call it that. For another, less nasty, incentive: if we can make a human-level AI at all, it seems likely we can build on that success and make one that is more than human. And even with only a human-level AI, we could likely run it (or more likely them) at a much higher speed than a human mind.

It took centuries to achieve flight, but it was clear from the beginning that it was possible - birds do it. For the same reason, while we can’t predict when human-level AI will be achieved, we can be sure it’s possible. If it weren’t, we wouldn’t exist.

As for my opinion of the Singularity; while specific technological ideas may be implausible, it seems inevitable that at some point, something like a Singularity will happen. After all, we’ve been through a Singularity before - the transition from the slow pace of unthinking evolution, to the fast pace of human consciousness and culture. Eventually, either we will build a superhuman AI or augment ourselves above the human, and it seems unlikely to me that the result won’t be a transition to a world beyond our comprehension - a singularity. It would pretty much have to be, being built and controlled by beings we can’t understand, either.

As for attempts to compare the Singularity and the Rapture, there are differences. The first is that not everyone who buys the idea of the Singularity thinks it’s a good thing. Second, it’s at least theoretically possible for something resembling the Singularity to happen; there is no chance of the Rapture happening. It’s the difference between enthusiasm and being outright delusional. And third, as a practical and moral consideration, the Singularitarians (or whatever you want to call them) don’t think destroying the world beforehand is a good idea or acceptable; the Rapturists do.

The difference between “it’s possible we live in a simulation” and “it’s possible someone will walk on Mars” is that, in the latter case, we have a deep understanding of what’s involved–transporting people there, providing life support, etc. In the former case, the concept is only superficially sensible.

Look at time travel, for instance: long a staple of science fiction, and seriously considered by a lot of very prominent physicists. Very possible in conception. We can recast the simulation argument thusly:

1. It’s possible that people will travel in time.
2. Over the course of the universe, what is possible is overwhelmingly likely to occur.

Therefore:

3. It’s overwhelmingly likely that there are/have been time travellers.

The problem here is the first statement. Superficially it looks sensible, but a variety of physical and logical paradoxes tell us (now) that it’s impossible to travel in time as traditionally conceived, so much so that previous forays into the concept look quaint. It has always been possible in conception, but whether it is possible in reality wasn’t something we were capable of judging until long after the possibility had been conceived. And the reason we can conceive of a possibility that turns out to be impossible in reality is that the original conception is opaque, depending upon an incomplete understanding of reality.

That’s my underlying epistemological issue with A.I. and simulations: They’re still opaque concepts, so talk of the possibility of them is premature at best.

If a human can make a human-level AI that is faster than a human mind, then a human-level AI can make a human-level AI that is faster than itself, and so on. This will lead to a geometric increase in the speed of human-level AIs as they keep making AIs better than themselves.
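To spell out the arithmetic behind “geometric increase” - toy numbers, assuming each generation runs twice as fast as the last and a faster mind finishes the next design proportionally sooner:

```python
# Illustrative only: assumed speed-up per generation, not a prediction.
k = 2.0            # each generation is k times faster than its maker
design_time = 1.0  # years the first (human-built) generation needs to design a successor
speed = 1.0        # in multiples of a human mind
elapsed = 0.0

for generation in range(1, 11):
    elapsed += design_time
    speed *= k         # geometric growth in speed
    design_time /= k   # faster minds finish the next design sooner
    print(f"gen {generation}: speed x{speed:.0f}, at year {elapsed:.3f}")

# The elapsed time is a geometric series converging toward 2 years,
# which is the "runaway" intuition in its simplest form.
```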

It may not be possible to make something smarter than yourself, though…we haven’t got close yet. It is also possible to choose not to do so, and though we may not be smart enough to refrain, it’s possible at some point there will be something smart enough not to make something smarter than itself.

You mean “Destination: Void” - “D: Moon” is a 1950s movie that Heinlein wrote.

(edit) One of many good post-singularity stories, which also discusses the simulation issue, is this one by David Brin:
http://www.davidbrin.com/stonesofsignificance1.html

Well, up to a limit. I’d expect a human-level AI to be faster because it can be made of materials more suited to computation than poorly conductive jelly; but an AI trying to make faster AIs will likely already be built of the best material presently available for that.

It’s perfectly possible; we are more intelligent and aware than evolution, and here we are.

Well, that’s if it considers refraining from doing so to be smart - and if the smarter something isn’t just itself, upgraded. And then there’s the question of how much choice it has, if it doesn’t want to be left behind by everyone else.

On one level I agree, but on another I disagree 100%: we already build our machines to be more powerful than any human in some respects, otherwise they would be next to useless to a human. If you limit yourself to just human-level intelligence and the human ability to adapt and multitask, there is no huge benefit to making it human-like; however, economically speaking, there is an incentive right now to make better-than-human systems that integrate many diverse inputs, and they are increasing in complexity in leaps and bounds.
While AI may not be the end of those efforts, AI is bound to do better than the average human, because it needs to be useful to some humans in their efforts to get an edge on others.

I have come to the conclusion that AI has failed so far because there has not been a good theory of how the human brain or human intelligence works; once good theories are supported by evidence, that is when we will see those theories used effectively in silicon. However, I do think that the geometric progress in computing power will function like a brute-force approach to reaching the singularity. The issue to me now is whether a theoretical approach will get us there before a brute-force one does, and I do think that will make a big difference to what takes place when a singularity is reached.

I agree that that is the most likely future for a singularity. Either Marvins, or intelligent toasters that will never stop bugging you to let them prepare some toasted item for you.

Lister: We don’t want any! No muffins, no toast, no teacakes, no buns, baps, baguettes and bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot cross buns and definitely no smegging flapjacks!

Toaster: Ah, so you’re a waffle-man!

This statement appears to completely ignore the possibility that physical limits exist.

Try it with smaller than itself instead of smarter/faster and you’ll see what I mean. Try it with larger and you still hit a problem when things start collapsing under their own gravity.
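As a toy counterpoint to the runaway arithmetic above - if the per-generation gains shrink as each new AI gets closer to whatever physical limit applies, the compounded speed-up tops out at a finite ceiling (the numbers here are assumptions, chosen only to show the shape of the curve):

```python
# Illustrative only: gains that halve each generation give a convergent product.
speed = 1.0
for generation in range(1, 21):
    k = 1.0 + 1.0 / 2**generation  # assumed: each generation's improvement factor shrinks
    speed *= k
print(f"speed after 20 generations: x{speed:.3f}")  # levels off around x2.4
```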