Nope, completely wrong. If you only simulate detail when an intelligence observes it, there’s no way for intelligent beings to spontaneously emerge from the simulation (and later develop enough to run their own simulations), so it fails at actually producing intelligent beings. You can do the ‘brain in a jar’ thing if you hook someone up to it, but then you don’t actually have a simulation that will independently develop as a world. And you never come remotely close to the Trilemma situation.
No…Mostly they just “teabag” other players and make racist comments.
It’s a meaningless question, akin to “how many angels can dance on the head of a pin?” If you put all the mass in the universe in a “bucket”, it would basically collapse into a singularity (a black hole), at which point our understanding of physics can’t really describe what happens.
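Just to put a very rough number on the “collapse” part - and assuming, purely for illustration, something like $10^{53}$ kg of ordinary matter in the observable universe - the Schwarzschild radius of that bucketful would be

$$r_s = \frac{2GM}{c^2} \approx \frac{2 \times (6.67\times10^{-11}) \times 10^{53}}{(3\times10^{8})^2}\ \mathrm{m} \approx 1.5\times10^{26}\ \mathrm{m},$$

roughly fifteen billion light-years. In other words, “all the mass in one bucket” is already far inside its own Schwarzschild radius, which is why the question bottoms out in physics we don’t have.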
That’s not how simulations work. Now, it is true that you can skip updating something which is unchanging - but not that much in our world is truly unchanging. And it is pretty much always going to be faster to just update the thing in each simulation cycle than to constantly monitor - at a high level - every potential observer and then figure out how to change the stuff that only that observer sees and no one else does.
This has nothing to do with who writes the simulator, or the language used, or the computer used. It is fundamental to simulation.
??? I haven’t ever heard that it is necessary for a sim to admit such a breakout.
A purely imprisoning sim is still a sim.
I’m not completely sure of the terminology, but are we confusing “simulation” and “rendering” (as in computer graphics)?
If you simulate a universe, you presumably have to simulate the whole thing in some sense, even if you are smart about how your software works and don’t repeat the same calculation a trillion times. But you’re not just in the business of “showing things” to sentient beings. You want to see how the universe develops. You’re still simulating the bits that no sentient being happens to be looking at. I mean, for the first 10 billion years or more there probably are no sentient beings.
Agreed that we’re stuck on different kinds of sims.
I think a “sim” is perfectly good and valid, even if it doesn’t code for all possible details. For instance, giant boulders might not actually have anything inside them, and just be hard, rocky surfaces. What’s the point of coding for the inside of rocks that no one is ever going to dynamite apart?
But a different kind of sim would code for all possible details, including the insides of every rock.
These are simply two different approaches to simulating an environment. I don’t think it’s valid to demand that only one is “truly a sim.” (And there are probably more than just these two approaches.)
But there are really big problems with mixing simulation modes like that. And you can’t simulate things only when intelligent life observes them. Say you have a tree which is hollow, and you have a woodpecker which does simple woodpecker things. In the course of its woodpecker things, it pecks right through the bark and then takes up residence inside the tree. Now an intelligent being comes along, and you have to unwind everything. And if your woodpecker model has to know the difference between a real tree and a fake tree, things get really, really complicated. I don’t think humans are the only ones who would consider a lot of complex, buggy code worse than a little bit of simple code.
Now, if you are fooling one person you don’t have a simulation, you more or less have a holodeck. And I doubt the program for Victorian England when Data played Sherlock Holmes also modeled what the Vulcans and Klingons were doing at the time.
One possibility is simulated last-Thursdayism. There are intelligent beings in the universe because it was spawned that way last Thursday, with an authentic-looking history. In that sort of case - and especially (but not exclusively) if the purpose of the thing is to study intelligent beings - it’s not necessary to simulate most things below the level at which they are being observed. The picture on the wall behind you can be reduced to a grey square equivalent to the average reflectivity of the actual picture, when your back is turned. Jupiter was just a Newtonian point mass until we invented telescopes; most of the universe was just bright spots on a dome until it needed to be otherwise, because we looked closer.
There has to be an overseeing ‘engine’ in this case that manages the level of simulation for objects based on necessity (there’s an argument that such a thing would have to be sentient in its own right, but maybe not). I guess it ought to be possible to ‘trick’ the simulation into producing detailed results that were inconsistent with the bulk/simplified original, though.
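For what it’s worth, here’s a minimal sketch of the kind of lazy level-of-detail bookkeeping such an engine would have to do (all the names here are invented purely for illustration; this is a toy, not a claim about how a real universe-simulator would be written):

```python
import random

class LazyObject:
    """An object normally held as a cheap bulk summary; fine detail is
    generated only when some observer actually looks closely."""

    def __init__(self, name, bulk_state):
        self.name = name
        self.bulk_state = bulk_state   # e.g. "a grey square of average reflectivity"
        self.detail = None             # generated on demand, then cached

    def observe(self, resolution):
        if resolution == "coarse":
            # A glance, a reflection, a distant camera: the summary will do.
            return self.bulk_state
        if self.detail is None:
            # Someone looked closely: generate detail now, seeded by the
            # object's identity so repeated close looks stay consistent.
            rng = random.Random(self.name)
            self.detail = [rng.random() for _ in range(1000)]
        return self.detail

# The picture on the wall is just "average reflectivity 0.4" behind your back...
picture = LazyObject("picture_on_wall", {"reflectivity": 0.4})
print(picture.observe("coarse"))       # cheap bulk answer
print(len(picture.observe("fine")))    # fine detail exists only from this moment on
```

The caching/seeding step is exactly where the ‘trick it into inconsistency’ worry lives: the engine has to guarantee that detail generated on demand never contradicts any coarse answer it has already handed out.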
I trust that you have never written a simulator.
Your code for telling whether to render something would have to take into account both sentient beings seeing the picture, as well as cameras. I assure you, the simulator will be faster and simpler if you just simulate the damn thing instead of lots of special cases, including the possibility of someone looking into the window with a telescope.
The problem with last Thursdayism here is that you’d have to initialize the simulation to a complete and consistent state as of last Thursday. If you can do that, you probably don’t need to do the simulation, since you understand it pretty well already, well enough to not initialize to an inconsistent state.
Now, this isn’t a problem if you just render the universe for one person. But that is solipsism, not a simulation. It is not a simulated universe; it is a Potemkin universe.
The world and everything we see in it is nothing but a hologram, not real. When you sleep, as far as you know, nothing exists. Where is the world (while you, an individual, are sleeping)? Does it shut off like a light, along with this illusionary room and the illusionary bed that I think I’m lying in? And then, upon waking, turn back on to the last page I remembered, so I can carry on with my illusionary problems?
In this sleep state, your problems are completely forgotten, only to be remembered when you wake up into this self-projected hologram world we each create for ourselves.
The song “Row, row, row your boat, gently down the stream; merrily, merrily, merrily, merrily, life is but a dream” is more true than most people stop to consider. It might even be helpful in figuring out how to live a simpler life.
If I understand you correctly, then that’s not quite right—any computation can be made reversible, that is, made to occur with no energy dissipation. It’s only if you do something irreversible, such as deleting a bit, that you incur an entropy increase (of kT ln(2), where k is Boltzmann’s constant and T is the temperature of the heat bath).
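For scale, at room temperature ($T \approx 300\ \mathrm{K}$) the Landauer cost of erasing a single bit is

$$kT\ln 2 \approx (1.38\times10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times 0.693 \approx 2.9\times10^{-21}\ \mathrm{J},$$

and that floor applies only to the irreversible steps; the reversible parts of a computation can, in principle, be run with arbitrarily little dissipation.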
The real problem with the ‘universe is a simulation’-idea is that simulations aren’t things: a computer only simulates something if it’s interpreted as doing so. Some black box merrily chugging away can in principle be interpreted to implement any kind of computation; it’s only the implementation function that ultimately roots the meaning of the symbols being shunted around in the user’s mind. So a computer in the parent universe might be capable of being interpreted as running a simulation of our universe, but one can equally well interpret it as, e.g., calculating pi to the umptillionth decimal.
But that our universe should depend on how the functioning of a physical system is mapped to logical states of a computation in the parent system is absurd—after all, nobody really believes that if you read a novel, interpret the characters therein as people doing things, holding beliefs and having thoughts, then there is some meta-level on which those people could be said to have anything more than a nominal existence. If I read the sentence “Fred felt a sharp pain in his ankle”, I don’t thereby call into existence a Fred whose ankle hurts.
I’m not sure I would use the word “absurd” in this connection. Your statement is correct at an elementary level in the sense that, for instance, an inhabitant of the Sims world could not be said to have an actual existence in any meaningful sense. But what happens when the complexity of the AI modelling increases to the point that the beings acquire consciousness and become sentient – on the theory that this is simply an emergent property of a sufficiently mature AI? It seems to me that this is when we cross the critical threshold because, to paraphrase Descartes, for their existence to be instantiated only requires their conscious belief that they are real, that they are free-willed sentient beings interacting with their environment. What other “physical reality” could there possibly be?
The simulation argument has been given considerable credence by a number of thinkers including, most recently, the philosopher Nick Bostrom. The New Yorker just published a short article about the subject, though I don’t think it adds much to the conversation except some interesting literary references (I think I might like to read Permutation City).
The problem is rather that there’s no absolute fact of the matter that the algorithm the computer implements is ‘The Sims’, rather than, say, a computation of the digits of pi. Fundamentally, a computer shuffles symbols around; the meaning of those symbols comes from the user. This is difficult to notice because the symbols our computers eventually output—sounds, words, and graphics, thanks to carefully constructed peripherals designed to yield immediately human-graspable representations—are so immediately familiar to us that we see right through them, and don’t notice any interpretive work being done, so we’re tempted to say ‘this computer is running The Sims’.
But take it to a more formal level: a Turing machine starts with some symbol configuration on its tape, and finishes (if it does) with another. Now, what’s the computation that has been done? The question has no unique answer: depending on what you choose the symbols to mean, it could have computed pretty much anything.
Or, take a physical system that has two inputs and one output (note that there is already considerable interpretational work that has been done here: identifying some of its components as inputs or outputs, defining the right level of resolution for viewing it as a functional unit (i.e. below the level of the whole planet, way above the subatomic level), and so on; but we’ll take that for granted here). Now, you can apply at its inputs either a high or a low voltage (further interpretational constraints; again, taken for granted), and you observe the following behaviour: when you apply a high voltage to both, there will be a high voltage at the output; otherwise, the voltage will be low.
What logical operation does this device perform? Of course, there’s no way to answer that: you, for instance, could want to identify high voltage with a logical 1, and low voltage with a logical 0; and consequently, the device would be an AND-gate. I, with equal justification, could want to identify high voltage with a 0, and low voltage with a 1—consequently, the device would be an OR-gate. Neither of us is any more right than the other.
Indeed, the problem goes deeper: I could insist that high voltage is a 1, if it is applied to the first input, but a 0 when applied to the second; indeed, I could assign every permutation of logical values to every combination of voltages and inputs. And of course, a computer is basically nothing but a network of such gates. Thus, in fact, we can consider our device to implement every possible Boolean function of two variables, and it is only based on a choice of the interpreter which one is actually being computed.
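To make that concrete, here’s a toy illustration (just my own sketch, with made-up names): the very same physical behaviour reads as an AND-gate under one labelling of the voltages and as an OR-gate under the other.

```python
# The device's physical behaviour: the output voltage is HIGH
# exactly when both input voltages are HIGH.
def device(in1_high, in2_high):
    return in1_high and in2_high

def truth_table(to_voltage, from_voltage):
    """The logical function the device computes under a given encoding."""
    return {(a, b): from_voltage(device(to_voltage(a), to_voltage(b)))
            for a in (0, 1) for b in (0, 1)}

# Interpretation A: HIGH voltage = logical 1, LOW = logical 0.
table_A = truth_table(lambda bit: bit == 1, lambda high: 1 if high else 0)
# Interpretation B: HIGH voltage = logical 0, LOW = logical 1.
table_B = truth_table(lambda bit: bit == 0, lambda high: 0 if high else 1)

print(table_A)  # {(0,0): 0, (0,1): 0, (1,0): 0, (1,1): 1}  -> AND
print(table_B)  # {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 1}  -> OR
```

Same device, same voltages, two different computations - and nothing about the physics picks out which one is ‘really’ being performed.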
The problem now iterates through networks of such gates—any such network with n inputs and m outputs can be ‘read’ as implementing any Boolean function from n-bit strings to m-bit strings; and hence, what computation is being performed is entirely in the eye of the beholder. Thus, there’s no absolute sense in which you can point to a device and claim ‘this device executes that computation’; and hence, in particular, you can’t point to the posited parent-universe computer and claim that it simulates our universe.
You should, it’s excellent! As are most things by Greg Egan, really. Here’s an excerpt from it on the author’s webpage (which contains many more interesting things: besides other short stories, also explorations in recreational mathematics and physics).
I see your point, but I think objections can be raised against it. Suppose, as a first step in this objection, that one has created a sufficiently mature AI that it appears to exhibit sentience and the exercise of free will in its interactions with our ordinary world. Or alternatively – and I would say equivalently – consider a human from the standpoint of the computational theory of mind. Certainly no one would argue (or at least, shouldn’t argue) that neither one constitutes an actual intelligence merely because any number of interpretations could be put on the underlying computational processes. It doesn’t matter, because we make our judgments based on behavioral observations.
If one grants that one has thereby created (in the first case) a true sentient intelligence, and satisfied ourselves to that effect by how it interacts with our world, then let’s consider the case where it only interacts with its own simulated world, which for added fun also has a multitude of such autonomous intelligent entities – all of them, let’s remember, we’ve already agreed possess consciousness. What’s the difference? I think I understand your point precisely – here all we have is a bunch of logic gates and electrical pulses flying around, whose effects bubble up to the real world through various symbolic interpretations, so how can we say we have conscious beings in a simulated universe? I would argue: precisely the same way we can say it about ourselves – behaviorally. It requires us to struggle with what we really mean by sentience and consciousness, but if we can observe autonomous beings behaving in all the ways we deem necessary to conform to those traits, how can we deny that there is a reality in which they are, in fact, doing so?
It seems to me somewhat analogous to the much more trivial “Chinese Room” problem proposed by John Searle. He tries to argue that the hypothetical philosopher in that room is blindly following rules and therefore no actual “understanding” of Chinese exists – it’s all a function of symbol interpretation by the outside observer, quite literally. Yet the observation is everything, and I would argue that clearly a genuine understanding exists if the observer is satisfied that he is carrying on a Chinese dialog.
So again, it doesn’t matter if the output of our simulation is a set of symbols that may be subject to an arbitrary number of possible interpretations. It matters only that one of those interpretations – the one we have chosen to make our observations with – reveals consciousness. This may lead to some interesting ontological questions about what it is, exactly, but that’s a different question. If we agree that consciousness can be a property of a computational system, then it’s a consequence of information and not a physical manifestation. That we have the right information and the right computational model is evidenced in the observed behavior.
We might even devise some clever way to try to communicate with these creatures. If they’re a lot like us, they’d be the ones turning into religious fanatics!
P.S.- thanks for the comments about Permutation City – definitely going to read it. I knew nothing about Greg Egan but it seems that he’s a “hard science” scifi writer. I love that – it’s all too rare. According to Wikipedia, he also sounds like a weirdo – never signs a book, never allows his picture to be published!
Well, one thing upfront: I do believe it’s possible to create an artificial, conscious being (after all, we ourselves are the best examples of such beings; whether it’s the product of evolution or of many hours of painstaking work of roboticists doesn’t seem to matter much—though I suppose a teleofunctionalist like Ruth Millikan would disagree). I don’t believe it’s the case that the consciousness of this being is due to computation, due essentially to the above argument. (Perhaps of some interest, the guy who originally came up with computationalist functionalism—Hilary Putnam—later disavowed it because of exactly such considerations.)
Now, as for how that might work, several proposals exist in the literature. Consciousness, or more accurately for our purposes, intentionality, might necessitate embodiment—such that a simulated mind might not be conscious, but a mind controlling a robotic body would be. (Note that one can be quite lenient about what ‘embodiment’ constitutes—any interaction with the world might suffice, such as is for instance given by the questions and responses in a ‘Chinese Room’-like scenario—indeed, since all we get from our senses are signals, messages, in some sense, we’re basically all in such a scenario.)
Or, it could be the case that there’s some aspect of a physical system that doesn’t carry over into a simulation thereof—its haecceity, or ‘primitive thisness’, if you will. A simulation ultimately only instantiates the relational properties of what it simulates; there might be intrinsic properties that fail to be properly instantiated in a simulation, which might include those responsible for proper intentionality.
Or, meaning might be intrinsically tied to action: some representation means something to us if it prompts us to act in some certain way—act in the world, that is, not act in a simulation, since then, we would need prior intentionality to interpret that simulation as one in which we act a certain way, which is circular.
So in order for a being to be conscious, it might be the case that it needs to be connected up, in the appropriate way, to the natural world; hence, there would be no conscious beings in simulations—unless, of course, you let those simulations interact with the real world, in which case, they’d really just be a part of that world.
I agree with this: a being that acts like a conscious being ought to be considered to be one, even though believers in p-zombies and the like would beg to differ.
That’s where things go wrong: let’s say we have a conscious mind, properly embedded into a robot, with all the characteristics of a conscious being—it acts properly in response to all stimuli, say, it cries ‘ouch!’ when it stubs its toe, it weeps at the sublime beauty of a rainbow, it’s fun at dinner parties, and so on. Now, we upload it into the simulation.
You say it would interact with the simulated world in the same way as it interacts with ours: but the simulated world is just one possible interpretation of the 0’s and 1’s (or whatever other symbols) being shunted around in the computer—there’s no world for it to interact with. Moreover, its own mental states, now no longer grounded in the appropriate interaction with the world, are themselves just mental states under one possible interpretation; under another, it’s just an amazingly detailed calculation of the exact topography of my navel. So there’d then no longer be a conscious mind, capable of grounding the interpretation of the simulated world, and likewise, no world to ground the references of the symbols within the erstwhile mind.
It’s in fact one of the arguments Searle has used (on occasion) to bolster his interpretation of the Chinese Room: the ‘no semantics from syntax’-argument. One way of his of putting it is to point out that one can interpret the wall in his office to implement the WordStar-program (the argument’s a bit dated), to highlight that what we consider a computation to be about is completely arbitrary (and since our minds aren’t, computation is the wrong sort of notion to ground a theory of consciousness).
This stance commits you to a very extreme form of panpsychism: we can interpret basically every system to instantiate every possible computation. Hence, not only is there conscious experience associated with that rock in your backyard, every possible conscious experience is associated with it. This has worse consequences than the simulation hypothesis: you could be a ‘simulation’ running on a rock in somebody’s backyard, or in a hot cup of coffee, or the heart of a star.
To me, this simply doesn’t make too much sense. There is an interpretation (an encoding) under which the random typing of a monkey on a typewriter is exactly equivalent to the works of Shakespeare. Should we give the chimp a book contract?
I have a hard time believing such a simulation could be made reversible - or that the computation engine could be 100% efficient.
Such a simulation clearly assumes that at least some simulated entities are simulated to be self-aware. If you don’t buy that, you don’t buy the simulation hypothesis, irrespective of its other problems. What the creator of the simulation is getting out of this is unclear. However, it is certainly possible to do a computation and throw the results away before they are ever output. I’m not sure what the meaning of this is.
The mind instantiated in the robot interacts with the natural world through a series of digital inputs and outputs: the readings of its sensors come in as input, and its outputs cause actions which affect the world.
If you place this mind inside a simulation and simulate a sufficient amount of the natural world, so that the simulated world provides the same digital inputs the sensors would have provided, and the mind’s outputs are read and change the simulated world in the same way they would have changed the natural world, what is the difference?
When we simulate computers and devices, we are simulating the “natural” voltage and electron flows in our simulated world - and it works quite well.
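As a sketch of the point about identical inputs and outputs (everything here is invented purely for illustration): the ‘mind’ only ever sees numbers coming in and sends numbers out, and nothing on its side of the interface distinguishes physical sensors from a world model.

```python
class Mind:
    """Toy controller: it only ever sees numbers in and sends numbers out."""
    def step(self, sensor_readings):
        # e.g. turn toward the brightest reading
        return {"turn": sensor_readings.index(max(sensor_readings))}

class RobotBody:
    """Wired to physical sensors and motors (stubbed out here)."""
    def sense(self):
        return [0.2, 0.9, 0.1]          # stand-in for real sensor values
    def act(self, command):
        print("motors:", command)       # stand-in for real actuators

class SimulatedWorld:
    """Produces the same kind of readings and consumes the same commands."""
    def __init__(self):
        self.light_pos = 1
    def sense(self):
        return [0.9 if i == self.light_pos else 0.1 for i in range(3)]
    def act(self, command):
        self.light_pos = (self.light_pos + command["turn"]) % 3

def run(mind, world, steps=3):
    for _ in range(steps):
        world.act(mind.step(world.sense()))

run(Mind(), RobotBody())        # the mind driving a physical body
run(Mind(), SimulatedWorld())   # the very same mind, same interface, inside a sim
```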
Nothing especially complex, no. But I don’t think any of us here can claim to have written a simulated universe.
[quote]
Your code for telling whether to render something would have to take into account both sentient beings seeing the picture, as well as cameras.
[/quote]
Or, you only bother modelling the output of what the camera would have seen, if it’s ever required.
Maybe, but who’s to say that speed or simplicity are even design goals of the simulator? Simulating the viewpoints of the sentient observers could actually be the only cases, not special cases.
Inconsistent with what though? As long as the only parts being simulated are consistent with each other, nobody would be able to tell.
Real life flight simulators do this. You keep making absolute declarative statements, based on your knowledge, but you do not have universal knowledge.
-
You’re wrong. Real, existing, current, simulators do vary the detail level based on where the subject is looking.
-
You have no way of knowing what the alien tech-level is, how it works, whether it is even “tech” at all. You’re (pardon the pun) projecting.
But if you model Fred to the atomic level, with a working model of his brain, including an operating pain center, and stimulate that pain center via simulated nerve impulses…then you have called into existence a Fred whose ankle hurts.