Let’s assume, for argument’s sake, that your world is actually a simulation. What I’m looking for are the mechanics of creating the “simulated world” you live in.
We don’t know the nature of the technology or the purpose behind it. Imagine you’re the set designer of The Truman Show, designing the set so that the person living inside doesn’t know they’re in a simulation.
We can make a couple of assumptions:
You are real
We are unable to extricate ourselves from the simulation. That is to say, we can’t or don’t know how to summon the exit portal, take the red pill, rip off the Oculus-like device, or yank out the jacks that connect us.
Since no one has left the simulation and returned, we don’t know if dying causes you to wake up at the next layer (like Inception) or not at all (like The Matrix).
Whoever created the simulation is either human like us, or knows enough about us to make what seems like a reasonable human habitat.
The “real” world must be far more advanced than the world being portrayed in the simulation (I haven’t seen any lag or clipping so far…except when I’ve been drinking).
The entire universe doesn’t need to be simulated, i.e., you don’t need to model every molecule of the chair you are sitting in.
Similarly, when you walk or drive/pilot a vehicle, you could be roaming a single map…but when you board an airplane, boat, or other vehicle you don’t control directly, you might just be hanging out while a new map is loaded (see the sketch after this list).
It is possible that at one point, you existed in the “real world” and someone seamlessly “Vanilla Sky-ed” you into the simulated one.
It is entirely possible that some or even all other people are simulated, including people you’ve known a long time. Or some might be other “players”.
We don’t know if your memories can be altered. Assuming the answer is no, any changes or alterations must happen while you sleep and must be consistent with your memories prior to going to bed (i.e., you can’t just wake up with a new job or wife). “Yes” is much more complicated.
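To make that “only simulate what’s observed” assumption concrete, here’s a toy sketch (Python; the chunk size, view radius, and function names are all invented for illustration, not anything from the thread) of how a simulator might lazily load only the patch of map around you, and stall behind a “loading screen” when you jump to a far-away region:

```python
# Toy sketch: simulate only what the inhabitant can currently observe.
# Chunk size, view radius, and names are invented for illustration.

CHUNK_SIZE = 1000          # meters of world per map "chunk"
VIEW_RADIUS = 2            # chunks kept in full detail around the observer

class World:
    def __init__(self):
        self.loaded = {}   # (cx, cy) -> chunk data, generated on demand

    def _generate_chunk(self, cx, cy):
        # The expensive part: only paid for when the observer gets close.
        return {"terrain": f"terrain@{cx},{cy}", "props": []}

    def observe(self, x, y):
        cx, cy = int(x // CHUNK_SIZE), int(y // CHUNK_SIZE)
        visible = {}
        for dx in range(-VIEW_RADIUS, VIEW_RADIUS + 1):
            for dy in range(-VIEW_RADIUS, VIEW_RADIUS + 1):
                key = (cx + dx, cy + dy)
                if key not in self.loaded:      # lazy load on first approach
                    self.loaded[key] = self._generate_chunk(*key)
                visible[key] = self.loaded[key]
        # Everything outside the view radius is never modeled at all;
        # no need to simulate every molecule of the chair.
        return visible

world = World()
world.observe(0, 0)           # walking around one map
world.observe(4_000_000, 0)   # a cross-country flight: the 6-hour "loading screen"
```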
So some thoughts / questions:
How big a world map (or maps) do they need to contain you? (I.e., how far do you think you’d need to walk until you reach the “debug zone” edge of the map?) Keep in mind that flying between New York and San Francisco might actually be two separate maps with a 6-hour “loading screen.”
How many “bots” and NPCs do you typically interact with?
How many of these characters do you think are other “real people” like you, vs. simulated AIs?
Based on your lifestyle, do you think they have “safety protocols” engaged?
How much content, in terms of shows, books, etc., do they need to generate? Do you think “they” are generating it, or other real people like you?
Is there a point in time where you think you might have been inserted into the simulation?
Is there anyone you suspect might actually be some sort of “Tech Support”?
Exhibit A for me is those first two up there. The vast majority of humankind behaves in such rote, predictable, and even mindless ways that it often blows my mind. When I get around to it I will be posting a thing (in MPSIMS) about how mechanically and thoughtlessly most people drive on the highway (after my recent 6,000-mile sojourns in the past month, the second one hurricane-induced). Hell, I see it here in political threads all the time, so knee-jerk on a poster-by-poster basis (and not just limited to trolls, though it’s most visible in their behavior) that it is often bitterly hilarious (and 100% predictable).
As far as “safety protocols” are concerned, about a year ago I fell in my work parking lot. Hard. I should have broken my cheekbone at the very least, or gotten a concussion. Instead it was only slightly sore, and the pain was all gone by the next morning. I also jumped powerfully off a trampoline 20 years ago, but somehow “instinctively” rolled with the impact and was 100% fine. My sister was incredulous.
So much of pop culture, music especially, is so repetitive and dull now (yes, some notable exceptions) that it does seem like someone is simply restirring the pot and seeing what remnants boil to the top. And of course the “bots” eat it all up anyway.
Finally, I almost drowned in Puerto Rico when I was 8. I recall seeing my mom going crazy on the shore as a rip current pulled me out and I lost my footing. I “woke up” on a surfboard 100% fine (she got some surfers to rescue me), not a single drop of water coughed up. Yes, I read later about dry drownings, but a lot of those victims never wake up again.
Taking just your thread-title question: in The Truman Show, the makers had no holographic technology at their disposal and so had to use mechanical (and psychological) means to keep Truman away from the edges. In Star Trek: The Next Generation (and the following shows), the simulation did have physical edges: the walls of the holosuite. But the technology was such that a user could be led on long journeys that never reached the edge, via “reasons” inserted into the simulation that would keep the user from walking in a straight line.
Presumably the simulation you’re positing would use such means to keep any inhabitant from reaching an edge. (This would support the idea that most “people” in the simulation would be non-sentient drones, so that the software would have only a few actual people that would need to be redirected away from the edges.)
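Here’s a toy sketch of that redirect-away-from-the-edge trick (Python; the map size, margin, and canned “reasons” are all invented for illustration):

```python
import random

# Toy sketch: when a real inhabitant drifts toward the edge of the set,
# the simulation inserts a plausible "reason" to turn back, so they never
# experience it as hitting a wall. All numbers and reasons are invented.

MAP_LIMIT = 10_000          # meters from the center to the hard edge
REDIRECT_MARGIN = 1_000     # start nudging this far before the edge

REASONS = [
    "road closed for construction",
    "sudden rainstorm rolls in",
    "an old friend calls and asks you to turn around",
]

def step(position, heading):
    reason = None
    if abs(position) > MAP_LIMIT - REDIRECT_MARGIN and position * heading > 0:
        # Heading toward the edge: reverse course and supply a cover story.
        heading = -heading
        reason = random.choice(REASONS)
    return position + heading, heading, reason

pos, heading = 9_200, 100
for _ in range(5):
    pos, heading, reason = step(pos, heading)
    print(pos, heading, reason)
```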
Ever hear of the ‘Somebody Else’s Problem’ field? It was posited by the writer Douglas Adams as a way to protect something unusual from being spotted; it compels all observers to decide not to worry about what they see, and thus they ignore anything weird about it.
If the simulation can ‘hack’ our mental processes, it could get a lot of mileage by not bothering to render anything we’re not paying attention to, and by preventing us from noticing the holes. This could cover anything up to and including objects that are in the room with you. Ever notice how sometimes a fly is buzzing around, and then it appears to completely vanish for a while until it suddenly lands in your ear? That’s your brain suddenly dropping the ball on ‘noticing’ (rendering?) it. A similar effect explains how you can completely overlook something while staring right at it.
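A rough sketch of that “don’t render what nobody is attending to” trick, assuming the simulation can read (or at least estimate) your attention; everything here is hypothetical, Python used only for illustration:

```python
# Hypothetical sketch: render only what the observer is attending to, and
# apply a Somebody Else's Problem nudge to whatever gets skipped.

def render_frame(objects, attended_ids):
    frame = []
    for obj in objects:
        if obj["id"] in attended_ids:
            frame.append(render(obj))      # full detail for attended objects
        else:
            suppress_noticing(obj)         # the fly "vanishes" for a while
    return frame

def render(obj):
    return f"rendered {obj['id']}"

def suppress_noticing(obj):
    # Cheaper than rendering: just flag the mind not to go looking for it.
    obj["noticed"] = False

objects = [{"id": "fly", "noticed": True}, {"id": "chair", "noticed": True}]
print(render_frame(objects, attended_ids={"chair"}))   # ['rendered chair']
```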
Alternatively, the world could be being projected directly to your senses by a helmet/body suit, and you could simply be suspended in a liquid and not go anywhere. Which is what they did in the Matrix movie, come to think of it, albeit with a brain jack giving you sensations rather than anything external.
Very true. But interactions between simulation-inhabitants are vastly more complex to engineer in a “people in vats” scenario than in a simulation in which the inhabitants are physically mobile.
Of course this is assuming that the simulation-creator(s) want interactions to occur.
If the focus is more on ‘how can you tell if you’re inside a simulation?’ and not on the question of who created the simulation (and how they engineered it), then the question of How Big A Set Is Needed might be moot. But if the focus is on the engineering, then of course it would be important to know whether the simulation-inhabitants were really moving around in a physical space, as opposed to lying in a vat.
It seems to me that the most likely scenario is that you don’t have a body at all, and just exist as data in a computer somewhere. The simulation could be paused at any time, like Grand Theft Auto, and the simulators could think about what to do next.
The simulation could be entirely based around you, with every other person you meet a procedurally-generated AI entity; or it could include the entire world, or at least all the humans (and maybe a good fraction of the higher animals too). Or it could be anywhere in between.
But nobody you would ever meet would have a real body, so you couldn’t ‘wake up’ to experience the ‘real world’ outside the simulation. The best you could hope for is to be downloaded into a robot body of some sort, but that could be fun if the body were sophisticated enough.
I don’t see why. People interact with one another in World of Warcraft without ever leaving their chairs, much less crossing continents to be within hand-shaking distance.
Of course this requires sensory input to be simulated pretty thoroughly, to allow two disparate people to think they’re making physical contact when they’re not. And I find myself wondering whether this sort of sensory-input simulation is within the bounds of the OP. Rule 1 is “You are real,” and that may mean that when you look down at your arm, what you’re seeing is your real arm. If so, then the whole sensory-input thing is out, as is being stuck in a pod. The minimum then would be a physical space large enough to simulate standing in an open field with no visible obstructions all the way to the horizon, which would be a pretty big simulator. Not necessarily one as big as the planet (after a certain distance binocular vision stops giving you useful distance information, and at that point there could be a painted wall), but still pretty darned big.
And there would have to be lots and lots of shenanigans to deal with people doing cross-country drives. Maybe a giant scrolling/treadmill-like floor, to keep them near the center of the space, if you could somehow avoid detectable momentum shifts when you move it.
That’s true for this message board, too. We interact in real time. World of Warcraft and other immersion ‘virtual reality’ experiences simply add some dimensions (visuals; voice). But it’s all mediated by a keyboard. No one would mistake it for ‘reality.’ (As, I think, you yourself concluded by the end of your post.)
I’m always fascinated by the ways writers try to establish the existence of a shared virtual experience, often in the form of ‘sharing a dream’. In both the 1984 Dreamscape and the 2010 Inception, for example, there were physical tubes connecting those sharing the dream; in other words, the mechanism was basically hand-waved away. (“A drug” can put two people inside the same dream? Okaaaay.)
Yeah, I think so; the ‘you are real’ reaction would be incredibly complex to engineer, let alone the added complexity of physical contact with other simulation-inhabitants (which is what I was referencing).
I’ve driven cross country 3 times. From NJ to New Orleans 5 times. From NJ to Florida 5+ times. Out to Ohio twice. Up to Boston area 6+ times. So in my case the simulation would be tougher than just Earth I think. I’m discounting all my flights as easily simulated. I’ll even ignore my time spent crossing the Pacific & Indian Oceans as maybe those could have been also.
It is certainly theoretically possible that somebody could make a simulation that would convincingly pass as reality. The existence of the Oculus Rift is entirely predicated on the idea that you can simulate 3D in a way that won’t make people puke. You could argue that they’ve failed, but it’s easy to see how the concept is reasonable.
Honestly the hardest sense to simulate would be touch - you can’t just strap something simple on a person like glasses or headphones or scent-dispensing nostril plugs. You’d need a full-body suit - but even that wouldn’t really work.
What would work, though, would be bypassing the physical senses entirely. You “unplug” the brain from the body and replace all sensory input therefrom with simulations. These simulations would be replicating the nerve impulses and such that your sensory organs would be producing normally, and could thus utterly change your perception of reality, up to and including touch, pain, weight, and balance (if you intercepted the signals hard enough).
Again, the Matrix has the idea of this kind of simulation at its bedrock. And it’s a conceptually sound idea.
Never seen Dreamscape. It’s been a while since I watched Inception, and I’ve mostly forgotten it since it kind of sucked. But wasn’t the drug just to knock them out, and didn’t they use technology to connect the minds?
Simulating physical contact with another person would be no more or less complicated than simulating your interaction with your toaster. Unless you’re specifically talking about, er, ‘intimate’ contact, in which case it would be no more or less complicated than simulating intimate contact with your toaster.
Also, I feel I should totally point out that even if you are a completely simulated entity, only existing as a pattern of signals in a computer’s memory, you are still, by the standards of the threads that inspired this one, fully and completely “real”.
This is not to say the OP would agree with that sentiment, mind you.
It was a simple IV line, I believe, and a drug that flowed to all the people connected–at the wrist!–when a button was pushed on a little suitcase-sized device.
Dreamscape was similar, though I believe the physical connection (as with The Matrix) was sited at the skull.
It’s major hand-waving.* With The Matrix, at least, the implication was that some sort of computer was part of the connection–one person’s skull-spike, attached to computer, attached to another person’s skull-spike.
I would agree with this for ‘one person in a simulation with a lot of drones.’ But how about for two actual-minds inside the same simulation?
There’s the complexity.
*I should say that I believe Nolan’s intention in Inception was to depict a dream had by a man on a long flight, rather than to posit a workable shared-dream technology. It’s the only way it makes sense.
OK, then, the simulated world need only encompass a couple of square meters. It’s been at least five seconds since I’ve looked away from my computer screen, and there’s a blank wall behind it. If I was inserted into the simulated world at some point in the past five seconds, then I haven’t yet had a chance to notice the error messages and unrendered background just off to my side.
Similar arguments go for the case where I’ve always been in the simulation, but the simulators can alter memories. Maybe everything I remember past fifteen seconds ago (I’ve spent some time typing since I said five) is an implanted memory.
Okay, sure. In my defense, I did say the movie kind of sucked.
About the only way you could justify popping people into a shared consciousness via a drug would be to posit a sort of telepathy that the drug allows them to access. At that point, one of the people whose brains are involved would be providing the ‘simulated’ environment for everyone to run around in. (In Inception that was supposedly the girl’s role - that was the whole deal with her making the city fold in on itself, in the only interesting visual effects in the movie. Why she didn’t make the environment warp itself in ways favorable to the invaders during the real caper is, of course, [del]a giant plot hole[/del] [del]a sign they ran out of special effects budget[/del] largely unexplained.)
Having a physically plugged in computer connection is way better, narratively speaking.
The toaster, when it pops up, could hit your fingers and push them up. This means that the connection is going to have to account for physical objects physically acting on you independent of your will, based on their own reasons for moving, and you have to be able to feel it. You also can see the toaster, and hear the sounds from it when it snaps; the toaster is simulated, so the simulation needs to feed the images and sounds of the toaster to you.
Another person will interact with you the same way the toaster does - you feel them, see them, hear them, and taste them. In all these cases the simulation can track which movements they’re trying to make, and while the simulation translates those motions into that person’s view of the simulation, they’re also transmitted into your view and acted out by an avatar of that person near you that looks like Fred Flintstone. Similarly, anything they try to say will be projected in the simulation from the vicinity of their avatar’s mouth, in Fred Flintstone’s voice. Really, there’s nothing about their avatar’s interaction with you that distinguishes it from the toaster as far as you’re concerned; the only difference is the initial source of some of the input signals: the pod next door, as opposed to pure simulation.
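As a toy sketch of that pod-to-pod relay (Python, purely illustrative; the thread doesn’t specify any actual mechanism), the simulation just treats each pod’s intended motions and speech as input and replays them as avatar activity in everyone else’s view:

```python
# Toy sketch: each pod submits intended actions; the simulation replays them
# as avatar motion/speech in every other pod's rendered view.

class Simulation:
    def __init__(self):
        self.pods = {}            # pod_id -> that pod's view (events seen)

    def add_pod(self, pod_id):
        self.pods[pod_id] = []

    def submit(self, pod_id, action):
        # The source pod's intent is acted out by its avatar in every other view.
        for other, view in self.pods.items():
            if other != pod_id:
                view.append(f"avatar-of-{pod_id} {action}")

sim = Simulation()
sim.add_pod("you")
sim.add_pod("neighbor")
sim.submit("neighbor", "waves and says hello")
print(sim.pods["you"])   # ['avatar-of-neighbor waves and says hello']
# From your side this is indistinguishable from the (purely simulated) toaster;
# only the origin of the input signals differs.
```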
Same here. I’ve driven to several destinations more than 800 miles away in one stretch (so they couldn’t reload the maps while I was asleep). So the magic map would need to stretch from Seattle to San Francisco, New York to Chicago, Berlin to Florence, etc.
Presuming this is happening with you existing in physical reality, Truman Show-like (per the thread title), they merely have to treadmill the ground in the opposite direction from the way you’re going, to keep you in place, while sliding the new scenery past you as it comes into view. This would probably require some sort of holodeck technology to pull off; it would be a bit tricky to have interns manufacturing all the terrain out of styrofoam as fast as you approached it, while also disposing of the stuff you left behind as it falls off the back of the conveyor.
The really problematic part, which even holodeck technology would be hard pressed to deal with, is the sheer physical distance you can see. Simulating open spaces by holoprojecting an image on a wall works on TV, but in real life binocular vision would suss that out pretty quickly, even with a treadmill system in the floor to keep you roughly in the center. You’d need a big skydome.
Binocular vision only works out to 30-ish feet: a projection onto the walls of the dome would work just fine beyond that. And a holographic projection would work at any distance: if you had the technology to render computer-generated holograms in real time (which is perfectly allowed by the laws of physics, and hence a mere engineering problem), you could put those distant mountains (and the cornfields in between, and the road signs, and the yellow stripe, and most of the interior of your own car) in a box just big enough to contain the driver’s seat.
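A back-of-the-envelope check on that distance limit, assuming a roughly 6.5 cm interpupillary distance and a stereo acuity on the order of one arcminute (both numbers are my assumptions, not anything from the thread):

```python
import math

# Back-of-the-envelope: the smallest depth difference binocular disparity can
# resolve at a given distance. The baseline and acuity figures are rough
# assumptions for illustration.
EYE_SEPARATION_M = 0.065                    # ~6.5 cm interpupillary distance
STEREO_ACUITY_RAD = math.radians(1 / 60)    # ~1 arcminute of disparity

def depth_resolution(distance_m):
    # Delta-d ~= d^2 * acuity / baseline
    return (distance_m ** 2) * STEREO_ACUITY_RAD / EYE_SEPARATION_M

for d in (3, 10, 30, 100):                  # meters
    print(f"at {d:>3} m, resolvable depth step ~ {depth_resolution(d):.2f} m")
# By ~10 m (about 30 feet) the resolvable step is already ~0.4 m, so a dome
# painted or projected beyond that range is at least plausible.
```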