Simulated Reality / Wetware / "Matrix" etc

(This may be better suited to IMHO.)

Do scientists or computer programmers think that they will ever be able to develop a computer interface that will give us the ability to simulate any experience that we desire? (By this, I mean an experience so real that we cannot tell the difference between the computer-generated world and our “real” existence.) If so, how many years distant would this technology be?

Such an interface would have to involve direct manipulation of neural impulses by a computer. No matter how good the descendants of headphones and VR goggles get, you’ll still know you’re wearing headphones and goggles. But if you can directly mess with the brain (in a very precise way, of course), then as far as the “player” knows, they’d actually be wherever you programmed them to be. The technology to do this is a long way off, though. That said (and annoyingly, I can’t find an online cite for this even though I know I read about it online), Japanese (I think) researchers recorded the brain activity of a cat’s visual cortex a few years ago and were able to reconstruct that data into video of what the cat had been looking at. So if that process can be refined and reversed, you’ve essentially got the system you’re talking about.
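To make the "reconstruct video from recorded activity" idea concrete, here's a toy sketch of linear stimulus decoding (often called reverse correlation), which is the general flavor of technique such experiments use. Everything here is illustrative: the sizes, the noise level, and the assumption that neurons respond as fixed linear filters are stand-ins, not details from the actual cat study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 8 * 8        # a tiny stand-in "video frame"
n_neurons = 200         # recorded units
n_frames = 2000         # training frames

# Hypothetical encoding: each neuron responds as a fixed linear filter
# over the image, plus noise. The decoder never sees these filters.
true_filters = rng.normal(size=(n_neurons, n_pixels))
stimuli = rng.normal(size=(n_frames, n_pixels))
responses = stimuli @ true_filters.T + 0.1 * rng.normal(size=(n_frames, n_neurons))

# Fit a linear decoder by least squares: stimulus ~ responses @ W.
W, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct a held-out frame from its neural response alone.
test_frame = rng.normal(size=n_pixels)
test_response = test_frame @ true_filters.T
reconstruction = test_response @ W

# Correlation between the true frame and the decoded one.
corr = np.corrcoef(test_frame, reconstruction)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

The point of the sketch is the direction of the mapping: decoding goes from recorded activity back to the stimulus, and the "reverse and refine" step the poster imagines would mean running a learned mapping the other way, from a desired stimulus to the activity pattern to impose.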

Arthur Clarke’s ‘Brainman’ is such a system, and is described in his novels 3001 and The Hammer of God.

Anything is possible if you hook up the right electrodes in the right places. IANANeurologist, but my WAG would be that even in the very distant future there isn’t a very good chance of it happening.

I see only two major obstacles to this.

  1. We would have to know what every millimeter of the brain does to perception. We have a vague idea now of which area does generally what, but I imagine that if you were off even a little with the electric charge, it could greatly alter the experience. “Oops, that’s the kissing-your-grandmother part of the brain, not the cool karate part.”
    This assumes every kind of experience is stored in exactly the same location in every brain. Not likely.

  2. Let’s assume we knew what every part of the brain did to an exact science, and it just happened to be the same in everyone. How would we get something in there that stimulates, without fault, exactly the parts you wanted it to?

Like I said before, this is just my WAG, and I’m sure someone who knows a lot more about this than I do will come along shortly to correct me.

I disagree with #1. IMHO, we wouldn’t have to catalog each part of the brain; we would only need to learn the way the data is sent to it and connect it up to the parts that receive that data. There would be no reason to learn which parts store what information, only what the information looks like and where to send it to.

If you know exactly how the eyes transmit pictures to the brain and how touch works, you don’t need more than a fairly superficial understanding of how the brain works.
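The "learn the input format, not the brain" idea can be sketched in a few lines: encode an image the way a sensory channel might plausibly carry it (as firing rates on fibers), with no model at all of what the cortex does with it downstream. The rate ceiling, time window, and Poisson spiking here are illustrative assumptions, not physiological fact.

```python
import numpy as np

rng = np.random.default_rng(1)

def image_to_spike_counts(image, max_rate_hz=100.0, window_s=0.1):
    """Map pixel intensities in [0, 1] to Poisson spike counts, one 'fiber' per pixel."""
    rates = np.clip(image, 0.0, 1.0) * max_rate_hz   # brighter pixel -> faster firing
    return rng.poisson(rates * window_s)             # spikes emitted in the window

image = rng.random((4, 4))          # stand-in for a tiny camera frame
counts = image_to_spike_counts(image)
print(counts)
```

Whatever sits at the other end of those fibers never needs to be understood, only fed a signal in the format it already expects; that's the whole of the poster's point.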

When you enter the area of directly hooking your brain up to a computer, you enter an ethical and philosophical debate.
“Are we morally justified in tampering directly with the brain?”
“What is consciousness, anyway?”
Technology and science don’t really enter the discussion any longer.

It’s already happening - or at least starting. I have read about a couple of projects that stimulate the optic nerve directly, enabling some blind people to ‘see’. Currently, of course, the picture quality is not good - but the principle works.
You can read about “Electronic eye for blind man” from BBC News, or “Toward an Artificial Eye” from the IEEE.

Russell

Ohh, I suppose that would be easier! Any specific “memories” could be put into the computer beforehand and then transmitted into the brain.

Why do I always do things the hard way?

They already have. You’re not really sitting on your butt staring at a computer screen, but you’re in heaven, surrounded by beautiful bods, food, entertainment, etc. But your brain has been tricked into believing this is all there is. Sad, really.

There are still some huge technological hurdles to overcome that may prevent us from ever reaching the goal. As has been noted, the “easy” way to accomplish the simulation is to intercept the input pathways, but there are two main challenges to this:

(1) Blocking the normal inputs. We need some sort of neural switch that can be inserted into EVERY input. You can’t allow some inputs to pass through, or the experience would seem less than real.

(2) Massively parallel input from the simulator. Today’s technology can handle inputs and outputs in the tens of thousands, and takes quite a bit of volume to accommodate them. To do what you want, we need parallel inputs on the order of billions (the eyes alone would take approximately 250 million inputs). You might be able to scale that back some using approximations that wouldn’t seriously degrade the sensory perceptions, but we would nevertheless have to squeeze all of these parallel signals and ‘connectors’ into just 30 or 40 millimeters.
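The ~250 million figure for the eyes is roughly the photoreceptor count, and the "scale back with approximations" option corresponds to tapping a later stage of the pathway instead. A back-of-envelope version, using commonly cited textbook figures (the point is the orders of magnitude, not the exact numbers):

```python
# Photoreceptors per eye: roughly 120 million rods plus 6 million cones.
rods_per_eye = 120_000_000
cones_per_eye = 6_000_000
photoreceptors = 2 * (rods_per_eye + cones_per_eye)
print(f"photoreceptor-level inputs: {photoreceptors:,}")    # ~252 million, both eyes

# One way to approximate: tap the optic nerve instead, which carries
# roughly a million fibers per eye - the retina has already compressed
# the image by then.
optic_nerve_fibers = 2 * 1_000_000
print(f"optic-nerve-level inputs:   {optic_nerve_fibers:,}")
print(f"reduction factor: ~{photoreceptors // optic_nerve_fibers}x")
```

Intercepting at the optic nerve cuts the channel count by about two orders of magnitude, which is why it is the natural place for the "approximation" the post mentions.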

Many people might blithely assume that it’s just a matter of time before we shrink our technology small enough to accommodate the second issue, but we are already hitting some very practical physical limits, and we’re many orders of magnitude away from the goal, so it may not even be achievable…

There’s one other potential hangup: Heisenberg. This guy often seems to ruin the fun for the rest of us. We could very well run into limits imposed by the Heisenberg Uncertainty Principle. The signals sent from our sensory inputs have two characteristics that make them possibly subject to the HUP: they are small and they are temporal (time-dependent). We may not be able to make accurate enough measurements to reproduce accurate enough artificial stimuli - so the simulations may always seem artificial.
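For reference, the relation being invoked here, in its energy-time form, is:

```latex
\Delta E \,\Delta t \;\ge\; \frac{\hbar}{2}
```

Whether it actually bites at the scale of neural signals is exactly the open question: the smaller and briefer a signal, the tighter the bound on how precisely its energy and timing can be jointly pinned down, which is why "small and temporal" is the combination the poster flags.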

I’m sure there are many other challenges, but these are the ones that popped into my head…