Assuming that a technology as incredible as this would persist over generations, would it not, at the very least, dull the fight-or-flight instinct by essentially rendering it redundant?
You’re offering an upgrade to my nervous system? Sign me up. I don’t know why anyone would turn down Spidersense.
…
As I said, there’s no way the situation would stay the same long enough for evolution to make a difference. And if it somehow did, we’d just use genetic engineering to sharpen the fight-or-flight instinct right up again. Or just replace it with an AI function too, assuming we hadn’t already done so.
Basically, you are worrying about the long-term evolutionary effects of buggy whips on carriage drivers. The answer being: there is no long term for them.
This is my answer, although with perhaps a little more enthusiasm. I’d be all over that (like white on rice). I understand that this wouldn’t be precisely accommodated by the OP’s Google Body scenario, but one day last week the bus took an alternate route past my stop; I was dropped off in an unfamiliar area and have a horrible sense of direction. Without my phone providing GPS and a compass, I’d’ve been lost (in the rain!). How much cooler would it have been for all of that to be in my brain?
The time a car pulled out of a parking lot and ran into me while I was riding a bike: the driver’s new reflexes would’ve stopped her before she had time to process that I was there. No more absent-mindedly touching really hot plates and dishes, or slipping in wet shoes while walking on linoleum. Years ago I worked at DHL; not too long before I started, a night worker walked into a propeller (!), a pretty gruesome accident that this sort of technology would have prevented.
Heck, surely you could rig this thing to help with less dangerous accidents, such as leaving wallets or purses in a restaurant booth. Sign me up.
I uh er what? Where in the OP’s scenario was it put forth that this would be some mandatory thing against which “violent resistance” would be necessary? How exactly is your opinion not copacetic with “I don’t want it for myself or for my children”? Unless I suppose you’re implying that you would join an armed uprising against the mere existence of this kind of technology.
For all the threads on this board complaining about laptops loaded with spyware, hidden DRM on music, and teh Internets tracking what web pages people are looking at, I’m surprised how few people object to a technology that would quite literally be capable of overriding one’s brain.
I am more concerned about unnecessary monkeying with my brain. Just like I don’t think people should take prescription drugs if they are healthy and don’t actually have a problem that needs to be addressed (whether it’s antibiotics or Viagra), I don’t think a person ought to submit their motor functions to a computer unless there is some health-related issue that genuinely calls for it.
If such a computer would help disabled people, I’m all for it. But people shouldn’t monkey with the most basic functions of their biology without damn good reason for it.
Definitely my greatest fear would be spyware or bugs. I deliberately didn’t address that in the OP because I wanted there to be some room for controversy in the poll; I didn’t figure people would object to it even without the possibility of hacks/bugs.
No way I’d do it if there were significant potential for hacks. Bugs I might accept, depending on their severity; basically, if they weren’t worse than the bugs in my nervous system that they fixed, I’d be okay with it.
I have no trouble getting rid of a fight-or-flight reflex if it’s no longer necessary, any more than I had trouble getting rid of my wisdom teeth. I don’t oppose dental hygiene because it makes those spare teeth superfluous.
The “damn good reason” would be “avoiding lethal or debilitating accidents.” It’s hard for me to imagine a better reason.
I voted yes, but I want to caveat that I’d want control over the sensitivity. Prevent me from dying or losing a limb, yes. Don’t prevent me from scraping a knee.
How it could tell the difference, I have no idea. I fear becoming dependent on this thing to keep from being a total klutz. However, I’d hate to be lying on the ground bleeding to death in front of my kids and feel that dreaded “should’ve gotten the GBodyAI” feeling.
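If it helps to make the request concrete, here’s roughly what I’m picturing, in throwaway Python. Every name and number below is something I’m inventing for illustration; obviously nobody knows how the real device would estimate severity:

from dataclasses import dataclass

@dataclass
class PredictedInjury:
    description: str
    severity: float  # hypothetical 0.0-1.0 estimate produced by the device

# User-tunable knob: intervene only above this severity.
# 0.9 ~ "prevent death or lost limbs"; 0.1 ~ "also prevent scraped knees".
INTERVENTION_THRESHOLD = 0.9

def should_intervene(injury: PredictedInjury) -> bool:
    """Let minor scrapes through; override my reflexes only for serious harm."""
    return injury.severity >= INTERVENTION_THRESHOLD

print(should_intervene(PredictedInjury("scraped knee", 0.05)))   # False
print(should_intervene(PredictedInjury("severed limb", 0.95)))   # True

The hard part, of course, is the severity estimate itself, which I’m just waving my hands at.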
Such technology could certainly help soldiers in combat. Being able to snap-shoot much more quickly and accurately, or to identify the source of hostile fire faster, could give a man a real edge in a firefight.
But I would prefer something that would assist me, not control me. Like if the program simulated the results of years of combat training and experience rather than simply automatically moving my body.
We could avoid a lot of accidents if we wore helmets everywhere we went. But we don’t. A connection to one’s brain is so much more intrusive than a helmet that it isn’t even funny, and helmets aren’t known to monkey with one’s biology (other than being the second leading cause of hathead syndrome).
It just doesn’t make sense to me that someone would be more willing to have their brain partially controlled by a computer than put on a helmet for everyday life.
What? No it isn’t. A helmet is something you’re aware of every second you wear it: you constantly feel its weight, for example, and it may intrude on your field of vision. Your head gets sweaty and gross if you wear it too long.
This proposed technology would be something you were aware of only in the instant it protected you; otherwise it would stay completely in the background.
My major objection would be if it interfered with a person who voluntarily chose to cut themselves. If it doesn’t, then maybe.
Let’s say it’s overrideable in the same way that your current reflexes can be overridden.
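In throwaway Python (all of it invented for illustration), I mean something like this: just as you can consciously hold still for a flu shot despite the flinch reflex, sustained deliberate intent would beat the device’s automatic veto.

def resolve_action(device_veto: bool, deliberate_intent: bool) -> str:
    """Hypothetical arbitration between the safety reflex and the user.

    Like the flinch reflex, the device's veto fires first, but sustained
    conscious intent (holding still for an injection, or a cut the user
    genuinely chooses to make) wins out over it.
    """
    if device_veto and not deliberate_intent:
        return "blocked"   # reflex-style interruption of the movement
    return "allowed"       # the user's deliberate choice prevails

# A startled jerk toward a blade gets stopped; a chosen movement doesn't.
print(resolve_action(device_veto=True, deliberate_intent=False))  # blocked
print(resolve_action(device_veto=True, deliberate_intent=True))   # allowed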
I find it difficult to argue about the intricacies of how small or large magical technologies would have to be, how much power they would consume, or what colors they are available in; I’m referring to the general principle that devices that would control one’s brain and nervous system are “more intrusive” than a hat.
In what manner? For purposes of this scenario, we appear to be imagining a device for which a user would be indistinguishable from a non-user; there would be no apparent apparatus or energy cost to use. If it worked properly—which we seem to be assuming—then the only result would be avoidance of injury. So in what precise way would this technology be intrusive?
I can see how worry about the device being tampered with or hacked would be an issue, but it doesn’t sound terribly intrusive to me.
So essentially we’re talking about artificial instincts?
Absolutely not. As others have mentioned, it’s replacing a part of the brain. As such, it’s hard to know what kind of impact it might have on other aspects. Perhaps our laziness in responding to dangerous stimuli would also affect our responses to less dangerous stimuli, like basic hand-eye coordination.
Also, I’m unsure exactly how well it would work. Sure, if we’re just lazy, we may hurt ourselves, but I think a lot of these situations, if not a majority of them, come not from laziness but from a simple lack of information. That is, when I trip, it’s usually not because I didn’t lift my foot high enough, but because I just didn’t see whatever was in my way. If I don’t see it, how can any system adjust to compensate for it? The information simply isn’t there to be processed.
Most worrisome to me is that this sort of technology opens up a lot of morally dangerous territory. If a system can temporarily override our own actions, that leads into areas like opening ourselves up to complete body and/or mind control. As such, if it did exist, I’d be much more comfortable if it acted as some kind of alert system rather than directly overriding our actions.
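To put the distinction in rough throwaway Python (every name here is made up), the alert-only design I’d prefer looks like this, with override mode included only to mark where the mind-control worries begin:

from enum import Enum

class Mode(Enum):
    ALERT_ONLY = "alert"    # warn the user, never touch motor control
    OVERRIDE = "override"   # directly preempt the user's movement

def respond_to_hazard(hazard: str, mode: Mode = Mode.ALERT_ONLY) -> str:
    """In alert mode the device only informs; the body stays under the
    user's control. Override mode is where the moral trouble starts."""
    if mode is Mode.ALERT_ONLY:
        return f"ALERT: {hazard} ahead -- your move"
    return f"OVERRIDING motor control to avoid: {hazard}"

print(respond_to_hazard("propeller blade"))                 # alert only
print(respond_to_hazard("propeller blade", Mode.OVERRIDE))  # direct override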