You can do that?!
I’m not certain exactly what you’re referring to regarding the simple shapes, but in the case of mentally controlling robotic arms, there’s quite a bit of difference between a trained biofeedback setup with a cooperative subject who wants to control that arm and an adversarial interrogation setting. Basically, the CNN article from the OP is another example of pop science run amok, making some rather extreme but unfounded extrapolations from current research.
To further torture the office-windows analogy, let’s imagine that you and the accounting firm want to cooperate to achieve a common goal. They’d like to use you as an order-out delivery service. Given enough training sessions, you and the accounting firm ought to be able to work out a system where a giant block-letter ‘P’ on the side of the building indicates the need for you to go grab a pizza and bring it to them, while a block-letter ‘C’ on the side of the building instructs you to go fetch coffee.
The cases cited aren’t examples of you successfully monitoring the computer screens or keystrokes of the office building; they’re examples where the accounting firm intentionally communicates its desires through gross changes to observable conditions and many training sessions.
Deducing the internal state of an accounting firm that doesn’t even know you’re watching, let alone intend to share information with you? Quite a different prospect altogether.
It’s possible to tell if someone is, for example, thinking of a circle or a square; the primary visual cortex at the back of the brain is what we use for our visual imagination. Imagining an image causes neurons there to fire in a pattern that mirrors the image you’re imagining, if somewhat distorted. As I said earlier, though, at this point that requires invasive techniques, which limits its usefulness.
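For anyone curious what “telling circle from square” actually involves in those studies: as far as I understand it, you record activity patterns while the subject deliberately imagines each known shape, then train an ordinary classifier on those labelled patterns. Here’s a rough sketch of the idea; the voxel counts, noise levels, and data are entirely made up for illustration, and this isn’t any particular lab’s pipeline:

    # Illustrative sketch only: decoding an imagined shape (circle vs. square)
    # from activity patterns with an off-the-shelf classifier. The "voxel"
    # data here are synthetic stand-ins, not real recordings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 40, 200

    # Pretend each imagined shape evokes a slightly different mean activity
    # pattern across visual-cortex voxels, plus a lot of noise.
    circle_pattern = rng.normal(0, 1, n_voxels)
    square_pattern = rng.normal(0, 1, n_voxels)
    X = np.vstack([circle_pattern + rng.normal(0, 2, (n_trials, n_voxels)),
                   square_pattern + rng.normal(0, 2, (n_trials, n_voxels))])
    y = np.array(["circle"] * n_trials + ["square"] * n_trials)

    # Training on the subject's own labelled trials is the cooperative part;
    # without that calibration there is nothing to decode against.
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    new_trial = circle_pattern + rng.normal(0, 2, n_voxels)
    print(clf.predict(new_trial.reshape(1, -1)))  # most likely "circle"

Which is basically the block-letter ‘P’ on the side of the building again: it only works because the subject is actively painting the letter for you.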
Well, yes, but I was thinking of cooperative subjects. In fact, the problem with using this to interrogate prisoners is worse than you say, since even a perfected, highly refined version would only be detecting explicit images and your internal monologue - in other words, the parts of your thoughts you can lie with.
Yes; in fact it’s pretty old technology. Here’s some stuff from Wiki about it, including a Dutch researcher who showed it could be done with 15 bucks’ worth of equipment back in 1985. I’ve read that the US government denied it could be done for years before that, while people who knew better apparently spied on corporate rivals or government agencies that bought the official line. As I recall, it was still officially “secret” back when I was reading about it as a kid in library books (and had no idea it was supposedly secret); I’ve heard it used as an example of the government’s occasional fondness for declaring widely known information “secret”.
The way I understand it, image recognition in the brain relies a good bit on memory – i.e., an object is recognized by identifying it as an instance of something we’re already familiar with. We’ve all experienced that sudden ‘flip’ that occurs when a bunch of shapes and colours suddenly ‘makes sense’, and likewise the confusion that arises from a misidentification brought on by too little data: ‘OMG there’s someone standing beside my bed! …No wait, it’s actually just the shadow of my floor lamp, which doesn’t look like a person at all. :dubious:’
So, it seems to me that the images we see ‘in our heads’ are built from a sort of vector-space representation derived from previously acquired data (sort of like this, perhaps), i.e., our memories. However, that would mean that everybody codes images differently, depending on their memories, which would seem to prohibit ‘mind reading’ (unless the apparatus is individually calibrated, that is).
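To make that concrete (purely a toy sketch of the ‘everyone codes differently’ point – the random projections standing in for personal encodings are my own invention, not a model of real cortex):

    # Toy illustration: if each brain maps the same image into its own
    # idiosyncratic "vector space", recognition by nearest-memory works fine
    # within one person, but a decoder calibrated on person A tells you
    # little about person B.
    import numpy as np

    rng = np.random.default_rng(1)
    stim_dim, code_dim = 50, 20

    stimuli = {name: rng.normal(size=stim_dim)
               for name in ("lamp", "person", "chair")}

    # Each person's (hypothetical) personal encoding of raw stimuli.
    encode_A = rng.normal(size=(code_dim, stim_dim))
    encode_B = rng.normal(size=(code_dim, stim_dim))

    def nearest(query, memory):
        # Recognise by finding the closest stored memory vector.
        return min(memory, key=lambda name: np.linalg.norm(memory[name] - query))

    memory_A = {name: encode_A @ vec for name, vec in stimuli.items()}

    # Same person's code against their own memories: correct match.
    print(nearest(encode_A @ stimuli["lamp"], memory_A))
    # Person B's code against A's memories: essentially a guess, unless you
    # first learn a mapping between the two codes (i.e. calibrate per person).
    print(nearest(encode_B @ stimuli["lamp"], memory_A))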
(Also, I think this argument applies equally well against the more supernatural claims of telepathy.)
Sorry, seems like I botched the URL in my previous post – this is what I was meaning to link to.