Let’s say persons A and B are hooked up to a brain scan device of some sort (EEG? CAT? Assume current-day technology throughout).
Person A imagines a raven on a fencepost in winter. The EEG/CAT/WhatHaveYou device registers a particular pattern associated with this image.
Person B imagines a raven on a fencepost in winter. The EEG/CAT/WhatHaveYou device registers a particular pattern associated with this image.
Person A now imagines a robin on a fencepost in summer. The EEG/CAT/WhatHaveYou device…
Person B imagines a raven on a fencepost in winter again (trying to match exactly his previous image). The EEG/CAT/WhatHaveYou device…
Here are my questions for you encephiliacs (hah!).
**When Person A thinks of the slightly differing images, do these present-day machines (bonus points for telling me which machine would be best suited for this) detect a measurable difference?**
When Person B thinks of the same image, does it create essentially the same pattern both times, despite their being thought of on separate occasions? What if the time difference was a week, a month, or longer?
**Does Person A’s image #1 create a substantially different pattern from Person B’s images #1 and #2, and indeed would everyone’s image of a raven on a fencepost in winter ‘look’ different on the scan?**
Are there technical constraints that I am not considering (e.g., our technology simply isn’t accurate enough yet to detect such differences)?
In case you are wondering, I have a plot device that I am trying to flesh out.
I have no definitive answer and have absolutely no grounding in anything medical, but I would imagine that the similarity between EEG or CAT scan readouts would depend a great deal on several factors:
The degree to which, and the accuracy with which, either subject is able to conjure mental imagery in general,
The similarity (or lack) of the images each conjures,
The ability of the subject to maintain the image without distraction or embellishment.
It is also unlikely an experiment like that would require the subjects to think of so complex an image given the inevitable degree of variance between what each subject personally envisions. You would likely get them to think of something very basic (“Think of an orange square on a white background”) in order to make sure that there is as little difference in what each subject is thinking as possible. That would take care of number 2 above, and probably most of number 1. (Most people would have little difficulty rendering such a simple mental construct, and the law of averages says that most people would probably think of a square of approximately the same size overall)
Given that the brain works pretty similarly in most people, it wouldn’t be unreasonable to assume that ten people all thinking of an orange square on a white background would probably generate very similar EEG or CAT scan results. I do not think however that one could take any of those results and determine by them alone, without any other evidence, whether or not someone is thinking of an orange square on a white background.
I should add that it is unlikely, too, that the results would be significantly different if any one or more persons were thinking of something a little different than the rest – an orange triangle, for example, or a green circle on a gray background. Detecting that would require a degree of scanner accuracy and knowledge of brain construction that I’m tempted to think is nigh impossible.
It is my understanding that current technology allows for seeing what part of the brain is active, and to what degree, but not what the actual subjective content of that activity is. Thus, if the activity is “imagining animals,” the same area of the brain might be involved, but the degree or type of activity won’t be externally measurable as differing for “raven at midnight” versus “polar bear in snowstorm.”
Thank you both for your help. I guess the real answer then comes down to my question 4, which is that we simply don’t have the technology to do what I am envisioning (i.e. being able to record a significant difference in brain activity when subjects envision similar yet slightly different images).
I thought, though, that if one hooked up, say, a VERY sensitive electro-meter of some sort, then – since ‘polar bear’ = one set of brain impulses and ‘raven’ = another set of brain impulses – it would be feasible, given the technology (perhaps only eventually), to tell the difference. I’m NOT saying that by looking at a brain scan one could tell WHAT a person was thinking, but simply that there IS a measurable difference between the two thoughts. In fact, logically speaking, there HAS to be an eventually measurable difference, hasn’t there?
Not necessarily. Think of a CD player – it’s drawing the same number of watts whether it’s playing grunge or Grieg. Someone mapping electrical activity – which is what brain scans currently do – won’t be able to tell anything about the difference in output, regardless of how clear that difference is subjectively.
Yes, I agree that total electrical output would be uninformative. But a spectrum map - showing where and at what frequency the little neurons are firing - would essentially produce a ‘picture’ of a polar bear or a raven or what have you. My questions revolve around whether or not that picture (okay, given perhaps a superhuman ability to concentrate repeatedly on an image) would be in fact unique to each person, and whether if one thought of the exact (within reason) same thing next week it would create exactly (within reason) the same pattern/picture/map.
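For what it’s worth, the “spectrum map” idea – where the energy sits in frequency, not just how much total energy there is – is exactly what a Fourier transform of a recorded channel gives you. Here’s a toy sketch in Python: the “EEG” below is a simulated signal (a 10 Hz rhythm plus noise), not real brain data, and the sampling rate is just a plausible made-up value.

```python
import numpy as np

# Toy "spectrum map" for a single simulated EEG channel.
# The signal is invented: a 10 Hz alpha-like rhythm buried in noise.
fs = 256                        # sampling rate in Hz (assumed, typical for EEG)
t = np.arange(0, 2, 1 / fs)     # two seconds of samples
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum: which frequencies carry the energy.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.1f} Hz")  # should recover ~10 Hz
```

Two different “thoughts” that produced different rhythms would show different peaks here, even if their total power were identical – which is the answer to the CD-player objection: same wattage, different spectrum.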
I know enough to know that’s not how the brain works, but not enough to explain how it does work. (Of course, there’s still shitloads that no one knows about how the brain works.) I’m out of my depth, though, so will have to step aside and let someone else take over.
I of course do not mean that the person watching the brain scan/map device would actually see a little outline or drawing of whatever a person would be thinking of. That is, of course, ludicrous. I mean that by thinking of a particular image/topic one could capture a particular ‘pattern’ or distribution of electrical impulses that could be compared to other captured patterns.
I went to graduate school in neuroscience. We simply do not have the technology to do those readings and get meaningful results at the level of detail you are talking about. The best brain-scanning technology we have (MRIs, PET scans) lets us see brain activity at an extremely gross level of detail. Some estimates say there are about 100 billion neurons in the human brain, with roughly ten times that number of supporting (glial) cells. Those types of readings on even small brain areas are picking up the activity of millions to billions of individual neurons. Combine that with the fact that we have no idea of the “algorithm” that the brain uses except in the smallest sense, and you can see the problem.
There have been many studies done of the type that you describe. Subjects do show similar activity for some types of tasks. Brain activity is also repeatable across trials for some types of tasks. Brain lesion studies resulting from pathology give useful but often baffling results. For example, there are lots of people, including my adopted cousin, who are missing large parts of their brain and suffer few if any detrimental effects from it. We discussed a savant here recently who is missing whole parts of his brain, including the corpus callosum (the connective tissue between the brain hemispheres), and this somehow gave him the ability to memorize over 9000 books. The brain seems to defy attempts at a straightforward mapping of mental processes.
Brain science is much more in its infancy than most people tend to believe. Fancy-sounding machines and long names for neurotransmitters tend to camouflage the fact that we don’t even have the basics down at all. Much of neuroscience is still involved in finding out how neurons fire and connect to each other. This is the equivalent of isolating a single transistor out of millions on a computer chip and trying to figure out from it how that 3-D game on the screen is working. It is a necessary step, but the goal is very far away.
Thank you very much for your very complete and lucid reply.
Not to beat a dead horse, but in summation is it basically still just an ‘equipment sensitivity’ problem that we are dealing with here? I mean (speaking purely theoretically now), given a machine that could map neuron firings to an nth degree better than we can now, is it assumed that the same neurons would fire for the same image, or (as my wife feels) do neurons and connections rewire themselves all the time, meaning that today’s firing neuron set that means polar bear could be tomorrow’s firing neuron set that means Krusty the Clown?
Please don’t just say ‘I don’t know’. Lie, if necessary.
There are some general trends. The hippocampus has a very important role in memory. Some of the more “basic” brain structures like the hypothalamus and pituitary gland are somewhat well understood. We know what life functions the brain stem supports. However, you are talking about areas of the cortex where we know very little. The cortex is divided into lobes that are named for the rough types of functions that they are involved in (frontal, temporal, parietal, occipital). The lobes aren’t well differentiated in either anatomy or function. Damage to substantial parts of the cortex doesn’t always wipe out the functions that are believed to be processed there.
There have been many studies that try to localize brain activity for certain kinds of tasks. Those are consistent across many people (sex differences show trends too) and for the same person over time. However, there are plenty of people who sustain brain damage in the areas thought to be responsible for that activity and get along with little or no impairment. The brain can certainly rewire itself on demand with varying degrees of success. The types of tasks researchers have used include mental spatial manipulation and listening to music. I don’t think they have had much success with people visualizing a bear versus an elephant riding a bicycle.
The brain is not a computer. In fact, it has remarkably little similarity with a computer. Localizing processes is not straightforward at all.
Thank you again for your responses. I guess the basic answer is that we are still way too far away to do what I want to do, which is fine for me - it is a science fiction idea anyway!
I just always had the suspicion that with our current level of technology (including seismometers that can register small earthquakes half a world away), we’d be able to register highly detailed electrical patterns associated with imagined scenes, if not interpret them.