This sort of thing has been credibly called into question by more recent studies, though. Part of the trouble is that the artificial scenarios in Libet-style experiments don’t give a subject any real basis for deciding between options; the preferences involved are symmetrical by design. The suggestion is then that the outcome depends on a chance factor to ‘tip things over’, and that what’s been called the ‘readiness potential’ is just that: a chance fluctuation in the random noise of the brain, strong enough to break the symmetry of the situation and produce one outcome or the other. It wouldn’t then be indicative of ‘the brain making a decision before we become aware of it’, but rather just part of the random background working of the brain.
Indeed, when comparing the brain activity of a group of subjects in a Libet-like experiment with that of a group of subjects who were asked not to move at all, a difference emerged only about 150 milliseconds before the participants in the first group moved, i.e. at the time they became conscious of making a choice.
I fail to see why instantaneity conveys authority whereas a time lag discredits the process.
Also, consciousness is absolutely not the same thing as verbal analytical thought.
When I accidentally brush the back of my hand against the electric heating element in the oven’s broiler, I have a dearth of rational thoughts coursing through my brain; it’s occupied by sensation and emotion. I could not provide you with a transcript of any coherent thought process that concluded with “I really oughta snatch my hand away from that before it does more damage”.
There was also a time lag involved: the rate at which sensory nerve endings transmit signals is a whole lot slower than the speed of light, and even if my reaction had been to a compelling visual stimulus, the speed of light isn’t the same thing as instantaneous. Then the brain had to process all that, at several different levels. That scorched hand was well out of the oven’s maw long before the rational intellectual verbal frontal cortex got the memo (some ballpark numbers below).
But I was conscious. Oh hell yeah, I was conscious. More so than I particularly wanted to be.
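For the curious, here’s the rough arithmetic behind that lag. The conduction velocities below are generic textbook ballparks I’m assuming for illustration, not measurements from anything in this thread:

```python
# Rough signal-travel times over a ~1 m hand-to-brain path.
# Fiber speeds are generic textbook ranges, assumed for illustration.
PATH_M = 1.0
FAST_FIBER_M_S = 50.0    # myelinated fiber (sharp touch/pain), roughly
SLOW_FIBER_M_S = 1.0     # unmyelinated C fiber (dull, burning pain), roughly
LIGHT_M_S = 3.0e8        # speed of light

print(f"fast fiber: {PATH_M / FAST_FIBER_M_S * 1e3:.0f} ms")  # ~20 ms
print(f"slow fiber: {PATH_M / SLOW_FIBER_M_S * 1e3:.0f} ms")  # ~1000 ms
print(f"light:      {PATH_M / LIGHT_M_S * 1e9:.1f} ns")       # ~3.3 ns
```

Even on the fast path, the signal alone eats tens of milliseconds before any processing starts.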
The study cited was specifically designed to attempt to address that issue, to distinguish between a general preparation for movement, a readiness potential, and an actual specific decision being made.
AHunter3, in the example you give, you experience the qualia of pain as your brain gets the information. In the experimental set-up, the brain is making the decision, and afterwards the mind thinks of itself as having made a conscious decision that it in fact was not consciously making. The time lag in one case does not involve the brain lying to the mind, while in the other it does. It is not the time lag; it is the lie.
That’s not the issue the article I linked to raises, though. Rather, there, it’s shown that the ‘readiness potential’ simply isn’t anything of the sort—it’s just a chance fluctuation randomly occurring in the brain, whether or not any movement is made afterwards, that, however, in certain cases may suffice to tip the scales towards taking an action, or break the symmetry between equivalent cases. If that’s the case, it’s simply not the brain making an unconscious decision, but rather, a factor influencing what decision the brain makes, akin to other motivators, like hunger, but just occurring at random.
Consider Buridan’s ass, who’s as hungry as it is thirsty, and as far away from a source of food as it is from a source of water. There’s no means by which it could make a decision based on its preferences, those being equal by hypothesis; but now, suppose that its internal ‘detectors’ for hunger and thirst (or lack of food and lack of water, if you prefer) are somewhat noisy, and subject to random fluctuation (as they would likely be in a real-world case). Then, it might occur that on such a chance fluctuation, hunger outweighs thirst, causing the ass to go towards the food source.
That chance fluctuation is what the readiness potential turns out to be, and it’s not any more ‘the brain making a decision’ than the random fluctuation of hunger is in the ass’s case.
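If it helps to see the idea in action, here’s a toy simulation. It’s purely illustrative; the equal drives and the Gaussian noise level are assumptions of the sketch, not anything taken from the paper:

```python
import random

def buridan_trial(noise=0.05):
    """One trial: hunger and thirst are exactly equal by hypothesis;
    only the noisy 'detector' readings differ, and the larger one wins."""
    drive = 1.0  # identical underlying urge toward food and water
    hunger = drive + random.gauss(0, noise)
    thirst = drive + random.gauss(0, noise)
    return "food" if hunger > thirst else "water"

trials = 10_000
food = sum(buridan_trial() == "food" for _ in range(trials))
# Noise alone decides: outcomes split roughly 50/50 across trials,
# yet every single trial still produces a definite choice.
print(f"food: {food}, water: {trials - food}")
```

The point being: the noise never expresses a preference; it just breaks ties between preferences that already exist.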
By what non-conscious process was the “brain” making a decision that the “mind”, later, thought it had consciously made instead?
It reads to me like the authors of the Nature article are saying that the mind begins processing sensory input, makes decisions, and then after the fact processes the fact that it did indeed make such a decision. Despite the way they worded that, I don’t agree with them, or you if you’re endorsing their description, that that means the decision was not “conscious”. There’s a difference between “consciously making a decision” and “being conscious of having in fact made a decision”. The latter contains an additional level of self-referential behavior, the very type of self-referential behavior we’ve been discussing throughout this thread, Hofstadter’s strange loop, if you will. It makes sense to me that the mind can’t be aware that it, itself, has indeed made a decision until after it has done so.
That doesn’t, to me, mean that the decision-making itself did not involve consciousness.
But one of the definitional aspects of consciousness is that it’s *immediate* - that’s implicit in the frequent use of the words “aware” and “awareness” in definitions - not an edited greatest hits reel.
Unless you’re working with a different definition of consciousness that you’d care to share?
I don’t see why that is necessarily the case. I don’t think consciousness is immediate; I don’t think it is possible for it to be immediate. The structures of the brain handle inputs and make decisions; we become aware of those decisions at some point after that has happened.
Defining consciousness or self awareness is a bit slippery and malleable but I don’t think there is any requirement that they must relate to a real-time and immediate mental phenomenon.
I think your analogy of an “edited greatest hits reel” is pretty good. How can it be otherwise? Given what we know, that model certainly works. I don’t see how the brain-processes can lag the “awareness”, nor even how they can happen at the same time. It seems far more likely that there is at least some (even negligible) delay between the brain-process and the phenomenon of becoming aware that such processing has taken place, and that awareness is what I would term “consciousness”.
But then consciousness just gets reduced down to memory, and there’s *usually* a distinction made between consciousness - awareness (implying “awareness now”) of both internal and external phenomena - and memory - review of past phenomena.
It’s fine if you say they’re the same, but that would be idiosyncratic usage of the word “consciousness”, I feel. The general usage seems to imply if I engage in introspection now, I will be aware of phenomena now. Not some undefined time later.
I can see the brain devoting processing power in different ways, prioritising it for the “immediate” situation and handling it differently for a deep memory search of past phenomena. I can see how the two might appear different (in terms of speed) to whatever view of them we have (and that I term “consciousness”). However, I don’t think there needs to be a huge distinction made.
Whatever is going on under the hood, we are “aware” of the immediate phenomena through the same mechanism by which we become aware of the search and recall of deeper memories, even though the “experience” of that may be qualitatively different.
Idiosyncratic to some extent, perhaps, but then I think, as I have said elsewhere, that we run into difficulties with the language we have available to us when talking about these concepts. We may well need another word or phrase to refer to what I term “consciousness”; happy to take suggestions on that.
Heck, even “now” is problematic. Is the sun shining “now”? We only know it was 8 minutes ago. There is an inherent limit to our ability to be “aware” of the sun’s state. Likewise I suspect that even experiencing something “now” must have a necessary lag from when the sensing of the phenomena actually occurred. I don’t think the laws of biology and physics allow otherwise.
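(The 8-minute figure is easy to check, for what it’s worth; this quick calculation just uses the standard Earth-Sun distance and the speed of light:)

```python
# Light-travel time from the Sun to Earth.
AU_M = 1.496e11        # mean Earth-Sun distance, one astronomical unit
LIGHT_M_S = 2.998e8    # speed of light

delay = AU_M / LIGHT_M_S
print(f"{delay:.0f} s, about {delay / 60:.1f} minutes")  # ~499 s, ~8.3 min
```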
Could all be wrong of course and research is sure to shed more light on this in future.
In my opinion the only way that consciousness is an illusion is that consciousness includes the illusion that it’s simpler than it is. It’s a car with the hood down - you see wheels moving and make the natural presumption that they’re powered by a single large hamster wheel connected directly to them rather than the complicated melange of tiny suplexing hamsters that is actually inside there.
The fact that you can detect the triggerings of a thought a few moments before the thought emerges onto the surface of consciousness is simply an aspect of the fact that the mind is made up of moving parts rather than a unitary sole soul. That’s the illusion there - it feels like there aren’t moving parts, when in fact there are.
So there is an illusion, but it’s equivalent to the illusion that the chair you’re sitting on is anything other than a bunch of molecules that aren’t even touching each other. The solidity of the chair is an illusion. That doesn’t mean the chair isn’t real, of course. Or your consciousness either.
I think the illusion is like a car that you think is propelled by pushing on the gas pedal without realizing that pedal is connected to an engine that you can’t see.
There’s no presumption about a hamster wheel, you actually believe it is your foot, or even your mind that is making the wheels turn. You can’t explain how that works, it doesn’t make much sense if you try to explain it, but almost everyone sees it similarly.
I was talking about the thing generating the consciousness being more complicated than it appears, even to the consciousness itself. So there would be no foot.
If you start talking about the foot, then the response is to start talking about the hidden inner parts of the foot which make it work, which eventually leads us back to…the brain. And analogizing the brain with a brain kind of feels wrong to me.
I wasn’t really looking at your analogy that way. Not sure what you are saying but it is very tough to discuss this subject because of the inexact and varying terminology.
Maybe I can find an analogy to present my own thoughts on the matter.
ETA: We are aligned on your first sentence though.
I suppose I’ll be lazy and just say that I’m a computer programmer. We build everything out of smaller things - numbers are ones and zeroes, letters are numbers, text is letters, panels and buttons and labels are collections of text and numbers arranged in an expected way, forms are panels and buttons and labels arranged in an expected way. The programs themselves are composed of execution blocks composed of functions composed of commands composed of even more basic commands. Everything is made of smaller things.
And all the things pay attention to only one level of things smaller than themselves, if that. The forms know about the controls on them but don’t care about the state variables that define the controls, nor do they care about the underlying code that defines how those controls are managed. The controls care about their state variables but don’t care about how those variables are stored. It’s all compartmentalized, essentially on a need-to-know basis.
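Here’s a minimal sketch of that need-to-know layering; the Form/Control classes are made up purely for illustration, not from any real toolkit:

```python
class Control:
    """Knows its own state variables, but not how they're stored or managed."""
    def __init__(self, label):
        self._state = {"label": label, "enabled": True}

    def render(self):
        return f"[{self._state['label']}]"


class Form:
    """Knows about its controls, but nothing about their state variables,
    let alone the code underneath them -- need-to-know only."""
    def __init__(self, controls):
        self._controls = controls

    def render(self):
        return " ".join(c.render() for c in self._controls)


form = Form([Control("OK"), Control("Cancel")])
print(form.render())  # "[OK] [Cancel]" -- composed without peeking inside
```

The form happily does its job without ever learning that a dictionary exists underneath the controls.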
I’m pretty sure consciousness is similar; we’re unaware that our consciousness is actually a continually updated self-referencing state reviewer because knowing that doesn’t help. We’re unaware that our thoughts are developed and processed over seconds before emerging at the forefront of our consciousness because it doesn’t matter. So there’s an illusion of simplicity, but it’s really just some parts of our mind not bothering to keep the other parts of our mind in the loop.