Voyager, so glad you stepped in! Early in the thread I was thinking of your answer to me regarding parallel vs. sequential Turing Machines from a couple months back. (I told you I’d credit you whenever it came up.)
I look forward to what you have to say, and hope I didn’t botch any of the computer-related stuff too badly…
There was a debate about accessing memory. There seems to be a lot more going on in the brain than we are aware of. We definitely don’t sort through memories when we access them. They seem to be linked in all sorts of interesting ways.
When a microprocessor accesses memory, it doesn’t directly read data from where you would think the data is. There are several levels of caches, which optimize memory access in ways invisible even to the assembly language programmer. There is even an analog of short-term memory: small, very fast caches hold the most recently accessed data, and the data around it, since you often index through chunks sequentially. An awful lot of the transistors in a processor are devoted to dealing with the cache. I can’t imagine that we haven’t evolved similar mechanisms to handle our memory. Not exactly identical, of course, but performing the same function.
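To make the cache idea concrete, here’s a toy sketch of a direct-mapped cache. It isn’t modeled on any real processor’s microarchitecture (the line count and line size are made up); it just shows why sequential indexing through a chunk of memory is mostly hits.

```python
# A toy direct-mapped cache at cache-line granularity -- purely
# illustrative, not any real processor's design.
class DirectMappedCache:
    def __init__(self, num_lines=8, line_size=4):
        self.num_lines = num_lines      # number of cache lines
        self.line_size = line_size      # words per line
        self.tags = [None] * num_lines  # which memory block each line holds
        self.hits = 0
        self.misses = 0

    def access(self, address):
        block = address // self.line_size  # block number in memory
        index = block % self.num_lines     # which line that block maps to
        if self.tags[index] == block:
            self.hits += 1                 # data already cached
        else:
            self.misses += 1               # fetch the whole line from memory
            self.tags[index] = block

cache = DirectMappedCache()
for addr in range(64):   # sequential sweep through 64 words
    cache.access(addr)
print(cache.hits, cache.misses)  # → 48 16
```

One miss pulls in a whole line, and the next few sequential accesses hit it for free, which is exactly why caching data "around" the accessed address pays off.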
BTW, a lot of software does learn - and not just recent stuff, either. Samuel’s checker-playing program, from the late '50s and early '60s, became a champion-level player. Samuel never programmed in strategy - he programmed in a way for the program to improve, using weights I believe, and played one copy against the other. Contrary to Lib’s contention, the same position might result in a different move after learning. I haven’t read any papers on chess-playing programs in ages, and a lot of the work is move processing in hardware, but I would be very surprised if no learning were going on. I’m not surprised that no AI people look at it anymore, though - the problem is well understood, they’re way past the point of diminishing returns, and with the funding they’d get they couldn’t possibly do better than IBM or the other companies working on it.
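A minimal sketch of that weight-learning idea - Samuel’s real program was vastly more elaborate, and the features and update here are invented purely for illustration - showing how the same position can yield a different move after the weights change:

```python
# Sketch of learning via weighted evaluation: positions are scored by
# a weighted sum of features, and learning nudges the weights.
# Features and numbers below are hypothetical, not Samuel's actual ones.

def score(features, weights):
    return sum(f * w for f, w in zip(features, weights))

# Two candidate moves from the SAME position, each described by
# (piece advantage, mobility) -- made-up features.
moves = {"capture": (2, 1), "develop": (0, 2)}

weights = [1.0, 1.0]
best_before = max(moves, key=lambda m: score(moves[m], weights))

# Suppose self-play revealed mobility matters more: reinforce its weight.
weights[1] += 2.0

best_after = max(moves, key=lambda m: score(moves[m], weights))
print(best_before, best_after)  # → capture develop
```

No strategy is coded in; only the scoring weights change, yet the preferred move flips.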
BTW2, it would be easy to add something to chess programs to change strategy based on who they were playing. I think poker playing programs do that - they learn a player’s likelihood to bluff in different situations.
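That kind of opponent modeling could be as simple as keeping per-situation bluff frequencies. A hedged sketch (the situation labels and numbers are made up; real poker programs are far more sophisticated):

```python
# Track how often an opponent's bets turn out to be bluffs, broken
# down by situation.  Illustrative only.
from collections import defaultdict

class OpponentModel:
    def __init__(self):
        self.bets = defaultdict(int)    # bets observed per situation
        self.bluffs = defaultdict(int)  # of those, how many were bluffs

    def observe(self, situation, was_bluff):
        self.bets[situation] += 1
        if was_bluff:
            self.bluffs[situation] += 1

    def bluff_rate(self, situation):
        if self.bets[situation] == 0:
            return 0.5  # no data yet: assume a coin flip
        return self.bluffs[situation] / self.bets[situation]

model = OpponentModel()
for was_bluff in (True, True, False, True):
    model.observe("late_position_raise", was_bluff)
print(model.bluff_rate("late_position_raise"))  # → 0.75
```

The same mechanism would let a chess program bias its move selection toward lines a particular opponent historically handles badly.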
Thanks. No one got anything too wrong, but the internals of processors are so damn complicated these days, and programmers are moving so far away from the bare machine, that no one knows everything anymore. You have to work for a company that designs these things to get access to the microarchitecture manuals. It also helps to have worked on architecture when things were simpler.
BTW, I’ve followed AI for over 30 years, and met with and had classes from the old great names at MIT, but never worked on it. I didn’t do well enough on LISP, so I flunked Minsky’s test. My opinion is that it is possible, but the AI people are going about things totally the wrong way. I think we’ll get a computer intelligence through simulation of an actual brain long before we write code to create an intelligence. One of the biggest fallacies most people have is that if you have a bigger, faster computer, you automatically get bigger, faster, smarter software that scales. Microsoft is a counterexample to that. I worked at a place once where I had NT on a 100 MHz Pentium. XP running on my 2.8 GHz machine sure isn’t 28 times better than NT was.
Fine, except that I think humans are on the top step of awareness, or at least on the shoulder of the curve. How the first few steps take place is another whole ball of string.
Obviously memory has to be at least a potential for awareness to occur, or else awareness is always nascent. The perpetual infant. Yet, without awareness, what purpose does memory serve that it might develop? Entirely deterministic behavior that has the potential to be modified by iterative changes in neural response is not quite memory, and could perhaps be an intermediate step.
(But does that imply that only neural interaction can support awareness? So much for grokking the grasses.)
And not to open worm cans, or anything, but we have neural interactions with our environment long before birth. Do they create changes that are analogous to memory? Such memories have no referent in common with our postpartum environment, but still, the process might well be underway at the formation of the brain.
Or, it might be taking place in any organism that has an integrated system for altering its own structure in systematic response to external stimuli. Sponges remembering the sweet oceans of ancient times. Not stimulating conversation, but still potential awareness. Defining awareness itself is much more subtle than watching changes in it as it develops.
Awareness IS a state. Nothing that you quoted or that I said contradicts that. There IS a process going on, but that process is not awareness. Awareness is merely one aspect of the process. It ought not to be necessary every time a statement is made to reiterate that awareness is a state. As was already said, the blending of immediate memory and past memory occurs BEFORE awareness. Perhaps I have pushed too much at you. You are quite busy with multiple discussions, and therefore might be having to read with less attention than you would like.
It WOULD be non-aware if YOU weren’t there. The awareness state happens to arise in your cognitive area (your frontal cortex), which is constantly under examination by your temporal lobe. You aren’t learning anything or growing upon the presentation of the event image. You are merely cognizant of it. It is AFTERWARD that you make inferences and whatnot. It isn’t that awareness is an evolutionary advantage because you know you’re aware. Awareness is an evolutionary advantage because it is a state that allows you to make sense of what’s there. It could just as easily occur in the parietal lobes, but it just DOESN’T. Your complaint about awareness occurring consciously is like complaining that sight conveys color. Why do we need color? Why can’t the brain just process the color without informing me of it? Why can’t I just see blackness and not be bothered with color? Light just transmits the information necessary for color, and the brain just processes it. That’s just the way it works. What you are doing is trying to strip yourself out of your own brain, and wondering why you are there.
Actually, it’s Carterian, but that’s neither here nor there. I’m surprised that it’s surprising to you that there is a portion of your brain that is you. Your temporal lobe IS you. Remove your limbic system, and you are a reptile. You would be conscious, but you would not be aware in the sense you are now. There is no point in confusing consciousness with awareness at this point.
You perceive whatever you’ve learned to perceive. Perception, like consciousness, is not relevant to awareness, other than tangentially.
You’ve come to a state in which you have forgotten things already covered, and are calling them up for discussion. That is doubtless due to the sheer volume of discussions you are having on multiple topics with multiple people. I notice that you’ve complained to Hoodoo that you’re exchanging some “dense” posts with me, and I can understand how that might be a burden to you. To say that you do not yet know what awareness is despite our having deduced a perfectly coherent definition without any logical flaw is to reveal a certain fatigue, perhaps. I’ll just back off a bit to give you room to breathe, perhaps taking up with someone else who might be interested in carrying the deductions through to determining whether memory is physical. I completely understand pile-ons and how onerous they can be. Thanks again for the interesting discussion.
Again, busy today so I only have time for brief observations - why oh why did I start this thread near the end of the financial year?
Note that pretty much everything’s a continuum when you get down to it. If there is “weather” and stuff that you wouldn’t say was “weather”, it must gradually emerge by putting together water molecules and energy. I’d suggest that quibbling about the exact number of molecules or the exact configuration thereof which forms the threshold is a little unnecessary. Weather, and life, and cognition, and the rest of it, still has a physical basis.
Yes, erosion could provide some kind of “memory” to the cliff face. I’m suggesting that access to, and comparison of, that memory is the basis of cognition. That, to me, seems to start with insects and complex control systems, just as a “cloud” requires a pretty big load of molecules.
I don’t know whether it’s relevant, but my way out of the Chinese Room is that “interpretation” or “meaning” is the association, the literally physical neuronal link between a memory (visual, say) and a sound or symbol (i.e., a linguistic referent).
I have to disagree with your view on the genesis of awareness. I would think instead that awareness must start at a more fundamental level, beginning with the awareness of difference, with an awareness of otherness following rapidly on its heels: “that is warmer/softer/bigger than this” (otherwise, babies wouldn’t cry because they wouldn’t be aware of any difference between their hand and mom’s breast), followed by “that is warmer/bigger/softer than me”.
Earlier, I had tried to explain why I thought we’re perhaps being a bit hasty in setting down the parameters for memory and awareness. Tris, thanks once again for articulating my thoughts, and in a manner that’s far more coherent than I’ve been able to muster.
Well, my confusion stemmed in part from post #168, where my question “Why the “middleman” of awareness?” was followed by: “Because the process of blending immediate memory with past memory establishes a continuity of familiarity…”, leading me to think awareness was the blending process, hence my confusion. But you’re right; I am feeling rather splayed out here. I really want to grasp your conception of an awareness state, Lib, so let me once again slow down, and try smaller bites: my understanding of the descriptor “state” is that states are not dynamic, they’re static.
IS awareness a static state, a passive “screen” the memories are displayed on, or am I entirely booting the definition of “state”?
Yes, that is exactly what I was trying to do. If it’s all ultimately “brain gets input, brain sends output”, and our stimulus-response system is sophisticated enough, why would we have any need for an “inner” awareness of consciousness, color, etc.? In the field of cognitive science, I’m far from alone in asking that question.
I was hoping that asking why we need awareness in the first place would help in our search for the nature of awareness. The only reason I brought it up is because I’m still trying to comprehend the whole “presentation” angle. The presentation of awareness (to consciousness?) seems to be the only important difference between a human and an automaton.
But bringing it up has badly confused the issue, so I shall, from this point forward, cease to bark up that particular tree.
What’s “Carterian”? Could you provide a link? (I got 50,000 hits for Jimmy Carter)
This is a nitpick… no… on second thought it’s not. So far, no one has established that there is a portion of the brain that is you. There are a number of contenders, all with their detractors and counter-contenders. A number of researchers think that “you” are not a portion of your brain at all, but a coherency in neuronal firing rates. You’ve correctly stated that the frontal cortex is constantly under examination by the temporal lobe, but that’s sort of a “so what” statement, considering that the temporal lobe is also constantly under examination by the frontal cortex. If you remove the cortex and keep the limbic system, then you’re a reptile. However, if you remove the limbic system, there ain’t no “you” no more.
Do you equate “consciousness” with “self”? I’m not going to agree or disagree with your answer, I’m just trying to understand your use of “consciousness” as opposed to “awareness”. I’ve been assuming all along that “consciousness” was more akin to “awareness” than “self”.
I completely grant that the definition has no logical flaws, but I’m having trouble granting its coherence. That’s what I’ve been trying to explore here.
Perhaps backing off is a good (although somehow sad) idea; it will give me time to mentally digest and keep the discussion from going down the rabbit hole.
I would appreciate it if you could answer the questions in this post about awareness as a static state, and if consciousness =self. I need to clarify those if I’m to make any headway here.
Yeah, but you know my response will be that, if everything’s a continuum, we must be differentiating molecules and energy as such cognitively. Does cognition have a physical basis? Well…
Then maybe it isn’t so much whether memory is a physical thing; it’s whether cognition is a physical thing. As others and I have pointed out, without interpretation, a memory isn’t a memory at all. Physicality is necessary for cognition, but is it sufficient?
Then the question becomes, what makes one neuronal link an association, while another is not?
Well, yes, but awareness is a critical component of that process. You can’t just yank it out and remain with a process minus awareness. Just like you can’t just yank out the liquid state from heating ice to a boil.
I suppose you could put it that way, but I’m not sure how useful it is. A state is a mode; i.e., “the particular appearance, form, or manner in which an underlying substance, or a permanent aspect or attribute of it, is manifested” (American Heritage). Possibility is a state. Necessity is a state. Awareness is a state. Awareness is what it appears like when the process of memory construction manifests to you.
Alone or not, y’all are equivocating all over the map. Parts of the brain are not the same any more than your arm is the same as your foot just because they’re both parts of your body. That’s one reason computer analogies are so weak. The brain is not a single bank of RAM. With respect to need, it is asking the wrong question. Evolution is not a matter of need. There is no design. No guiding hand. No goal. All anthropomorphic references to evolution are purely metaphorical.
It is a vain hope. As I said previously, no matter what the entity, if it processes memory in the same way as put forth heretofore, it is aware. Human or automaton, it makes no difference. Like I said earlier, humanity isn’t special just because it is aware. What is special is its capacity for awareness — the sheer amount of it, and the sheer number of things it can be aware of.
It seems as though you’ve simply forgotten what was covered. Once we have established analytically that propositions A through G are true, there is no point in revisiting E. It will not become false due to any new discovery. Analytical truth is not like scientific truth. The angles of a triangle on a flat plane will add up to 180 degrees no matter what new conclusion is reached down the road.
Trust the logic.
William Carter is a philosopher who opposes the Psychological Approach to the Persistence Question of Identity Theory. I don’t have a handy cite, but it should be Googlable.
Reptiles have only the medulla, pons, cerebellum, mesencephalon, globus pallidus, and olfactory bulbs. They have no limbic system. That begins with mammals. But again, that’s neither here nor there. Pick a part, any part, and call it X. It is still that room in the house that you’re in. Even if X is a “mere” coherency, you’re still tuned in to the house. You can say you’re watching on closed circuit TV if you like. It doesn’t matter. The process and the state are the same. Whether you are driving from inside the car or by remote control, the mechanics of locomotion are unchanged.
I wouldn’t say “equating”, but certainly relating. It is the self that IS conscious. Whether it is an identity or a predicate really doesn’t matter.
If it is logical, then it must be coherent: “Marked by an orderly, logical, and aesthetically consistent relation of parts” (American Heritage).
As you wish. I blame myself for being so long-winded. I’ve been told I don’t communicate well.
This points to the weakness of my reliance on the Ship of Theseus paradox. Even though measurements of the physical state are changing, the actual configuration of hardware isn’t changing (barring FPGAs, and evidenced by the fact that the results of dumping core will always exhibit the same structure). I’m not sure how else to handle the objection; any ideas?
An interesting point, I think. How might one characterize a “memory element”? What I mean is, I can see qualifying RAM and neurons as memory elements, but it is conceivable (I think) to have a memory element that is neither. What would it be like? Might a possible description be characterized by the idea expressed in the paragraph above – that there is some physical configuration that retains continuity but whose state can change without changing the overall configuration? Nifty…I like that one, as it supplies a concrete functionality that can be “tested” for.
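Since the paragraph above asks for a concrete functionality that can be “tested” for, here is a minimal sketch of such a memory element - a fixed configuration whose state can change without the configuration changing. The class and field names are invented for illustration, not drawn from any real system:

```python
# Sketch of a "memory element": configuration (identity and wiring)
# is fixed at construction; state is freely mutable.  Illustrative only.
class MemoryElement:
    def __init__(self, element_id, neighbors):
        # Configuration: set once, never altered afterward.
        self._id = element_id
        self._neighbors = tuple(neighbors)
        # State: this is what "remembers", and it can change.
        self.value = 0

    @property
    def configuration(self):
        return (self._id, self._neighbors)

m = MemoryElement("e1", ["e2", "e3"])
before = m.configuration
m.value = 42                       # the state changes...
assert m.configuration == before   # ...the configuration does not
```

The “test” is exactly the final assertion: any number of state changes leaves the configuration intact, which is the continuity condition proposed above.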
Ohh…I hope I didn’t imply that it was easy to explain, nor that we’d made a robust accounting, nor that “stages” of development are discrete and well-defined. It makes sense to me to think that neural development pre-birth (and continuing post-birth) is a laying down of a physical configuration that relies both on genetics (nature) and environment (nurture), which is inherent in the boot-strapping process.
I think that this parallels the macro-evolution debate, exhibiting all the same objections and suppositions. (Which I also find interesting, as one can apply micro-evolution to various “modules” of the brain.) I find it to be convincing; if it’s not, I’d like to hear why it’s not. (Note that I don’t mean this as a challenge, mind you, but am asking for a pointing out of where the weaknesses are that I may be overlooking.)
Lib, thanks for sticking with me. This last post gave me a much clearer grasp of how you’re using both “state” and “awareness” and I realize I have been chewing on the wrong end of the stick.
You’re right, “need” was sloppy, fatigued wording on my part. I fully realize evolution is neither a matter of need nor design.
Boy, I am so tempted to uncork a barrage of curiosity about the relationship between capacity for awareness, consciousness, and what you have termed our “essence”. But that’s an OP in itself.
Well, whadda you know. I returned to my bookshelf to prove you wrong, but after consulting additional tomes, it turns out that Maclean’s schema is different from Panksepp’s which is different from Pinel’s. Some assign parts of the limbic system to the basal ganglia and vice versa. No wonder neuroscience is so much fun.
But I take your point. The process and the state are the same regardless of how they’re realized.
I’m curious; what do you make of claims of “non-dual” experience? That is, of consciousness without subject?
In my opinion, it’s not that you don’t communicate well, it’s that your communications are dense; like premium dark chocolate. You can’t just gobble them down by the bucketful like popcorn. Actually, I’m reminded somewhat of a friend of mine who routinely employs triple entendres: at first, they whoosh right past you, but when you finally get it, it’s pretty damn funny.
You have indeed. And again, thanks for sticking with me.
Dagnabbit, I forgot to make my point. While such memories may not share a common referent – with no sense of other, how could they? What might qualify as a referent? – they still share a commonality. The senses possess a fundamental continuity; in other words, the sense of touch in a fetus may be less developed than in a toddler, but it is still touch. So long as memory of the senses can be stored, recalled, and compared, there is a foundation for establishing difference, which can in turn be used to establish otherness and identity of self. Not that this is a complete explanation in the least, but it certainly seems (to me, at least) to provide an adequate foundation.
I woke up this morning thinking about SentientMeat’s post regarding the statistical nature of our senses. This seems to dovetail nicely here: senses retain continuity over time via their physical basis. Memories of sensory perceptions are partially due to statistical reinforcement. “Memory elements”, as referred to by Voyager, provide the units on which the statistical reinforcement can act. The continuity of the physical configuration of memory elements provides the means to maintain memory at some level, while also allowing changes in state (not configuration) to alter memory (to a greater or lesser degree). The ability to store, recall, and compare memories provides the foundation for establishing difference. Nice, I think.
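The statistical-reinforcement idea above could be sketched like this - a crude, Hebbian-flavored rule I’m inventing purely for illustration (the element names, rates, and update formula are all assumptions, not anyone’s actual model):

```python
# Repeated stimulation strengthens a memory element; disuse decays it.
# The rule and constants here are invented for illustration.
def reinforce(strengths, stimulated, rate=0.2, decay=0.05):
    """Return updated strengths after one round of stimulation."""
    new = {}
    for element, s in strengths.items():
        if element in stimulated:
            s += rate * (1.0 - s)  # push toward 1.0, saturating
        else:
            s -= decay * s         # unstimulated elements slowly fade
        new[element] = s
    return new

strengths = {"touch": 0.5, "warmth": 0.5}
for _ in range(10):                # "touch" recurs; "warmth" does not
    strengths = reinforce(strengths, {"touch"})
print(strengths["touch"] > strengths["warmth"])  # → True
```

Configuration (which elements exist) never changes here; only their states do, and the statistical difference between the two states is what grounds a comparison - the “establishing difference” step above.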
Evolution appears inescapable here, and we seem, in my opinion, likely to find it both in individual iterations of the phenomenon of awareness and in the general case of the genesis of awareness. Folks discussing the possibility that sponges remember things shouldn’t be worried about including the concept of evolution in their argument.
In the most primitive examples of both cases, it seems very unlikely that we will be able to define a single point in that genesis where awareness unarguably exists and is immediately new. While a set of parameters that do define such a point might exist, it is inherent in the nature of the phenomenon that it (the immediate genitive state for awareness) is ephemeral by necessity. Our own personal experience with awareness is quite similar. By the time I unarguably had awareness, I had already had it for quite a while. I not only don’t remember becoming aware, I don’t remember remembering it at any time. Of course we haven’t progressed far enough in the examination for a minimum definition of awareness yet either, so perhaps it is premature to consider its origin.
But, I suddenly think, the cosmological origins of awareness as a phenomenon, and the individual experience are fundamentally similar to an extraordinary degree. Do we not expect the “aware ones” to be specifically “like me” in that character, however much dissimilar they are in every other character? Do I not search for a reply? I don’t feel compelled to acknowledge any awareness until it alters my own perception in some way. And specifically, in some way that has at least some aspect of communication. I might explore the philosophical case that a rock has awareness, but I don’t spend much time talking to the rock. If the rock starts changing color when I speak, I probably will start talking to it. And if this particular rock has IBM written on it, and starts talking back, I will impute awareness, and even personality to it, in short order. (And develop emotional prejudices for or against it and Apples) But the existence of a single awareness, unique in its environment has inherent problems.
The unique awareness is not differentiable from its environment on the criterion of awareness. It has no aware elements in its universe. So, all things are only divided into things I can affect, and things that remain unaffected. Nothing responds. No image of myself. Crushingly lonely, and cripplingly limited with respect to developing the concept of self. Yet the concept of self seems so integrally associated with awareness as a state of being. I think awareness must come many times, and then fail, before the coincidence of multiple aware entities makes it possible to develop into a self-sustained state. That seems probable, in the genesis of the phenomenon, and in the development of the individual.
(There could be many different threads on the moral, religious, and social aspects of that! But let us not wander off.)
Tris
“Don’t you want somebody to love?” ~ Grace Slick ~
Can you provide a little more to go on? I tried to google on “william carter” “identity theory”, but came up with only a single page of hits, none of which seemed applicable.