Can computers currently form private-key-encrypted memories and reactivate them?
I am presuming that no one participating here believes that there is “a ghost in the machine”? Given that, let me ask a few questions:
Are our experiences of qualia, consciousness, and a sense of “self” a function intrinsic to neurons acting in concert, or are they a function of the dynamic patterns of activity those neurons engage in, patterns that produce a cognitive/emotional system which handles its data in such a way as to result in such an experience, and which could, theoretically, be created by non-neuronal elements?
Now several different writers (from different perspectives and with different specifics), and some data, suggest that the particular ways in which this processing loops about itself is what results in the sense of subjective self with qualia and consciousness. Does this seem right to each of you?
Yet this subjective sense of a unitary self experiencing a world emerges out of many, many individual elements: neurons are separated from each other, each just doing its job, unaware (so to speak) of the consciousness in which it participates (I’ll ignore glia et al for now). In my way of thinking, the brain is a massively nonlinear processing system and the principles of chaos theory come into play. In particular I am thinking of the tendency to self-similarity at different levels of analysis. If this is true, then neurons operating in subsystems are able to produce a certain level of awareness, and those subsystems operating in concert are able, despite their communicating across some finite distance of time and of space, to result in the experience of a unitary conscious subjective self. It follows that, without any particular intentional action to do so, individual brains acting in concert will also organize self-similarly, even though they are no more aware of that than individual neurons are aware that they are participating in a single brain. Would such organization of many individual brains also result in an emergent metaconsciousness operating on a different timescale, with something self-similar to qualia of its own, just beyond our individual comprehension?
It sounds (don’t let me put words in your mouth here) as though you consider subjective awareness epiphenomenal; an effect that’s produced physically, but it’s basically just a by-product and does not itself produce any physical effects (a similar and frequently quoted analogy would be the steam from a locomotive: it’s produced by the train but has no effect on the train whatsoever).
I haven’t come across anything that would lead me to conclude that consciousness, whatever it is, is intrinsic to neurons and neurons alone, so I vote for your latter proposition.
Yes, but I want to be careful not to conflate the sense of subjective self with consciousness; the two are not synonyms. The sense of subjective self is one of the things I’m conscious of.
Damn interesting question. You might want to google “Global Consciousness” (or maybe it’s “Global Brain”). I recall encountering a few essays on this very speculation, most of which were fluffier than “What the bleep”, but a few were quite erudite and thought-provoking.
Of course - the last time you resubscribed to the Straight Dope your computer used public-private key cryptography. Now, like I say ad infinitum, the human brain is different to the silicon one in your PC in all kinds of ways, and so human subjective awareness will differ from whatever subjective awareness a PC ever has. However, my point is that the private, ineffable aspect of this process called subjective awareness is a mundane computational matter at heart. Just because I can’t access your (or your PC’s) consciousness doesn’t make consciousness any more mysterious than my not being able to read your credit details.
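To make that cryptographic point concrete, here’s a toy, textbook-sized sketch of public-private key encryption (the numbers and names are purely my own illustration, nothing like what your browser actually does, and certainly not secure): the ciphertext is just an opaque number to anyone who lacks the private key, which is all the “privacy” of subjective awareness amounts to on my account.

```python
# Toy, textbook RSA (NOT secure; illustrative only).
# All numbers and names here are my own illustration.

p, q = 61, 53                # tiny primes, textbook example
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # 2753, the private exponent (Python 3.8+)

def encrypt(m: int) -> int:
    """Anyone with the public key (n, e) can do this."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private exponent d can do this."""
    return pow(c, d, n)

memory = 42                  # stand-in for some stored content
ciphertext = encrypt(memory)
print(ciphertext)            # an arbitrary-looking number to an outsider
print(decrypt(ciphertext))   # 42 again, but only for the key holder
```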
In what ways is my PC’s subjective awareness different than mine? Also, since subjective awareness is private in both cases, how do you claim to know that they’re different? Most importantly, how does your claim differ from panpsychism?
It’s ‘aware’, if such a word applies, of completely different inputs. The signals which propagate from our sensory equipment along these mushy, analogue channels called neurons are themselves very different to those propagating along the metallic channels of the chips, and the filtering and processing thereof yield yet more significant differences. When I look at my newsagent, the signal reaching my visual cortex is very different to that reaching a PC’s RAM from a digital photo of her.
I don’t claim to know anything for certain, but unless one wishes to entertain some kind of ludicrous panpsychic equivalence, ascribing similar subjective awareness to a PC isn’t so different to ascribing it to a rock. It has working memory, unlike the rock, but the similarities largely stop there, so ascribing a similar whole process would be like drawing an equivalence between the life of a plant and the life of a chimp.
Because it suggests a basis for ‘awareness’ that most things don’t have: working memory.
We’ve already established that when it comes to subjective awareness, the signal is irrelevant. What’s relevant is the information the signal encodes. For example, if I go to the store and see letters on a sign that say “We’re closed”, I’ll know not to bother trying the door handle. If a blind person touches the Braille dots on the same sign, she’ll get the same information, despite completely different sensory input and processing. Of course, the exact information and the meaningfulness of that information will be unique to each of us, but there is apparently enough similarity in the information we acquire (and its meaning to us) to produce the same behavior, i.e., not trying the door handle.
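To illustrate what I mean by the information mattering rather than the signal, here’s a rough sketch of the closed-sign example (the function names and “encodings” are my own toy stand-ins, nothing more): two completely different channels decode to the same information and so trigger the same behavior.

```python
# Toy sketch: same information, different signals.

def decode_printed_sign(pixels: str) -> str:
    # pretend 'pixels' is what the sighted shopper's visual system gets
    return pixels.strip().upper()

def decode_braille_sign(dots: list) -> str:
    # pretend 'dots' is the tactile input; a list of letters stands in
    # for the cell patterns here
    return "".join(dots).upper()

def decide(message: str) -> str:
    # behavior depends on the decoded information, not on which
    # sensory channel carried it
    return "walk away" if message == "WE'RE CLOSED" else "try the door"

print(decide(decode_printed_sign("  we're closed  ")))    # walk away
print(decide(decode_braille_sign(list("we're closed"))))  # walk away
```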
I can see that with computers and humans, there might not be enough overlap. But how is the awareness a difference in kind rather than merely a difference in content? After all, we’re shuffling physical particles around to encode memory too; our particles are just wetter.
I don’t see how one can reliably determine whether or not a working memory is in play. What measures would be used, given that a private-key-encrypted memory can be indistinguishable from noise?
As I’m a firm believer in strong AI (most likely with some caveats, if pressed), most assuredly the latter.
Yes, although I’m not sure “loops about itself” is enough. In other words, to agree without qualification might misrepresent my actual position.
And I think that that, in a nutshell, strikes close to my current line of research. I balk at using the word qualia, though, not just because I think the notion is silly, but also because what I’m currently doing is far from anything that might be considered that way. Especially since I don’t work at the level of neurons, as that’s too “low-level”; that is, it would take decades to implement various components of an intelligent system from the neuron level. Rather, my line of work is to make individual components “aware”, put them in a planar graph relation or even a hierarchy of some sort, and provide the mechanisms by which “awareness” can propagate throughout the system, both in bottom-up and in top-down fashion.
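Since you’ll probably ask what that looks like, here’s a very rough sketch of the sort of thing I mean (the classes and names are mine, purely illustrative, and nowhere near the real system): each component keeps a little record of the states of the components below it (bottom-up) and can push an expectation back down to bias them (top-down).

```python
# Rough sketch of components whose "awareness" propagates
# bottom-up and top-down. Purely illustrative.

class Component:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = 0.0          # local activity level
        self.monitor = {}         # what this node "knows" about others

    def bottom_up(self):
        # gather (become 'aware' of) the children's states
        for child in self.children:
            child.bottom_up()
            self.monitor[child.name] = child.state
        if self.children:
            self.state = sum(self.monitor.values()) / len(self.monitor)

    def top_down(self, expectation):
        # push an expectation back down, biasing the lower components
        self.state = 0.5 * self.state + 0.5 * expectation
        for child in self.children:
            child.top_down(self.state)

edge = Component("edge_detector")
shape = Component("shape_detector")
scene = Component("scene", [edge, shape])

edge.state, shape.state = 0.9, 0.3
scene.bottom_up()        # scene is now 'aware' of its parts
scene.top_down(1.0)      # and biases them with its own expectation
print(scene.monitor)     # {'edge_detector': 0.9, 'shape_detector': 0.3}
```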
Not exactly, which is why I made the attempt to qualify the term “by-product”. There’s a definite effect; I think I’d rather stick with “inherent in the process”. I don’t think you can divorce the subjective awareness from the physical substrate, so it’s neither separate nor distinct from brain function.
Which of course is why I said “the particular ways in which …” So what are your qualifications?
Now you know that you have to tell us more. I showed you mine, show us yours. Details, my friend. How do you define “awareness” in your system, for example, and how would you measure it?
BTW, I’ve been thinking about how to implement my concept of analogy making as geometric transformations of fuzzy-edged n-dimensional conceptual objects. Would you, off-line, be at all interested in participating if it seemed doable and not too much effort? I think that if I presented it right, and brought on board someone like you who could both participate (in spades) on the conceptual side, and also on the implementation side, that I could possibly tempt Grossberg into another article … called ARTanalogy perhaps. (I don’t think that I entirely wore out my welcome, although I suspect that my lack of experience with the protocols of publishing preparation began to annoy after a bit. Being as it was aimed at audiences both clinical and theoretical, and also being as it by necessity required covering both a large amount of background on autism research and a comprehensive review of Grossberg’s models … well, let’s say we had a few more revisions required than he is used to having.) Let me know if you want the details of what I’m thinking; my e-mail link is part of my profile. (Alternatively I might have to eventually contact clinical experimentalists to follow up on our autism predictions … and collaborating with them is a lot more work!)
Have we? I missed that meeting. I think it is entirely possible that the kind of signals which propagate through neurons are simply not reproducible by other means, thus possibly making all the difference in the world.
And I agree entirely that so subtle and ‘many-layered’ a communication might not be “understandable” by silicon for many decades yet. I’d suggest we’re stuck for the moment with car computers recognising road signs and taking appropriate action and the like, which is rather simpler.
It’s a threshold I’m drawing on the continuum of complexity, just as I could say that my life was more complex than that of an amoeba. If you disagreed, so be it.
Statistical deviation from noise, as I’ve already explained. The aliens, given enough time, would find the RAM of the PC and see similar inputs becoming similar outputs, repeatably and reliably, in that configuration of physical particles. The encryption would just hamper their understanding of what those configurations referred to in the world (if anything).
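If it helps, here’s a toy version of the aliens’ test (my own illustration, not a serious statistical procedure): a memory-like configuration maps the same input to the same output every time, while a noise-like one doesn’t, and that repeatability is detectable without knowing what anything refers to.

```python
# Toy sketch of the "repeatable, reliable input-output" test.

import random

def repeatability(black_box, inputs, trials=20):
    """Fraction of inputs for which the box gives one consistent output."""
    consistent = 0
    for x in inputs:
        outputs = {black_box(x) for _ in range(trials)}
        if len(outputs) == 1:
            consistent += 1
    return consistent / len(inputs)

# something memory-like: same input, same output, every time
lookup_table = {i: i * i for i in range(10)}
memory_like = lambda x: lookup_table[x]

# something noise-like: output unrelated to input
noise_like = lambda x: random.randint(0, 100)

inputs = list(range(10))
print(repeatability(memory_like, inputs))   # 1.0
print(repeatability(noise_like, inputs))    # ~0.0
```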
Ok, I got confused because earlier you had said that it was a result of brain function and here you seem to be saying that it is brain function (and presumably, theoretically circuit function as well).
My questions here would be similar to DSeid’s: How do you measure awareness? How do you measure the effect of awareness? What function does awareness perform that can’t be performed by neuronal activity sans awareness?
Well, since nobody knows how information is encoded in neural activity, the speculation that whatever it is, it may not be reproducible by other means is extraordinarily moot. And again, the issue is not the signal in se, but the meaningful content of that signal (if any).
Perhaps we’re getting our terminology crossed. When you say “kind of signal”, are you referring to the original input that the meaningful content of the signal is based upon?
Not sure what you’re saying here. Are you saying that amoebas have some subjective awareness, my PC has more, my brain has even more, and human communities and/or the internet may have even more, à la DSeid?
So any system that appears to be reliably producing similar outputs from similar inputs is guaranteed to have a working memory and is processing meaningful information. Is that correct?
Well, my qualification is mostly due to the potential expansiveness of “looping about itself”. The notion of feedback and stable states (which I’m linking to “looping”) is definitely necessary. Which is why I like ART – the resonance / vigilance mechanism is an elegant solution, made better by its prediction of empirical findings. But without a more complete description, I can’t determine the possible implications.
First, a caveat: I’m approaching this from a computer science / engineering perspective. I think it’ll be wholly unsatisfying as a philosophical stance, but I also think that in just the same way we leverage the (deceptively) simple process of computation to do a lot of work for us, so can a rudimentary characterization of “awareness” be built up into something surprisingly meaningful.
I think earlier I gave a definition of “reflection” as “the ability to observe and reason about internal states”. “Awareness”, at the level I’m talking about, might simply be characterized as a monitor of some sort. But not a monitor as in a sensor (e.g., a photovoltaic cell that detects light), but something that can detect the state of another part of the system (e.g., in the simplest case, a toggle that indicates that the photovoltaic cell has detected light). I see little reason to discount extending it to more abstract…um…things. (Actually, as an aside, I think the way people generally use the term “reflection” assumes self-awareness, turning it on its head in a form of question begging.)
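A minimal sketch of that distinction, using the photocell example from above (the classes are my own toy illustration, not a claim about consciousness): the monitor doesn’t detect light, it detects that the cell has detected light.

```python
# Toy sketch: a sensor versus a monitor of another component's state.

class PhotoCell:
    """A sensor: detects something in the world (light)."""
    def __init__(self):
        self.detecting = False

    def sense(self, lux: float):
        self.detecting = lux > 10.0   # arbitrary threshold

class Monitor:
    """Not a sensor: it detects the state of another component."""
    def __init__(self, watched: PhotoCell):
        self.watched = watched

    def check(self) -> bool:
        # the system is 'aware that' the cell has detected light,
        # as distinct from the cell merely detecting it
        return self.watched.detecting

cell = PhotoCell()
monitor = Monitor(cell)
cell.sense(50.0)
print(monitor.check())   # True: awareness-of-detection, not detection itself
```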
Measuring it, in a generic sense, is a tricky business. I believe there’s a whole subfield of control theory (or cybernetics) devoted to exactly that. How can we tell an oscillating signal (worse, a chaotic signal) is not functioning appropriately? Is it possible to determine proper function of a black box solely by looking at its input / output couplings? And those questions are made infinitely more difficult when we’re guessing about (or reverse engineering) the subject matter under examination. At this point, that’s out of my area; in general, I have to be satisfied with explicitly assuming (or, if I’m designing the thing, defining) the boundary conditions. As I say, unsatisfying when discussing the wonder of consciousness, but it’s one avenue into the system that’ll hopefully pay off.
I’m suggesting the two issues may not be separable: read on.
No, the entire process which takes place, given the enormous feedback we know occurs in our sensory processing. What feedback means is that, for any signal, there is a little bit of processing of it, which modifies the next portion of signal, which is then processed again, ad almost infinitum. The massively parallel processing which takes place in biological brains might simply not be reproducible by silicon devices, only approximated, due solely to the weird, poorly understood analogue nature of the channels themselves.
OK, there are two words here, subjective and awareness, and we’re getting caught up in conflating aspects of one or the other. My privacy and encryption arguments concerned the subjective aspect. We’re now talking more about what counts as awareness and what doesn’t. I’m suggesting a threshold based on working memory, which I can’t really say an amoeba has at all. I don’t count human communities as having one, either; they’re rather separate working memories: a multitude of awarenesses, if you like. The PC just about has some kind of working memory containing inputs with some “relevance to the world”, but the processing thereof is so meagre that it can just barely be said to be ‘aware of’ those inputs, rather like an insect’s brain, say.
So, my ‘awareness’ thresholding, which you can of course consider arbitrary and absurd if you like, goes rock: none; amoeba: none; PC: just about some; human: lots; human community: none. Similarly, you might say regarding life, rock: none; virus: just about some; amoeba: yes, etc.
Reliable processing of inputs to outputs is what I’m suggesting comprises working memory, yes, but those inputs aren’t necessarily meaningful, no. Like I said, the inputs can statistically deviate from noise (i.e. be information) but still be nonsense (i.e. not be meaningful), like me making up words which still obeyed general rules of English pronunciation (such that the strings weren’t absolutely random, unlike most truly random strings, which you simply couldn’t pronounce).
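One crude way to show that distinction (the strings and the entropy measure below are just my own toy example): a made-up but pronounceable string has plenty of statistical structure, i.e. it deviates from random noise, even though it refers to nothing at all.

```python
# Toy illustration: structured-but-meaningless text versus noise,
# compared by (one crude measure) character entropy.

import math
import random
import string
from collections import Counter

def char_entropy(s: str) -> float:
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

nonsense = "spendle crarm debensier pinogrets" * 10   # structured, meaningless
noise = "".join(random.choice(string.printable) for _ in range(len(nonsense)))

print(char_entropy(nonsense))  # noticeably lower: statistical structure
print(char_entropy(noise))     # higher, up near log2(100) ~ 6.6 bits
```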
Sorry about that. I try to qualify things and provide context, but am not always successful. I’m not sure where the line gets drawn between “result of” and “inherent part of” brain function. One thing that needs to be accounted for is the structure and complexity of the system (echoing SentientMeat’s sentiments). That is, while snails (I forget the genus/species of the particular example I’m thinking of) have brain function, I’m not sure anyone would attribute self-awareness to them, much less consciousness (in extremely rudimentary form). And yet, they have neurons just as we do.
I’d contend that there’s a continuum along which awareness (or self-awareness, or consciousness) arises as part of (or a result of) brain operation. One benefit of this is that it provides some sort of answer to the question “is a snail / fish / chipmunk / monkey / human self-aware?” Well, if you mean “self-aware” like a human, then only humans are (with caveats about degenerate cases like persistent vegetative states). If you mean “self-aware” like a monkey, then perhaps humans are too. If you mean “self-aware” like a snail, then maybe, maybe not. These answers also depend on how strictly you mean “self-aware like X”; can humans know what it’s like to be a bat? Not exactly, but we have some idea of what it would be like. There’s an essay by Aaron Sloman that you might find interesting that examines this question.
See my response to DSeid. Is that adequate?
Feedback only confirms that meaningful content is the issue that still needs to be dealt with. After all, if the signal had no meaningful content, it couldn’t be processed: no processing, no feedback. Also, if feedback “modifies the next portion of signal”, then it’s no longer the same signal; it’s effectively a new signal and we’re right back where we started.
But no worries; we can run with the idea that PCs may only approximate the subjective awareness humans have. However, that addendum (along with the addendum I address at the end of this post) disembowels your original answer:
… leaving my original question (below) unaddressed.
Well, I have to admit I’m very interested in exactly what non-subjective awareness would be like. What is its nature? What properties distinguish it from subjective awareness? How do you measure non-subjective awareness?
Since the inputs into my working memory are meaningful, you still haven’t even addressed my original question:
Maybe it would be easier if I highlighted what I see as the main problems with the following scenario/questions:
In order to become aware of something visually, photons have to enter my eye and be absorbed by receptors in my retina. At this point, any information the light energy carries has been encoded as electro-chemical energy.
Sometimes the information triggers behavior on my part before I become aware of the information, if I become aware of it at all (there are so many examples of this in the literature that I assume there is no need for me to dig up references). Sometimes, however, I do become aware of (at least some of) the information (“Hey, look, it’s a poodle!”).
So in one case, some sort of neural activity is able to decode a sensory signal as having enough meaningful content to trigger behavior without awareness on my part.
In another case, however, some sort of neural activity is able to decode a sensory signal (not necessarily the same signal of course, but one that necessarily encodes at least some of the original information) as having meaningful content that I am aware of.
So… is the first case a case of subjective awareness even though there is no awareness on my part? If so, how is that determined? What’s your best guess as to what exactly is aware?
In the second case, what measurable difference is there between the neural activity that’s my awareness and the neural activity that’s not, considering that the neural activity that’s my awareness is information that is unbreakably encrypted, and to anyone but me, indistinguishable from noise?
In this second case (and possibly the first), I can’t see that there is any way to physically discern between neural activity that is my awareness, and neural activity that is not.
Yeah, mostly, but with the same caveat I gave DSeid: Awareness and self-awareness are not synonyms. A sense of subjective self is one of the things I’m aware of.
Hang on, what? Why can’t a signal which deviates from Gaussian noise still be processed, regardless of whether it represents something in the world or not? Are you sure you’re talking, like I am, about signal processing in telecommunications terminology?
By that criterion, there’s no such thing as a signal at all, since voltage/action potential levels must change with time in order to carry information: an infinite string of zeroes is not a “signal” as such.
Sorry, I thought I’d clarified the context with the immediately preceding sentence: “Everyone else can only see the neuronal activity caused by that past stimulus whenever the memory is reactivated.”
Perhaps a few more italics will be helpful:
subjective awareness is the reactivation of private-key-encrypted memories. (I’m not, of course, suggesting that awareness is solely memory reactivation. In fact, I’m not really addressing the “awareness” part of the phrase “subjective awareness” here at all - that’s rather a threshold for you to draw yourself wherever you dare, as I said in subsequent posts.)
Me too, but this is rather a bifurcation. If we were talking about what constitutes or explains biological life, an exclamation of interest in nonbiological life would rather come out of left field, so to speak. For what it’s worth, if “awareness” is based on inputs to working memory which somehow relate to things in the world, then two devices whose inputs and associated connections therebetween (as per connectionism, referenced earlier) were identical might be said to have the exact same awareness, such that neither shielded anything from the other by any encryptive process whatsoever. The word “subjective” might make less sense in this case, since both devices could experience exactly the same rather than the differences necessitated by biological devices, which simply cannot be identical. But, like I say, this is rather unfocussed jazz-improv debating if you ask me.
Whoa, where did this come from? If awareness (subjective or not, but let’s put that aside) is a process, then the “measurements” we can make are not things like “weight” or “hardness”. Processes are measured in other terms (like, for a start, whether they’ve happened or not according to some demonstrable outcome.)
Not all of them: Spendle crarm debensier pinogrets. Read that sentence a few times: input it to your working memory. It is not meaningful.
Ah, right – yes, we’re a long way from what I was talking about with Digital here. We’re now squarely in the enormously complex quicksand of human consciousness/awareness, and if the threshold of what counts as awareness is raised so very, very high, then it’s as though you’re asking me to explain sociology in terms of cells or something. My guess, and it is only a guess, is that the ‘awareness’ in the first case is so ‘faint’ compared to the “full” awareness which succeeds it that the “you” which such processing comprises is statistically almost totally the latter (i.e. if, somehow, the initial stimulus-response could be disconnected from the rest of the neural activity, “you” could be said to be aware of it, given that this “you” would be more like an insect than a human). But we’re well into the weirdness of blindsight and the like here, so I don’t make any particular claims about what’s explained and what isn’t, because so much isn’t. (Just like any other science has gaps, of course.)
You mean, what measurable difference is there between noise-activity and unbreakably encrypted activity? None. I never said you could “measure” such attributes of activity as whether or not they constituted “awareness”, which is why I was surprised by what came from left field earlier (although heterophenomenology could be said to constitute such measurements, I suppose - see next).
For other people than you? Of course there isn’t. I’m just trying to couch that in everyday computational terms so that it doesn’t become this big scary “mystery” everyone bangs on about. Not being able to “physically measure” something directly is irrelevant - so long as there are still physically detectable consequences, then we’re OK; it’s still science. In cognitive science, the only way other people can reliably tell what activity you’re aware of or not is by asking you repeatedly in controlled conditions (i.e. the physical measurement being an audibly uttered “yes” or the like).
Actually, o-w, I’ve been reading some past threads and I think it would be helpful for just the two of us to discuss a whole load of simpler things we have somehow got rather confused over in the past. If I set forth a fairly general OP, will you have time to take part?