So, as I mentioned earlier, along with Dennett, I agree that the terms we use (e.g., “pain”, “intention”, perhaps “hard-wired”, which is what I’m questioning) are useful, allowing us to discuss…well…let’s just say “things” that it wouldn’t make a lot of sense to talk about in other ways. This is one of the reasons I brought up “data structure” above – it is an abstract concept of just this sort, but we can most assuredly trace it back to its elementary physical properties (i.e., wires on a circuit board). I’m not sure if talking about these things at the “individual billiard ball” level is “doomed to fail”; rather, I think that while that level may be confusing and hide the properties or characteristics that we want to talk about, it’s there nonetheless.
The former (that is, “experience-expectant”) is, I suppose, what I’m asking about. Very interesting. The question in my mind is whether we conflate the natural growth process with “innateness for” something. Perhaps there’s not really a difference; perhaps it’s just a manner of speaking. I’m not sure, but question it nonetheless.
Yes, the two struck me as being similar enough that I thought you’d appreciate the reference. Do post more about it as you get through it…
No apologies necessary, so long as it’s clear now. But the confusion is rampant and propagating, IMO, in Lanier’s writing…
This is (one of) my issue(s) with Lanier – he conflates various things in, actually, not so subtle ways. For instance, there seems to be no distinction in the essay to which you linked between “computer” and “program”. Perhaps referring to – what was it? the “meteor shower”? – as a “computer” was misguided and confusing to me. His essay is rife with such things. (wank, wank, wank)
Is it true that zombies aren’t supposed to have subjective interpretations? That doesn’t seem right to me – I doubt sincerely that Dennett, that paradigm of zombieism, would deny it.
At any rate, the little reading I did in HTMW last night gave Pinker’s view on some of this:
He then ties this to Turing machines and algorithms, qualifying it by saying that the human brain is not necessarily a Turing machine. He then sets up the notion of “virtual machine”, but that’s where I stopped.
At any rate, my question to Lanier might be – does it really matter if information can be interpreted in various ways? If I have an encrypted data stream that I cannot decipher, is it not still information? Why is that an issue with the notion of computer?
I’ll just quickly comment on this: noise is Gaussian. Any statistical deviation from a Gaussian distribution distinguishes itself from noise. Yes, at first glance the toaster and the PC just show some kind of activity or other, but if the aliens investigated both more rigorously and studied the outputs given various inputs, the PC would show statistical quirks in its outputs. Sure, they might waste time on all kinds of dead ends, like the cooling fan, but eventually the aliens would look at the states of the switches in the RAM and realise that the PC could be characterised by a working memory which, given a particular input configuration, could take on another output configuration in a repeatable, resettable, non-random manner. Now, they might very well never understand what any of that activity means (cometh once more the encryption metaphor!), but the activity clearly isn’t random. The toaster, on the other hand, would just yield fan-like noise no matter how closely it was examined.
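To make that concrete, here’s a rough Python sketch of the aliens’ test (assuming NumPy and SciPy are to hand; the “toaster” and “PC” streams are invented stand-ins, and a normality test is just one crude way of flagging non-Gaussian structure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

toaster = rng.normal(size=10_000)              # fan-like Gaussian hum
pc = np.tile([0.0, 1.0, 1.0, 0.0], 2_500)      # structured switching activity
pc = pc + rng.normal(scale=0.1, size=10_000)   # plus a little genuine noise

for name, samples in [("toaster", toaster), ("pc", pc)]:
    # D'Agostino-Pearson test: a low p-value rejects "this is Gaussian"
    _, p = stats.normaltest(samples)
    verdict = "plain noise" if p > 0.05 else "statistically quirky"
    print(f"{name}: p = {p:.3g} -> {verdict}")
```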
To extend the analogy, there’s me in their lab, next to my first identical triplet, who is comatose, and my second identical triplet, who is dead. By studying the response to various inputs, they can distinguish that I’m processing the inputs in some statistically odd way, which the unconscious brother didn’t (although further PET scanning might yield such variations) and the dead one certainly didn’t. Saying “there’s no objective test for consciousness” is only as useful a statement as “there’s no objective test for life” (or, indeed, anything).
Let me just clarify something; I am by no means championing Lanier, I merely find the few points he makes (once you’ve unearthed them from the Ego-Lube) more interesting than the endless hair-splitting I’ve seen in most Zombie debates.
From the website I linked to:
Conscious experience is inherently subjective experience (or at least, that seems to be the most widely-held view in Philo of Mind).
Here’s a quote from Richard Dawkins you might be interested in:
It might be, or it might be meaningless gibberish, and that’s the rub… there’s no non-subjective way to tell which is the case.
I think the caveats are raised because, to the best of my knowledge, no one knows how information representation and processing works in people, let alone computers.
Well, a couple things here: first off, the whole question is whether pattern-detection is subjective and open to interpretation, and that certainly appears to be the case. I might detect a pattern where none exists, or I might not detect a pattern where one does exist. So might the aliens. Secondly, the non-randomness of a signal is no guarantee that it has meaningful content, and meaningful content is the issue here.
Well, if the inputs were, say, aiming a megaphone at you and bellowing: “Angelina Jolie is waiting for you, stark naked, in the next room”, then yes, your response would show that you’re processing the input quite differently compared to your comatose and dead brothers. However, if the lab was run by Transylvanians and they bellowed in Romanian, the measurements of your input-processing wouldn’t be all that different from your brothers. To you, Romanian is noise, to the Transylvanians, it’s signal.
Of course, which is why those aliens would need their paper to be peer-reviewed, like ours would be if we analysed their stuff.
Then that’s effectively a mundane encryption issue, which I dealt with.
I’d still make a noise and move in response to 20Hz-20kHz frequencies, while they wouldn’t. Again, meaning is a mundane ‘encryption’ issue, while “statistically interesting behaviour” isn’t.
No, human utterances are not Gaussian noise, unless the entire language is one infinitely continuous sibilant s.
You mean peer-reviewed by other aliens with the exact same modes of perception and perceptual processing? How would those peer-reviewers detect a pattern the original aliens couldn’t?
I either missed the reference or didn’t understand it. How is meaning a mundane “encryption” issue?
Okay, I’ve finished Edelman’s book. First off, my quibbles.
Despite liking his phrasing of conscious experience as a process, not a thing, I am dissatisfied with his portrayal of conscious experience as being caused by the neurologic processes. To me, the proper perspective is to consider issues of qualia/consciousness and issues of neuronal activity as different levels of analysis. I seem to recall that was Pinker’s perspective as well.
I am also annoyed by his harping on his “the brain is not a computer” mantra. For a smart guy he has a limited conceptualization of computers: his image is exclusively of a sequential processing device reading out some tape. Of course the brain is not a computer, but obviously computers can be, and today sometimes are, parallel processing devices operating in massively nonlinear manners. His point is that the neurons are special; my view is that how the neurons are organized to produce and process representations of the world, including its ever-changing self, is special, but not by definition restricted to neurons.
He also seems unaware of the importance of the cerebellum in cognitive function and in attention, although to be fair, that work may have been quite recent when he was formulating his concepts.
And his emphasis on “degeneracy”, that is, multiple paths that represent the same output, is nice, but he fails to make much of an evidentiary case for it, or to be convincing that it is necessary.
All that said, there is much of note in his presentation. I very much appreciate and respect his attempting to place the development of consciousness within an evolutionary context.
Most notable are the parts of his presentation that strike themes that others have (I believe independently) also espoused. I have of course already noted Grossberg’s portrayal of conscious states as resonant states and his computer modelling of how a brain does that with nested levels of processing. Much of Grossberg’s work is not incompatible with Edelman’s formulations, albeit expressed in a more mechanistic fashion. But others have struck this resonance/reentrant circuit/strange loop theme as well. See, for example, this Patricia Churchland article in Science (again, the link is to the abstract; the quote is from the article, but a paid subscription is required):
And that “loopy” phrase may strike a chord with some of us: years ago Doug Hofstadter proposed (in Gödel, Escher, Bach) that consciousness was the result of an information processing system having levels of “strange loops” which included its ever-changing self as a member of the set of information that it analyzed. The more nested and self-referential these processing (resonant/reentrant?) loops were, the more conscious an information processing system would be. Edelman brushes against this Hofstadteresque concept slightly as he attempts to explain how “C states” (the conscious experiences) smoothly follow each other (pp. 122-123).
So several trains of thought seem to be headed to the same station by different tracks. Makes one think that there may be something to it.
No, of course not. But thank you for reminding me – I was pretty careful in my original response to indicate that I was just provoking a response. In the course of discussion, I’ve stopped being quite so careful. Sorry about that.
Yargh, what the hell was I thinking? I had it in my head that “zombie == Dennett”, not “zombie == person without qualia”. Lanier successfully screwed me up…trying to assimilate his conflations set me up. I feel so dirty.
Hmm. There’s an issue here that I think is due to yet another conflation. Lanier has an essay in which he is attempting to say – what? There’s no such thing as zombies? There are only zombies? The “zombie” concept is silly on its face because we can’t identify other conscious beings, much less zombies? What?
Let’s see…a zombie is a person without qualia. Would it be correct to say that Lanier equates consciousness with subjective experience with qualia? Let’s say he does. Now he flips it (or shifts viewpoints): there is no objective (external?) test for consciousness (where consciousness is equated with information processing), as there’s no objective test for information processing. But that last is silly, isn’t it? Information is defined, essentially (although speaking loosely), as a changing signal. As SentientMeat points out, (white?) noise is Gaussian. The fact that something can be identified as an information-giving process means that an information stream has been identified – regardless of whether the information can be deciphered. (There’s an interesting tie-in to DSeid’s introduction of chaos theory here; at either end of the spectrum – be it a constant or totally random signal – information is uninteresting. It’s in the chaotic range that things get interesting. Also interesting, to my mind, is the notion of encryption as a transformation that makes a non-random and non-constant information stream appear to be more random than it is. Not a groundbreaking observation, but I never cast it quite that way before.)
So, then, my question is: Why does the (in)ability to decipher certain information streams invalidate the notion of consciousness? Or is Lanier simply ridiculing the notion of zombies? (If so, why not simply agree with Dennett?) Or, is Lanier simply making the (trivial) point that there are “things” we don’t know?
Yeah, I know what you mean. When other commentators set you up, they force you to take the wrong cards through smooth, skillful sleight of hand. Lanier just throws the deck of cards in your face.
I don’t know, and for the record, I’ve never liked the concept of “qualia”. Sure, we need some label to even talk about this stuff, but the term is used so blithely and ubiquitously that it’s taken on an aura of legitimacy and definitiveness, even though its actual definition (“what it is like”) strikes me as horrendously vague.
With your indulgence, I’d like to explore this a bit. How does one determine whether something is an information-giving process in the first place? (i.e., what criteria must an information-giving process meet? What criteria would disqualify it from being an information-giving process?)
How does one tell the difference between a random data stream and an encrypted, non-random, non-constant information stream that only appears to be random?
I’d like to hold off on attempting to answer the rest of your questions (which frankly, I ain’t so sure I can answer at all) until I get your response to the above.
I am now even more sure that I’ve made the right choice in not reading much on the Zombie wars, but do allow me to comment on noise from the perspective of intelligence. While white noise is, by definition, Gaussian, real-life noise usually is not. At least I do not think so. Noise is just that which gets in the way of understanding the meaning of the designated signal. It is irrelevant data that obscures what we are interested in. So, for example, in functional MRI (fMRI) studies of brains performing cognitive tasks, and in evoked response potential (ERP) studies of brain responses to various stimuli and processing demands, we average out responses over many trials. The noise cancels out and we are left with a true signal, but that noise is not necessarily Gaussian; it comes from a variety of coincident processes that are not the item of analysis: heartbeats, blood flow, muscle artifacts, other brain activity not in response to the task being analyzed. Those noises could in fact be signals themselves, which I could analyze if I approached the data set differently. Likewise, any time a complete object is partially blocked from view yet my brain completes the whole, it has dealt with noise.
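For what it’s worth, here’s a toy version of that averaging trick (the waveform and trial counts are invented); note that the noise only needs to be zero-mean and uncorrelated across trials, not Gaussian, for this to work:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
evoked = 0.5 * np.exp(-((t - 0.3) ** 2) / 0.002)          # the "true" response
trials = evoked + rng.normal(scale=2.0, size=(200, 500))  # 200 noisy trials

average = trials.mean(axis=0)
print("single-trial noise:", (trials[0] - evoked).std())  # ~2.0
print("averaged noise    :", (average - evoked).std())    # ~2.0 / sqrt(200) ~ 0.14
# Zero-mean noise shrinks as 1/sqrt(n_trials); the evoked response survives.
```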
Now whether this has anything to do with Zombie wars, I don’t know …
Since I don’t see the connection Lanier is trying to make, I’m flying blind as far as context for the question. In accord with what I said at some point earlier, I would say that “information-giving process”, at least in the way I think you mean it, should be given a functional definition. The reason I qualify the statement that way is that an unchanging signal does provide information of a sort – we can interpret it as “nothing is happening”, which is sometimes useful information in itself, although no information exists under the Shannon definition. So, strictly speaking, an information-giving process is not necessarily subjective, nor is it necessarily decipherable. But, in the way I think you mean it, it at least requires a subjective interpretation. One step further removed from the strict definition might very well require decipherability – although I’d point out that the fact that information is indecipherable might in and of itself be useful information.
So, as to the criteria for disqualification as an information-giving process: if it does not adhere to the functional definition in use.
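To pin down the Shannon side of that, here’s a small toy entropy function (my own sketch, nothing canonical): a constant signal scores zero, even though we might functionally read it as “nothing is happening”.

```python
import math
from collections import Counter

def entropy_bits(seq):
    """Empirical Shannon entropy of a symbol sequence, in bits per symbol."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

print(entropy_bits("aaaaaaaaa"))  # zero: a constant signal carries no Shannon information
print(entropy_bits("abcabcabc"))  # ~1.58 bits/symbol: repetitive but varying
print(entropy_bits("abcdefghi"))  # ~3.17 bits/symbol: all symbols distinct
```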
I’m not sure that there’s one method. For instance, by observing the packets transferred between computers, one might notice a cause / effect relation. In another scenario, one might find a stimulus / response relation (different, I think, due to the behaviorist baggage the word “stimulus” carries with it). On the other hand, identification of simple patterns – either adherence to or deviation from – can also be used to differentiate. I suppose one might characterize it as statistical, though there might be other viewpoints I’m not considering.
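Here’s a crude sketch of that statistical viewpoint, with two invented black boxes (a real test would compare output distributions rather than exact values, but the idea is the same):

```python
import random

def noisy_box(x):
    return random.random()        # output has no relation to the input

def processing_box(x):
    return (3 * x + 1) % 7        # deterministic input -> output mapping

for name, box in [("noisy box", noisy_box), ("processing box", processing_box)]:
    runs = [[box(x) for x in range(5)] for _ in range(3)]  # probe, reset, repeat
    print(f"{name}: repeatable = {all(run == runs[0] for run in runs)}")
```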
Yes, they do get tedious. But they are somehow unavoidable, it seems.
I think that illustrates the issue nicely. The word “information” is really easy to twist around. People try to make it do somersaults, and generally it just happily goes end over head. But, every so often, especially when we try to exert our dominance and attempt to force it to behave in the way we want, it goes off and runs madly about the room screaming bloody murder.
The point of that bit of anthropomorphism (beyond the fact that it amused me to write it) is that it seems to me the various ways in which people use the term are generally consistent. However, information (in the Shannon sense) may not be information (in the functional / interpreted sense); on the other hand, information (in the functional / interpreted sense) may not be information (in the Shannon sense).
I also don’t know what impact this has on zombies (or more specifically, the arguments about them), as I just see the conflation without the point.
So I have a random number generator picking numbers zero to nine sequentially and transmitting them, obscuring some other data. Noise? “Information”?
Now I am sequentially transmitting the digits of pi on top of the same data stream. How about now? Can you know the difference without already knowing pi, or that the transmission would be pi?
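To put the point concretely, here’s a toy frequency test (the kind of crude check an eavesdropper might run); the digits of pi pass it just as comfortably as the random generator’s output:

```python
import random
from collections import Counter

pi_digits = "14159265358979323846264338327950288419716939937510"
rng_digits = "".join(str(random.randrange(10)) for _ in range(len(pi_digits)))

for name, digits in [("pi", pi_digits), ("rng", rng_digits)]:
    counts = Counter(digits)
    expected = len(digits) / 10   # uniform expectation per digit
    chi2 = sum((counts.get(str(d), 0) - expected) ** 2 / expected
               for d in range(10))
    print(f"{name}: chi-square = {chi2:.1f}")
# Both statistics land in the same unremarkable range; without already
# knowing pi, this test cannot tell hidden structure from randomness.
```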
Detect a statistically significant deviation from Gaussian noise? Why, just as we do in our studies of all kinds of phenomena: by repeating the observations numerous times and arguing whether or not it’s significant or just wishful thinking.
Having demonstrated statistical significance, finding out what each signal represents might be fairly easy (the same input yields the same output each time), difficult or impossible, just like the Allied attempts to crack Enigma by repeatedly using known inputs. That is still an everyday, well-understood computational issue at heart.
Perfectly efficient coding (encryption with a one-time pad, say) is indistinguishable from noise. So, yes, an encoded or encrypted message could look like Gaussian noise. And, conversely, a signal which deviated from Gaussian noise could still mean nothing at all, like me making up nonsense words which still obeyed the general rules of English pronunciation.
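Here’s a toy demonstration of that first claim (the “distinct bytes” probe is just a crude stand-in for a proper statistical battery):

```python
import os

message = b"the same input yields the same output each time. " * 20
pad = os.urandom(len(message))                   # truly random one-time key
ciphertext = bytes(m ^ k for m, k in zip(message, pad))

def distinct_byte_fraction(data):
    return len(set(data)) / 256                  # crude structure probe

print("plaintext :", distinct_byte_fraction(message))     # low: structured text
print("random    :", distinct_byte_fraction(os.urandom(len(message))))
print("ciphertext:", distinct_byte_fraction(ciphertext))  # ~ same as random
# XOR with a fresh random pad erases the plaintext's statistical
# fingerprint; the result is information-theoretically just noise.
```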
But, in principle, the aliens could notice we or our computers weren’t outputting boring old noise, and could learn English by flashing up pictures (or repeating inputs to the RAM) and seeing what outputs repeatedly came out. After all, this is just Robinson Crusoe and Man Friday learning a mutual language by pointing and repeating.
After reading DSeid’s post, I realize I might not be using the term “noise” correctly. I was under the impression it described either of two scenarios: If I transmit the message “We’re getting married!” to you, and you receive “W&$e’r^(e %ge*@tt(@#in@#$g 2M##a%#rr)(r#^*ie%d!”, the extraneous symbol-crap would be considered “noise”. And if I transmit the message “#%%$ &^&^ #@# **%*” to you, you would:
A) Recognize the code and decode it, or
B) Be unable to determine whether the signal contained meaningful content or whether it was just “noise” that looked like it might be a signal.
That’s one of the reasons I keep harping about the role of meaningful content in a signal. The fact that a signal is indecipherable might be useful information, but that’s information about the signal; it’s not the information encoded in the signal. And if the signal is indecipherable, there’s no way to tell whether the content of the signal is meaningful or just gibberish.
For encoded information to be functional, it has to be decoded, and I think that’s where the whole Zombie question comes in: I’m aware of the informational content of my consciousness, and it’s subjective (I’m the only one that is or can be aware of it). So what exactly is subjective awareness? And how does the encoding/decoding process occur?
If we just receive numbers (i.e., signal levels) from either an RNG or a pi-generator, it’s noise as far as we can tell. If we were allowed to repeatedly choose our own inputs and observe the subsequent outputs, we could tell if those outputs deviated from noise in any systematic way, implying information. If we were allowed to choose inputs which we knew represented specific things in the world, we might find out what that information meant.
But we could still be foiled by (near) perfectly efficient encoding, strong encryption or “worlds” so utterly unrecognisable that they shared no common element.
It seems to me that this just points to the silliness of the entire zombie concept. I’m not trying to be evasive or cut off discussion; I’m just not sure where to go with this…if prompted, I’ll certainly try.
If I may, again, briefly interject here, the issue is again one of encryption in memory formation. The initial stimulus, whatever it was (say, the precise pattern of reflected photons), acts as a perfect one-time pad, a private key to which only you are privy. Everyone else can only see the neuronal activity caused by that past stimulus whenever the memory is reactivated.
So, subjective awareness is the reactivation of private-key-encrypted memories, and the “how” of the coding process is simply one of memory formation. (And as you remember from that “Memory is a Physical Thing” thread I referenced before, those neuronal pathways can break down if the memory is not ‘rehearsed’ often enough, such that the key, the initial stimulus, can no longer be generated distinctly from neuronal noise - I think it was Hoodoo who proposed such a scenario, which seemed to get everybody prematurely excited for a reason I never understood.)
Seems to me that there’s a little more to subjective awareness than just memory, although I may not be properly accounting for your use of the term “memory” here. (I remember the thread you refer to, but don’t remember the exact points brought up, whether or not I participated, nor do I have time right now to go back and skim it.) What I mean by that is that subjective awareness can have a sense of immediacy to it – e.g., I’m very happy right now. The more visceral the emotion, say, the more immediate the subjective content. Of course, if one is including short-term (or working) memory, or if one includes the notion of resonance (as in ART, where “conscious == resonant”), then certainly I’d agree.
However, I (personally) think there’s a more explanatory mechanism that is responsible, one which incorporates memories. Perhaps this is just a “by definition” point, and it raises lots of other questions, but “self-awareness” can be attributed to the notion of reflection – defined as “being able to observe and reason about internal states”. The mechanisms that we (humans) use to reflect on our mental contents (perhaps not all that well, nor all that completely) are exactly those responsible for subjective awareness.
An analogy for subjective experience (i.e., qualia) occurred to me last night after posting my response to other-wise. The question was, “So what exactly is subjective awareness?” My answer would be that it’s an inherent result of (human) brain function. The analogy is: heat produced by a circuit. Resistance in a wire produces heat, due to the flow of electricity, during a circuit’s operation. That’s inherent in the operation – not necessary to the circuit as an information device qua information device (although perhaps it is in practice? due to entropy, or physics, or somesuch?), but a by-product of the process. And I don’t mean to trivialize subjective experience (i.e., it’s just a “by-product”), as I think reflection is part of (again, perhaps by definition) the self-awareness ability that humans have.