What Is Consciousness?

A child raised in sensory deprivation has consciousness, but lacks the means of interpretation that we share. We do respond to unlearned stimulation from our internal organs and to hormonal stimulation like sex drive. We are conscious of them but they do not elicit the same mental responses as those we have learned.

I agree that the mechanism for consciousness is hard wired but the form it takes (our interpretation) is learned.

Searle and Putnam need to read a book on General Semantics. Our current technology has raced past them.

Crane

But it does strike me as odd if you say you disagree with the conclusion of some argument, without being prepared to point out what, in your opinion, goes wrong with the argument—it sorta defeats the purpose of discussion, it seems to me.

But most of this processing never enters into conscious experience at all; rather, it seems that only some fraction of what goes on in the neocortex, perhaps most importantly the temporal and frontal lobes, really has some impact on our actual conscious experience. Consider for instance the case of blindsight: due to damage in the visual cortex, visual stimuli are no longer consciously perceived; nevertheless, upon being cued, patients may demonstrate some form of awareness of these stimuli—by, e.g., guessing right more often than chance. So there is visual processing going on without consciousness.

Indeed, all sorts of processes occur in the brain without consciousness accompanying them, so if, e.g., all those submodules (whatever you’d call them) are conscious, then their consciousness does not play into my consciousness, which leads to the sort of strange conclusion that there are many other sub-consciousnesses in my head besides my own.

I don’t think I know what that means. What sort of process do you hold to be sufficient for conscious experience? When is computer processing accompanied by there being something it is like to be that computer? Because plainly, not all processing produces consciousness (as evidenced by the fact that there is processing in our brains which we nevertheless aren’t conscious of).

Nevertheless, if there is a program such that executing that program suffices for conscious experience, I can find an implementation function such that the rock computes this program, just as well as I can find an implementation function such that the rock computes the square root of three, or the digits of pi—this implementation function could be, for instance, a table that takes microstates of the rock to functional states of the computation, such that I would only need to know the table, and observe the states of the rock, and would then, for instance, learn the square root of three.

So if consciousness observes the sensory input, how does it do so? I mean, if I observe something, then evidently, my sensory input is observed by consciousness, as you say. But how does observation then work for consciousness?

What would you have me do? (And…why the hell does this have to be about me? Can we please talk about the IDEAS and stop talking about me? Please?)

How do we know it “never enters into conscious experience at all?” It could be constructive of it, contributing to it overall, without our perceiving it. (We integrate vision from two eyes into a single “visual experience.” We generally aren’t conscious of having two distinct fields of vision.)

Do you accept the notion of the “unconscious mind?” That we have a rich mental landscape that we’re largely unaware of? This could be an indication of the working of a part of the mind to help support the overall phenomenon of consciousness.

We aren’t aware of our left and right cerebral hemispheres – even people who have had them surgically separated aren’t conscious of them as distinct. I rather admire the Freudian notion that some of these semi-independent entities reveal themselves via slips, errors, blunders, and subtle giveaways.

This is why I refer to these modules (or whatever) as “partially conscious.” They have their own agendas, and communicate those to us. They cooperate in the overall administration – sometimes, they serve in the opposition!

Among other things, this is why we may have trouble sometimes “making up our minds.” We have a strong potential in one direction (“Ooh, doughnut!”) and another strong potential in another (“I gained five pounds last month; I must cut back on sweets.”)

Certainly our various psychological inputs have influence on our overall decisions. We might, for instance, choose a car or a carpet differently when very hungry than when full. (This is a kind of “Free Will” argument I make a lot: we’re more guided by our stomachs than we might like to admit. Supermarkets put candy bars by the checkout lines for a devilish good reason!)

We’ve all had the experience of having to “argue ourselves out of a decision.” We’ve all made hasty decisions, sometimes even knowing we’re wrong at the time.

In some cases, we really are aware of the conflicting voices in our heads. They aren’t fully realized persons, but they are contributors to our selves.

HMHW,

Of course we do not know the mechanism of consciousness but we know its component parts and its manifestation. You experience consciousness as a result of neuronal activity. That activity can be seen as electrical signals on the surface of the skull. The manifestation is inside the computer. You are it.

You also have experienced the process of adaptively programming that computer. You only know things you have been taught or inferences (generalizations) of things you have been taught.

***“Nevertheless, if there is a program such that executing that program suffices for conscious experience, I can find an implementation function such that the rock computes this program,…”***HMHW

I fear that I am not smart enough to understand your statement. Rocks are not dynamic electrochemical computing engines. Your statement seems to be a conclusion that is missing its premises.

Crane

Well, I’m trying to get you to discuss ideas, it’s just kind of hard to do if all I get out of you in response to some argument is ‘I disagree with that conclusion’. Because with such a response, you just put a stop to all discussion, and it’s a one-size-fits-all knockdown for any sort of idea you don’t happen to like. And frankly, it kind of makes me feel that my responses here are a wasted effort.

Because we can directly measure action potentials of certain neurons, and see that they fire in response to a stimulus; nevertheless, with some clever experimentation, we can also find out that the subject was never consciously aware of that stimulus. So there is a neuron, or cluster of neurons, excitedly signalling ‘there is a line at 45° inclination in my visual field!’, without the subject having any conscious experience of the existence of that line.

Indeed, the gap between what reaches our brain and what reaches our minds is huge: an often-quoted estimate from Tor Nørretranders’ The User Illusion is that of about 11 million bits of information reaching our brains via sensory perception each second, of which about 40 (!) actually enter into conscious experience.

And yes, there is data that is thrown away, that is processed by the brain, but never makes it into experience. Consider the blindsight patients above, or more relevant to everyday experience, the phenomenon of change blindness. Here is a somewhat famous video demonstrating the effect; in case you’ve never seen it, I’d encourage you to do so now, and I’ll spoiler the following discussion.

Of course, by now you know—if you didn’t already—that the instructions given at the beginning of the video were basically a red herring, prompting you to focus your attention on one specific part of the video; and for most people, this means that the man in the gorilla suit, although very clearly visible and very salient, is missed entirely. Now this doesn’t mean that there is no neural processing going on regarding the image of the gorilla-suited man: it impinges on your retina like all other pictures, it is relayed via the thalamus to the primary visual cortex V1, where the neurons responsible for the area of the visual field subtended by the gorilla-costumed man will fire accordingly, and their activity will then be relayed via the dorsal and ventral pathways to the higher cortical areas. But nevertheless, you won’t (or most people won’t) be consciously aware of the presence of the gorilla—which is kind of a big thing to miss.

The question of what enters into conscious visual perception is better explained by a two-tier model: one tier being the bottom-up construction of the visual field from the light that impinges on the retina, but the other tier is a top-down process that compares what is being seen with expectations, such that manipulating these expectations leads to manipulating visual perceptions. And indeed, any other way would be way too costly and slow: if we really had to wait until the visual system had reconstructed the presence of a tiger from the visual data, we’d long be eaten; but since a part of our visual system essentially comes pre-loaded with hypotheses that are falsified by visual data, we can run if the tiger hypothesis isn’t immediately falsified—better safe than sorry. This is also why we are so susceptible to pareidolia and similar false perceptions.

But the crux is that the vast majority of visual information we process never enters consciousness; that this processing occurs wholly unconsciously, and thus, it is not the mere fact that data is processed that makes it enter our conscious minds. Hence my skepticism regarding conscious microprocessors or ants: yes, they process data, but that alone seems not to be sufficient for consciousness.

But they can do all that without being conscious. Nowhere is there anything it is like to be those modules, at least not necessarily. Consciousness is not defined by there being an agenda, or communication, or anything like that—it’s defined by there being a subjective sense of there being something it is like to be, say, one of those modules. And the problem regarding the connection between function and this ‘what it’s likeness’, the phenomenality, is that it doesn’t seem to be a necessary one—any function seems to be, in principle, performable without any attendant conscious experience.

I can build a robot (well, I can imagine how such a robot would be built…) that can catch a ball, but that doesn’t mean the robot is ever consciously aware of that ball—all the reason why it catches the ball is in its programming: it possesses a CCD chip that delivers certain patterns of voltages to its program, which compares these voltage patterns with those it would expect if a ball were present, and if there is a match, it sends a certain pattern of voltages to the actuators in its arm in order to catch the ball. If you want, you can realize this program completely in terms of a huge lookup table, which on the one side lists the voltage patterns from the CCD, and on the other side lists the voltage patterns to send to the actuators. The actions of the robot can be completely and fully described in this manner, but nowhere is there any need to refer to perception, or consciousness, or subjectivity. There is nothing it is like to be that robot; it simply automatically matches input to output, by searching through a giant table.
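To make that concrete, here is a toy sketch in Python of what such a lookup-table robot amounts to (the voltage patterns and actuator commands are invented purely for illustration):

[code]
# Toy lookup-table "robot": each key is a (simplified) pattern of CCD
# voltages, each value the actuator command to send back in response.
LOOKUP_TABLE = {
    (0.0, 0.0, 0.0): "idle",             # empty visual field
    (0.2, 0.9, 0.1): "raise_arm_left",   # ball approaching from the left
    (0.1, 0.9, 0.2): "raise_arm_right",  # ball approaching from the right
    (0.5, 0.5, 0.5): "close_gripper",    # ball within catching distance
}

def robot_step(ccd_voltages):
    """Match the input against the table and return an actuator command.
    No perception, no awareness anywhere: just retrieval."""
    return LOOKUP_TABLE.get(ccd_voltages, "idle")

print(robot_step((0.5, 0.5, 0.5)))  # -> close_gripper
[/code]

Everything the robot does is exhausted by table retrieval; nothing in the sketch answers to ‘perceiving’ the ball.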

So just because something fulfills a particular function, even a function that is in us accompanied with conscious experience, does not imply that that something likewise has conscious experience. (In case you’re interested, this is a particular version of Saul Kripke’s ‘modal argument’: if functional facts sufficed for conscious experience, then there should be a necessary connection between function and consciousness; but we can imagine things fulfilling the same functions (say, giant lookup tables), which don’t have conscious experience. Hence, there is no such necessary connection. Other versions include the famous zombie argument, and the argument from inverted spectrum.)

Something does not need to be a ‘dynamic electrochemical engine’ to be a computer. The argument just turns on the fact that to use something as a computer, you need to have a mapping between its physical states and the functional states of the computation. So if you take something like a finite automaton, then any computation is fundamentally just a sequence of states it traverses—at t[sub]1[/sub], it’s in state s[sub]a[/sub], at t[sub]2[/sub], in state s[sub]b[/sub], and so on. The sequence of states it traverses is the computation it performs. Any computer that we’ve ever built, since it has only finitely many states available (owing to its finite memory), can be described in these terms.

Now take the rock. You can set things up such that the microstates it traverses are mapped to the computational states of the automaton: no rock is ever perfectly inert; at the atomic or molecular level, there will always be some state changes. So now you take some mapping f: R -> S, where R is the set of states of the rock, and S is the set of states of our finite automaton. Such a mapping is always necessary to implement a computation physically; without specifying it, you simply don’t know which computation is being performed. (In a personal computer, for instance, the screen may implement such a mapping, from voltage patterns it receives to activation patterns of pixels, which are immediately appreciable to us—although really, those are translated by another mapping, since fundamentally, we have only symbols on a screen which are as abstract as those voltage patterns.)

Now the rock traverses some set of states, r[sub]1[/sub] to r[sub]2[/sub] to r[sub]3[/sub], and so on. All you have to do, then, is to choose your mapping f such that the states of the rock are mapped to the states of the computation: r[sub]1[/sub] to s[sub]a[/sub], r[sub]2[/sub] to s[sub]b[/sub], and so on; and presto, the rock implements your automaton. But of course, the mapping hasn’t changed anything about the rock: it just does what it always did. But nevertheless, it can be used to implement an arbitrary computation, including one that produces consciousness, if such a thing exists. It’s just a choice of mapping, just a different screen you point at the rock—so basically, the mapping is just a difference in the way you look at the rock!

So now it seems that if you insist computation suffices for consciousness, you’ve got to conclude either that whether something is conscious depends on how you look at it—which is not a palatable position to many—or that everything is conscious, in every possible way—which seems likewise absurd.

(Just parenthetically, you don’t even need to look at the rock’s detailed microstates—you can simply use a mapping that is time-indexed, such that the rock is in state r at all times, but at t[sub]1[/sub], f maps r to s[sub]a[/sub], and at t[sub]2[/sub], f maps r to s[sub]b[/sub], and so on. Thus, even perfectly inert things fall under the scope of the argument.)
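For concreteness, here is the whole argument as a toy Python sketch (all state labels are invented; only the structure of the mapping matters):

[code]
# The computation we care about: an automaton whose run spells out
# the first digits of pi.
automaton_run = ["s3", "s1", "s4", "s1", "s5"]

# The rock's microstates as it sits there (thermal jostling, say):
rock_run = ["r1", "r2", "r3", "r4", "r5"]

# Choose the implementation function f: R -> S after the fact, simply
# by pairing the two sequences off against each other.
f = dict(zip(rock_run, automaton_run))

# Under f, the rock traverses exactly the computational states:
print([f[r] for r in rock_run])  # ['s3', 's1', 's4', 's1', 's5']

# Time-indexed variant: even a perfectly inert rock, always in the
# same state, "implements" the very same run.
f_t = dict(enumerate(automaton_run))
print([f_t[t] for t in range(len(automaton_run))])
[/code]

Note that nothing about the rock is touched at any point: all the work is done in the choice of f.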

Congratulations, you have just described a computer constructed of complementary metal-oxide-semiconductor (CMOS) components. Calling it a rock may be a bit colloquial, but is true in essence. It is the computational device we all are using. It is done by manipulating states within inorganic components - hence a ‘rock’.

It is also true that one of the end results of computation is the arrangement of voltages on the flat screen display that enables us to interpret the output of the computer’s processing.

Consciousness is the same thing in that it is the distribution of molecules across some surface within the brain. But, it is not interpreted. It is a process that is intrinsic to the system.

So, to obtain consciousness we need a definition of the phenomenon. Something we did early in this thread but seem to have abandoned.

Crane

Well, not really. I have shown how anything at all—a rock, a tree, a butterfly, a tornado—can be regarded as a computer, indeed, as a device implementing any sort of computation you imagine. Such that, if you construct a machine that you claim is conscious solely by virtue of it implementing some computation, you can equally well point to a rock (or a tree or a butterfly or…) and claim that it is conscious in exactly the same way, with exactly the same justification.

Perhaps it’s easier to think about this in terms of a code: if I give you a text in some cipher, what does it mean? Well, in and of itself, it doesn’t mean anything; you need the decoding procedure (analogous to the implementation function) to decipher it. But that decoding procedure is entirely arbitrary: it’s ultimately just a table taking coded bits of text to plain text, where ‘plain text’ really just is some kind of code we understand without effort. The string ‘mxyzpltk’ might stand for ‘hello’, for ‘one if by land, two if by sea’ or for the complete text of the collected works of Shakespeare—it’s all dependent on the encoding table. In the same sense, the physical evolution of some arbitrary system implements an arbitrary computation, just based on a different implementation function—in fact, it’s exactly the same phenomenon, the basic insight that syntax does not entail semantics.
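In code, the arbitrariness of the codebook looks like this (both codebooks are, of course, invented for illustration):

[code]
ciphertext = "mxyzpltk"

# Two equally legitimate decoding tables:
codebook_a = {"mxyzpltk": "hello"}
codebook_b = {"mxyzpltk": "one if by land, two if by sea"}

print(codebook_a[ciphertext])  # hello
print(codebook_b[ciphertext])  # one if by land, two if by sea
[/code]

The ciphertext itself stays fixed; its ‘meaning’ lives entirely in the table.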

Well, but that’s not really how things appear to us, is it? I mean, we don’t experience our thoughts in terms of electrochemical action potentials and tiny salty chemical squirts, which would be what they are without interpretation—rather, we experience nothing but the interpretation, a rich and vibrant phenomenological world of colors, shapes, sounds, intentions, emotions, a subjective sense of self, and so on. For some reason, there is something it is like to produce these tiny salty squirts, and that’s the central question—how does this come about?

Just giving an account in terms of what neuron fires when and why, which is all, ultimately, the brain ever does, simply misses the point of the question, I think. It might be sufficient to describe all of our behaviours, but still misses the important question of why any of this should be associated with any kind of subjective experience at all—to merely produce behaviour, it’s not necessary at all.

Dude, it’s all I got! I don’t KNOW the answers. If I did, I’d be rolling in the dough!

Do you want me to lie? “Yes, it has to do with the lesser calciate sulcus in the latitudinal angina midbrain.” Not gonna happen.

I disagree with what you say. That’s it. If that stops all discussion, then, with my very kind permission, stop posting.

But none of this denies the possibility that consciousness is a composite process, made up of sub-processes.

I spoke of partially conscious microprocessors and ants. I was positing a gradation of consciousness, self-awareness at different orders. I never spoke of a conscious microprocessor, but of a 0.05% conscious microprocessor.

In part, I was thinking of the weird semi-aware activities of parts of the human brain, in such cases as split-brain surgery patients. Their brains are doing a kind of thinking that is clearly aware of the environment, and even self-aware, but of which the overall patient’s sense of self is not aware.

Agreed. I don’t think my ideas are “necessary.” And, okay, maybe the components and sub-components of consciousness are not conscious. I’ve suggested they might be “partially conscious.” Conscious at a lower order of awareness. I’m willing to admit that they might be wholly non-conscious.

I just happen to be arguing that the overall effect of consciousness might be a composite of a number of lesser processes, working in combination.

If consciousness is defined solely by a subjective sense, then it ceases to be scientific at all. This veers into solipsism and despair.

How do we demonstrate that the kid over there playing catch with his papa isn’t such a robot? How do we assess another mind’s subjective sense of self-awareness?

Like “qualia,” this kind of subjective definition loses value by being immune from objective examination.

HMHW,

Consciousness does not require syntax or purpose.

While mulling over this problem I happened to glance down at my cat. I realized that I took in the whole scene - cat, dish, meow, etc. - without describing it in words. The language lookup table is separate from conscious perception.

Consciousness is simply your mental response to the neuronal output contacting a control surface. It is completely internal and will remain so until neural science makes significant advances.

These are existential events; there is no mystical ‘why’ associated with them. Our brains perceive sensory inputs and we react based on the sum total of our experience. A process we refer to as consciousness.

To expect more, is to search for a mystical soul or the pilot in the control room.

Also, perhaps you can come up with a syllogism or syllogisms to explain your ‘rock’ argument. Creating a device with a program-variable set of states in a stone medium is exactly what is done in semiconductor processing. Your computer program is executed by organizing molecules (holes/electrons/waves) on the oxide layer of a few grains of sand. They are not unprocessed rocks. Butterflies are complex UAVs that have gone through numerous processes. They too are not random rocks.

Crane

Trinopus,

Excellent point. Is there a set of criteria that would allow you to detect consciousness?

Adult Human
Teen Human
Psychotic Adult Human
Dog
Cat
Bee
Ant

Crane

(Forgive me for condensing your list.)

If consciousness is defined only subjectively, then, no, I sure can’t think of any. We all tend to assume that other people are conscious, based on an immense volume of behavioral similarities. But suppose there were a genetic disorder, something like autism, and fifteen per cent of the world’s population were not conscious? They just go through the motions of speech, reasoning, work and play, without any actual conceptual awareness of it? It doesn’t mean anything to them, they’re just “phoning it in,” going through the motions, because that’s what they’ve been told to do.

There might be some variety of psychosis that has this effect. We know (or we think we know) that there are some sociopaths and psychopaths who have no sense of right and wrong, or no sense of human sympathy. Is it impossible that there are people out there who have no sense of conscious self?

I think the default assumption is the most useful: we’ll just take it as a postulate that you, and I, and everybody else here, are all conscious, and that we experience a sense of self in roughly the same way. We don’t have to worry about small differences in that perception.

(e.g., what if the color red really looked like ‘green’ to me, but since I call it ‘red’ no one can know that I’m seeing it differently. The “qualia” concept, which I hold to be pretty much nonsensical. Maybe my self-consciousness is a kind of “vibrato” in tone, while yours is more “sostenuto.” There isn’t any possible way for us to know.)

FWIW, I will vote for the “mirror test” as one objective measure of a certain degree of consciousness. When an animal, or a child growing up, realizes that the image in the mirror is “really me,” then one threshold of consciousness has been attained.

(I hasten to say, I don’t think this is the single defining quality of consciousness; it’s just one really important sign of abstract self-awareness.)

No, but I’d want you to engage what I’m posting, maybe discuss it, develop some counterideas, find flaws, etc.; otherwise, it just leads nowhere. I mean, think about it, what if everybody in GD took this attitude—someone brings forward some argument that they think is well-reasoned, and the only response is ‘I disagree’? If you disagree with some conclusion, then there must be either a bit of reasoning or a premise you think is wrong—otherwise, you have no logical grounds for disagreement. Try to work out where you think the argument goes wrong, or if you can’t find fault with it, find some way to accommodate its conclusion—but just saying ‘I disagree’ gets neither of us anywhere, I’m afraid.

That wasn’t what I was arguing for, but merely that evidently not all processing is accompanied with conscious experience, which casts doubt on the ideas such as microprocessors or ants being conscious.

Well, a 0.05% conscious microprocessor is a conscious microprocessor—it certainly isn’t a nonconscious microprocessor.

Anyway, what exactly do you think being ‘0.05% conscious’ means? Is there something it’s like to be that microprocessor, or not? What does it feel like to be partially conscious? Again, whenever I examine my own conscious states, I can’t find anything that I would describe as being ‘partially conscious’ rather than being fully conscious of a diminished set of stimuli—there always either is or isn’t any subjective experience.

But in split-brain cases, what seems to happen is that there is only one consciousness, with some actions occurring either without or explicitly against the conscious intention of the patient—at least, I am not aware of any evidence to the contrary.

I think I meant necessity in a different sense, the one used in a modal logic context, in which something is necessary if it can’t fail to be the case, like saying ‘water is H[sub]2[/sub]O’. Whenever I have a glass of water, I also have a glass of H[sub]2[/sub]O, and the other way around. My point was that the connection between certain brain processes and consciousness does not seem to be of this kind: all of these processes seem to be able to occur without any conscious experience. But then, just pointing to these processes as an explanation for conscious experience misses the mark, as there must be something else that decides when there is or isn’t conscious experience.

But what I’m interested in is the question of how consciousness itself arises—it might be possible to build up larger consciousnesses from smaller ones (though see the combination problem), but this doesn’t answer the question of how consciousness emerges from electrochemical potentials and salty squirts.

This is exactly the problem: our known scientific techniques appear to be inadequate for the study of consciousness; nevertheless, there is a real phenomenon here demanding explanation. And yes, it is a subjective phenomenon, or more accurately, the phenomenon is the existence of subjectivity in a material world—how does that work?

But I don’t agree that because it doesn’t easily square with our current version of science, we therefore have to exclude it from inquiry (that in itself would mean a kind of bankruptcy of the scientific method)—many things were once thought to be outside the purview of science, with the result typically being the extension of its purview. It’s a difficult problem, to be sure, but I think that with diligent work, it will eventually yield to rational inquiry.

Well, we don’t, obviously! As I already said, we can only conclude that that’s the case because we know it’s the case for us, and we take other people to be beings not substantially different from ourselves, and thus, it would take some extra, hypothetical mitigating factors to deny them conscious experience, which would violate parsimony.

But the trouble is, that doesn’t make the phenomena any less real, if we don’t want to yield to the temptation of ‘what must not be cannot be’.

Okay, sorry, that was ruder than I meant it to be.

All I’m trying to say here is: if I don’t live up to your standards, I’m sorry. I’m doing the best I can. I like to think I’m somewhere slightly above average as a poster on these kinds of subjects, but I’m not a scholar, not a scientist, not a professional, and not necessarily very good.

I’m doing the best I can. What more would anyone ask?

Words and language have nothing to do with it. The problem is, fundamentally, that uninterpreted, what physically occurs are just salty squirts, neuron firings, that sort of thing. They’re analogous to the voltage patterns a computer produces, which you can then ‘translate’, via a monitor, for example, into pictures.

So without interpretation, there’s nothing but neurons firing, and so on. But what we experience is not the firing of neurons, is not tiny salty squirts of chemicals; rather, it’s evidently some interpretation of that, in terms of subjective experience, colours, pain, and so on. These are not things you find among salty squirts.

So who decides on the interpretation? Who plugs in the appropriate monitor? And more importantly, who views the pictures on it?

I don’t create a device in a stone medium. I take the stone, as it is, unaltered, and view it through a special implementation function; I attach a monitor, or view it through an appropriate set of glasses, if you will. I don’t touch the stone at all—it follows its natural evolution. Exactly as, in decoding a cipher, I don’t do anything to the cipher, but merely translate it using the appropriate codebook. I just take the right codebook, and look up states of the rock, or what have you; and in this way, any computation whatsoever can be implemented.

So, just to be clear—this means that you believe that a being behaviourally identical to us, but unconscious, is possible. Am I reading you right? Otherwise, you could point to any behavioural differences as a surefire test for consciousness.

Trinopus,

Ooops, I left out:

Automated automobile using video input.
Microprocessor
IBM Watson

Any computer with pattern matching will pass the mirror test.

Crane

HMHW,

Interesting rock hypothesis. Is there a reason that rocks encode the arbitrary definitions of the philosophy of mathematics?

By making sufficient connections the signals could be mapped to a flat screen, but it would not display anything immediately recognizable.

What we call consciousness is the product of neuronal output. It is how our brain interprets the signals. It is an internal process that has no external equivalent.

So, what more do you need?

Crane

Exactly. But I hasten to add that this is a very far-out, extreme, philosophical, abstract kind of “possible.” It isn’t quite as far out as some other forms of Cartesian Doubt – it’s possible we all live in a Holodeck sim – but it’s pretty damn far out. I’m a Strong AI proponent, so I believe it is “possible” to make a thinking computer. But “possible” in this case means “not even close to possible” given the current state of the technology.

This is why I came back quickly with at least one concrete behavioral test for some kind of consciousness: the mirror test.

I have no way to know if anyone else on earth actually has self awareness, but at least I can test people to see if they comprehend that a mirror image is not another person, but themselves.

Hm? At this point in time, there is no computer nor machine nor robot that can pass the mirror test. No artificial system exists that can look at its image in a mirror and conclude, “That’s really me.”

Obviously, I want to exclude simple operational IFF (Identification Friend or Foe) recognition. It would be trivial to put a bar code on a camera, so that when the system saw itself in a mirror it would recognize the bar code and be programmed to say, “That is the same as this unit.”
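A minimal sketch in Python (with an invented tag value) of why such tag-matching is trivial:

[code]
# Bar-code "self-recognition" reduces to string comparison.
MY_TAG = "UNIT-0042"

def inspect_mirror(visible_tags):
    """Report whether this unit's own tag appears in the camera image."""
    if MY_TAG in visible_tags:
        return "That is the same as this unit."
    return "Unknown object."

print(inspect_mirror(["UNIT-0042"]))  # "passes", but proves nothing
[/code]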

Some birds realize that mirror images aren’t other birds. Dogs and chimps soon figure out mirrors. Very young children figure it out at a certain age. A very, very little baby doesn’t see itself in a mirror, but a typical three year old certainly does.

I have absolutely no idea what would happen if IBM’s Watson were asked to participate in this thread! I would definitely enjoy the experiment!

Here’s a reply by David Chalmers to Searle’s rock/computer mapping conundrum:
Does a Rock Implement Every Finite-State Automaton?
Basically it proves nothing useful with respect to computation and consciousness, according to Chalmers. And I agree.

Another way of looking at this is that all of Shakespeare’s works are encoded in the digits of Pi, but in order to find them, you have to already have a copy of Shakespeare’s works to hand, to compare the digits to. Similarly, we can’t send usable information faster than light using quantum entanglement, since we need classical data to compare it to.
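A quick sketch makes the point (Python; fifty digits of Pi, and a target pattern chosen by us):

[code]
# You can only "find" a text in Pi's digits if you already have the
# text in hand to search for.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"
target = "323846"  # a pattern we picked ourselves

print(PI_DIGITS.find(target))  # 14 -- recoverable only because we
                               # supplied the target to begin with
[/code]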

Random states are not the same as usable data.

For what it’s worth, in response to that video earlier, I saw the gorilla and the strange S’s on the wall by the elevators, but I also counted 17 instead of 15 passes. Go figure.