Do we even know what consciousness is at all?

Thereby massively increasing the connections and interactions that are occurring, and creating a much more complex network. Something that does not exist in any specific interaction may exist in that complex network. In fact, many things that do not exist in any specific interaction do exist in such networks: such as functioning human minds.

It seems to be your alternative argument, however. With the addition that your “something unknown” is entirely unrelated to anything physical that we do know.

So if it’s not an argument you think I can make, why is it an argument that you think you can make?

And, while we’re at it: if the physical brain has nothing to do with consciousness, how do you know that rocks aren’t conscious?

We don’t know that. We do know that we don’t understand everything that’s going on with all those physical interactions.

Which also would make it impossible to imagine your p-zombies.

But, in saying it would be a fool’s errand to try to find such things, you’re saying that they don’t exist. Which by your own statement disproves your entire argument.

Again, physics is local. None of these interactions know anything about whether they are in a specific network, or happening in vacuum: they happen in exactly the same way, and jointly determine the behavior of the whole completely. If they can occur without conscious experience on their own, they can do so as part of a network. All that’s needed for that is the thesis that physics is causally closed: then, the same physics—even if it actually only occurs accompanied by conscious experience—could occur (as in, it would entail no logical contradiction if it did) without it.

If conscious experience arose in the whole without being reducible to the component parts, then again, you’re arguing for strong emergence, and have already left the purely physical world behind.

That’s completely ass-backwards. The intrinsic properties I claim are necessary for the explanation of conscious existence are the only things we ever actually do have direct, first-hand, certain knowledge of; all other knowledge is derived from that. We know exactly what they are, and what they do; everything else is just consequent to that knowledge. Whenever we see the readout of a measurement instrument to infer data supporting a certain physical theory, we are first having a subjective experience of that measurement reading; that’s the primary thing we know.

I’ve nowhere said that the physical brain has nothing to do with consciousness.

You’ve missed the antecedent of that statement:

As I noted, I don’t think the logic is ultimately sound. But it can’t be gotten around by just ignoring it; it poses a significant constraint on the building of sensible theories.

No: imagining things without experience isn’t constrained by the inability to imagine the experience of anything else.

Again, that’s in response to accepting the conclusion of the zombie argument: if there are extraphysical causes of conscious experience, as there are if the zombie argument is sound, then indeed seeking them in the physical world is just not going to accomplish much.

All kinds of things occur as part of a network that don’t occur if only one of the interactions in that network occur. Why are you having this argument about consciousness, and not about sight or hearing or digestion or language?

OK, so what are the intrinsic properties you claim are necessary for the explanation of conscious existence?

– might address the rest of that post later.

Because all of these phenomena transparently occur as a result of local interactions. Take any sort of logic circuit: any computable function can be decomposed into a sequence of logic gates—in fact, a single sort of gate, e.g. NAND, suffices. So whatever the function is, mapping a bit-string input to a bit-string output, it reduces to a sequence of NAND operations on the bits. Each individual NAND acts the same, no matter whether it’s part of a larger network. Knowing how NAND works means we know how the circuit works, which means we know what every possible computation does. In particular, no conscious experience is necessary to determine the output of a single NAND; none is necessary to determine the output of two NANDs; none is necessary to determine the output of any given number of NANDs. Whether it’s there or not just doesn’t play any role at all.

Even if none of those NAND gates knows how to do addition, the fact that there is a sequence of such gates that does isn’t surprising. None of those NANDs needs to do anything different from its normal operation for this sort of emergent behavior to arise. No consciousness is needed, either.
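To make both points concrete, here’s a minimal sketch in Python (the helper names are purely illustrative):

```python
def nand(a: int, b: int) -> int:
    """The single primitive gate: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate reduces to compositions of NAND alone:
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry):
    """One column of binary addition, built purely from NANDs."""
    s = xor(xor(a, b), carry)
    carry_out = or_(and_(a, b), and_(carry, xor(a, b)))
    return s, carry_out

def add(x_bits, y_bits):
    """Ripple-carry addition of equal-length bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 + 1 = 4, even though each NAND just does what a lone NAND always does:
assert add([1, 1, 0], [1, 0, 0]) == [0, 0, 1, 0]
```

Each call to `nand` behaves identically whether it stands alone or sits inside the adder network; the addition is fully fixed by those local operations.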

But you claim that knowing what happens at a given interaction point does not suffice to know what happens in the whole network. Certain sorts of behavior, of mapping some input (say, a particular image from the eyes) to a particular output (say, a particular movement), all of a sudden require consciousness. In the example, that would mean there’s a network of NAND gates that does something to its input that isn’t reducible to the application of individual NAND gates to input bits. Because if it were, then again there would be no need for conscious experience to come along, since the functioning of those NANDs is fully determined without it.

That doesn’t mean that the processing carried out by NAND operations doesn’t sometimes produce conscious experience. It might. Nobody knows. All that’s needed is that it does not necessitate conscious experience; i.e. that it is consistently possible that all of that processing goes on without it. Which it is exactly if all those NANDs just behave like NANDs, and nothing else. Then, consciousness is not relevant to this processing.

So yes, complex assemblages are able to perform functions none of their parts can. But as long as those functions are reducible to the action of the parts, and that action is perfectly possible without any conscious experience, then nothing about this function entails the presence of consciousness, and the zombie argument goes through.

That would be a bit of a long answer for this post, but I’ve already linked to both the popular-level description and the peer-reviewed publication before in this thread (Discourse gets cross with you if you post the same links too often).

Not unless we understand the whole network, it doesn’t.

And we do not understand the whole network.

We understand some, though not all, of it if what we’re doing is figuring out how many photons affecting how many sensors affecting how many neurons cause how many other connections to fire among those neurons, allowing a human to see a red light. We understand something of how all this causes a different set of connections to push a foot down on a brake. We’re nowhere near understanding all the connections that go into how people learn and remember that a red light means they’re supposed to step on the brake, or why some people who have learned that info nevertheless see the light and don’t step on the brake; let alone all the connections that go into why many of those people feel that they consciously decided whether to step on the brake or not. (Some of them just didn’t see the light with their brains, even though the photons stimulated their eyeballs just fine.) That doesn’t mean that those connections don’t exist, or that they’re not physically based, or that the thing we call consciousness that results from them isn’t real.

If what you’re saying is that from inside these brains we’re never going to understand that network of connections that well: I’m agnostic about that one. Maybe we can, maybe we can’t. But we sure as hell don’t understand it that well now.

So no, knowing what happens at a given interaction point does not suffice for us to know what happens in the whole network. But I’m not saying that theoretically it can’t; I’m saying that in practice it doesn’t.

What’s need, in that sense, got to do with it? Evolution doesn’t do things on purpose, however purposeful the results look after they happen.

It doesn’t get cross if you just mention the number of the posts. (It can also be ignored; I’ve ignored its being cross with me about replying to you repeatedly in order to post this. But the number of the posts would be fine; that’s what I often do when referring back in a long thread.)

How do you do this with a human brain, please? Human brains are not computers; they’re not black boxes with defined inputs and outputs. They’re complex electro-chemical organs continually connected to the outside world on multiple analogue channels, with continuous input and output. The computer metaphor breaks down very rapidly when you look at the brain as it is, rather than as a metaphor. Neurons are not NAND gates.

They are not inconsistent. You said the thought experiment shows consciousness is not required. You agree consciousness exists.

We both agree there is a gap in understanding of how it can arise.

That seems inconsistent with your claim that the physical interactions are insufficient.

I read through your posted description. It is a little hard to follow, even as a popular description, because it uses technical terms and refers to concepts that are assumed to be understood. That may be fine in a technical journal, but some of us haven’t studied this formally. I’m playing catch-up.

I appreciate your sharing that link, but appreciate more your posting some description in this thread.

You chide us for strong emergence, and say that “‘Something unknown might be doing we don’t know what’ isn’t really an argument.”

Yet you are doing precisely the same thing. You are saying that physical science is incapable of providing understanding of a completely physical world. Talk about self-contradiction.

You speak of “intrinsic properties” that we directly experience as if that explains how we have experiences. It doesn’t; it just gives the phenomenon a new name. How do we experience intrinsic properties? What are they? How does the brain go from atoms to neurological processes? We know hormones affect mood, but not why we have moods.

What are memories? How are they stored and accessed? Where does language come from in the NAND gates?

Yet that has no more meaning than calling them qualia. What are they, where do they come from, are they the same for all people? How does the brain create/access them? Do they exist outside our brains? Are they like Platonic forms? Or are they purely internal manifestations of experience itself?

You were saying that measuring all the interactions doesn’t leave room for consciousness. I point out that we haven’t begun to measure all the interactions. I pointed to an example where we are a lot further along in understanding the principles and relationships and interactions, and yet cannot fully determine the results. If there is room for incompleteness in our understanding of weather, there’s a lot more room in our understanding of the brain.

Well, I agree with that, if only because the intrinsic properties of the world are probably forever beyond our reach. This means the Hard Problem is a kind of ‘god of the gaps’ phenomenon; we can investigate mental activity and neurophysics in minute detail, and decode and replicate the internal workings of the mind, and of qualia, and of consciousness, for ever, but there would always be something more to find, some mysterious and unexplained ‘wibble’ that we will never pin down.

That doesn’t mean we are wasting our time - the ability to wrangle consciousness and qualia without ever fully understanding them is a worthy goal, and helps us to understand each other.

Then you’re saying that the physical description is insufficient for the description of the behavior of the whole network: knowing what happens locally is sufficient for knowing the full physics of the network, so if the local facts don’t suffice, what’s missing must be something beyond the physics.

But the whole argument turns on what’s in principle, i.e. theoretically, possible. If it’s theoretically possible to describe the behavior of the whole brain using only the local physics, and if those don’t necessitate consciousness, then the complete description doesn’t necessitate consciousness. But then, we can consistently do without, and zombies are possible, meaning the physical description alone is insufficient.

As noted, nothing but the causal closure of the physical is needed for this argument.

‘Need’ here only means that consciousness isn’t necessary for all of the physical processes involved to occur, and hence, it is consistent to imagine them occurring without it.

You’re right, but the link was right in the post you quoted from before (this also includes a link to the published version, or see this post). It eventually gets rather wearying to post the same things over and over.

No clue, obviously. But I don’t need to know how, I only need to know that it is possible, and the brain being an ordinary physical object is sufficient for that.

Sure, and if consciousness is not required, then the physical processes can consistently occur without it, hence, they fail to entail conscious experience—they’re not sufficient to produce that experience, something besides is needed. Hence, ‘the logic of the zombie experiment is sound’ and ‘consciousness is a manifestation of the physical interactions’ are indeed inconsistent.

That is the conclusion of the zombie argument, but as noted, I believe the argument fails. But even if one holds the argument to succeed, that doesn’t mean that the brain is irrelevant: for instance, on the popular integrated information theory of consciousness, consciousness arises once information is ‘integrated’ enough in a certain way. The theory is essentially panpsychist, meaning that some consciousness is associated with information everywhere, but it’s only in certain systems that its integration is sufficient for conscious subjects, and conscious experience, to arise—so that a brain might be conscious where a stone (or indeed, the cerebellum, despite containing most of the brain’s neurons) is not.

I’d be happy to discuss the details of the model, but there’s already a thread that might be more suited to that.

That would only be contradictory if what falls under the physical description were all we could know about the world, but rather the opposite is the case: we know about the physical description only by proxy, via mediation of our conscious experience, which is what we most intimately and primarily know. So it’s not that the intrinsic properties and what they do are in any way unknown, they are what we know first and foremost.

This is exactly what I claim happens by means of the von Neumann process: by essentially having access to its own properties, it can represent the world (via the modal fixed-point theorem) in terms of its intrinsic properties. If you want an analogy, then consider the case of painting by numbers: without a code of how to color in the different areas, what the picture shows is underspecified; but filling it in then gives a concrete picture. In the analogy, the intrinsic properties provide the colors to fill in and make the picture, the representation, definite. Mere structure, relations without relata, is indeterminate, and the intrinsic properties provide the relata to single out a concrete realization of that structure.
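For reference, here is the fixed-point theorem in its standard form for the provability logic GL (the de Jongh–Sambin theorem); I take it this is the result meant:

```latex
% De Jongh–Sambin fixed-point theorem (provability logic GL):
% if p occurs in A(p) only within the scope of \Box, then there is
% a formula H, not containing p, such that
\[
  \mathsf{GL} \vdash H \leftrightarrow A(H)
\]
% i.e., H is provably equivalent to what A says about H: the kind of
% self-reference that lets a system represent its own properties.
```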

All interesting questions, but rather beside the point—these are part of the ‘easy problems’ of consciousness.

In the end, they’re just the concrete properties that bear the relations that figure in our abstract descriptions. As an analogy, consider the difference between a system of axioms and the models of those axioms: the group axioms, say, provide a certain structure, which can be realized by different concrete groups. A given group might be realized, for instance, by means of matrices, with the matrix product yielding the group operation. These matrices give a concrete realization of the group structure: they’re in that sense intrinsic objects, and their properties the intrinsic properties.

Note that different realizations of that group are possible: then, one could use the representation based on matrices to form a model of another realization, but that doesn’t mean that the matrices themselves become identical to the objects they model (in fact, that’s how elementary particles are modeled in quantum field theory). They just realize the same structure.
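As a concrete instance of the point about multiple realizations (my illustration): the two-element group structure, realized once by numbers and once by matrices:

```latex
% Two realizations of the same abstract structure: the group with
% identity e and one element g satisfying g*g = e.
\[
  \left(\{1,\,-1\},\ \times\right)
  \qquad\text{and}\qquad
  \left(\left\{
    \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\;
    \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
  \right\},\ \cdot\right)
\]
% Same relations, different relata: the concrete objects bearing the
% structure differ, while the structure itself is identical.
```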

Where this becomes relevant is that any system of axioms admits of different concrete realizations, and (for sufficiently powerful systems) those axioms can’t fix all the properties of those realizations. Meaning that there are properties P such that both P and not-P are consistent with the axioms—meaning, in turn, that we can construct both models having property P and models lacking it. That’s the connection to the zombie problem: we can consistently build models with and without P(henomenal experience), but that doesn’t mean that the system we’re modeling could go either way: it’s just an inherent limitation of using models to understand the external world.

So you might think of the axioms intended to provide a theory of the natural numbers as the structure, or theory; the natural numbers themselves as the physical universe; and our attempt to grasp the world as the building of models that possess that structure. Then the properties the natural numbers, the ‘real’ ones, have are the intrinsic properties. That is, suppose the natural numbers have P: since the axioms don’t fix that they do, we can build consistent models of these axioms that both have and lack P. No theory will ever decide the matter, but of course, to the natural numbers themselves, could they introspect, it would be perfectly obvious. But that’s the situation we’re in: we can introspect and see that we, intrinsically, have the property P (i.e. conscious experience), but our theories can never fix the matter.
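A standard concrete example of such a property P (my illustration, assuming PA is consistent and sound): the consistency statement Con(PA). By Gödel’s second incompleteness theorem,

```latex
\[
  \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
  \qquad\text{and}\qquad
  \mathrm{PA} \nvdash \neg\,\mathrm{Con}(\mathrm{PA})
\]
% By completeness, there are then models of PA in which Con(PA) holds
% and models in which it fails, while the standard natural numbers
% determinately satisfy it.
```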

No; the zombie argument claims that the physical processes don’t necessitate any consciousness. There may be conscious experience when those physical processes occur, but if the physical processes can be imagined without it, then they don’t necessitate consciousness, so there needs to be something extra that does, given that we actually are conscious.

And plenty about the brain is not understood in detail, but that’s entirely irrelevant: if we have grounds to believe that a full physical description of the brain exists (and if we hadn’t, we’d already have abandoned physicalism), then we can run the zombie argument.

But “those functions being reducible to the action of the parts” is only true for closed systems, as far as I understand it, and the brain is not a closed system.

Hmm, not sure why you would think that changes things. I can always draw a boundary around a system, keep track of the data going in or out, and compute the local dynamics. That’s obvious in classical mechanics (I take it), but even in the quantum world, I can formulate an appropriate Lindblad equation for keeping track of the interaction with the environment, or purify the whole system to a larger one that then evolves exactly as a closed system using Stinespring dilation.
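For reference, the Lindblad (GKSL) equation mentioned here has the standard form:

```latex
\[
  \dot{\rho}
  = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\left\{ L_k^\dagger L_k,\ \rho \right\} \right)
\]
% H is the system Hamiltonian; the jump operators L_k encode the
% environment's influence on the local dynamics.
```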

So, draw a boundary around the whole universe, or at worst the whole planet, and that’s where consciousness lies? I mean, that’s not what I’d think of as explaining anything.

No, around the brain. You only need the local quantum state plus a certain quantum operation (completely positive trace-preserving map) to model the evolution of the brain; the influence from the environment is then just the fact that this evolution is typically not unitary, but that doesn’t yield any novel problems.
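In standard form, such a completely positive trace-preserving map admits a Kraus representation:

```latex
\[
  \mathcal{E}(\rho) = \sum_k K_k\, \rho\, K_k^\dagger,
  \qquad
  \sum_k K_k^\dagger K_k = \mathbb{1}
\]
% The environment's entire influence is folded into the choice of the
% Kraus operators K_k; no environmental degrees of freedom need to be
% tracked explicitly.
```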

I’m not following - if the brain is constantly in communication with not-brain, in very analogue ways, how does this work?

As I said, whatever the environment does to the brain can only be a general state transformation, so you only need those dynamics to keep track of its evolution.

If it helps, imagine a brain in a vat: all input is created, and all output analyzed, by a container enclosing the brain arbitrarily closely. So the total dynamics will just be determined by what happens in the box. A general quantum operation then effectively takes only the state of the brain, and puts whatever the box does into the dynamics, recapitulating the exact same evolution of the brain state.

Of course, we don’t know the dynamics, but the point is just that such a dynamics always exists: so we only need the local physical facts to account for all that happens.

Seems like “only” is doing a lot of work there - if you’re intent on modelling the brain as a computer, but every microsecond it’s getting state changes imposed from outside inputs, then its internal workings aren’t going to make sense as pure computational workings.

It’s a computer where the actual chips are being changed out second-by-second for ever-so-slightly different ones, some of which aren’t even silicon, and at the same time the code is being rewritten, it seems to me.

My point is we can, “possibly”, model such a thing by its pure physical states. But such snapshots in no way are going to resemble state snapshots of a computer. There will be too much analogue change throughout the system, moment-to-moment.

I don’t have an issue with a purely physical accounting of the brain. It is all just physics, after all, I’m in no way a dualist.

I do have an issue with then saying “therefore it’s a computer”. That’s the unjustified leap, to me.

That’s all I was arguing for. Personally, I’m right there with you when it comes to modelling the brain as a computer (I have even published an argument as to why computation alone isn’t sufficient), but many people subscribe to a strong, physical version of the Church-Turing thesis (often implicitly) according to which everything that can physically occur is effectively computable (as in, computable by an effective procedure, not necessarily efficiently so).

No problem, then, it’s just that you were using physical computing terms like NAND gates to explain things, so you can see my confusion.

Yeah, I personally think that makes for a nice xkcd but is otherwise wrong.

That was just illustrating a point regarding the relation between local dynamics and global properties, not something I intended to be metaphysically realistic.

Only if “knowing what happens locally” includes “knowing how everything else in the universe, or at least in the network, affects what happens locally.”

OK. What I’m saying is that no, it’s not theoretically possible to describe the behavior of the whole brain using only the “local physics”; at least, unless “local” is taken to mean the whole brain, the rest of the body, and the brain’s history. In which case, we don’t know how to describe it.

That’s sort of the actual case, as all the input to what we call the mind is coming from the body and all the output is being analyzed by the body (including the brain): although of course all sorts of other things are affecting what happens in the “box”. But I don’t see how that improves our understanding of the “total dynamics” to the point at which we can say ‘no, there’s definitely nothing going on in those “total dynamics” that necessitates consciousness.’