Do we even know what consciousness is at all?

Then you’re saying that the physical description is insufficient to describe the behavior of the whole network: since knowing what happens locally suffices to fix the full physics of the network, any behavior the local physics can’t account for is one the complete physical description can’t account for either.

But the whole argument turns on what’s possible in principle, i.e. theoretically. If it’s theoretically possible to describe the behavior of the whole brain using only the local physics, and if that physics doesn’t necessitate consciousness, then the complete description doesn’t necessitate consciousness either. But then we can consistently do without it, and zombies are possible, meaning the physical description alone is insufficient.
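
For concreteness, the modal skeleton of this step can be sketched as follows (the letters P and Q are mine, following Chalmers’s standard presentation):

```latex
% P: the conjunction of all physical truths; Q: a phenomenal truth
% ("someone is conscious").
% (1) A zombie world is possible:
\Diamond (P \land \lnot Q)
% (2) By the duality of the modal operators:
\Diamond (P \land \lnot Q) \;\leftrightarrow\; \lnot \Box (P \to Q)
% (3) Hence the physical truths do not necessitate consciousness:
\therefore\; \lnot \Box (P \to Q)
% Physicalism requires \Box (P \to Q), so P alone is insufficient for Q.
```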

As noted, nothing but the causal closure of the physical is needed for this argument:

‘Need’ here only means that consciousness isn’t necessary for all of the physical processes involved to occur, and hence, it is consistent to imagine them occurring without it.

You’re right, but the link was right there in the post you quoted from before (which also includes a link to the published version; or see this post). It eventually gets rather wearying to post the same things over and over.

No clue, obviously. But I don’t need to know how; I only need to know that it is possible, and the brain being an ordinary physical object is sufficient for that.

Sure, and if consciousness is not required, then the physical processes can consistently occur without it; hence, they fail to entail conscious experience. They’re not sufficient to produce that experience, and something besides them is needed. Hence, ‘the logic of the zombie experiment is sound’ and ‘consciousness is a manifestation of the physical interactions’ are indeed inconsistent.

That is the conclusion of the zombie argument, but as noted, I believe the argument fails. But even if one holds the argument to succeed, that doesn’t mean that the brain is irrelevant: for instance, on the popular integrated information theory of consciousness, consciousness arises once information is ‘integrated’ enough in a certain way. The theory is essentially panpsychist, meaning that some consciousness is associated with information everywhere, but it’s only in certain systems that its integration is sufficient for conscious subjects, and conscious experience, to arise—so that a brain might be conscious where a stone (or indeed, the cerebellum, despite containing most of the brain’s neurons) is not.
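
Roughly, and glossing over differences between versions of the theory (this sketch is mine, not a precise statement): the quantity doing the work is the integrated information Φ, which measures how far a system’s cause–effect structure is irreducible to that of its parts.

```latex
% Schematic only; the precise definition differs across versions of IIT.
\Phi(S) \;=\; \min_{P \in \mathcal{P}(S)} D\big( \mathrm{CES}(S),\, \mathrm{CES}(S/P) \big)
% \mathcal{P}(S): the partitions of the system S into parts
% \mathrm{CES}: the cause--effect structure specified by a (partitioned) system
% D: a distance measure between such structures
% \Phi > 0 means the whole specifies something over and above its parts;
% IIT associates conscious subjects with local maxima of \Phi.
```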

I’d be happy to discuss the details of the model, but there’s already a thread that might be more suited to that.

That would only be contradictory if what falls under the physical description were all we could know about the world, but quite the opposite is the case: we know about the physical description only by proxy, via the mediation of our conscious experience, which is what we know most intimately and primarily. So it’s not that the intrinsic properties and what they do are in any way unknown; they are what we know first and foremost.

This is exactly what I claim happens by means of the von Neumann process: by essentially having access to its own properties, a system can represent the world (via the modal fixed-point theorem) in terms of its intrinsic properties. If you want an analogy, consider painting by numbers: without a code for how to color in the different areas, what the picture shows is underspecified; filling it in yields a concrete picture. In the analogy, the intrinsic properties provide the colors that fill in the picture and make the representation definite. Mere structure, relations without relata, is indeterminate; the intrinsic properties provide the relata that single out a concrete realization of that structure.
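
For reference, the theorem invoked here, as I understand it, is the fixed-point theorem of provability logic (de Jongh–Sambin):

```latex
% Fixed-point theorem (de Jongh--Sambin), stated for the provability logic GL:
% if every occurrence of p in A(p) lies within the scope of \Box, then there
% is a sentence H, not containing p, such that
\mathrm{GL} \;\vdash\; H \leftrightarrow A(H)
% and H is unique up to provable equivalence. Informally: self-referential
% specifications of this form have definite solutions, which (on the view
% sketched above) is what lets a system stably represent its own properties.
```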

All interesting questions, but rather beside the point—these are part of the ‘easy problems’ of consciousness.

In the end, they’re just the concrete properties that bear the relations figuring in our abstract descriptions. As an analogy, consider the difference between a system of axioms and the models of those axioms: the group axioms, say, specify a certain structure, which can be realized by different concrete groups. A given group might in turn be realized by means of matrices, with the matrix product yielding the group operation. These matrices give a concrete realization of the group structure: they are, in that sense, the intrinsic objects, and their properties the intrinsic properties.
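
To make the analogy concrete, here’s a minimal sketch (the example and names are mine): the cyclic group of order four, given once abstractly as addition mod 4 and once ‘intrinsically’ as rotation matrices, with the matrix product as the group operation.

```python
# Minimal sketch: one group structure, two concrete realizations.
import numpy as np

def add_mod4(a, b):
    """Abstract realization: elements 0..3 under addition mod 4."""
    return (a + b) % 4

# Concrete realization: powers of the 90-degree rotation matrix,
# with matrix multiplication as the group operation.
R = np.array([[0, -1], [1, 0]])
matrices = [np.linalg.matrix_power(R, k) for k in range(4)]

# The map k -> R^k carries the abstract operation to the matrix product,
# so both systems realize one and the same group structure.
for a in range(4):
    for b in range(4):
        assert np.array_equal(matrices[add_mod4(a, b)], matrices[a] @ matrices[b])
print("Both realizations carry the same group structure.")
```

The structure is what the axioms pin down; which objects realize it, numbers or matrices, is left open.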

Note that different realizations of the same group are possible: one could then use the matrix representation to form a model of another realization, but that doesn’t mean the matrices themselves become identical to the objects they model (in fact, that’s how elementary particles are modeled in quantum field theory). They just realize the same structure.

Where this becomes relevant is that any system of axioms admits of different concrete realizations, and (for sufficiently powerful systems) the axioms can’t fix all the properties of those realizations. That means there are properties P such that both P and not-P are consistent with the axioms; in turn, we can build models that have property P and models that lack it. That’s where things become relevant for the zombie problem: we can consistently build models with and without P(henomenal experience), but that doesn’t mean the system we’re modeling could equally go either way; it’s just an inherent limitation of using models to understand the external world.
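
A toy instance of such a property P (again my example, not a claim about phenomenal experience itself): commutativity is left open by the group axioms, so we can exhibit one model that has it and one that lacks it.

```python
# The group axioms don't decide commutativity: Z_4 is abelian, S_3 is not.
from itertools import permutations, product

def is_group(elements, op, identity):
    """Check closure, associativity, identity, and inverses."""
    E = list(elements)
    closed = all(op(a, b) in E for a, b in product(E, E))
    assoc = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(E, E, E))
    ident = all(op(identity, a) == a == op(a, identity) for a in E)
    inv = all(any(op(a, b) == identity for b in E) for a in E)
    return closed and assoc and ident and inv

def is_abelian(elements, op):
    E = list(elements)
    return all(op(a, b) == op(b, a) for a, b in product(E, E))

# Model 1: Z_4, the integers 0..3 under addition mod 4.
z4 = range(4)
def z4_op(a, b):
    return (a + b) % 4

# Model 2: S_3, the permutations of (0, 1, 2) under composition.
s3 = list(permutations(range(3)))
def s3_op(p, q):
    return tuple(p[q[i]] for i in range(3))

assert is_group(z4, z4_op, 0) and is_group(s3, s3_op, (0, 1, 2))
print("Z_4 abelian:", is_abelian(z4, z4_op))  # True:  the axioms permit P
print("S_3 abelian:", is_abelian(s3, s3_op))  # False: and they permit not-P
```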

So you might think of the axioms intended to provide a theory of the natural numbers as our theory, the natural numbers themselves as the physical universe, and our attempt to grasp the world as the building of models that possess that structure; the properties the ‘real’ natural numbers have are then the intrinsic properties. That is, suppose the natural numbers have P: since the axioms don’t fix that they do, we can build consistent models of those axioms that have P and consistent models that lack it. No theory will ever decide the matter, but of course, to the natural numbers themselves, could they introspect, it would be perfectly obvious. And that’s the situation we’re in: we can introspect and see that we, intrinsically, have the property P (i.e. conscious experience), but our theories can never fix the matter.
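
The theorem standing behind this picture is Gödel’s first incompleteness theorem, together with the completeness theorem for producing the models (stated for a sound theory to keep things simple):

```latex
% For any sound (true of \mathbb{N}), recursively axiomatizable theory T
% extending Peano arithmetic, there is a sentence G_T with
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T ,
% so by the completeness theorem there are models
\mathcal{M}_1 \models T + G_T \qquad \text{and} \qquad \mathcal{M}_2 \models T + \lnot G_T ,
% while the standard model settles the matter: \mathbb{N} \models G_T.
% The theory leaves P := G_T open; the intended structure does not.
```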

No; the zombie argument claims that the physical processes don’t necessitate any consciousness. There may be conscious experience when those processes occur, but if they can be imagined without it, then they don’t necessitate consciousness; so, given that we actually are conscious, there must be something extra that does.

And plenty about the brain is not understood in detail, but that’s entirely irrelevant: if we have grounds to believe that a full physical description of the brain exists (and if we didn’t, we’d already have abandoned physicalism), then we can run the zombie argument.