Do we even know what consciousness is at all?

We do know this much: it doesn’t. Physics obeys a principle of locality:

In physics, the principle of locality states that an object is influenced directly only by its immediate surroundings. A theory that includes the principle of locality is said to be a “local theory”. This is an alternative to the concept of instantaneous, or “non-local” action at a distance. Locality evolved out of the field theories of classical physics. The idea is that for a cause at one point to have an effect at another point, something in the space between those points must mediate the action. To exert an influence, something, such as a wave or particle, must travel through the space between the two points, carrying the influence.

So I don’t have to worry about what’s happening on Mars when I throw a baseball on Earth, billiard balls don’t have to worry about other balls around them, and cells (or whatever) don’t have to worry about whether they’re part of a network: they just react to what directly impinges on them in the same ways.

Even in quantum mechanics, whatever stance one might take on the issues surrounding Bell inequality violations, we have a principle known as ‘signal locality’, which ensures that what happens over here can’t be used to influence processes over there; in fact, it’s a theorem that can be straightforwardly proven from the theory’s basic formalism.

So, this is just not right.

Anyway, I may be guilty of leading you down a blind alley here. Ultimately, the question of locality really has nothing to do with the zombie argument; I was hoping to build some intuition for the conceivability-claim by means of the billiard ball example, but evidently, that failed. What really matters is just the causal closure of physics: it ensures that all physical processes can be conceived of—are consistently possible—without any conscious experience; so just the bare physical facts appear not to fix the facts regarding experience.

As I noted, I think this argument ultimately fails, although the reasons it does so are quite subtle. So I think I’ve spent enough of my time trying to explain an argument I don’t really think works, in the end, just to point out that trying to dismiss it for shallow reasons or based on misunderstandings is just going to mislead. If you’re interested in the current discourse on this, I think the Stanford Encyclopedia article I linked earlier is a good starting point (although it’s a bit dated and is missing important later developments, such as Chalmers’s explication of conceivability via two-dimensional semantics).

But we also know that things that don’t happen when one billiard ball bangs into another, or one neuron connects with another, do happen when there are multiple connections and types of connections. One singular connection does not make a brain, or even a kneejerk reaction; it takes an intertangled combination of them to do that. And that intertangled combination does a lot of things we don’t understand; or at least don’t understand yet. One of them appears to be consciousness.

Fair enough.

Sorry to take so long replying. I wanted to get into detail on some of this but I think we sometimes are talking past each other because of a lack of common definitions. That is what I’m trying to resolve with you at the moment. But in general I’m far more interested in the parts of consciousness we can’t simply explain with machine models. IMO the discussion of electrical activity and zombies is not important, it’s not consciousness. We need to talk about language, creativity, complex reasoning like mathematics and things like artistic expression that often involves unconscious or subconscious processing.

And now I have a ton of new posts to catch up on also.

First, if you think this is the crux of the matter you may as well give up now. This is the very last thing we’ll ever learn about how consciousness occurs. We’ll know all about DNA and how every last cell in our bodies is formed, we’ll know how quantum gravity works, and why squirrels run halfway across the road when a car is coming and then turn around and go back, before we understand the specifics of the electrical activity in our brains and how it is a component in consciousness. It’s not going to be a particular structured instance of electrical activity; it’s an enormous volume of electrical activity dynamically interconnecting with our biological hardware. And it’s also going to be no more connected to a subjective experience than the HTTP protocol is to the complex content we interchange to post these messages. The subjective experience is the result of the processing, and it’s the content within the electrical activity that matters, not the process that produces it.

Secondly, it’s not something that “somehow comes along”. It comes from billions of years of evolution that resulted in our brains creating a system for modeling the real world. That’s what our subjective experiences are: the model of the world we create in our brains and then use internally and express externally. I don’t find this at all difficult to understand. The simplest of animals with a neurological system have subjective experiences. If you’re asking about our most complex subjective experiences, we don’t have that level of understanding, but the subjective experience of imagining an apple is pretty simple: we have the memory of experiencing apples, we form a model that provides an image, we recall taste and associated feelings. I don’t know exactly how electrical activity in our brains does that, but it’s pretty easy to see it’s something a machine can do. I don’t see any reason to think the more complex aspects are beyond our understanding even if we don’t see the underlying details of the process.

I don’t reject Machine State Functionalism or Multiple Realizability. That’s exactly what my argument is.

If brain activity is not behavior then what is it? I don’t think all brain activity can be classified as behavior, but when someone says “Think of an apple…” what else would you call the brain activity that results?

I don’t understand this disconnect you talk about, perhaps you could elaborate. Then I could comment.

I don’t care about ‘zombie’ arguments at all. They’re meaningless. Two identical systems will behave in exactly the same way. That’s what ‘identical’ means, and it can never happen outside of a simulation because of physical limitations. There’s no subtracting the experience; the experience is the result. It’s another No True Scotsman argument.

Yet these things are transparently explicable and follow necessarily from the aggregate of these local descriptions, just as how any computation is transparently implementable using only NAND gates. Consciousness just doesn’t. You can try to sweep that off the table by vague gestures at complexity, networks, emergence and whatnot, but all you’re doing there is refusing to engage with the problem.

Sorry, but these all appear to be mutually inconsistent notions from which I can’t distill a coherent line of thought to reply to. We’ll never understand how subjective experience arises; it’s the result of the processing; it’s the content that matters; subjective experiences are easily understood; a neurological system is necessary; subjective experiences are models; subjective experiences are electrical activity; even the more complex aspects aren’t beyond our understanding; subjective experience is all in the process, where neither content nor electrical activity nor neurological systems matter (that’s what functionalism entails); and so on.

Brain activity is just brain activity; behavior is what that brain activity produces. It’s what I do, where brain activity is what I am. Again, saying that it is also behavior would lead to an inconsistent regress, where behavior is produced by behavior.

The problem is that even if you fully understand the physics of the situation, there’s no need for experience at all. Compare, say, the function of addition: given a certain logical circuit, then you can’t think of the circuit doing what it does without also thinking about addition: it’s just what it does. But all of the physics put together necessitates no talk at all about subjective experience.
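To make the addition analogy concrete, here is a minimal sketch of a half-adder built from nothing but NAND gates (the wiring is the standard four-NAND construction; all function names are just illustrative). Once the gates are wired this way, you can’t describe what the circuit does without also describing one-bit addition:

```python
# A NAND gate: the only primitive we allow ourselves.
def nand(a, b):
    return 0 if (a and b) else 1

# A half-adder built purely from NAND gates:
# sum = a XOR b, carry = a AND b, both expressed via NAND.
def half_adder(a, b):
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR from four NANDs
    c = nand(n1, n1)                    # AND = NOT(NAND)
    return s, c

# For every possible input, the circuit's output just is the sum of the bits.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b
```

The point of the analogy: given this wiring, the claim “the circuit adds” isn’t an extra fact layered on top of the gates’ behavior; it’s entailed by it.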

And even if you think that the zombie argument fails, it’s simply not a coherent position to just ignore it—it’s part of the explanatory burden any successful theory of consciousness will have to meet, if perhaps by showing it unsound. You can’t hope to formulate a successful theory of gravity without addressing Einstein’s elevator thought experiment, and you can’t hope to formulate a successful theory of consciousness without addressing the zombie argument. Trying to sweep it under the rug just means that your understanding will remain naive.

We appear to be in pretty drastic disagreement about how well all the functions of the brain are understood. I was unaware that all research in this area has come to an end because all questions about everything happening in the brain have been fully answered.

I didn’t say they are explained, but that they are explicable: whatever the details at work are, there is absolutely nothing to suppose they won’t be amenable to the same mode of explanation that moves from the microscopic details to the macroscopic behaviors. Hence, while acknowledging them to be fiercely difficult, they are the ‘easy’ problems of neuroscience. Conscious experience seems unique in not fitting that mold.

How do we know that it doesn’t?

We don’t; but currently we can’t see how a model could explain consciousness in those terms.

Earlier in this thread, and in countless other threads on this topic, it was implied that we have a basic model of how subjective experience might work but the brain is so complex, there are so many connections, that we’ve a long way to go on figuring out the details.

This actually understates the problem.

We don’t have that crude starting foundation yet. And it’s hard to see how we can get started. We can understand how the brain can reason about things; computer science gave us an excellent leg up on that. But how it can feel anything remains mysterious.

Well, we don’t, hence I use words like ‘seems’ and ‘appears’, although apparently to no great effect. But we have, as discussed, good a priori arguments that make it seem that way, and absent defeaters, those need to be taken into account.

Take @Mijin’s example of (logical) reasoning: it’s very plausible that there exists an algorithm that starts from some set of premises and arrives at a conclusion that follows from them. In fact, such algorithms have been produced, for instance in mathematical proof-checking systems like Coq. So there’s a clear way to analyze reasoning in terms of components, implementing the algorithm’s steps.

Now, we may not know in great detail how reasoning is produced in the human brain. But the existence of such an algorithm gives us a plausible explanation, and even if it isn’t the one fickle evolution chose to implement within human brains, that means there’s no essential mystery to find, here. Implement the algorithm, and then that’s just what logical reasoning is. It’s not something a single neuron could do, but it’s straightforward in principle how a collection of neurons bands together to do so, and no single neuron has to do anything different from what it does all the time. We may not have the whole story, but a good outline is enough to dispel any lingering air of mystery.
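A toy sketch of the kind of algorithm meant here—forward chaining, repeatedly applying modus ponens to a set of propositional rules until nothing new follows. All the fact and rule names are made up for illustration; this is the bare shape of the idea, not any real prover:

```python
# Forward chaining with modus ponens over propositional Horn rules.
# Each rule pairs a frozenset of premises with a conclusion.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: if all premises hold, the conclusion holds.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (frozenset({"socrates_is_human"}), "socrates_is_mortal"),
    (frozenset({"socrates_is_mortal"}), "socrates_will_die"),
]
derived = forward_chain({"socrates_is_human"}, rules)
```

Each step is a trivial set operation that a mindless component could carry out, yet the overall process is recognizably logical inference—which is exactly the decomposition-into-components point.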

However, nothing about that story necessitates anything about consciousness. The action of a single neuron can occur without any conscious experience: it’s just a mapping from inputs to outputs. And that’s all those neurons do, whether part of a larger whole or not. So it’s at least consistently imaginable that they do these things without any attendant conscious experience. In reality, there might be conscious experience associated with that, but for carrying out their particular function, it is entirely dispensable. You could replace a neuron by a simple database that checks each incoming signal and gives an appropriate output, and nothing within the network would be any the wiser; in fact, you could do that with all the neurons. And they can perform their task entirely in the dark, and fully account for the behavior of the network, so the entire network can do so, too.
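The lookup-table replacement can be sketched directly: a toy threshold ‘neuron’, and a drop-in table that simply stores its input–output mapping. From the network’s point of view the two are indistinguishable (the weights and names here are arbitrary, chosen only for illustration):

```python
from itertools import product

# A toy neuron: fires (1) iff the weighted sum of its binary inputs
# reaches a threshold.
def neuron(inputs, weights=(0.6, 0.6, -0.3), threshold=0.5):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# The "zombie" replacement: a plain lookup table recording the neuron's
# answer for every possible input -- no processing, just stored outputs.
table = {inputs: neuron(inputs) for inputs in product((0, 1), repeat=3)}

def table_neuron(inputs):
    return table[inputs]

# The rest of the network can't tell the difference:
# identical outputs on every possible input.
assert all(neuron(i) == table_neuron(i) for i in product((0, 1), repeat=3))
```

Since only the input–output mapping is visible to the rest of the network, swapping every neuron for its table leaves all downstream behavior untouched—which is the dispensability point being made.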

And it’s the same not just with reasoning, but with any cognitive or mental faculty, with any behavior: all of that can be explained with a story just like the one above, and none of these stories will have any cause to mention conscious experience. So all that a human being does and says essentially can be done and said without the involvement of any consciousness. So can all that occurs on the physical level. Sleepwalkers can carry out surprisingly complex tasks, apparently unconsciously. In blindsight, visual information is processed by the brain, informing a subject’s actions, without ever being consciously available to them. Split-brain patients can act on cues never available to their conscious minds. And so on.

But if any and all of this could occur without conscious experience, then it means that all of it occurring can’t explain conscious experience. When we have the right NAND-gates wired together to carry out the algorithm implementing logical reasoning, then logical reasoning happens necessarily. It isn’t possible for the actions of the NAND-gates to occur without logical reasoning also occurring. But it strongly seems possible for any sort of physical action to occur without consciousness occurring. The absence of reasoning in the logical circuit is inconceivable, while the absence of consciousness is conceivable (hence, it’s also sometimes called the ‘conceivability argument’). We simply have no story, nor even the faintest outline of a story, of what needs to occur such that its occurrence necessitates conscious experience in the same way. (Well, obviously I believe I do have such a story, but let’s face it, it’s rather unlikely that I should’ve cracked this centuries-old mystery, so I’m just taking the perspective of the wider scientific community here.)

That’s what sets consciousness apart: for any of our behaviors, any of our reactions, any of our capabilities, it’s easy to at least see in what terms a story of them could be formulated, even if the real story takes a completely different approach. But for consciousness, such a story seems impossible: any story we do tell, down at the bottom, seems to be entirely independent of consciousness. Whatever you talk about, the answer to the question ‘could it happen without consciousness?’ always seems to be ‘yes’. And if that’s true, then this means that consciousness is fundamentally different from all other phenomena we’ve encountered so far. That’s what makes it interesting, and hand-waving it away deprives one of the fun of coming to grips with the whole thing.