I’m not necessarily trying to prove or disprove qualia - just saying that our subjective internal states can be communicated given the right set of circumstances.
I’m not sure if that means I don’t think qualia exist or not, you tell me.
On a variety of levels I do not understand how this is relevant. Perhaps you could elaborate, keeping in mind that merely saying that communicating verbally is “an approximation”, without disclosing how and when that approximation becomes problematic, is meaningless. For example, I’m not sure how and why your purported approximation is relevant when verbally communicating the digits of pi.
If you are saying our subjective internal states can be communicated, then you are saying that qualia can be communicated, which is in contradiction with, and therefore a disproof of, qualia.
I have been explaining why our subjective internal states cannot be communicated, by trying to show you that there is no way to unambiguously map our physical internal states to our subjective internal states.
I’m heading out for a few hours so this will be short. Basically, a point was made that a key problem with subjective experience is that it can’t be communicated - that a description of red doesn’t create the same internal response as the actual sensory information. My point about verbal communication is that it simply is not a precise method of communication in general: when I say “bird” you and I are not thinking the same thing. It may be close enough for us to get by, but, like red, it’s not really communicating properly what is meant. I’m not saying red and bird have the same level of lack of information, just that verbal communication drops lots of information in general.
Discussion of degree of approximation is not relevant when the argument is that zero information is communicable about something. I’m a little confused about where you are coming from here. Do you think that you can communicate in some approximate sense what “red” is to someone else(*)? We can go through that exercise, but I assure you that it will end in failure. You will not communicate some very vague approximation of “red”; 100% of the time you will communicate nothing at all. Ever.
(*)And don’t try something like “it’s a warm color” because if someone else sees blue where you see red, then everything they will associate with warmth will look blue to them…
If you are referring to a description prior to any experience with vision, and the method of communication is verbal, then you are absolutely correct.
If you are referring to the use of the word red to a person with red experiences, then you are incorrect: you do cause the brain to reference some gross approximation of red, but you are not able to initiate the same strength of internal experience as when the proper neurons are stimulated, either by actually seeing red or through some other form of neuron stimulation (e.g. dreaming).
Because we aren’t able to communicate anything until we have a common base of information, I was assuming much of our communication discussion referred to the translation from words into internal states, and my point is that whether it’s color or anything else, it’s generally grossly approximate.
In other words, in our brains, we physically lack the capability to trigger the specific neurons required to retrieve or cause arbitrary internal states, we have a very limited function in this regard. If, on the other hand, we had a mechanism that allowed us to queue up multiple requests for neuron activation, and then cause them to happen in the right sequence, then we could both simulate and retrieve on demand the types of internal states that are difficult to communicate.
We have this capability with respect to dreaming, and we don’t with respect to words. Thus my point is that communicating with words is limited by our physical structure and should not necessarily be used to determine what is possible and what is not possible to communicate in general.
I believe, given the proper mechanisms and circumstances, that the information can be communicated (see explanation above with internal mechanism allowing much greater control over neurons and other internal items), at least in the sense that you can transmit data such that the receiver ends up in a physical state that is identical to the physical state in which the sender experiences X.
And because I think we are only matter and energy, I think that is a complete description of the physical state and any resulting subjective states.
So I guess that means I don’t believe in qualia.
This appears to be your explanation why our internal state cannot be communicated:
But this is not a robust rebuttal.
If there are 2 identical brains (after the communication) - how can you just say “suppose that B sees blue”?
The brains are identical, therefore their mappings from external information to internal information, word references, past history, etc. are the same. Saying that they are identical and different is not really consistent and I’m unable to just suppose that B sees blue - I don’t know what it means to say that in this context.
You might argue back that there could be a brain from an alien planet that experienced a completely different set of wavelengths, but still arrived at a physical structure that matched A and B, and therefore led to the same brain but a different conscious experience - and I’m not sure how I would respond to that; that’s a tricky one.
Are we asking that the robot emulate you, or merely that it emulate a human mind? It might make a choice based on its color preference, which might be different from your color preference.
The whole idea of qualia is subjective. The robot might very easily have its own subjective preferences; it simply won’t be able to tell you why, any more than I can tell you why!
I was assuming that “just like a human” meant that we could create a robot/zombie that behaves the same as a specific human in all situations. So whether it’s me or someone else, the ability to create a robot/zombie that acts just like a human but without qualia implies we can choose a specific human to mimic and verify the mimicry is accurate.
At the point I wrote that I was referring to the broader concept of subjective experience as affected by qualia. But I would still say the same thing specifically about qualia, however providing the context of AI as opposed to the human brain.
I’ll interject here and say that I couldn’t add anything to **Raft’s** counter-argument so far. Well done.
I think machines can simulate functions of the human brain without necessarily doing so through the same process. And I believe a machine that appears to be aware, or intelligent, or emotional, or subjectively experience qualia, is aware, intelligent, emotional, and subjectively experiencing qualia. I can make some reasonable predictions about how computers may achieve these things. However, I can only make broad guesses about how the human brain does these things.
I think the simple explanation for the perception of qualia in the human brain doesn’t require a lot of detailed knowledge of the brain processes though. Still, those parts of the brain process I describe next are just guesses.
It’s clearly a response to stimulus or a memory of such a response. We just don’t have the ability to reflect on the operation of the brain at that level of detail. We ‘see’ the color red when we reflect on a model of an image in our brain. That model is updated in real time through continued stimulus from our eyes, and from analytical processes in our brain. That model has to have a substructure component to associate with the stimulus from our red cones in the eye. Red is that substructure component, and will vary from brain to brain. We cannot look at that substructure component though. It only reveals itself as something distinguishable from green and blue. We don’t get to see how it does that. And the model is also updated from the persistent memory of experiences, and we have a tendency to ascribe other attributes to the substructure component that are not the direct stimulus response.
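To make that picture concrete, here is a minimal Python sketch (purely hypothetical, my own toy construction rather than a claim about how the brain actually works): each “brain” assigns its own opaque internal token to each cone channel, and the only operation available from the outside is telling tokens apart, never inspecting them.

```python
# Toy sketch: brain-specific, opaque "substructure components" for cone channels.
# All names and the representation are hypothetical illustrations.
import secrets

class Brain:
    def __init__(self):
        # Arbitrary internal labels for the three cone channels, unique to this brain.
        self._substructure = {channel: secrets.token_hex(4)
                              for channel in ("red", "green", "blue")}

    def distinguishable(self, channel_a, channel_b):
        # The only thing the model reveals: whether two stimuli map to
        # different internal components.
        return self._substructure[channel_a] != self._substructure[channel_b]

a, b = Brain(), Brain()
print(a.distinguishable("red", "green"))                  # True for both brains...
print(a._substructure["red"] == b._substructure["red"])   # ...but the tokens themselves (almost surely) differ
```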
As **Raft** has said, our brains don’t all speak the same language. So even if we could reflect on the model at the level of the substructure components, it would be as useless as trying to run the machine code for one type of processor on another.
Speculating on how the brain does it is much more interesting than looking at the machines. They will be able to reflect on their own operation in much greater detail unless that functionality is intentionally removed (assuming it was intentionally put in to start with; I can’t think of a reason to leave it out initially). But many interesting experiments will come from watching machines that can operate intelligently, but are prohibited from accessing their own internal structure.
But I do think it will be a trivial matter for an AI built through simulation of human brain functionality, though I think the simulation itself will be incredibly complex. Nothing like the simple attempts at computational intelligence so far.
I didn’t realize the discussion was about two identical humans. I thought it was merely about robots indistinguishable from humans in general.
The problem of identical duplicates is that of gradual divergence from identity. In time – and no one can know how swiftly until experiments are conducted – the duplicates will begin to grow apart. A moment will come when one of them wants a pizza and the other wants a burger, and the divergence can only widen from there.
But let’s stipulate exact and non-diverging duplicates. Given that, the duplicates will behave the same, including making the same color-preferences and other behavioral identities, based on innate individual reactions to stimuli. The robot feels exactly the same way you do, because we duplicated the “key ingredient.”
I think we are only matter and energy – and information, which, I think, is more key here. The duplication of information is much easier than comprehending it. I can trivially duplicate a DVD…but I wouldn’t have a chance in hades of knowing what the information on it means if printed out for me in a hexadecimal dump. So, while a Star Trek type Transporter might duplicate a person, I don’t think that means it is possible for us to know what a person feels, even given an explicit plot of all the particles and energies in their brain.
The data is so deeply interwoven, with vast chunks of information linked to other chunks. This is already seen in the very simplest “neural net” programming. There isn’t any algorithm that can be analyzed; the information is stored “holographically,” spread out over all the neurons of the net. This is why, for instance, you can remove one neuron and almost no information is lost. (Try that with an algorithm!)
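A toy sketch of that robustness claim, assuming nothing about real neural-net architectures (the numbers and the “readout” functions are my own made-up illustration): a value stored redundantly across many units barely changes when one unit is removed, while a value stored in a single “slot” is destroyed.

```python
# Toy illustration of distributed vs. localized storage (not real neural-net code).
import random

N = 1000
value = 0.73

# "Distributed" storage: every unit carries a noisy copy; readout is the mean.
distributed = [value + random.gauss(0, 0.05) for _ in range(N)]
# "Localized" storage: one designated slot holds the value, the rest hold nothing.
localized = [0.0] * N
localized[42] = value

def readout_distributed(units):
    return sum(units) / len(units)

def readout_localized(units):
    return units[42]

# Knock out unit 42 in each representation.
print(readout_distributed(distributed[:42] + distributed[43:]))                    # still ~0.73
print(readout_localized([0.0 if i == 42 else u for i, u in enumerate(localized)])) # 0.0 - information gone
```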
My belief is that consciousness comes from the incredible depth of the network, with many parts of it specializing in modeling what other parts are doing, including many parts that model what other parts might be about to do. Thus, the “preconscious,” which weighs such things as the words we’re about to utter, either giving the imprimatur to speech or else sending the words back to the scriptwriters as unsuitable. A whole censorship routine, built in to the mechanism of speech! I think consciousness is just the brain’s awareness of itself, coming from the brain’s observation, and predictive modeling, of itself.
Easy to say… All but impossible to reproduce from scratch!
We shouldn’t be, and I am certainly not; however, Raft keeps coming back to this tautology of “communicating” between identical objects because it is the simplest application of his more general tautology.
No, no, no, no. I don’t know how many times I have to explain this to you! When you use the word “red” to a person with “red” experiences, you have no way of knowing whether that person associates the word and experiences conjured up by the word “red” with the same qualia of “red” that you experience.
The case of 2 identical brains is a special case of your more general tautology. Suppose you are communicating with an alien on the other side of the universe, who happens to have the exact same physiology as you. If you want to transmit the information of what a “left hand” is to the alien, it has no way of knowing which hand you mean. “Which hand?” the alien asks. The informational distinction is not transmitted despite the fact that the two anatomies are identical. You cannot transmit the information to the alien about which hand you are really referring to without the alien already containing the information you are trying to send. So next you assume that not only are the gross anatomies identical, but the internal brain states are identical as well. In other words, the alien already knows the distinction between right and left, and no new information can be transmitted. Not only does the alien already have the information you are trying to send, but since the brains are identical, the two of you are trying to send the same information to each other! Not only is the argument tautological, but it is not internally consistent.
Compare this to the case of actually conveying information. No identical brain states (no tautologies) are required. What is needed is a foundational shared vocabulary, but one that does not necessarily overlap with the information you are trying to convey (this is obvious when you reflect upon the fact that a human can learn things despite starting out life with a tiny shared vocabulary). For example, assume you have a shared vocabulary of the natural numbers and arithmetic. You can convey to the alien information such as the fact that the number 11 is prime, without it having already known this, nor having shared the same internal brain state. The same is not true for qualia.
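As a rough sketch of that example (the encoding of the claim below is my own, hypothetical choice): given only the shared vocabulary of numbers and arithmetic, the receiver can check the transmitted claim for itself, so genuinely new information is conveyed.

```python
# Minimal sketch: verifying a transmitted arithmetic claim using only the
# shared vocabulary of natural numbers and arithmetic.
def is_prime(n: int) -> bool:
    """Primality check built from nothing but division and comparison."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Sender transmits the claim as data; receiver verifies it independently.
claim = {"number": 11, "property": "prime"}
receiver_agrees = is_prime(claim["number"])
print(receiver_agrees)  # True - new information conveyed and checked without shared brain states
```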
What is information?
If it’s a tautology I will discard it as an example, but we are exploring the issue and you have not shown this to be the case yet. I see you expand on it in later comments so I will respond to those.
Yep, I get that.
Given that we don’t know for sure (they could be identical in all cases or they could be different in all cases or some mix), how can any conclusions be drawn? We simply don’t know.
Which leads me to consider the case of the exact same physical structure, where it seems reasonable to conclude both would result in the same experience - and if they don’t, that would be a discussion in itself regarding why we think they could be different, etc.
This statement and your following paragraphs do not respond to my objection.
If 2 brains are identical and have arrived at their states due to the same environment and path - how can we just suppose that B sees blue?
The only way this seems possible is if we disconnect our internal conscious states from the state of our brain in such a way that there is an objective internal conscious experience that has an arbitrary mapping to the internal physical state.
Whether the communication is left vs right or numbers, in both cases there would have to be some shared knowledge to be able to communicate.
If there was no shared knowledge whatsoever, and I started sending strings of 1’s and 0’s, about any topic no matter how trivial - no communication would take place.
You would agree with that, correct?
It sounds like you think I am arguing qualia exist AND we can communicate them.
Whereas, if you read my previous posts you will see that ultimately I think we can cause the same experience to occur in a receiving party if the conditions are just right.
Whether you call that communicating qualia or not, I don’t know - but I will say that I have read nothing that is a strong argument against physical structure having a consistent mapping to conscious state.
If you are arguing that physical structure does not have a consistent mapping to conscious state then please provide an argument to that effect (but it needs to be better than “suppose B sees blue”).
Note: The identical alien brain from a completely different environment is an interesting situation - is it possible that 1 exact brain structure has multiple consistent mappings to different environments? On the surface it seems mathematically possible - not sure how to rebut this.
Actually, I would hold that of your position – if only that is possible which we know to be the case, then only that which is actual is possible, and there’s no need for the word ‘possible’ at all. There must be, in some sense, a notion of something that could have been the case, but isn’t – something that is only contingent.
As I said, one can fine-grain the simulation down to that point, where every neuron, or every atom and elementary particle if you will, is being faithfully simulated – but the whole thing is still just a big lookup table. So if you don’t consider the original lookup table to have subjective experience, you shouldn’t expect this one to have them, either. But this one is physically in every way equivalent to you, a faithful simulation of all the physical properties of your brain; so if it doesn’t have qualia, and you do, then qualia are extra-physical.
It’s easiest to imagine in the case of the neurons: each has quite a simple lookup table correlating inputs with outputs, i.e. changing its firing rate based on the firing rates of the neurons it is connected with; so ultimately, the whole of your brain, physically, can be modeled using these small lookup tables, each one taking as its input the output of other lookup tables. This is completely isomorphic to what actually goes on in your brain; as far as the physical substratum is concerned, this is all that happens.
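Here is a minimal sketch of that picture (hypothetical and heavily simplified; the discretized firing rates and two-input “neurons” are my own assumptions, not part of the original argument), showing a tiny “circuit” built from nothing but lookup tables feeding each other.

```python
# Toy sketch: neurons as lookup tables from input firing rates to an output rate,
# composed by feeding one table's output into another.
from itertools import product

RATES = (0, 1, 2)  # discretized firing-rate levels (an assumption for illustration)

def make_neuron_table(rule):
    """Build a lookup table mapping every pair of input rates to an output rate."""
    return {(a, b): rule(a, b) for a, b in product(RATES, RATES)}

# Two example "neurons" with different input/output correlations.
neuron_1 = make_neuron_table(lambda a, b: min(a + b, 2))   # roughly excitatory
neuron_2 = make_neuron_table(lambda a, b: max(a - b, 0))   # roughly inhibitory

def step(stimulus_a, stimulus_b):
    # neuron_2 takes the output of neuron_1 as one of its inputs,
    # i.e. one lookup table feeding another.
    out_1 = neuron_1[(stimulus_a, stimulus_b)]
    return neuron_2[(out_1, stimulus_b)]

print(step(1, 2))  # the whole "circuit" is nothing but table lookups
```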
Oh, we do! The only thing we need to know is that there is a finite number of stimuli, to which you can react in a finite number of ways. So there is a lookup table that connects stimuli and responses in such a way as to be indistinguishable from you in every action.
But we can mimic your exact inner structure, using nothing but lookup tables, too!
If two brains are exactly identical then sure, both A and B see the same qualia (I never said otherwise). What I showed, and what you seem to continue to ignore in a way that is increasingly perplexing, is that if the two brains are identical then the argument is trivially tautological (and I also showed that your argument is tautological if the brains are not identical). Since you continue to act as though you haven’t read what I’ve said very carefully, I’ll make this very simple: if the two brains are identical, they are not two different brains, they are the same brain. Communication is tautologically possible “between” a brain and itself.
I’m not going to respond to the rest of your post in order to focus on one misunderstanding at a time or else I’ll go nuts.
Same brain, same qualia?
Again, the flat earth is not possible due to the laws of physics.
If we say, a priori, earth could be flat, therefore the oceans are causing a giant waterfall into space - we would be drawing a conclusion based on invalid assumptions.
This is very different from saying a creature with a purple head is possible - even if it doesn’t exist it’s not violating the laws of physics, it just requires the right set of environmental conditions to arise.
You make an assumption I wouldn’t make.
It’s possible you could have a detailed lookup table version of a human and have it feel differently than we do.
It’s also possible that the closer you get to the level of detail you are describing, that the machinery experiences the same thing we do.
It’s also possible that the only way to achieve our conscious experience is to have a similar physical setup based on an electromagnetic/chemical soup.
We just don’t know enough about consciousness to come to many conclusions in this area.
Although I understand where you are going with this, you stopped short of your goal. You descended from the top of Mt Everest to base camp at 17,000 feet and declared “we are now at sea level”.
One example: to alter the strength of a connection, at least 2 things happen in concert:
It’s possible a lookup table wouldn’t be adequate.
One simple reason:
If the continuous nature (vs discrete) of our underlying hardware is important to our operation, then the lookup table simulation will fail because a lookup table is discrete.
Our brain cells (neurons and glia) are bathed in a continuous gradient of electrical and chemical signals - it’s possible (not guaranteed) that that attribute is important.
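A toy illustration of that worry (the response curve and resolution below are arbitrary, made up purely for the sketch): a lookup table can only sample a continuous signal at finite resolution, so any behavior that depends on values between the table’s entries is lost - and whether the brain depends on such values is exactly the open question.

```python
# Toy sketch: a discrete lookup table approximating a continuous response curve.
import math

STEP = 0.1  # table resolution (arbitrary)

# Stand-in for some continuous response of a cell to a chemical gradient.
def continuous_response(x):
    return math.sin(10 * x)

# Build the discrete lookup table over [0, 1].
table = {round(i * STEP, 1): continuous_response(i * STEP) for i in range(11)}

def table_response(x):
    # Nearest-entry lookup: all intermediate structure is thrown away.
    nearest = round(round(x / STEP) * STEP, 1)
    return table[nearest]

x = 0.07
print(continuous_response(x), table_response(x))  # noticeably different values between table entries
```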
That’s a bold statement, I assume not backed up by a mathematical proof regarding continuous vs discrete?