Is the human brain the most complex object in the universe?

I’m not sure I’ll have the time to follow this thread anymore, so for now I’ll just quickly answer a couple of points…

But this is the basis of counterfactual reasoning. Since we know that if the Earth were flat, there would be such a waterfall, we can look for it and, based on its absence, conclude that the Earth isn’t flat after all. If a priori we only accepted as possible what is actual, this kind of reasoning would not be possible.

It might well violate the laws of physics: in a superdeterministic world, say, whose evolution is completely fixed by unique and necessary initial conditions, it might be physically impossible for a creature with a purple head to arise. Contrariwise, the laws of physics might just be local by-laws, differing throughout the uni- or multiverse, in which case flat ‘Earths’ might indeed exist somewhere.

Well, it’s my argument that it would feel different – namely, it would lack subjective states, and thus, not ‘feel’ anything at all. And frankly, anybody who wants to advance the position that lookup tables do have subjective states would have to make a darn good argument for it…

This, too, can be simulated, at least in principle – it works according to the same physical laws as everything else. The idea that this is not sufficient is precisely the qualia idea, i.e. that there is something else besides these physical laws that determines mental content.

But we know a lot about the physical laws consciousness supervenes on, and also a lot about computation and information, which we can use to put very strong constraints on the phenomenology of consciousness.

Both of which can just be put into the lookup table as well, so I’m not sure I see your point.

Analogue computers have the same computational power as digital ones, so anything you can do with a continuous process, you can do with a discrete one, too.

Actually, the Nyquist-Shannon sampling theorem provides precisely that: any continuous, analogue signal can be encoded, without loss of information, into a digital one, provided its frequency range is bounded in some way; and since our neurons neither use gamma rays nor radio waves to communicate, that requirement seems to be fulfilled.
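As a toy sketch of that claim (illustrative Python of my own, not from the thread; the signal and the names are arbitrary): sample a bandlimited signal above twice its highest frequency, and the continuous waveform can be rebuilt from the discrete samples by Whittaker–Shannon sinc interpolation, to whatever finite accuracy you care about.

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz
T = 1.0 / fs                    # sampling interval
n = np.arange(200)              # 2 seconds' worth of sample indices

def signal(t):
    # Bandlimited test signal: components at 5 Hz and 12 Hz,
    # both well below the Nyquist frequency fs/2 = 50 Hz.
    return np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

samples = signal(n * T)         # the discrete ("digital") encoding

def reconstruct(t, samples, T):
    # Whittaker-Shannon interpolation: x(t) = sum_k x[k] * sinc((t - kT)/T)
    # (np.sinc is the normalized sinc, sin(pi x)/(pi x), as required here).
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - k * T) / T))

# Evaluate between two sample points, away from the record's edges;
# the residual error comes only from truncating the infinite sinc sum.
t0 = 1.005
err = abs(reconstruct(t0, samples, T) - signal(t0))
```

Note the hedge in the thread is real: with a finite record the reconstruction is only approximate near the edges, which is exactly the "sampled for infinite time" caveat quoted later from the wiki.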

Most people think that intelligence is the easy part – how phenomenal experience arises isn’t known as the ‘hard problem’ for nothing…

Well, that’s what this sidetrack is all about. I think these are all ego-driven concepts, mythology created by people to explain their lack of understanding about how their own brains work. As Raft and I have been pointing out, these are simply the limitations on our brain’s ability to reflect on its own internal structure.

The authorities you appeal to in this case have only reached dead ends, so I don’t see much point in following their logic, or lack of it.

At this point in the discussion, I gather we are all down to the common concept that qualia represent an atomic component of our perception that cannot be broken down into smaller components, due to the aforementioned limitations. But there’s no reason to believe a machine has to have those limitations, or that these ‘hard’ problems are hard once that barrier is removed.

Concluding it MIGHT exist, assuming our knowledge is good, is reasonable.

Concluding it DOES exist and then building on that supposed “fact” for further conclusions would be a mistake.

I don’t think anyone knows enough to accurately claim a system based on lookup tables does or does not have subjective experience.

Can be simulated? That’s a complete unknown, and it’s highly likely we can’t simulate all of the physics going on in a single brain - even with proper computing power.

Which constraints are you aware of?

Sure: a shared lookup table that comprises the location of every particle in the simulation, plus a calculation of their interactions and thus their next positions (if you want the lookup table to actually stay accurate over time). But at that point you have really just created a particle-level simulation, which is a far cry from what most people would consider a lookup table.

Really, like loop quantum gravity? That theory isn’t having any problems?

If you can provide a proof or a cite that states that any continuous physical system can be simulated perfectly with an alternate discrete system, then I will be convinced.

My non-expert intuition says that has not been proven.

I’ve just lost some respect for your arguments, from the wiki:
“as it only applies to signals that are sampled for infinite time; any time-limited x(t) cannot be perfectly bandlimited”

and
“In practice, neither of the two statements of the sampling theorem described above can be completely satisfied, and neither can the reconstruction formula be precisely implemented.”

So, you’ve just responded to what I think is a very reasonable concern about continuous vs discrete systems, and you state that it’s not a problem and provide a cite that says on its very own page that, basically, it’s a great theorem that unfortunately can’t be implemented in practice due to real-world constraints.

Not good.

Yes you did address this, I missed that.

Yes but it was the same conversation and the same conditions applied - we don’t purge all variables and start over with every new post.

Here it is again:

  1. “If the receiver had the same (as in exact) internal structures and we communicated the information properly, like direct stimulation of neurons - then yes it’s seems communicable.”

  2. “And every single one of us has a unique language (internal brain structure/states) - similar but not identical”

  3. “in a receiver that spoke the same language”

In #1 I’ve clearly defined my position regarding the word “same” and the conditions required to communicate.

In #2 I’ve stated that we DON’T have the SAME structure

In #3 I’ve stated again that the receiver must speak the SAME language.
SAME = identical internal structure
SIMILAR = not identical
I get that you probably didn’t notice that. When I used the word SAME, I assumed you were following me. I get that; it’s not obvious and you aren’t in my head. But it is far better to just ask for clarification than to accuse me of changing my story, not reading carefully, or not having good intentions.

Because simple examples without a definition of communication won’t shed light on whether the real problem with communicating experience is merely that we don’t have the proper internal structure to read that state and then the proper internal structure on the receiving side to modify or arrive at that state.

If communication is merely the act of altering the target’s computational system by some quantified delta, and we could do that with qualia if we had the proper structure that allowed us to do it, then there is no physical problem.

Yes. You have just mapped experience to knowledge, the experience of seeing (with eyes or experiencing with sound) “2x3” and the knowledge that we call the answer “6” in this case.

Similar to mapping the visual of a bird to the word “bird” and the sound of a car to the word “car”.

To say it’s fundamentally impossible is also to rule out the possibility that it is merely due to a lack of the internal mechanisms that would allow it.

That’s why I’m focused on exploring the definitions of communication: to determine whether it really is fundamentally impossible, or merely impossible because humans lack some physical construct that would allow it.
I’m out of time, I will come back to the rest of your post later.

And, as I’ve said multiple times, I understand the point you are making about the difference between the experience of red and communicating other stuff, but without exploring in detail the underlying issues, I can’t come to a conclusion.

Shall we look at the thread of conversation then? Here it is:

Post 256:

[QUOTE=RaftPeople]
And every single one of us has a unique language (internal brain structure/states) - similar but not identical
[/quote]

Post 257, with post 256 quoted:

Post 267, with post 257 quoted:

[QUOTE=RaftPeople]
If there are 2 identical brains (after the communication) - how can you just say “suppose that B sees blue”?
[/quote]

Note that in the above post you are quoting a post of mine that is clearly not meant to describe two identical brains, and was in fact responding to a post of yours that provided a context for discussion of two similar but non-identical brains. Yet in the above post you falsely attribute to me a statement about 2 identical brains.

It’s not the fact that you made a simple and easily forgivable and easily forgotten mistake that bugs me, it’s your continued inability to admit you are wrong. I’d like to say much more, but I will be breaking the rules of this forum. But that’s it, I think I’ve had enough here. I don’t want to cause a scene.

iamnotbatman, would you mind addressing two things?

  1. Is there a logical proof that qualia are consistent within a human mind? I.e., how do you know that the way you perceive ‘red’ today is not the way you perceived ‘blue’ yesterday?

  2. Is it necessary to have identical subjective experiences as a result of communication? So if I can give you enough information to determine that we see ‘red’ differently, does that undo some of the unresolved questions about qualia? Or as another aspect of it, is it possible for anything but identical minds to have the same subjective experience? (With the possibility that non-identical minds can differ in some ways that do not affect the subjective experience)

Several fallacies here. To begin with, we routinely build machines that are so complex, no individual mind can comprehend them in totality. It takes teams of engineers to design an aircraft carrier or a microprocessor. We can work together to create items that are far too complex for any one mind to conceive.

There is no logical limit to this. As yet no aircraft carrier or microchip has exceeded the human brain in complexity, but there is no reason we can’t.

As you, yourself, said: we can think up and create machines of infinite complexity. You’re conceding the very point you stepped in to deny!

me:
"I think a key item is that both parties must be “speaking” the same language. And every single one of us has a unique language (internal brain structure/states) - similar but not identical.

If we skipped the rods and cones and went further into the brain and stimulated specific neurons in a receiver that spoke the same language, then I think communication is possible."
you:
“I do not think so. I think you are defining “speak the same language” in a way that is tautological. Suppose A wants to communicate to B that he sees the qualia red. A does this by stimulating region X of the brain of B, a region which A believes should induce in B the qualia red. But suppose that B sees blue rather than red(*).”

  1. I clearly just set the stage for communicating between identical brains via stimulating neurons.

  2. You responded with a neuron-stimulation post immediately after WITHOUT clarifying that the identical brains I just introduced were no longer part of the conversation.

  3. Because of that I assumed you were working off the same idea and were trying to show me that even then blue could be the result.

There is no need to get emotional, just ask for clarification and move on.

An old philosophy prof once tried to convince his class that mental telepathy was impossible. He said, “If I transmit my thoughts to your mind…they are no longer my thoughts, but your thoughts!”

I objected immediately. If that’s so…then communication is impossible!

But then I bumped into someone whose views were even more extreme: he holds that communication is impossible.

More specifically: the classical “pipeline” model of communication is: I have an idea. I encode the idea into a signal. (Speech, writing, whatever.) It is transmitted to you. You decode it into an idea that is now in your mind.

But all this talk, here, about qualia and personal internal languages suggests that this process is flawed.

(The solution, per many, is to retreat to the “behavioral” model of communication. If I say to you, “Please bring me the red pencil,” and you do, in fact, bring me the red pencil, then communication, at a behavioral level, has been accomplished. I don’t know – nor ever need to know – what the pencil looked like to you, nor do you need to know what it looked like to me. We conduct transactions in the real world, and are satisfied by one another’s behavior.)

But what about your idea of direct stimulation of neurons, to trigger a kind of meta-receipt of ideas? Printing directly on to the other person’s brain, thus, by force (so to speak) giving him an idea that exactly corresponds to the idea in the sender’s brain, complete with all the connotations, associations, and qualia that were in the sender’s brain. Is that possible?

My concern is that our individual associations and connotations are spread so widely throughout the brain, that, for this to succeed, you would have to over-write huge chunks of my brain – perhaps the whole thing! You’d have to overcome my own, current, innate associations, lest these compete with the sensations you wish to convey.

(To a degree, this works with ordinary languages. Someone raised to speak – and think – in Mandarin will have many very ordinary associations and connotations that are different from an English speaker. By learning Mandarin, the English speaker adds many of these associations to his thoughts, and thus picks up, not merely the vocabulary and grammar, but the “mind-view” of the language.)

I think, in practice, you win this round, and that even non-identical brains can be coerced, electronically, to have identical sensations. As much as I respect iamnotbatman, I think that your notions are not tautological.

Of course, “pipeline” communication actually does work – all this typing we’re doing is not just “white noise!” – because most human brains are very close to identical. There is such a close conformity between us that we can, meaningfully, talk about subjective personal experiences with sympathy and empathy. As social animals, we are highly evolved to function in a “meeting of minds.”

I appreciate the post, but I probably wouldn’t use the word “win” (my goal is understanding, and if that means I accept it’s tautological because that is correct, I am fine with that; I just want to get it right).

These are the thoughts I currently have:

  1. The identical brain case sure seems to be communication

  2. But it is clearly a special case. Is it possible there exist these types of special cases, some types of information that can only be communicated via identical structures? Does the special case help us with the general case in any way?

  3. What exactly is going on with communication of other types of information? Is there some mathematical notion in which we can say the two brains now contain the same information? Is there some subset of the N-dimensional space representing our knowledge that can be mathematically extracted and proven to exist in both brains?

  4. If the mathematical notion applies, can sensory experience be proved to have been communicated in this same manner even with non-identical brains?

It is indeed. But the initial question may have been intended as ‘communication by description of the subjective experience’ as opposed to ‘communication by description of structure which could be used to reproduce subjective experience’.

That’s what I’m trying to figure out myself. I don’t think we can describe ‘red’, simply because of its low-level, atomic, comparatively static structure. We don’t form it out of other components. It’s a primitive element that can’t be subdivided externally.

I don’t know. But other types of information are conveyed not truly by a common language, but through a kind of common translator. Since language is often non-specific and each individual has individualized meaning and intent in language, communication of this sort is often a cyclic process of confirmation and clarification, yet possibly never good enough to duplicate a picture to the level of detail that can be done through numeric modeling. i.e., just how red is that rose? We don’t need to communicate the subjective experience of ‘red’ to attempt to answer that, but we still can’t ever quantify it.
[QUOTE=RaftPeople]
4) If the mathematical notion applies, can sensory experience be proved to have been communicated in this same manner even with non-identical brains?
[/QUOTE]

Yes. Obviously. At least to me. I think this is the ground that has to be hashed out: through analysis, can we determine whether two people see red in the same way? It’s not a question of A experiencing red as B does, but of determining whether there is an equivalence between their experiences without experiencing them. It’s probably a lot easier in application to find a non-equivalence than complete equivalence, but logically it shouldn’t matter.

You’re a gentleman, and I was out of line. There are no winners in good science!

If the experiment were possible, I think the discriminant would be this: can qualia be communicated successfully, via the kind of neuron-writing you described, without the two brains being synchronized to the point of identity?

If it could be communicated successfully, between one guy and another guy, without the two guys becoming identical, then we have demonstrated that qualia can be communicated, and thus have some sort of objective existence.

If it can’t be communicated without making the two brains identical, then the conclusion would have to be that qualia are intrinsically subjective. For you to know what I feel…you would have to be me!

My opinion is for the former. Our brains are already so very similar, that we can communicate remarkably personal emotions and sensations. A good novelist, or a good poet, can make us feel emotions that aren’t original or innate within us. A good movie-maker has even more power, by controlling images, timing, music, etc.

Movie-making is pretty close to what you describe, i.e. direct neuronal writing!

This is why I like the “behavioral” model; it politely cops out, and simply says, “Here’s what we can objectively observe.” I say, “bring me the red pencil,” and you do, and the model is satisfied.

But I think the “pipeline” model is valid, because our brains (most of us) are so closely isomorphic. The model is so successful, it can be taken as a symptom of actual mental or cerebral dysfunction when someone can’t engage in “pipeline” conversation. This is why autism is so useful (ick!) in these conversations, as well as the horrid examples Oliver Sacks provides us with.

Stephen Jay Gould, in his essay “What If Anything Is a Zebra” notes the usefulness of teratology. We learn a lot from things that have gone (horribly) wrong.

I didn’t mean to shut you up via appeal to authority (though I do lack the self-confidence to only seek the fault in everybody else’s thinking), merely to point out that your intuition in this case diverges from that of most other people. To me, it boils down to something like: What does it feel like to be a lookup table? It seems that both you and RaftPeople have no trouble imagining that it does feel like something, while to me, this seems utterly unimaginable.

Exactly, and that’s what zombie arguments do. Compare: If the Earth were flat, there’d be a waterfall; there’s no waterfall. Why’s that? -> The Earth is round!, and: If zombies are possible, there’s no necessity for subjective states; there are subjective states. Why’s that? -> ??

How could something of the form: ‘see red’ -> ‘say “red”’, ‘see blue’ -> ‘say “blue”’, etc., have any kind of experience at all? It seems to me, if you want to maintain that it does, then you ought to look to panpsychism, and claim everything has phenomenal experience; a rock seems to have just as rich an inner life to me (after all, it can, like everything, be recast into a lookup table that details its responses to observation and other probes).

All the chemistry and biology going on in a brain is strictly deterministic, and even if you insist on taking the quantum level into account, it’s possible (though only inefficiently) to simulate it, given enough processing power.

Inability to compute the busy beaver function, or to solve the halting problem, for one – meaning that the brain is the computational equivalent of any other ordinary computer, and thus, can be treated as one.

Anything that can be computed is a computable function; and a function is ultimately nothing but a map that associates elements of one set with elements of another – a lookup table.
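A minimal sketch of that equivalence (illustrative Python of my own; the function `f` is an arbitrary stand-in for any input/output behaviour): over any finite domain, a computed function and its precomputed table are behaviourally indistinguishable, even though the table does no "computing" when it is read.

```python
def f(n):
    # An arbitrary computation standing in for any stimulus->response
    # behaviour: here, the sum of the decimal digits of n squared.
    return sum(int(d) for d in str(n * n))

# Replace the computation by its extension: a precomputed lookup table
# mapping each element of the (finite) input set to its output.
table = {n: f(n) for n in range(1000)}

# On the tabulated domain, table lookup and live computation agree
# exactly; an outside observer probing inputs cannot tell them apart.
assert all(table[n] == f(n) for n in range(1000))
```

This is exactly the pressure point of the lookup-table argument in the thread: extensionally the two are the same function, so any claimed difference in subjective experience would have to hang on something other than input/output behaviour.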

I’m not an expert, but I don’t think this has really anything to do with the topic; loop variables provide a quantization, not a straightforward discretization, of a continuum theory. It’s true that certain quantities take on discrete spectra – such as the area and volume operators – but this isn’t really the same as a naively discrete space. Also, I don’t think the major problems of LQG stem from its discreteness; indeed, that’s the feature that many people think may help to get rid of the singularities and divergences in general relativity.

What I’ve said is that analogue and digital computers are equivalent – which is certainly true, as both are computationally universal, and each can thus simulate the other. And any given process can be seen as a computation; so a continuum process can be seen as an analogue computation, which can be simulated on a digital computer. Of course, this assumes that one only needs finite (but arbitrarily high) precision; otherwise, one could implement physical hypercomputation by building something called a ‘real computer’, i.e. one that can store and manipulate arbitrary real numbers (and would thus in a sense have infinite capacity). This is generally thought to be a supertask, and impossible in the physical world. (For instance, such a real computer would enable one to violate the second law, by building a Maxwell demon that never needed to clear his memory and thus, produce entropy.)

The point was just that from a digital signal, an analogue one can be reconstructed arbitrarily well – beyond, for instance, fluctuations imposed by thermal or even quantum noise, or beyond any other boundary of accuracy you care to set. ‘Infinite’ precision, to the extent that’s even meaningful, isn’t possible – but it’s also not possible in the physical world, the laws of thermodynamics playing their usual spoilsport role.

That’s OK. We could be wrong. This is one of those cases where the authorities don’t have much in the way of answers, though. I think **Raft** and I are approaching this from an implementation aspect, where we don’t see exactly how it works so much as how it could conceivably work.

Yes to what Tri said. I tend to make the assumption that physicalism is correct and whatever we are experiencing is just a byproduct of the computation.

Ok, I see what you are saying (ignoring quantum random effects).

I will respond to rest of post later.

But can you explain this in any way? (Or is it incommunicable? :p) What is the subjective experience of a lookup table, say if I use it? Or some mechanical process? It just seems utterly incomprehensible to me how there could be any. There’s just no impression of redness associated with the mere act of producing a reaction to the stimulus of red light – say if a camera measures the light frequency, then refers to its internal memory, and displays the word ‘red’ on the screen. In none of those steps is there anything it is like to go through them – they’re as mechanical as, say, a stone rolling down a hill: from its initial condition, through some evolution, a final state is reached. Does this stone have a subjective experience, too? Does an electron circling an atom (and what does it experience when its wave function collapses)?

I know such things have been proposed, see Whiteheadianism for instance, but these things seem irreducibly dualist to me – as there are no physical processes going on in a stone to support the subjective states, they must be relegated to some non-physical realm; the same is the case with lookup tables: there’s no change in the lookup table depending on whether or not it is read, so how can it have different experiences, and have these experiences be physically grounded?

Naturally your perspective is completely free of any and all mythologies, of whatever sort.

Sorry to jump on you like this, but I find such statements to be rather condescending & presumptuous, at best (it’s always the other guy who is biased, while I’m as squeaky clean as a whistle). This does not help your cause. Anyhoo, back to the main thread…

David Chalmers (one who fully acknowledges just how “hard” the hard problem is, and whose name I am amazed to discover has yet to be dropped in this thread) addresses this directly, when he writes:

[QUOTE=David Chalmers]
Any account given in purely physical terms will suffer from the same problem. It will ultimately be given in terms of the structural and dynamical properties of physical processes, and no matter how sophisticated such an account is, it will yield only more structure and dynamics. While this is enough to handle most natural phenomena, the problem of consciousness goes beyond any problem about the explanation of structure and function, so a new sort of explanation is needed.

It might be supposed that there could eventually be a reductive explanatory technique that explained something other than structure and function, but it is very hard to see how this could be possible, given that the laws of physics are ultimately cast in terms of structure and dynamics. The existence of consciousness will always be a further fact relative to structural and dynamic facts, and so will always be unexplained by a physical account.
[/QUOTE]

And I’ll just add, “will always be unexplained” no matter how finely you slice and dice your dynamics into smaller physical bits which some grand (future) intelligence will finally be able to parse. Unless you take consciousness seriously, on its own terms, any purely physical theory you posit to explain it will always be incomplete.

Just because we have a difficult time seeing how a physical system can produce consciousness, that doesn’t allow us to conclude that there must be more than the physical, it merely allows us to question it.

Your statement “any purely physical theory you posit to explain it will always be incomplete” sounds like you are assuming the answer to be NOT X, despite not really knowing for sure or being able to prove NOT X.