Is the human brain the most complex object in the universe?

We can go down much further, if you like. There’s a maximum to the amount of information in any part of space-time (given by the various kinds of holographic bounds); indeed, as Seth Lloyd has shown, there’s a maximum amount of computation that goes on in any space-time volume, and that amount is saturated when the volume is a black hole. Since we’re not black holes, the computation going on inside us, or our brains, etc., is less than that, thus finite, and thus equivalent to a finite lookup table: a functionally equivalent system that we could substitute for the given part of space-time without anybody (even physics) noticing.
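Just to make that concrete, here’s a minimal sketch (my own toy example, nothing from Lloyd) of what a ‘functionally equivalent lookup table’ means: a finite table over (state, stimulus) pairs that returns exactly the responses the original system would, so no experiment could tell the two apart. The states and stimuli below are invented placeholders.

```python
# Toy illustration: a finite system's entire behaviour tabulated as a
# mapping (internal state, stimulus) -> (next state, response).
# All names here ("s0", "poke", ...) are made-up placeholders.
lookup_table = {
    ("s0", "poke"):        ("s1", "twitch"),
    ("s0", "shine_light"): ("s0", "reflect"),
    ("s1", "poke"):        ("s0", "twitch harder"),
    ("s1", "shine_light"): ("s1", "reflect"),
}

def probe(state, stimulus):
    """Return (next_state, response), exactly as the original system would."""
    return lookup_table[(state, stimulus)]

# Any experiment is just a sequence of probes; since every response matches,
# the table is physically indistinguishable from the system it replaces.
state = "s0"
for stimulus in ["poke", "shine_light", "poke"]:
    state, response = probe(state, stimulus)
    print(stimulus, "->", response)
```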

You might say that quantum indeterminism somehow ruins this picture, as any computation (and even more obviously, every lookup table) is deterministic, but there are deterministic interpretations of quantum mechanics that give all the same experimental predictions (Bohmian mechanics, or many-worlds at the cost of non-uniqueness), and even under the standard interpretation, we can always add quantum systems (purifications) to the system under discussion such that the new system as a whole is pure, its evolution is unitary, and thus, deterministic. So this doesn’t help.

But the point is really that we need not go down to this level at all. We can just use a lookup table on the level of the entire person. Of course, one could then decide to probe a deeper level, which would require us to consider a more fine-grained level of simulation. But this does not enter any deeper into the ‘inner’ life of the simulated person: it just pushes the assumed boundary between the outer and inner world back a few degrees. We can continue pushing this boundary, but we never get to any inner world, even once we’re at the level of simulating all the microscopic degrees of freedom; so this physically exhaustive nesting of simulations, this outward deepening of the border, never reaches a point where it includes subjective experiences. But then, these must be extra-physical (if they exist, that is).

Michael L. Brown, professor of mathematics and statistics at Simmons College, claims that “2, 3, 5, 7, 11 and so on, expressed in dots, pulses, whatever” is “a timeless and immutable pattern that inanimate nature could never transmit”. What is the difference between “inanimate nature” and “consciousness”?

BTW: Any dismissal of this area on my part is simply to maintain the independent approach I’m taking at the moment, i.e., not thinking inside a box. If I’m left with questions and contradictions, of course I have to look at all attempts to resolve or better define this topic. So I’m not passing judgement on the work of anyone else.

This is the area we have to clear up. I think this line of reasoning is similar to **iamnotbatman’s** ‘identical minds’ tautology. I understand your reasoning that, based on results, the lookup table can appear to produce the same results as a conscious mind, and therefore the two would be indistinguishable. But they are distinguishable by looking at the mechanism: one is a list, and the other is a more complex process. That may not make a substantive difference in your approach to examining this. So I’m going to rethink this in those terms and follow up with some more. I think the key point here is that the complexity in a lookup table lies not in the mechanism, but in the relationship of the items in the table and the complexity of the index. Even if the table is structurally simple, it doesn’t populate itself, and there is still a high degree of ordering in such a table, which means it cannot be described as simple.

And, possibly, it captures the very function of consciousness within its mapping/pre-calculations.

To say the lookup table doesn’t “feel” seems to be trying to imply the lookup table itself must be external to its data and have a “feel” attribute, as opposed to the “feel” of consciousness being embedded within the function that the data of the table represents.

Well, it ought not to make a difference in any physicalist approach. As I’ve said, physics effectively only looks at how a system kicks back when it is prodded; so every system that kicks back the same way ‘looks the same’ to physics. In saying that there might be some ‘internal’ difference, you’re, to my understanding, effectively saying that the physics doesn’t determine all the facts about the system, which would mean that qualia exist.

I’ve perhaps harped on about lookup tables too much. But the argument works equally well for any other formalization of computation (because they’re all equivalent, i.e. fundamentally the same kind of thing). In particular, if there’s a lookup table that does a certain job, and some more complex process accomplishing the same job, there’s a process (implemented, for example, through a lookup table) that translates the lookup table into the more complicated process without any additional information – so that there’s not really anything more going on in the more complex process after all; it can be viewed as simply a compressed version of the lookup table.
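As a toy illustration of what I mean by ‘compressed version’ (my own example, using bit parity only because it’s short): the explicit table and the cleverer procedure compute exactly the same function, so nothing is going on in the latter that isn’t already fixed by the former.

```python
# Explicit lookup table: the parity (0 or 1) of every 8-bit value, spelled out.
parity_table = {n: bin(n).count("1") % 2 for n in range(256)}

# A 'more complex process' for the same job: folds the bits onto each other.
def parity_process(n):
    n ^= n >> 4
    n ^= n >> 2
    n ^= n >> 1
    return n & 1

# Functionally identical: the process is just a compressed form of the table.
assert all(parity_table[n] == parity_process(n) for n in range(256))
print("table and process agree on all 256 inputs")
```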

The same is true for any kind of code arrived at through different means than a lookup table, which usually carries the implication that someone just wrote it (though that’s not necessarily true: one could conceive of an ‘evolving lookup table’, initially populated randomly, whose selection criterion is how well it is capable of convincing conscious entities that it is a conscious entity itself – it’d take a little while, but I see no fundamental reason why it shouldn’t work).
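For what it’s worth, here’s a crude sketch of such an ‘evolving lookup table’ (entirely hypothetical; the fitness function below just matches a few target exchanges, standing in for the ‘convince a conscious judge’ criterion of the thought experiment):

```python
import random

# Hypothetical stand-ins: a few prompts, the replies a judge would accept,
# and the pool of possible replies a table can be populated with.
PROMPTS = ["hello", "are you conscious?", "what is 2+2?"]
TARGETS = {"hello": "hi", "are you conscious?": "yes", "what is 2+2?": "4"}
REPLIES = ["hi", "yes", "no", "4", "blue"]

def random_table():
    # Initially populated at random, as in the thought experiment.
    return {p: random.choice(REPLIES) for p in PROMPTS}

def fitness(table):
    # Stand-in selection criterion: how many replies would 'convince' a judge.
    return sum(table[p] == TARGETS[p] for p in PROMPTS)

def mutate(table):
    child = dict(table)
    child[random.choice(PROMPTS)] = random.choice(REPLIES)
    return child

population = [random_table() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    # keep the five best tables, refill the rest with mutated copies of them
    population = population[:5] + [mutate(random.choice(population[:5]))
                                   for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best), best)
```

Scaled up from three prompts to every possible exchange, the selection pressure alone would fill in the table – nobody would ever have to write its contents by hand.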

In order for physicalism to be true, however, there should be distinct physical states associated with distinct experiences – distinct ‘neural correlates’, if you will. But a lookup table lacks such states, so if it were conscious, that ought to be a strong argument against the existence of a physical basis for conscious experience.

Yes, that is a good way of putting it. Whether events reel out as a long continuous thread, or loop back and modify themselves in operation, the results are reflected in the sequence of current states. And subjective experience, consciousness, qualia, etc., are a product of those sequential states, however they are arrived at.

It seems like you have been presenting the ‘lookup table’ as a paradoxical situation because it can produce the same results as the more complex systems usually associated with consciousness, subjective experience, and the like. I’m not sure if this is what you’re getting at or not though.

Physics doesn’t just poke things though, and does use concepts like entropy. The lookup table works because of the highly ordered nature of its contents. But it is also just an alternate, inefficient means of defining an identical mind to a more complex process. I’ve mentioned before that subjective experience cannot be represented in a static state; that would just be a reference to such a thing, words used to describe it, not the experience itself, which is dynamic in nature. To actually simulate subjective experience in the table would require a progression of lookups (which could be sequential) to produce the active nature of ‘experience’ or ‘consciousness’, but it could do that. But all of that is contingent on the processing to preload the data in the table. That would be recognizable by examination of the structure itself. So I don’t see a paradox here. The lookup table simulates subjective experience as a set of lookups. It’s a player-piano type of operation, but, like a player piano, it reproduces everything the original player has done. Its apparent lack of musical talent and skill only comes from an examination of the mechanism and the realization that its results are merely a duplicate of the results produced by a more complex process.

The more interesting point is your question about abstract feelings. “What does it feel like to be a computer?” Ask a computer. If it has the necessary facilities, it could answer. The trouble is you may not be able to understand the answer. Now try answering the question “What does it feel like to be human?” for a computer. What would you tell it? Even when humans communicate their feelings, it is done with great inaccuracy. The human approach is to assume common experiences and use language to associate those in a way which may convey the essence of a feeling. AI will do the same things.

What is necessary for AI to have feelings? First, it needs to be able to reflect on its active processing. Computers can do that very well now. Second, it has to be able to analyze that active processing at a meta-level and reduce it to a state. That’s pretty much it. It’s how humans have feelings: by reflection on their state in comparison to other ideal states. It doesn’t seem much like a feeling in a simple machine because it lacks the complexity of human feelings. A computer which is affected by its own reflection, creates new and unpredictable processes as it operates, and is trying to condense many states defined from different processes into a single state description will produce something as complex as a human feeling.
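Here’s a deliberately simple sketch of that mechanism as I read it (a toy of my own, not a model of anything real): the system samples a few measurements of its own processing, condenses them into one state label at a meta-level, and compares that against a reference state – a glorified thermostat, basically.

```python
# Toy self-reflection: condense several internal measurements into one state
# label by comparing them against an 'ideal' reference state. The metrics and
# thresholds are arbitrary placeholders.
IDEAL = {"load": 0.3, "error_rate": 0.0, "queue_length": 2}

def reflect(self_measurements):
    """Reduce the system's view of its own processing to a single state."""
    deviation = sum(abs(self_measurements[k] - IDEAL[k]) for k in IDEAL)
    if deviation < 0.5:
        return "comfortable"
    elif deviation < 3.0:
        return "strained"
    return "overwhelmed"

print(reflect({"load": 0.9, "error_rate": 0.2, "queue_length": 7}))
# -> "overwhelmed": the condensed state of a machine having a rough day
```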

So what we have in a human are multiple complex interacting processes that are self-reflective and self-adapting. They produce a result which cannot be readily predicted; to be reduced to a state it must be compared to experienced ideals, and to be communicated it must be referenced through common knowledge.

Now that’s how I see it, and I’m interested in seeing how that conflicts with other approaches. The only part of human consciousness I find difficult to emulate in a machine is the ‘intelligence’ part: the ability to abstractly apply knowledge and to develop and improve methodologies. It’s easy to see how a computer could learn a language by giving it a known language and specific algorithms for deriving the meaning of words and extracting the rules of grammar from observation of examples of another language. But I don’t see yet how a machine does what a human does: learning a language from a nearly empty slate. I’m sure we have some hard-wiring related to primitive aspects of language, but much of it would require the development of high-level associations produced only from experience. At least in the human mind, there is a very complex underlying algorithm from which evolves the high level of processing we call human thought.

I think a lot of that will simply “fall out” if we ever develop a highly metaphoric image-driven language for them to use. Something connotative, with lots of “definition two” and “definition three” meanings.

When machines, equipped with this kind of language, start making awful puns, we’ll know they’re halfway to intelligence.

I think so too, but I can’t get a grip on the level of complexity necessary. I can see the other aspects discussed here starting with simpler implementations. Real intelligence maybe has to evolve instead of being designed. The human brain might start with some ‘core’ processes that end up getting refined and applied to increasingly more difficult problems, while AI might emerge instead from a plethora of specialized functions. I don’t expect an AI machine to be human though, just demonstrating the same type of capacities in a different environment.

I don’t think I’m following this point.

This makes sense:
“Distinct physical state associated with distinct experience”

This doesn’t:
“But a lookup table lacks such states”
If a lookup table represents the same computation that a collection of neurons represents - why would you say one has physical states and one doesn’t? Depending on your definition of “physical state”, it seems the lookup table has physical states also.

Either we are talking purely at a logical level when comparing the function computed by both the brain and the lookup table, or we are talking purely about the physical representation that both machines use to represent each state. One machine is made of neurons, and the other is made of some other physical combination of matter but with the same number of unique physical states. At both the logical and physical level there would appear to be an equivalence.

Lookup tables are invariant under arbitrary permutations of their entries, so they would have quite a high entropy associated with them (there are many possible ways of writing down a lookup table that produces the same behaviour, i.e. many microstates to a macrostate, and entropy counts how many microstates there are to a macrostate). But I don’t think that’s relevant.
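To make that counting explicit (just my gloss on the claim, with $N$ standing for the number of table entries): if every reordering of the $N$ entries counts as a distinct microstate realizing the same input–output map, then

$$W = N!, \qquad S = k_B \ln W = k_B \ln N! \approx k_B\,(N \ln N - N)$$

by Stirling’s approximation – large for any table big enough to imitate a person, but, as said, beside the point.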

If I gave you a system that in all respects reacted like a stone, could you devise an experiment that could tell it apart from an actual stone? Definitionally, you can’t: because an experiment is just observing the reactions of some system, and since these are equivalent to that of a stone by stipulation, all experiments will just tell you that this is, in fact, a stone. That’s what I mean when I say that physics treats functionally equivalent systems as identical. So if you have two functionally equivalent systems, which nevertheless differ in some respect (say, regarding their subjective experience), then physics does not determine all the facts about these systems. This is the essence of the argument.

The trouble with this is that you don’t need to understand the content of a lookup table to use it. Consider an implementation similar to Searle’s Chinese room: some dude has in his hands the lookup table, and gets information from the ‘outside world’ by some means – say, encoded as binary strings. All he has to do is look up the string in the table and output the appropriate ‘reaction string’. All of this is done without any reference to the content, to the meaning of the string – so whether the string refers to the seeing of blue or the hearing of Beethoven’s fifth is wholly immaterial. But if this is of no consequence, then how can these both give rise to clear, and clearly distinct, subjective experiences? There’s nothing anywhere in the system that even knows what each experience should be about; not the lookup table, and not the poor guy using it. And yet, of course, the aboutness is the essential phenomenon of subjective experience.
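In code, the whole ‘room’ amounts to about this much (a bare sketch; the bit strings and replies are arbitrary stand-ins):

```python
# The rule book the man uses: opaque binary strings in, opaque strings out.
rule_book = {
    "0110": "1001",  # might encode 'seeing blue'; the lookup neither knows nor cares
    "1010": "0111",  # might encode 'hearing Beethoven's fifth'; likewise
}

def the_man(incoming_string):
    # Pure symbol shuffling: find the string, hand back the listed reply.
    # Nothing here refers to the content of any experience.
    return rule_book[incoming_string]

print(the_man("0110"))
print(the_man("1010"))
```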

Also, you’ve referred to subjective experience as a kind of process – which I think is probably accurate, but I’m not sure it takes anything away from my points. Each process is a sequence of states, and to lead to subjective experience, these states must differ from states whose occurrence in sequence does not lead to subjective experience – otherwise, if the same states in one case lead to experience, and not in the other, then again, the physical basis would be the same, but the phenomenal content different. So I think it’s justifiable to talk about ‘subjective states’.

Again, piano playing is wholly defined by its functionality (hit key, produce sound); that conscious experience isn’t is exactly the problem.

With this, you completely dodge the problem. The essential question is: Is it (or can it be) like anything to be a computer? This is the hard problem to solve; your referral to asking a computer is thus just an evasion, a refusal to engage the problem.

It may be how we have feelings, but it doesn’t capture – indeed, doesn’t even talk about – how it is that it seems like anything to us to have feelings. Even a thermostat has these kinds of feelings; they are nothing more than a comparison of the present state with some reference state.

Most of these things are considered to be solved, at least on the theoretical level. Systems like AIXI can make provably optimal decisions faced with arbitrary problems; programs exist that can deduce physical laws from experimental data. Things like PAC (‘probably approximately correct’) learning and Solomonoff induction allow the finding of regularities and prediction-making in arbitrary situations; the list goes on.

More than that, robots are even beginning to be able to create their own language; learning a language already known to one of the parties is a subset of this problem.

The lookup table only has one state – there are no changes made upon it; it’s the same whether the experience is ‘seeing blue’ or ‘hearing Beethoven’s fifth’, which is not the case in any being having subjective states, if those states are presumed to have a physical basis. Of course, you might argue that the physically different states correspond to different states of whatever mechanism is used to read out the table, but 1) the setup is independent of that mechanism, so extremely simple and extremely complicated mechanisms ought to lead to the same number of states (which of course is possible, but does not seem to carry any necessity), and 2) consider the Chinese-room-like setup above: nothing exists there whose subjective states could have the content of whatever is being experienced, because the man – the lookup mechanism – does not know what is being experienced, so none of his states have any reference to the content of experience.

His states are ‘used up’ by his own subjective experience – of manipulating strings, and perhaps of wondering how he landed such a shitty job – and the lookup table does not confer additional states; so either you’re saying that there can somehow be two sets of subjective states associated with the man – which again would be in conflict with the assumption that a subjective state must have a physical basis – or there’s no room for additional subjective states.

You might want to argue that the states of the system ‘man + lookup table’ are distinct from the states of the system ‘man’ alone, and that thus, the addition of the lookup table carries with it a whole man’s worth of extra states, yielding plenty of room for a physical basis of subjective states (of the system ‘man + lookup table’), but I don’t think this works – there is no proper system ‘man’ alone with states independent of the lookup table, because each state of the system ‘man + lookup table’ is of the form: ‘man looking up A + lookup table’, and there are no states of the system ‘man’ alone of the form ‘man looking up A’ (because this already presumes the lookup table); so these are not two distinct sets of states, but merely the same set counted twice.

I meant to post this earlier, but in case anybody’s interested, the 4th online consciousness conference is being held now, and will run until March 2nd. I haven’t had the time to follow it as much as I’d like to, but from the titles/speakers, there seem to be some interesting talks (and I’ve spotted Chalmers responding in the comments a few times).

:smiley:

“If you compare the mind to a computer, both the mind and the computer store information, both the mind and the computer input information, both the mind and the computer process information, and both the mind and the computer output information - but no one assumes that the computer is aware of what it’s doing.”
If I may revise this:

“If you compare the brain to a computer, both the brain and the computer store information, both the brain and the computer input information, both the brain and the computer process information, and both the brain and the computer output information - but no one assumes that the computer is aware of what it’s doing.”
And if I may extrapolate from this:

“If you compare the computer to the universe, both the computer and the universe store information, both the computer and the universe input information, both the computer and the universe process information, and both the computer and the universe output information - but no one assumes that the universe is aware of what it’s doing.”

That is a dangerous assumption. Why? Why not?