Is the human brain the most complex object in the universe?

Must consciousness and experience exist at the micro level for it to exist at the macro level?

Apparently not, for, while you are conscious, no one would claim that any of your individual neurons are.

For my part, I would strenuously reject the quote from David Chalmers. It invites a kind of spiritualist duality, without offering the slightest ounce of evidence, either scientific or philosophical. It’s an appeal to awe. “My consciousness feels special, therefore it cannot be explained by any ordinary means.”

I will get back to this; just pressed for time at the moment.

Look, it’s kind of tedious to add “of course I could be wrong” at the end of every sentence. And you could just say the same about the other side of the argument. Clearly no one has the answer here.

No (or at any rate I don’t believe so), but that’s what seems to follow from the ‘lookup tables can have subjective experience’-strain of thought – or you’d have to supply a criterion which determines what kind of lookup table is conscious, i.e. why one supplying the reactions of a human being to outside stimuli is, while one supplying the reactions of a stone, or an electron, to outward stimuli isn’t. But of course, this criterion is precisely the hard problem!

I think the quote doesn’t really do justice to Chalmers’ position; his presentation here is more apt. :stuck_out_tongue: (An expanded version of the discussion can also be found here.)

More seriously, I think what Chalmers is mostly concerned with is that the explanation of the functions of consciousness doesn’t refer to the subjective experience of consciousness at all. Any given task a conscious being performs can be formulated in some algorithmic form, where receiving certain stimuli prompts the execution of certain actions. But at no point in this algorithm does any instruction like ‘generate a picture of the situation in the mind’s eye’ or ‘experience redness’ appear at all (and even if such instructions did appear, the question of how they could conceivably be performed would still be left open). So the functional formulation of conscious experience, and the physical basis on which it rests (viewing the algorithm as ‘that which actually happens’ in some sense, or at least an isomorphic description thereof), leaves the question of phenomenology wholly unaddressed – it is something extra, something that has to be added to the description, simply because we undeniably experience it. A good paper to read on this, where Chalmers addresses responses to his original ‘hard problem’ formulation, can be found here.

Chalmers in fact (as HMHW indicated) has a much more nuanced position. Yes, he has actually identified two potentially viable dualistic positions – but also a non-materialistic monistic position as well. The first dualism (he calls it Type D dualism) is basically equivalent to Descartes’ interactionism, but the other one (Type E) says that physical states cause phenomenal states, but not the converse.

Ultimately, though, he comes down on the side of Type F Monism: “On this view, phenomenal or protophenomenal properties are located at the fundamental level of physical reality and in a certain sense underlie physical reality itself.”

I find it odd (to say the least – telling, more like) that you are so casually dismissive – indeed presumptuously so – of the man’s views based on one single quote I brought up, apparently never having actually examined the man’s oeuvre in the first place. Odd in the same sense as someone pontificating on spacetime who never mentions Einstein or, nay, claims complete unfamiliarity with his views, as Chalmers is one of the leading lights right now in terms of the hard problem. HMHW’s last link is to Chalmers’ website; I’d urge you to go there and spend some time reading his papers before coming back here.

  1. We don’t have a good mathematical definition of consciousness
  2. Nobody has provided a proof that consciousness can’t be achieved through algorithm/function

Therefore it seems like it’s jumping the gun to say it’s unaddressed. We simply don’t know.
Question that has probably already been addressed by someone in the field, but I’m not up to speed on all of this:
We know that alterations to our physical, chemical and electromagnetic structure consistently and significantly alter conscious experience (e.g. brain damage, birth defects, drugs, magnets used in brain research, neuron activation, etc.) – doesn’t this support the position that consciousness is just a function of brain state?

Note: I’ve just started reading through the paper you linked to, so far pretty interesting.

Sorry, but in this forum, even shallow, naive, simplistic, and pseudo-intellectuals have a right to an opinion. (“And I am that fool!” Gomez Addams.)

And I did say, very carefully, that it was my opinion.

I responded to the quote, as it was presented, for what it said. I thought it was a really bad piece of rotten reasoning. If the bloke has gone on to make it more complete, provided a better context, etc., well, good for him.

After all, how much of my extended reasoning, my publications, etc., did you look into before taking issue with my procedures?

This is a paragraph-based message board, not a formal philosophical conference. Ya gotta do your best in a limited space, otherwise we’d be wading through gigantic walls of text!

I’m not a professional philosopher. I’m a guy, with an opinion. As far as I can tell, that’s true for everyone participating in this thread.

Relax. Get popcorn. The world will little note, nor long remember, what we say here today.

I’m not able to view YouTube, for various technical reasons…

To be brutally candid, I’m not entirely sure I am willing to devote a lot of time to reading the guy, although I won’t promise not to, either…

In my opinion, the discussion is verging upon medieval theology in its unnecessary complexity. I had the joy of reading a bit on that subject, with various kinds of grace – immanent grace, conditional grace, covenant grace, reified grace…oh, it got juicy.

The number of new terms being introduced here is starting to daunt me. What the hell is “Type F Monism?” What does “protophenomenal” mean?

I believe I have the right to have my say in this conversation, even if I am a simpleton and a dullard in contrast. This is the “Straight Dope” message board, and I’m as straight a dope as they come.

In another forum, I’ve been working on helping to enlighten a benighted soul who has severe problems with basic issues of high-school level physics. I have never scolded him, nor derided him, nor impugned his integrity. Very much the opposite: I’ve made every effort to be inclusive, and to provide examples that are accessible.

Isaac Asimov could have made these ideas plain! Can’t we?

I don’t really know how a ‘lookup table’ fits into this discussion. The phrase simply specifies a set of data in some regular order, and implies a minimal process of locating data items in the set. So obviously such a structure lacks sufficient complexity to contain subjective experience, unless there is much more meaning behind those words. So I don’t see any need to address that directly.

What I don’t see in this argument is the alleged enigmatic nature of qualia and subjective experience in general, and here’s why:

When we discuss the color ‘red’, we do know that this is a construct in our brains. The varying wavelengths of light do not have this ‘color’ characteristic that we perceive; it’s a distinction made by the different types of cones in our eyes. Color blind people do not see this distinction. Our brain is stimulated by the cones and we form an internal image based on the input from our eyes (and other senses). In this internal image (which as far as we know is a logical structure), we can distinguish between areas in our view that to our eyes differ only in color. Color seems to be an ‘atomic’ quality. We can’t break it down into sub-components to describe it to others in a way that would let us determine whether I perceive the same color internally for red as you do. That just doesn’t seem mysterious to me, because clearly our brains are not identical. They share some common template, but they grow and experience differently, and I’m pretty sure there are complex dynamic processes at work behind all our conscious observations of our brains. Our inability to reflect on those internal processes makes it impossible to impart the mass of information necessary to communicate a subjective experience as simple as ‘color’. We can’t transfer the internal structure of our minds that gives us our basic perception of ‘red’, either for another brain to simulate our subjective experiences, or to analyze them. But surely they exist, because most of us can perceive the color red.

When we add to the subjective experience of ‘red’ beyond its distinguishing color, we are then accessing the associations ‘red’ has in the rest of our experience. To say that the color ‘red’ has a ‘feel’ to it strongly indicates that we are combining the reflexive response with other information in our brain. I don’t know if we start with an ‘objective’ perception of ‘red’ when we are born that eventually develops additional associations, or if some of those associations are hard-wired in. But we clearly develop associations with stimuli.

Now at this step, all I can do is describe how a machine would experience qualia of a similar nature to a human. Our machine has the ability to reference an internal model of anything that contains components with a characteristic equivalent to ‘color’. Each distinct quality of that characteristic (e.g. ‘red’) has associative links to many experiences involving that characteristic. Analyzing an image that depicts objects colored ‘red’, the associations to all other objects that are red can be derived from that characteristic. Other more complex associations can be derived as well. Some of those are meta-associations – in the case of a color, the wavelengths that color represents, as an example. Others, though, would fit in the category of emotion, a secondary response to referencing the characteristic. And machines can have emotions already. The processor in the computer I’m typing on right now alters its speed to maintain performance without overheating. Secondary conditions can alter the operation of any system.
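
Here’s a very rough sketch, in Python, of the sort of associative structure I’m describing – every name and value in it is just made up for illustration, not any real system:

```python
# Purely illustrative: a tiny 'internal model' in which the color
# characteristic of remembered experiences carries associative links.
experiences = [
    {"object": "apple",       "color": "red"},
    {"object": "fire engine", "color": "red"},
    {"object": "leaf",        "color": "green"},
]

# meta-associations for each color, e.g. the wavelength range it represents (nm)
meta = {"red": (620, 750), "green": (495, 570)}

def associations_for(color):
    """Derive everything linked to a given color characteristic."""
    linked = [e["object"] for e in experiences if e["color"] == color]
    return {"objects": linked, "wavelength_nm": meta.get(color)}

print(associations_for("red"))
# -> {'objects': ['apple', 'fire engine'], 'wavelength_nm': (620, 750)}
```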

I understand that comparing simple machine operations to things as complex as human emotions seems like a leap, but I can already see the extremely varied results that occur in computer systems far less complex than human brains. Modern computer systems interact with users based on individual preferences all the time, lending more and more human characteristics to the machine side, and yet they operate at a very low level of complexity. Future machines that can form their own associative connections will be extremely complex, and could easily display all the characteristics of emotion by prioritizing and selecting the associative paths that are used to produce a response.

It will be much easier to continue, clarify my statements, correct mistakes, and provide further detail based on responses.

Captive audience.

Drugs alter conscious experience. Couldn’t we have altered conscious experience without drugs? Could conscious experience alter conscious experience?

Problem solved. More problems, though, remain.

Yeah, but from your participation in the “My Problems With Relativity” thread, I hope you can sympathize with the difficulty of debating with people who are dismissive of a deeply excavated, nuanced subject, based on a cursory understanding of it. You try to explain things but after a while you just want to scream “physicists (in this case philosophers) are not morons!” It’s like trying to explain a book to someone who, instead of just reading the book, would rather debate the book until, through some kind of pointillist aggregation of won and lost arguments, they feel they understand or reject it. This tedious process isn’t really fair to those who have read the book, but it can be nonetheless rewarding if the opposition comes from a position like “you know, there has got to be a good reason why so many experts in the field hold this position on concept X; what am I missing?” rather than “concept X is silly, and as far as I can tell I am smarter than authorities on the subject, despite having read only bits and pieces of their arguments. Prove me wrong.”

[BTW, this is not intended to be read as directed towards you or anyone else in particular]

No, the point is that we do know – at least in principle – perfectly well how to describe the actions, reactions, and behaviours of a conscious agent based on external stimuli, and in this description (contained, for instance, in a lookup table), nothing that pertains to subjective experience exists.

Think, for instance, about a control system, like a thermostat. It monitors some variables, and, based on their changes, initiates certain actions – if the temperature drops beneath a certain value, it starts heating; if it rises above another value, it stops. I take it we agree that the thermostat has no subjective experience, at least none for which any physical basis exists – its physical states are all ‘used up’, directed to the performance of actions based on external stimuli (so, like the stone or the electron, it has no ‘internal’ states left over to map to states of subjective experience).
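
Written out, such a control system is nothing but a small mapping from inputs to actions – a toy sketch, with arbitrary made-up thresholds:

```python
# A toy thermostat: every state it has is 'used up' in mapping
# temperature readings to heater actions.
def thermostat_step(temperature, heating, low=19.0, high=22.0):
    """Return the heater state for the next time step."""
    if temperature < low:
        return True     # too cold: start heating
    if temperature > high:
        return False    # warm enough: stop heating
    return heating      # otherwise: keep doing what we were doing

# Example trace: temperature falls, then rises again.
state = False
for t in [21.0, 19.5, 18.7, 19.2, 21.5, 22.3]:
    state = thermostat_step(t, state)
    print(t, "->", "heating" if state else "idle")
```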

But every kind of behavior can be mapped to such a system, just a more complicated one. And if you say, well, perhaps for a human, that’s not possible, then consider that for a neuron, it is, or for an atom, or any other sufficiently simple physical system. And from joining such systems together, you only ever get a larger system of the same kind – so if the thermostat is without phenomenal states, then so is a human, or a human-equivalent constructed in the described way.

I’m also not entirely sure I get the gist of your argument. On the one hand, you seem to maintain that there is no good reason to take the qualia arguments seriously – that they’re self-evidently or trivially wrong. On the other hand, your justification seems to be ‘nobody knows anything about it, really’. But you can’t both claim the problem is easy and unsolved!

Of course, and that’s indeed an often-made argument. But this of course doesn’t solve the problem at hand.

Ah, well, then I guess you’ll miss out on Chalmers’ singing voice…

Well, the complexity seems unnecessary from a present-day viewpoint, using present-day concepts, but within its own time, in its own concepts, it wasn’t necessarily. It’s kind of like learning to play a game, and being able to play it well: chess, first explained, may seem utterly trivial – all I have to do is get the other player’s king in a check with no escape? Pshaw! People write entire books about this? But of course, as you play, you start to appreciate the complexity, the depth contained in the simple rules of chess, and maybe even, at some point, compose an elaborate monograph on the details of the details of the Alekhine gambit following the Caro-Kann defense, or some such ‘unnecessarily complex’ and arcane issue.

What’s happened is simply that beyond the rules, you have gained an understanding of the concepts: you no longer think in terms of moves as ways to shuffle pieces across a board with colored squares, you think in terms of strategies, attack plans, feints, and so on; what seemed unnecessarily complex to you only now makes sense, and what seemed trivial is now revealed to be a subtle and delicate issue. Certainly, you can always play chess and enjoy it, but you can’t criticize grandmasters on the basis that they still haven’t found a general mate strategy; it seems so easy after all!

Which makes it all the more puzzling that in this thread, you seem to take his very stance!

That’s just the problem, because everything (every physical thing, at least) we know of can be recast into this form! Ultimately, a physical theory is just a map taking initial conditions to final conditions, which could be laid out in a lookup table. So could any computation – similarly, just a map from bit strings (input) to bit strings (output). Any program can be cast in a form that just looks for the input string on a lookup table, and produces its output based on what it finds there. So functionally, lookup tables are equivalent to every known physical system – yet, as you say, they obviously lack subjective experience! So how can the physical support subjective experience, if it’s ultimately nothing more than a lookup table?

Certainly, you can devise more clever programs that perform some kind of intricate computations, and that are much more efficient in implementing certain functions than a mere lookup table would be – but ultimately, they’re still the same kind of thing, just a compressed version. I mean, in the end, what’s the logic a computer (and we) uses based on? Lookup tables!

Yet, despite the obvious fact that there thus can’t be subjective states supported by the physical, we have them. That’s plenty mystery to me!

How does a ‘feel’ emerge from associations? Computers associate lots of things, apparently without feeling anything in the process (not to speak again of lookup tables, which are nothing but associations between things).

You already presuppose here that there’s some way these associations feel to us. But of course, this is circular, as the question precisely is how anything can feel a certain way to us! If this isn’t presupposed, all the associations in the world are not going to lead to any subjective experience.

How, exactly, is this supposed to lead to subjective experience? A machine can – and typically will – do all of these things without this feeling like anything to it. The processes you describe are not different from retrieving a file from a hard drive, analyzing data with respect to certain markers, monitoring variables, etc. It would be sufficient for the machine to display the color red on its screen, or to say the word ‘red’ through its speakers, but how is that sufficient for it to have the experience of seeing red? And if it is, does a stone then have the subjective experience of feeling warm when it lies in the sun for a while? Certainly, it exhibits all the features you have attributed to your ‘conscious’ machine: mainly, it reacts in a certain way, through expansion, through radiating heat, through accelerated molecular movement – all of the things that make your machine conscious are of the same kind, merely reactions to a certain stimulus. These reactions may include what you call ‘associations’, but these are just data links: the reaction in this case is just to refer back to something in some kind of memory (…perhaps a lookup table…), and there’s no a priori reason that a stone could not do it as well (indeed, some similarly simple materials, like memory metals, ‘associate’ a certain form with the ‘experience’ of being subject to an electric current).

So I really don’t think this addresses the problem at all – it is exactly what I said in my post to Trinopus earlier: an explanation of the functions, without any reference to what it feels like to have experience. You seem to think the two are equivalent, but they are not: the functions can be performed without any subjective experience.

We don’t actually know perfectly well how to do this. We haven’t even scratched the surface yet.

Given that these subjective states are part of the function calculating any response, removing them would result in different actions.

Agreed, no subjective states at the micro level.

If subjective experience is a result of structure then you’ve simplistically hand waved it away in the same way that a bunch of wood and nails is not a house until it is transformed properly.

You said this many times but for you to be convincing you would need to show that conscious experience is not due to structure.

A few points here:

  1. My position is shifting as I consider the counter arguments (I know, this is the SDMB, that’s not supposed to happen)

  2. My primary objection is frequently that one side has taken too extreme of a position, ruled something out prematurely. This doesn’t mean I disagree entirely with the arguments, just that it may represent more of a possibility as the issue is explored as opposed to “well, it’s clear it can’t be X” when there are too many unknowns that leave X still a possibility.

  3. I get the “hardness” of the problem of consciousness, and while I have probably trivialized it with my words, it was really from the perspective of accepting that we know about matter and energy, we know that’s what we are made of, and we know we have consciousness; therefore consciousness can arise from a particular structure. Any claim that physicalism is false seems incorrect, despite the “hard” problem.

No it doesn’t “solve” it but it can guide our thinking and set parameters around what we believe is a reasonable answer.

I was hoping you would have examples of arguments that counter those points. To me those are critical points that make it very difficult to abandon consciousness due to structure.

I disagree, on both counts. First, while it is a quantitatively impossible task, qualitatively, a simple massive lookup table would do the trick of describing the actions of any actor. So we do know how, we just can’t execute the task – there’s a difference.

Second, the lookup table does perform like the conscious being, without the subjective states; so subtracting them would not lead to different actions. (Consider the case of Mary: she could say ‘Oh, so that’s what red looks like’, as an entirely automatic reaction, without any associated phenomenal experience, just as well as she could if she actually has subjective states.) Or consider the problem of finding out whether your conversation partner is conscious: if you can’t, then the subjective states do not play any essential part in determining his behavior.

But this argues for strong emergence, a position at odds with physicalism, stating that qualitatively new properties arise in aggregates in such a way that they are not determined by the fundamental microdynamics, but have to be postulated as separate, new laws not reducible to the microscopic level – which is a position even stronger than Chalmers’.

You could, with the same arguments, argue for the existence of magic. No, I have not shown that there is no such arrangement of matter that it allows me to access mystical powers that bend the laws of nature as we know them; but just stating that something may happen if we put things together just the right way isn’t very convincing, either. I mean, can you give an argument for/description of consciousness being due to structure? From my perspective, it looks like you’re backed into a corner, and take the only way out, which is appealing to the unknown: since nobody can properly gauge the implications of complexity as great as the human brain’s, nobody can conclusively argue that there’s not something, somewhere that ‘sparks up’ and provides us with our phenomenal experience. But to make this at all plausible, there should be at least a hint at how something like this might be possible. It seems entirely unlike anything I know; it’s not like, for instance, the emergence of life: life is completely described by its functions, and these functions are inherited from the interactions going on on the microscopic level in a (more or less) straightforward way. I can’t conceive of such an explanation for conscious experience.

Besides, if a small lookup table isn’t conscious, why should a larger one be? Or rather, how could it be? It’s still the same kind of thing.

There’s also the concept of substrate independence that’s very widely accepted (I believe) and I think is implied by physicalism. It’s basically the idea that consciousness, cognition and so on don’t depend on the implementation, i.e. that one could create a conscious neural network based on silicon just as well as one based on sludgy biomatter. This is because in physics, objects are really only determined up to their interactions – so whatever interacts the same way with you, will appear to you the same way (where I use ‘appear’ without any reference to conscious awareness, just in the sense that a wall appears solid to a stone just as much as it does to you). This is the reason why a stone on planet Earth follows the same laws as it does orbiting in space, or on another, distant planet. From this point of view, what we build a conscious being from – neurons, chips, lookup tables – should not matter, as long as the building blocks have the same interactions, i.e. the same functional characteristics.

These just show that a physical world exists, but do not provide grounds for the non-existence of some extra-physical realm, it seems to me. In other words, whatever’s consistent with ‘physics’ is also consistent with ‘physics + x’ (as long as x doesn’t somehow damage physics).

Well, they’re (some of) the very same reasons I believe that the physical is all there is (or at least all that’s relevant), so I’m sorry I can’t be of any help here…

Why not project this more favorable thought onto posters instead of the ones you just listed: “Hey, I like to analyze things and I like thinking about how the brain works, hmmmmm, that point doesn’t seem logical to me because of…”

Eh? Hardly! I rejected a quote!

Once informed that the author of the quote had, in fact, taken efforts to explain his position in more detail, and in a clearer context, I immediately accepted that.

At least wait until I’ve repeated my error half a dozen times before comparing me to … um … our mutual correspondent!

It’s always disheartening when “obvious” meets “obvious” head-on, like the two freight trains in our old familiar grade-school algebra problems… X=25…and the explosion killed six railroad workers as well as two bystanders…

This is why, rather than lookup tables, I like to use the metaphor of emulator software. Kind of like those DOS emulators that let you run Ms. Pac-Man on a newer CPU… If you have a large number of emulation loops, with systems all dedicated to modeling other systems, then, somewhere in the mix, you start getting the emergent complexity of “feeling.”

It can also be tossed off as an illusion. Remember, we have the faculty of empathy, to have a sense of what someone else is feeling. If I watch a video of some guy getting kicked in the nads, I feel his pain. Obviously, I don’t really feel his pain, and obviously (see, there’s that word again) I can’t know what you’re thinking. But I can do a fairly good job of faking it…

This is why I also emphasize the role of information, far more than just matter and energy.

(Notwithstanding, I’m a very strong AI proponent, and believe that “consciousness” could be produced in a machine made of Tinker Toys.)

Bless you! That’s all I would ever try to say in a discussion like this. It’s obviously (doggone that word!) way over my head, but, since this is a “cocktail party” rather than a “university symposium,” well…I’m gonna say my say, even if it exposes me as a clod.

Heck, if it were a requirement on the SDMB actually to know what one was talking about, the place would be de-populated faster than Bosnia under Serb occupation!

I say they lack subjective experience because they do not form anything, and I don’t think there’s a basis to reduce subjective experience to a ‘state’. You make a sudden turn here by talking about subjective ‘states’ instead of ‘experiences’. Experiences are active processes that change over time, not a ‘state’. And a ‘feel’ is that also. ‘Hunger’ as an entry in a lookup table is not analogous to a feeling, it’s simply a label. The feeling comes from reflection on the active processing, otherwise it is just a memory like an entry in a table.

I presuppose that these associations as processes form the ‘feel’. The associations don’t feel something; the processing of the associations and the secondary effects of that processing provide the ‘feel’. And the secondary effect is another problem with your lookup table example. When you feel something, you change your subjective experience and the nature of the feeling. The contents of a lookup table are unaffected by secondary effects, because there aren’t any.

So your comparisons between complex processes and a lookup table are not valid. The lookup table can be distinguished from a mind that has subjective experiences by its invariance.

I didn’t think there was a way to tell you exactly how it’s done, but I think I actually can. Computers traditionally have not been used for the purpose of developing subjective experience, usually the opposite in fact. Subjective experiences lead to variant results, traditionally considered a problem.

But here’s an example. I currently work with a database system that can generate query code on demand based on the state of relational tables and run-time parameters. There’s no reason the code generator couldn’t recognize other factors, such as time constraints, in producing code. As an example, for a query result that will be displayed on an interactive web page, the generator could produce a set of code which satisfies the request by returning the first results rapidly, even though overall it would be slower at returning the complete set of results. The generated code would also be stored so that it can be reused, avoiding the code generation time with each reuse. If, in addition, information were recorded about the actual usage of the query, such as the overall execution time, and that information were used to refine the code generation system, the analysis of the code and its usage information would provide the reflective ability. The computer, in analyzing its own results from generated code, would be developing a ‘feel’ for how a query executes. We don’t see the signs of this readily because we don’t use computers much this way. We would rather have a computer perform an ‘objective’ analysis that factors in all known information. But in a sufficiently complex system, we would not be able to predict and control for all factors, and the system would have to rely on the results of its own analysis, producing results affected by the subjective experiences of the computer.
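
Very roughly, the feedback loop I have in mind would look something like this – a pure sketch, where every name is invented for illustration and nothing here is a real database API:

```python
import time

# Hypothetical sketch of a code generator that 'reflects' on how
# its own generated queries actually performed.
code_cache = {}   # generated code, keyed by (query_shape, strategy)
usage_stats = {}  # observed execution times, keyed the same way

def generate_code(query_shape, strategy):
    # stand-in for the real generator
    return f"-- query plan for {query_shape} using {strategy}"

def choose_strategy(query_shape, interactive):
    # the 'reflective' step: prefer whatever has actually run fastest so far,
    # defaulting to a first-rows-fast plan for interactive pages
    candidates = ["first_rows_fast", "full_set_fast"]
    timed = [(usage_stats.get((query_shape, s), float("inf")), s) for s in candidates]
    best_time, best = min(timed)
    if best_time == float("inf"):
        return "first_rows_fast" if interactive else "full_set_fast"
    return best

def run_query(query_shape, interactive=True):
    strategy = choose_strategy(query_shape, interactive)
    code = code_cache.setdefault((query_shape, strategy),
                                 generate_code(query_shape, strategy))
    start = time.perf_counter()
    # ... execute `code` against the database here ...
    usage_stats[(query_shape, strategy)] = time.perf_counter() - start  # feed results back in
    return code
```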

HMHW, let’s talk about your lookup table.

I assume, at minimum, the input is a vector of all particle information for any particle within a distance to influence this particular time slice, and the output is a vector of all particle information (basically the lookup table is a particle simulation pre-calculated for all possible arrangements of a subset of particles, those near enough the simulated human in question to influence the calcs). This would have to include the internal particles to account for internal changes that influence the next time slice.

Is this, at minimum, what you are picturing? I say minimum because it’s most likely possible to compress some portions of it, in other words, our brain is not perfectly optimized.

It’s possible that I’m conflating you with TriPolar to some extent, if so I apologize. But you did somewhat brashly refuse to ‘devote any time’ to reading Chalmers, and, somewhat far from accepting that he had a more well thought-out position than was immediately recognizable from the quote, you brushed it off, comparing it to the ‘unnecessary complexity’ of medieval scholarly discussion, so there’s at least some basis for my impression arising…

This is the same argument RaftPeople proposes, and I think also TriPolar in his latest post: once you layer enough complexity that no one can hold it all in their mind simultaneously, maybe something happens and ta-daa, consciousness. But it just doesn’t work: the layered complexity simply obfuscates the fundamentals enough so that it isn’t as immediately obvious as in the lookup table case. But even with layered emulations, even with self-reflective elements and other complications, the whole thing is still functionally equivalent to a lookup table. And saying that two things are functionally equivalent, yet differ with respect to their phenomenal experience, is precisely arguing for qualia!

Compare this with the case of some physical system – say, a stone. You could replace it with any functionally equivalent system, i.e. one that reacts in all instances just as the stone would, without changing the physics. So the replacement of one system with a functionally equivalent one makes no difference with respect to the physical description. If it then can make a difference with respect to phenomenal experience, then physicalism is false – as physics obviously does not fix all the facts about the world – and qualia exist.

Any computation, anything you can do with a computer can be recast in the form of a lookup table that is its functional equivalent (and in the physical world, this lookup table is always finite, even). (For instance, a process in which the reactions of a system change because of some condition, say some association with past experience, can just be modeled by a lookup table that’s twice as large, with one set of entries giving its reactions if the condition does not hold, and one set of entries giving the reactions if it does.) The argument that the functional equivalence does not suffice for subjective experience is precisely the argument that qualia exist, because to physics, all functionally equivalent systems are indistinguishable.
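
To make the ‘twice as large’ point concrete, here’s a toy version (the behavior and names are of course just made up):

```python
# Toy illustration: a system whose response to a stimulus depends on some
# condition (say, an association with past experience) can be flattened
# into one table keyed by (condition, stimulus) – twice the entries.
def respond(stimulus, bad_memory):
    if bad_memory and stimulus == "red":
        return "flinch"
    return {"red": "approach", "green": "ignore"}[stimulus]

STIMULI = ["red", "green"]
TABLE = {(c, s): respond(s, c) for c in (False, True) for s in STIMULI}

def respond_by_lookup(stimulus, bad_memory):
    return TABLE[(bad_memory, stimulus)]

assert all(respond(s, c) == respond_by_lookup(s, c)
           for c in (False, True) for s in STIMULI)
```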

Or think about a simple program that calculates the sum of two integers. Two such programs are functionally equivalent if, when they receive the same two integers, they produce the same output, no matter if one of them simply looks up the result. In such a case, the first program can simply be seen as a compressed version of the second. So what you’re saying is essentially that these two programs differ in an essential way depending on how they’re implemented, i.e. depending on their architecture, for instance. Such that maybe, a program run on a PC is conscious, while a program producing the same reactions, but run on a Mac, isn’t, because the architecture on the latter is wrong. But this means of course presupposing something beyond functionality as the cause of consciousness; which is just the argument for qualia.
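
In sketch form – a toy example over a finite range, nothing more:

```python
# Two 'adders': one computes the sum, one only retrieves it from a table.
# To any caller they are functionally equivalent; the computing version
# is just a compressed form of the table.
def add_computed(a, b):
    return a + b

N = 50  # finite input domain, purely for illustration
ADD_TABLE = {(a, b): a + b for a in range(N) for b in range(N)}

def add_looked_up(a, b):
    return ADD_TABLE[(a, b)]

assert all(add_computed(a, b) == add_looked_up(a, b)
           for a in range(N) for b in range(N))
```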

I’m genuinely baffled by this. Where does the ‘feel’ come from all of a sudden? What your argument boils down to (or seems to, to me), is something like: computers can modify their responses in very complex ways, based on a multitude of variables, including those pertaining to their own state. So they can feel.

I don’t see why this should follow at all. First of all, however, it’s not strictly speaking whether computers can feel anything that’s at issue, but whether it can feel like anything to be a computer. (See Nagel’s ‘What is it like to be a bat?’ (pdf).) The distinction is important: feelings, as such, can be reduced to functionality – I feel hungry, so I eat. But this doesn’t imply that it feels like anything for me to be hungry; it’s just a control system relating the mismatch of a certain variable with its reference value to a central processor.

But what it feels like to be something does not reduce to functionality (at least not in the same straightforward manner). Systems are conceivable that are functionally equivalent to human beings, without it feeling like anything for them (systems that just catalogue all the possible actions you can do to a human, and link them to all their possible reactions – lookup tables, which, for any finite span of time, will be finite). It’s not enough to simply be able to modify your own state for there to be something it feels like to be you; as I said, this can always be incorporated into a larger lookup table.