It seems quite straightforward to me; evolution produced the basic feature of ‘stimulus/response’ way back when the population of the Earth was microbes. This has become more complex over the gigayears and is now known as ‘pain’ and ‘bliss’. Any creature that experiences bliss instead of pain would seek injury, not avoid it, so that variation would be at an evolutionary disadvantage.
I suspect that you are falling for the trap of thinking that ‘qualia’ are somehow special, and can’t be studied in detail - on the contrary, I think that qualia will be understood quite well in due course, and that people will be sharing ‘experiences’ on an everyday basis. One of the possibilities is that the details of every person’s qualia are subtly different - so that you couldn’t just run a wire from one person’s head to another without some sort of translation software in between. My ‘green’ might look ‘red’ if transferred directly to your head without translation; more likely my green would look like ‘wibble’ or ‘flump’ to you, until translated into your internal codification system.
This would be like having a conversation in French and English through a translator; so long as your internal software and hardware recognise these qualities, it doesn’t matter at all what they look or feel like to you.
In fact I think there is some reasonably compelling evidence that different people have different experiences of qualia - if you look at the way a person with synaesthesia experiences a cross-over between two otherwise unrelated stimuli, there seems to be very little consistency between individuals.
That suggests to me that we all experience the world in slightly different ways, but that there is (in most cases) enough internal consistency to make this irrelevant - until we finally do start linking human sensoria together.
I’m glad it’s silly, because those are the easiest arguments to counter; let’s see how you do.
Watson has externally visible and measurable criteria that guided its development. Yes, there are many functional layers the team had to work through to build up to the end result. And each of those layers has outputs that can be measured and used to debug that functional area or layer. The team did exactly what I was talking about, working through the lower-level details to support the higher-level properties. It’s how we build everything: buildings, bridges, software, etc. Your example completely supports my position.
What exactly are the measurable criteria for creating a conscious system? (Hint: this is the “silly” part that should be really easy to answer.)
Your argument in general, as others have pointed out, is based on faith, not logic/data/evidence. It’s not that we are saying you are wrong (because nobody really knows); we’re just pointing out that you are coming to a conclusion without any basis. I have an opinion about this topic also, but I recognize that it’s just an opinion based on only the flimsiest data and understanding.
So have you changed your mind on that? Because earlier on, you said that you didn’t think you could tell a blind man what it’s like to see red. What made you change your mind? Did you find some plausible way to actually do it?
Still, though, my point remains: for any other kind of knowledge at all, including knowledge we do not yet possess (say, the correct theory of quantum gravity), I know exactly how I could communicate it to somebody. But for knowledge of experiences, I don’t have the foggiest idea how to even start. That’s a very immediate, precise difference that has to be accounted for in every theory of consciousness.
The only thing we can do using language, in this case, is point: you understand what I mean by ‘a red experience’ because you have had such an experience. Language, in this case, fails to describe the object of discourse; that’s very different from, say, talking about how to ride a bike, or quantum gravity. Even if you’ve never done either, it’s no (fundamental) problem to tell you what I’m talking about, while if I were to talk about color to a blind man, I simply don’t have a way to communicate what I’m actually talking about.
Of course. But that’s just creating an experience. Whether a given brain state is produced via electromagnetic radiation entering through the eyes, or via electromagnetic fields applied to neurons probably doesn’t make any sort of difference. If this (or something equivalent) is the only way to get the blind man to know what seeing color is like, then my point is proven.
The brain state (or its record on a flash drive, say) is a recipe; but just as you can’t eat recipes, knowing a brain state doesn’t tell you what it feels like to be in that state.
All of these papers make the (often reasonable) assumption that having a certain kind of brain activity entails being in some conscious state; but the discussion here is exactly how this entailment works, what justifies it. So this research certainly does nothing to settle this point.
Yes, and that I can perform such complex acts entirely without conscious intervention just supports my case, that in general behavior does not suffice to settle the question of conscious experience.
So, stimuli are just intrinsically pleasurable (or not), as brute facts?
And again, we can tell a full story of what causes our behavioral responses, without ever appealing to conscious experience.
Maybe the point is still not quite clear. For anything you do, if I list its causes, I find neuron firings, electrochemical signals, atoms bumping into one another. Moreover, that’s a complete list of causes; there are no gaps left to fill. At no point do I find a need to say that you screamed because it was painful, or laughed because it was funny: I find just atoms and the void, so to speak.
So what is it that consciousness brings to the table, exactly?
That doesn’t address my point at all. Did you quote the right part of my post there?
That’s the impression I’m getting, yes.
Which just means, of course, that it’s not a side effect: a train with less noise would function less well.
Deliberation, weighing of alternatives, calculation—none of which requires consciousness. Again, I can tell the story of how we behave in future toe-stubbing situations entirely without any reference to consciousness, appealing merely to atoms bumping one another.
Leibniz used his famous example of a brain, enlarged to the size of a mill, such that one could inspect its inner workings directly. He claimed, correctly, that for each effect, one would merely see parts acting on one another, gears grinding, wheels turning. Let’s take this literally: imagine a huge computer, composed of gears and levers, performing some computation—like coming up with a plan for a chess move. This ‘plan’ has no need of being consciously represented anywhere: it exists purely in the way the gears and levers turn. Yet, such a device can formulate plans, act on them, even learn from mistakes; but there is no reason to believe it is conscious (it might be, of course; so might electrons, but we have no reason to believe they are).
So all this planning, learning, revising and deliberating goes on without conscious experience, or at least, there is no contradiction in imagining it going on without conscious experience. Consequently, that such things go on is not sufficient to conclude that an entity is conscious.
You might even consider coming up with a plan by using some external device, like the computer you’re sitting at now. You could code a simulation of stubbing your toe, run it a thousand times with varied parameters, to come up with better ways of avoiding it. Did you decide on that strategy by, as you put it, using your conscious mind? No: at least parts of the deliberation were performed outside of your conscious experience.
Now, it might be that the computer was conscious of what it did. But even then, your implied claim that one could only use a conscious mind to come up with new plans would be falsified: since each consciousness, yours and the computer’s, would only be conscious of part of the deliberation.
The only way your claim that conscious thought is the only possibility for coming up with such new plans would hold water is if your using the computer in fact gave rise to a sort of super-consciousness, which would be conscious of the whole process. I grant you that this is possible, but I hope you will grant me that we don’t have any good reason to believe it occurs (beyond having the claim that behavior implies a certain sort of consciousness come out true).
Sure, but that intensity does not directly translate into any experiential facts. Thermostats don’t become conscious just because it’s really really hot.
I agree that consciousness is fully physical/chemical/biological. However, I think we’re much farther apart in our views than you believe.
People in altered states of consciousness are also in altered physical states—say, intoxicated, or hallucinating due to sickness. So, that their conscious experience is changed isn’t surprising—indeed, it’s exactly appropriate to circumstances.
The first part, you merely assert, so I will merely deny it. The second part is exactly the problem: if it doesn’t care about the specific qualities, then how come they are this appropriate? (Or there at all?)
I’m not saying that we’re never in error regarding the external world; but these errors are precisely sourced in physical causes. Thus, conscious experience is still in one-to-one correlation with, say, the pattern of neuronal excitations. But if it’s epiphenomenal, then we should wonder how consciousness is always appropriate to that specific pattern of excitations.
I was ready to respond to several points in your post, but this caught my eye:
So, you deny that evolution can select for behaviours? Seriously?
There’s very little point trying to have a serious conversation if we can’t agree on that.
I think you may be forgetting what your post said that he responded to. You said:
“Secondly, Evolution can select for behaviours that arise as result of conscious experience…”
He’s not challenging whether evolution can select for behaviours, he’s challenging the additional part you included that says the behaviours arise as a result of conscious experience.
There is no concrete evidence that behaviour arises from conscious experience, but you’ve included that as if it’s a fact.
Most research shows conscious awareness happens after the brain makes a choice, so what little evidence we do have suggests it could be after the fact.
To avoid making a long thread even longer, let’s cut to the chase here. I know I’m conscious. I can’t prove that you are, but it seems that way. Things that come from our being conscious, like reading, writing (sometimes), and the analysis and modification of the way we do things, have clearly led to our reproductive success. Thus evolution selects for them. That’s exactly what I was claiming. If we evolved something that is identical to consciousness but isn’t, same thing, but I know we evolved consciousness because of my internal experiences.
By close I mean that certain animals have behaviors that may approach or even reach consciousness. Chimps and gorillas? My old border collie mix was not conscious, but he could plan and he could abstract behavior. He was very smart, and I doubt it would be that much of a reach, over evolutionary time.
Example: we taught him to sit before crossing a street to reduce his chances of getting hit if he got off the leash. He developed the behavior of sitting to request crossing the street, even in the middle of a block and not at the intersections where we crossed.
Stimulus and response are completely without attendant conscious experience, though: a domino, experiencing the stimulus of another one bumping into it, will produce the response of falling over, but not experience any thrilling rush of vertigo.
You claim that, once this gets just sufficiently complex, consciousness just, well, kinda happens. Make a really really really long chain of dominoes, and boom, consciousness. Somehow. But that’s just an article of faith, without further support.
And I can only reiterate what I’ve said a dozen times now: if you’re so certain that people will eventually share experiences (or, which is what I’ve been talking about, knowledge of experience: what it’s like to see red), then presumably, you have some justification for that belief. So why not share it?
What you’re saying is contradictory. If there’s no way to prove that I’m conscious, then I’m indistinguishable from something that isn’t, and hence, there is no difference in selective pressure between me and a non-conscious analogue. Consequently, either there’s a way to prove I’m conscious, or evolution can’t select for consciousness.
This is very limited. The examples you provide are about as simple as it gets. Flocking requires three rules. Ants operate on a simple state machine/pheromone model.
And yet, our simulations of these things can provide ant hills and ‘flocks’ that seem to behave like the real things, but only in simplistic ways, like showing how ants forage for food, or showing the changing shape of a bird flock. But the behaviors of those systems are far more complex than that. For example, if you flood an ant hill, the ants will swarm out and form a raft with their bodies and float to safety. If ants need to cross a chasm, they will build bridges with their bodies. Ants will fight wars with other ant hills and take prisoners and enslave them to work in their own ant hills. But if they bring in too many, the foreign ants may revolt and attack the colony. Ants maintain perfect temperature in their breeding areas by bringing in vegetation which gives off heat as it decays. And so on, and so on…
Ant hills are so complex that if an alien species looked at them without knowing about the ants, they’d think the colonies were highly intelligent. And yet, not a single ant has a clue it even lives in one, let alone what its actions are contributing to.
If I gave you a complete list of the properties of an individual ant, and fully described all the stimuli it responds to and how, you could not predict what would happen when you put a million of them together. You might be able to run a simulation to try it and get an interesting result, but you’d have no way to know HOW.
This is because complex systems do not lend themselves to reductionist analysis, because the system itself goes away when you drill down in a reductionist fashion to understand it. Do you know what happens when you put 100 ants together? They mill around randomly until they die. Keep adding ants, and at some point the rest of the behaviors of an ant hill begin to emerge.
Also, even if you can model an ant hill perfectly, you can not use that to predict the specifics of what a real world ant hill will do - what food it will gather and in what order, how big it will get, how long it will last, where it will migrate to, etc. Because another aspect of complex systems is that they behave stochastically. Randomness is part of the package. And because there is random behavior and sensitivity to initial conditions, they are fundamentally impossible to predict in detail within their complex domain. Sure, you can predict that if you flood them they will move, or if you remove all their food sources they will die. But in the middle, where complex and chaotic behavior rules, there is only emergence, and the direction of the emergent choices is unknowable.
Or the special sauce itself is emergent. A human brain isn’t a tabula rasa - it’s an evolved structure. Neurons respond to signals which themselves are emergent and complex. There is feedback, and chemical signalling, and microbiomes and a host of other things we are still trying to understand.
It’s hard enough to understand the behavior of a flock of 1000 birds, even knowing the flock is driven by three simple rules. Imagine if each bird in that flock, instead of responding to three rules, responded to thousands. Or millions. Or had a rule set that changed based on other complex systems running in its brain or body. And there were 100 billion birds. That’s more the scale we’re talking about with humans. No amount of reductive analysis would allow you to say, “Hey, here’s why consciousness happens.”
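(Just to make the ‘three rules’ concrete for anyone following along, here’s a minimal sketch of the classic boids rules – separation, alignment, cohesion. The weights, radius and other numbers are arbitrary illustrative choices, not anyone’s canonical implementation.)

```python
# Minimal boids sketch: the three classic flocking rules.
# All constants below are arbitrary, purely for illustration.
import numpy as np

N = 200                       # number of birds
RADIUS = 1.0                  # neighbourhood radius
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0
DT, MAX_SPEED = 0.1, 2.0

rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, size=(N, 2))
vel = rng.uniform(-1, 1, size=(N, 2))

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1)
        nb = (dist > 0) & (dist < RADIUS)        # neighbours of bird i
        if not nb.any():
            continue
        sep = -diff[nb].sum(axis=0)              # rule 1: steer away from close neighbours
        ali = vel[nb].mean(axis=0) - vel[i]      # rule 2: match neighbours' average velocity
        coh = pos[nb].mean(axis=0) - pos[i]      # rule 3: steer toward neighbours' centre
        new_vel[i] += DT * (W_SEP * sep + W_ALI * ali + W_COH * coh)
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:                    # cap the speed
            new_vel[i] *= MAX_SPEED / speed
    return pos + DT * new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
```

Nothing in those few lines mentions a flock, yet run it for a while and flock-like clusters appear.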
If we ever develop a computer consciousness, I don’t believe it will ever come from a directed plan to ‘engineer’ a conscious brain. Instead, that consciousness will emerge as a result of complexity, and we won’t really understand it. Or necessarily even be ABLE to understand it by studying the hardware, even to a very deep level.
You’re not getting that complex systems are fundamentally different from complicated but engineered systems. Complex systems have as features that they are highly sensitive to initial conditions, they have feedback loops, they are more defined by the connections between atomic units than the units themselves, and they exhibit stochastic behavior at some level. The combination of these means that they are fundamentally unpredictable, and they cannot be understood through a reductionist process as we would understand a watch complication by breaking it down into its component pieces.
Everything you say is valid, but it’s still all in the measurable realm of physical state and behavior. The emergent properties, no matter how complex the system that created them, are still measurable as physical attributes of our environment over time.
We can see how water behaves, or how the ant colony behaves. We can measure the end product and compare our simulation to see how closely it agrees with reality.
But we don’t have any way to measure consciousness. We can’t tell if our simulation has succeeded or failed. We can’t distinguish between behavior that accomplishes task X without consciousness vs behavior that accomplishes task X using consciousness.
A key question is: is consciousness scalable? Or, is it a binary property that is indisputably either present or absent? Can one person (or animal) be, in some way, more self-aware than another?
For the individual, it seems to me as though consciousness is a sort of impenetrable bubble that surrounds the mental existence. I have my outward-looking view, which I can in no real way share directly with any other individual. There is, of course, empathy, but that is just me projecting my own internal experience onto someone else. There does not appear to be any way across the bubble wall, to observe or share the inner experience.
Logically, one can see the evolutionary advantages to consciousness. A creature that has this inner experience will be invested in its own survival, if only by dint of the fact that it exists uniquely: the solipsistic view, even if it is not understood, motivates the individual to persist, to preserve its unique reality. One might guess that consciousness is an effect of the basic survival instinct that selection has naturally wired into all creatures.
I see a lot of research over the last 20 years showing shared neural structures and activation for perception and imagery in the higher-level visual cortex areas. Saying that the visual cortex “has no role in it” seems to be the opposite of what the researchers are actually finding.
I acknowledge that consciousness is an elusive attribute, and its connection to behavior is not understood. But I would point out that many thinkers have tried to attack artificial intelligence in just the same way, rejecting the functional/behavioral definition – John Searle, for instance, whose Chinese Room argument tries (incorrectly, IME) to show that any system that achieves intelligent behavior by following rules or heuristics, as a computer must necessarily do, is not truly intelligent because it has no real understanding. So I see a valid parallel between artificial intelligence and artificial consciousness, and I anticipate arguments in the future about whether a machine is truly conscious just because it claims it is.
No, it’s really the same statement, and it’s wrong. Surely you’re not trying to argue against the existence of emergent properties, are you? And what on earth is a “non-trivial partial aspect of intelligence”, other than a circular redefinition whereby any component is declared to be so, simply because it has been used to build something intelligent? A calculator may be useful, and it may be many other things, but to call it a partial aspect of intelligence is just meaningless; you could apply the same description to a table of logarithms, or a relay or electric light switch.
Furthermore, while I do get what you’re saying about calculation, the argument also fails because when using the functional equivalent of the same logic gates to build a stored-program computer, we may have no interest in any sort of calculating at all, but use it for various sorts of symbol manipulation. Indeed the word “computation” as applied to computer science is rather unfortunate because it encompasses both arithmetic and non-arithmetic logical and symbolic operations, the latter particularly important in AI and fundamentally unrelated to anything any calculator has ever done.
Now, it’s true that if one examines a complex system like the DeepQA engine, what you say is quite literally true for many of its major components; for example, parsing natural language, the very first thing it does, is itself an entire discipline in AI. But herein lies the fundamental fallacy in your argument. Why stop at the major components? For each such component, one can drill down further into its constituent subsystems until one is dealing with arbitrarily primitive functions – perhaps individual subroutines or lines of code or table entries, or perhaps down to the functional equivalent of a single logic gate – that cannot possibly be described as a “non-trivial partial aspect of intelligence” except by using the circular redefinition above. Yet from each such low-level set of functions, there is a path upward through which they integrate into an intelligent automaton.
Yes. Let’s have a look and see if it’s always true that “Literally the only thing you add by increasing speed and power is that they can do so faster.”
There are emergent properties that arise from complexity or architecture. The assembly of the same (functionally identical) logic gates used in a simple electronic calculator to form a stored-program computer that is Turing complete is an example of that. In that respect you’re quite correct. But that’s not the only way emergent properties can arise. They can also notably arise from sufficient differences in scale.
The earliest computer I know of on which any kind of AI was attempted was the first-generation IBM 704, a vacuum tube computer with 4K 36-bit words and a performance of around 12,000 FLOPS. Could this machine, being Turing complete and all, be used to run the Watson software and successfully play Jeopardy?
Well, let’s look at the computing platform that actually supported the DeepQA engine in that venture. It ran 2,880 POWER7 processor threads, had 16 terabytes of RAM, and delivered a Linpack performance of 80 trillion FLOPS. In short, it was nearly 7 billion times faster than the 704.
Clearly, it would be impossible to fit the DeepQA engine within the architectural limitations of an IBM 704, and even if it were somehow contrived to extend its memory through some type of memory management enhancement and an entire building full of extra core memory cabinets, by my back-of-envelope calculation it would take the IBM 704 an average of more than 2200 years to answer just one Jeopardy question.
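(A back-of-envelope along the same lines, for anyone who wants to check the scale; the ~10 seconds of compute per clue is an assumption for illustration, not a published Watson figure.)

```python
# Rough back-of-envelope, just reproducing the scale of the claim.
watson_flops = 80e12          # ~80 TFLOPS Linpack (DeepQA cluster)
ibm704_flops = 12e3           # ~12 kFLOPS (IBM 704)
speedup = watson_flops / ibm704_flops          # ~6.7e9, "nearly 7 billion"

seconds_per_clue_watson = 10                   # assumed, for illustration
seconds_per_clue_704 = seconds_per_clue_watson * speedup
years_per_clue_704 = seconds_per_clue_704 / (3600 * 24 * 365)

print(round(speedup / 1e9, 1), "billion times faster;",
      round(years_per_clue_704), "years per clue on the 704")
```

With a per-clue compute time in that range, you land in the same couple-of-thousand-years ballpark.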
Now, I’m willing to be patient in these matters, but 2200 years seems a long time to wait, and it would violate Jeopardy rules and test the patience of the audience! And beyond that, the 704 was a vacuum tube machine so it would fail in about a day or so anyway and have to start all over again. Actually, with that much memory, each bank of which has a vacuum-tube based memory controller, the machine would fail within seconds of being turned on.
So we can say that the DeepQA platform is, in a real and meaningful functional sense, qualitatively different from the IBM 704, and that the Turing-complete equivalence is irrelevant – a red herring. I’ll say again what I said earlier: a sufficient quantitative change in capacity can result in a fundamental qualitative change in capability. Often, that qualitative change includes novel emergent properties. A computer that is twice as fast as another is not qualitatively different. When it’s billions of times faster, you’re dealing with something completely new.
No, my example completely refutes your position. See my response above to HMHW, the one responding to “non-trivial partial aspects of intelligence”.
No, that’s a completely different question, and rather moot if, as I maintain, consciousness arises as an emergent property of a sufficiently intelligent system, and is not (and cannot be) explicitly built or created in isolation. You will note that in the opinion of many critics, among whom philosophers tend to be prominently represented, the same objection can be applied to the property of “intelligence” – John Searle, for instance, essentially maintains that computers are incapable of possessing true intelligence. And interestingly, the contrary view is held dominantly by the people who actually build intelligent systems, and who have created increasingly powerful AI, forcing skeptics to keep moving the goalposts (what was deemed “true intelligence” 20 or 30 years ago, and hence impossible for computers, suddenly became a mere clever trick of programming when it was actually achieved).
Intelligence and consciousness could be an inevitable consequence of life, given enough time. Setting up an experiment to test whether that is plausible would be interesting. What you’d have to do is start with some sterile but hospitable planets and seed them. For now, assume that DNA-based life is the standard, and thus seed them with proteins, amino acids, etc., then check in on each planet every million years or so, more often as the life on it evolves to be more complex. That’s how real science is done. So, anyone not having any major plans for the next several millennia and willing to volunteer their time as an observer: we are all eager to find out the results. Post here to let us know how it goes.
I was going to include a comment about this in my above response but forgot.
I agree with this. I like the part about “complex systems do not lend themselves to reductionist analysis, because the system itself goes away when you drill down in a reductionist fashion to understand it”.
I will add, however, that you appear to be discussing the emergent properties of naturally evolved complex systems that behave stochastically, which is related to, but different from, what I’m talking about – the emergent properties of purpose-built engineered systems. In particular, the salient fact here is that such systems can exhibit emergent properties that were never explicitly designed, and might represent behaviors that were not predictable and most importantly, not well understood – that is, they could not have been built through explicitly designed algorithms.
The problem with this line of reasoning is that it then doesn’t allow for a sharp differentiation between intelligence and consciousness: Searle argued more narrowly against the possibility that machines could have cognitive states, and held that these are necessary for genuine thought. But then, the question of whether we can build artificial intelligence is just the question of how consciousness arises, and you pointing to intelligence as an example where we have similar trouble as with consciousness isn’t an argument against the uniquely challenging nature of conscious experience, but in fact relies on it.
For the purposes of discussion, I think it’s better to understand the question of whether machines can be intelligent as whether they can replicate the behavior of intelligent agents—after all, that’s what the Turing test assesses. This is what Searle calls ‘weak AI’, as opposed to ‘strong AI’ that aims at creating machines with cognitive states. Certainly, in their everyday business, AI researchers don’t really bother with the distinction, and are happy to provide machines with ever expanding versions of weak AI.
That something shows a nontrivial part of the behavior of an intelligent agent isn’t the same statement as that something is intelligent, no matter how you look at it.
I am arguing against emergence out of the blue, so to speak. Whenever something emerges, the necessary preconditions of its emergence are rooted in its components. While a water molecule isn’t liquid, its binding properties dictate the behavior of large aggregations of identical molecules.
What the supporters of the line of thought that claims consciousness arises once things just get sufficiently complicated advocate, however, is that from something without any hint at how consciousness might be connected to it, consciousness just somehow sparks up. This is, without further justification, baseless faith.
As I explained, it means showing a part of the behavior of an intelligent agent. This is common usage: any time a system is claimed to have ‘artificial intelligence’ (as opposed to ‘artificial general intelligence’), what’s really being said is that it behaves like an intelligent being over some relevant domain.
So you’re saying that whenever a system is called artificially intelligent—since it clearly doesn’t possess the full behavior of an intelligent agent—that’s just false? It’s an all-or-nothing deal? People in AI research just lie, or are mistaken?
Sure. But I was making claims about the behavior of that particular assembly of logic gates. Assemble them differently, use them differently, and they’ll show a different behavior. That doesn’t impinge on the fact that, as a calculator, it’s used to do something that an intelligent being could do, and that could be used to succeed at a Turing test.
Each logical operation is a partial aspect of intelligence; that’s how the analysis of thought went, historically, from Aristotle to George Boole’s Laws of Thought. The individual logic gates thus carry in them the preconditions for the emergence of intelligence in the same way as water molecules underlie the emergence of fluidity: there is absolutely no mystery in the fact that you can combine them to replicate the behavior of intelligent beings (‘weak AI’).
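(To make that concrete, here’s a toy sketch: a single primitive operation, NAND, composed only with itself, yields NOT, AND, OR, XOR, and a one-bit adder. Purely illustrative – no claim that an adder is intelligent, just that the primitives compose upward without any mystery.)

```python
# Toy illustration: one primitive (NAND), composed with itself,
# yields richer logical behavior. Purely illustrative.
def nand(a, b): return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    # sum bit = a XOR b, carry bit = a AND b
    return xor_(a, b), and_(a, b)

print(half_adder(True, True))   # (False, True): binary 10, i.e. 1 + 1 = 2
```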
There is no indication of any preconditions for conscious experience.
In other words, “Literally the only thing you add by increasing speed and power is that they can do so faster.”
Doing the same thing faster is still first and foremost doing the same thing. That flatly isn’t a qualitative difference.
But you can get to a billion times increase by doubling speed successively. So neither of those computers does something qualitatively new, yet somehow, somewhere a qualitative difference appears.
Now, I grant you that this is analogous to the argument that has consciousness just somehow, somewhere appearing once you pile on enough complexity. But it’s only analogous in its fallacious nature.
Sure, but that’s just an operational limitation—we can’t hold the complete story of how individual ant behavior yields collective ant behavior in mind all at once, but that doesn’t challenge the fact that collective ant behavior is rooted in individual ant behavior, and that a sufficiently powerful intellect, armed with nothing but the description of individual ant behavior, could exactly predict the collective behavior of ants.
While we thus can’t formulate the full story, or would need to defer to powerful computer simulations to do so, we can nevertheless see how that story goes—how interactions between individual ants cascade to yield behaviors of the entire colony. At no point do we have any reason to assume that we will hit some fundamental road blocks in that story.
With consciousness, we can’t even see how the story is supposed to start. We have lots of simple building blocks, and no idea how their combination could suddenly amount to conscious experience. The properties that are there on the lowest level that allow the collective behavior to emerge—the bonding properties of water molecules, the interactions of individual ants, the behaviors of starlings—lead necessarily to the collective behavior we observe, and there’s no mystery to how they do so. We can’t imagine collections of ants not showing this behavior, since that would directly imply that individual ants behaved differently. We can predict that, if we were to change individual ant behavior, collective behavior would likewise change.
This is different with consciousness. No collection of the alleged pieces of conscious agents shows conscious experience by necessity, to the best of anyone’s ability to tell. It’s perfectly possible to account for any behavior without mentioning consciousness anywhere, while it’s not possible to account for flock behavior without mentioning flocking.
Again, this is only a limitation of scale. As you’ve said yourself, we know perfectly well how flocking behavior emerges—and it’s rooted in the behavior of individual birds. That is a reductionist analysis of flocking behavior. Change the rules, and you will change the behavior (generally, though different sets of rules may lead to the same large-scale behavior). Moreover, change the flocking behavior, and this directly implies that the rules for the individual bird must have changed. That is, flocking behavior supervenes on the behavior of individual birds.
For consciousness, all anybody’s asking for is some analogue of the rules that lead to large-scale flocking behavior. Having that would dispel the mystery completely (there is, after all, no mystery to flocking behavior). But nobody can see even what that analogue could conceivably look like.
That’s why arguments along the lines that consciousness just happens if you pile up enough stuff and call that ‘emergence’ are simply fallacious: without some analogue of the flocking rules, we’re simply not entailed to conclude that consciousness emerges in this way.
Somebody with an understanding of atoms, but no understanding of electric charge, might hold that electric phenomena just emerge when you pile up enough atoms, but in general, they don’t—you need charged particles. There’s an extra property present at the fundamental level that allows the macro-scale phenomena to emerge, and without that property, they simply wouldn’t. So one solution to the problem of consciousness is to add further properties that are, in some sense, experiential, and thus, have consciousness emerge at the macroscopic level—what’s typically called panpsychism.
Now, I’m not saying that panpsychism is right, but if it is, one could make the sort of story you’re trying to tell work out, by appealing to the experiential properties as grounding large-scale conscious phenomena. Without something like that, however, the story simply remains question-begging, like a story of flocking without the rules grounding it—put enough birds together, and they simply, you know, flock.