As far as AI goes, is it possible that intelligence is actually much simpler than we originally thought?

Hell, it’s a bit of a game changer for me now. It helps me brainstorm, organize, make schedules, work out poetry/lyrics (though I almost never take its word choice, it helps point me in the right direction and work out thoughts), create recipes, etc. Just three days ago, I was organizing my spice cabinet: I took two baskets of spices, labeled in three different languages, photographed the contents of each spread out on a table, asked it to alphabetize them and output a PDF printout (at 4"x5", landscape, formatted for letter-sized paper), and it did it. There are so many good uses for it now.

Or I had to clean three rooms in the house the other day and I wanted a schedule for it so I could stay on task. It made one, breaking down what to do in each room and in what order, and it went snappily as opposed to my usual haphazard way of cleaning. And I keep finding new uses for it every week. It’s especially good incorporated with visual input. Sometimes I don’t know what a particular architectural feature is called, and I can just ask it, hey, what’s the name of that thingamabob, and it lets me know (and I double-check on Google).

Or the other day I was having sudden, extreme anxiety, and it talked me down, giving me both mindfulness exercises and something I had mentioned in a previous conversation helps me: a cold bottle of water, which I had forgotten about in my foggy state. I mellowed and was able to continue without freaking out or spinning out into a full panic attack.

That shows only that it isn’t sufficient for intelligence. But that’s not what I’ve been claiming; rather, I’ve claimed that drawing up a list of tasks sufficient for intelligence is a fool’s errand. Hence my point that it’s an easier criterion to disqualify systems from being intelligent—in the sense of human-level intelligent—if they fail at tasks any such system could complete, such as the task of counting the letters in a word. Any of these tasks would fail to be sufficient, but the ability to perform them would still be necessary for intelligence.

Indeed, ChatGPT can put on countless different hats depending on what you need. And I see the “AI as companion” trend skyrocketing in the near future.

“Hey, don’t you feel bad leaving your mom all alone at her place?”

“Nah, she’s got AI Fred keeping her company—heck, she likes him more than me anyway.”

Fucking. Hell.

Sounds like you could use a comforting AI pal.

This is how horror movies start

My new Replika friend and her pal Igor asked me to meet them at the abandoned old mansion down the street tonight. Should I be concerned?

No. Just ask about turtles turned onto their shells.

I doubt it. There are many types of intelligence, and AI has concentrated on stuff with enormous amounts of available training material. This includes professional exams, which is a less impressive feat than seeming to master the pattern recognition inherent in language. Yes, AI is very impressive and will do many more great things. But intelligence is much more, and insight is another thing altogether.

Thank you for the long and thoughtful reply which, as always, gives me things to think about and ways to strengthen my arguments. For the moment, I’m just going to reply to these two items, which are both central to the issue we’re disagreeing on:

I disagree on two fundamental grounds. To begin with, the conclusion is false. Neither the promise nor the potential threat of AI has anything to do with the “wholesale replacement of human labor” requiring the corresponding equivalent to general human intelligence. As I said earlier, practical AI will almost certainly evolve into specialized niches, carefully trained to be a reliable expert in some particular field, in the same way that we have human specialists in various professions and have absolutely no concern with any abilities they may or may not have outside their areas of expertise. The potential dangers of AI lie in the degree of control that we may cede to them, and that’s not even anything new – we’ve already given up a tremendous amount of decision-making control to computer algorithms.

Secondly, the domain-specific skills of AI are fundamentally different from older ideas about machines like the abacus, or even the electronic calculator, performing circumscribed tasks. The fundamental difference arises from two factors: (a) the stored-program paradigm, and (b) the qualitative changes resulting from a massive increase in scale. The first factor gives us basic information-processing automata, and the second factor gives us the potential to have them perform tasks requiring intellectual skill. This is a fundamental breakthrough in the history of technology and the history of humanity.

Except that obviously the putatively intelligent behaviour must be judged on a sufficiently large sample so as to rule out pure chance (implicit in the conditions of the Turing test, for example) and must address problems of sufficient difficulty to demonstrate actual intellectual skill. Whereas failing to act intelligently in some completely trivial area is irrelevant to the AI’s competence on materially relevant tasks.

What’s admittedly valid, though, is the concern about query-type AIs sometimes returning incorrect answers. This is a concern not only with playtime chatbots, but even with IBM’s Watson DeepQA engine, which was explicitly designed to assess the confidence level of its responses. It remains an unsolved problem. It must be said that this is often also a problem with human intelligence.

The lookup table argument seems to me a red herring. Every possible branch any computer program could ever make, every possible decision any human could make in their entire life, could be encoded in a gigantic lookup table given only sufficient information about the initial conditions. The fact that there aren’t enough atoms in the universe to build such a table, or enough time remaining in the life of the universe to search it, doesn’t seem to deter the advocates of this argument.
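
A rough back-of-the-envelope illustration of just how fast that table blows up (my own figures, purely for orders of magnitude): suppose the table only has to cover conversational inputs of at most $n$ exchanges, each of at most $k$ words drawn from a vocabulary of $v$ words. Then it needs on the order of

$$
v^{nk} = 10^{4nk} \qquad (\text{taking } v = 10^4)
$$

entries, so even a modest $n = 10$, $k = 25$ already gives about $10^{1000}$ entries, against the commonly cited $\sim 10^{80}$ atoms in the observable universe.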

And yet, it’s basically all anybody cares about. From Minsky in the ’70s to today’s o3 buzz, AGI, human-equivalent AI, is the big one.

Those seem pretty arbitrary. Even so, high-performance computing has been massive in scale for decades, with consumer-level devices playing fast catchup. And yes, this has led to unprecedented productivity gains (and fun new ways of labor exploitation), but so has every advance ever since Ned Ludd smashed his first loom (and arguably way before). The qualitative novelty of AI, if there is any, is that it isn’t another specialized tool, but promises to be just as generally applicable as the human workforce: this would be a true phase change in the history of humanity. Everything else is just more of the same, and I’m sorry to say not terribly interesting to me.

If the task is ‘perform equally to an average human being’, failing to do so is indeed very relevant to it.

As it of course shouldn’t, because it only needs to be a logical possibility to show the logical implication ‘intelligent behavior → intelligence’ false. That it is conceivable means that intelligent behavior is insufficient for diagnosing a system with actual intelligence.

This touches on a foundational disagreement we have that’s quite pertinent to this discussion. I don’t really expect you to formulate a response as this is just highlighting the issue for the record. The core principle on which we will never agree, which to me is supported by a lifetime of experience in computer science, is that a sufficiently large quantitative increase in the scale of an organized system will result in a qualitative change in its fundamental properties.

There are many examples of this in technology and in nature. As I’ve said before, in principle the components in a supercomputer are essentially the same as those in an electronic calculator. Yet no one would imagine a calculator could successfully compete with the best and brightest humans to emerge as champion in a game of Jeopardy, or carry on intelligent conversations in multiple natural languages, or solve challenging problems in logic that would be beyond most humans. But modern supercomputers can do exactly those things. Hence also why I previously identified “stored-program computing” and “large scale” as the critical factors in the emergence of intelligent automata today, and in the computer revolution before that which transformed our society.

Another example is the emergence of new intelligent behaviours in LLMs before our very eyes, simply as a result of scale, as measured by token count, size of the neural net, or number of weight parameters (which in the latest incarnations number in the billions). In nature, if you emit a puff of hydrogen in outer space, it dissipates, as one would expect, and nothing happens. But if the “puff” is unimaginably massive, it begins to collapse under its own gravitational attraction and eventually ignites nuclear fusion and becomes a star. Big difference.

This is why I reject the philosophical argument that anything that we see as an emergent property of a large-scale system must have been present in its constituent components. In some sense that might be said to be true, but only in a sense that’s so abstract as to be completely meaningless. Where is the intrinsic capability in a logic gate to translate English into Russian, or solve a logic problem on an IQ test?

Hence the fallacy of the lookup table argument. It relies on the inability of the human mind to grasp extraordinarily large numbers. Yes, you can reduce any intelligent automaton to a humongous lookup table. You can also (in theory) reduce any human mental state, and hence human behaviour and consciousness itself, to a humongous lookup table. But this is a lookup table potentially bigger than the entire universe, and this is not what we conceive of when we think about a “lookup table”. It produces extraordinary results that we would not intuitively expect a lookup table to produce – precisely because of scale. To the argument that it’s different because it’s clearly deterministic, I say that if you believe in physics and the physicality of mental processes, and you don’t believe in magic, then so is everything that we humans do, too.

I don’t disagree with this in the slightest; the problem is that you can’t use it as a sort of ‘heap enough stuff together, and anything goes’ catch-all. With emergence, there are basically two options:

  1. Weak emergence. The lower-level facts uniquely specify any emergent properties. Meaning, if you specify the precise combination of atoms and molecules (or quarks, or quantum fields, or strings—there is always a cut-off scale below which it doesn’t really matter), then everything on the larger scale follows, and in principle, could be predicted by a sufficiently accurate theory (and perhaps enough computing power).
  2. Strong emergence. The lower-level facts fail to uniquely specify the emergent properties. There are genuine further facts at play that need to be specified in addition to the precise combination of atoms and molecules (and whatnot) in order to get the larger-scale phenomena right. Full knowledge of the base details and a perfectly accurate theory don’t suffice to predict the high-level features.¹

All of the typical examples of emergence—bird flight patterns, water’s fluidity, and so on—are examples of weak emergence. You can, in principle, derive the dynamics of macroscopic quantities of water from the known properties of the H₂O molecule. The same goes for the computational powers of supercomputers over calculators, and puffs of hydrogen, and so on. In that sense, on weak emergence, emergent properties are ‘present’ within the constituent components.

Strong emergence, on the other hand, is an entirely different beast altogether. If the principle that the lower-level facts fix all the facts can no longer be appealed to, then for instance magic becomes possible: it might be that arranging a chicken foot, a newt’s eye, a drawing of a pentagram and a few candles in just the right way, then speaking the right words, actually does conjure up a devastating disease within your rival’s livestock. Or, the arrangement of the planets and the stars at the moment of your birth might well determine your future fate. Water might fundamentally change its properties after having been exposed to minute quantities of active substances. And so on.

There are genuine new facts about the world that don’t inhere in the components alone; we can’t use our experience and knowledge of the components to predict what will happen when we combine them anymore. It might logically be the case that the world works that way: one could certainly write a simulation where such things happen. But whenever we successfully construct a new device, predict a new phenomenon, build a new structure, we rely on the notion that the components allow us to infer the properties of the whole. Giving this up can only be a move of desperation, and any commitment that forces one to do so needs to be closely examined (and ideally jettisoned).

So no, there is no fallacy to the lookup table argument. While novel phenomena can occur in large assemblages, only phenomena permitted by the lower-level details can do so. We rely on our ability to predict those phenomena that won’t occur every day, whenever we build something new and are confident in predicting that it won’t spontaneously combust or grow wings and fly away. Saying otherwise is just giving away the game: there can be no explanation of consciousness or intelligence if it is to be a strongly emergent phenomenon; it will be as inexplicable as astrology actually working. Just a thing that happens without a reason anybody could point to. It is solving a mystery by mere stipulation: I can’t explain it, so it just happens magically. Sure, every problem can be ‘solved’ that way, but all that has really been achieved is intellectual surrender.

Apart from everything else, this is a huge metaphysical leap, and totally unsubstantiated. While one might reduce all of human behavior to a lookup table (but even this only if human behavior is deterministic, which contrary to your bald assertion current physics tells us is probably not the case), that human consciousness also comes along with this requires behaviorist commitments nobody would take seriously these days (and probably not even in the heyday of Skinner et al.).


¹There is a subtlety here in that ‘the lower level fixes all higher-level details’ and ‘higher-level details can be predicted with the right theory from lower-level details’ may not coincide: theoretical predictability may be intrinsically limited. While I believe that such things indeed occur, for instance in phenomena related to undecidability, the distinction won’t matter much here.

Missed the edit window, so as an afterthought:

Even if strong emergence were real, it would never be reasonable to assert something occurs by strong emergence (if one has not observed it doing so), since by definition, one would lack any justification to do so—as the facts one has on hand fail to fix the emergent facts. So it would never be reasonable to assert that a lookup table produces intelligence even if it did, because the facts about the lookup table we have access to prior to building it—its components—simply don’t entail any facts about intelligence. Even if strong emergence were certain to be real, the most we could say about the lookup table is that we don’t know: it might produce intelligence, might grow wings and fly away, might spontaneously disassemble itself into component atoms, or might end reality as we know it. Substituting your preferred predisposition for this blank spot of predictability is just the ultimate god of the gaps.

I don’t quite get the need for absolutism on this @Half_Man_Half_Wit, and I think that’s the only thing standing between an agreement at this point. You seem to agree that the field of AI is advancing at a rapid pace and that it may displace many jobs at a quicker pace than previous computing advancements.

But you also want to claim that human-level intelligence is “all anybody cares about”, and you’re defining it as being able to produce all human cognitive abilities.

That just doesn’t follow at all. As I said upthread, an AI that could do every job that a human could, even better than us, but was unable to judge a dance competition, would fail your test of intelligence. And yet it would completely turn our world upside-down. We would “care about” that, and the vast majority of people would call it AGI.

Also, this leads to a second point. There are plenty of cognitive tasks that only a subset of humans are capable of. For example, I can’t draw to save my life. Of course, to some extent it’s a learnable skill, but I’ve realized from talking to skilled artists that they have an ability to mentally visualize that I simply don’t have.
Does that mean I don’t have GI? Does any human have GI?

Well, average or typical human cognitive abilities (so the dance contest thing would not be disqualifying for me; I certainly couldn’t judge one). And yes, that’s vague, but typically, we can pretty well tell when something fails that mark (but not necessarily when it is met). My problem is merely that I can think of no other yardstick to apply: all we have as a mark of intelligence is our own. I think the approach of defining a certain task, or set of tasks, to serve as an indicator of ‘general intelligence’ is always going to fail, simply because we don’t have a good read on which sort of task requires what sort of intelligence. So what else are we left with?

It’s a bit like finding out whether a picture was taken in France. Trying to figure out a list of conditions to tell whether it was is just not going to work. But there are obvious disqualifying factors: if it shows the Taj Mahal, for instance, it wasn’t taken in France, no matter whether anyone thought to include ‘not showing the Taj Mahal’ on the list of conditions for a photograph to be taken in France. But nevertheless, if you get right down to it, you can probably sort a pile of pictures into ‘taken in France’ and ‘not taken in France’, with decent accuracy (after all, that’s how GeoGuessr works).

I’m just saying that this is all we can reasonably expect in this domain: clearly excluding obvious cases, and a huge grey area where all judgments are provisional. After all, that’s why the go-to standard in science is falsifiability: because there is no clear standard for verification, but there is one for failing to meet the mark. This also takes care of your second point: not all pictures taken in France will show all the same things; but that doesn’t mean that the concept ‘taken in France’ is ill-defined: it’s a perfectly real country.

If one wanted to develop this concept further, one might look at the concept of Wittgensteinian ‘family resemblances’: there may be pictures taken in France that have no common markers, but nevertheless are linked by a chain of family resemblances to one another. But no chain of family resemblances links any human-level intelligence with something failing to be able to count the ‘r’s in a certain word (or any of the other trivial tests current AIs are still failing).

Uppity scientific progress threatens humans’ central position in universe, unique nature – for the first time ever!

That’s in response to the earlier posts. To the later ones: thank you for bringing clarity to strong emergence, especially “it would never be reasonable to assert something occurs by strong emergence.”

I haven’t seen a grownup, sensible discussion on AI anywhere. Just juvenile ambivalence or defensive anger. I’m pleased (into shock) that the people here are able to follow farce to its graveyard and givens to fertile ground.

Your distinction between weak and strong emergence stretches the meaning of the words “qualitative change”, by which I mean new properties that did not exist at all on smaller scales.

My argument here isn’t a philosophical one, but rather, I’m making my case on pragmatic grounds, namely that I don’t see the deductive value of conceptual ideas like an arbitrarily large lookup table that can be shown to be physically impossible. Let me be clear that I don’t have a problem with thought experiments. One just has to recognize the inherent weakness of an argument that goes, “if I built this thing that’s impossible to build, it would prove you wrong!”. To which I merely reply, “if you built this thing that’s impossible to build, it would counterintuitively not just mimic intelligence, it would be intelligent, thus proving me right!”.

On exactly the same grounds, the big problem with this argument about emergence is in the words “in principle, could be predicted by a sufficiently accurate theory”. The possibility of such a theory is necessarily predicated on a perfect level of prior knowledge, which may never be possible and indeed such knowledge may not exist at all if the system we’re studying is something completely new and hitherto unexplored, like the spontaneous evolution of the artificial neural nets in LLMs. To quote the AI researcher Sam Bowman:

“If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” says AI scientist Sam Bowman. “And we just have no idea what any of it means.”

… Bowman says that because systems like this essentially teach themselves, it’s difficult to explain precisely how they work or what they’ll do. Which can lead to unpredictable and even risky scenarios as these programs become more ubiquitous.

If one acknowledges “weak emergence” to be novel unexpected properties (they have to be “unexpected” or it wouldn’t be emergence at all) and “strong emergence” as unexpected properties that we could never have predicted, then it seems to me that in practice the distinction between weak and strong emergence is essentially meaningless. One would have to believe that some isolated intelligence experimenting with several grams of hydrogen gas could immediately infer that 10³⁰ kg of it could ignite nuclear fusion, or that studying the properties of the water molecule alone would let some alien on a dry airless planet predict the effects of a Pacific tsunami.

The problem is that in general there simply isn’t enough information that can be extracted from micro-behaviours to predict large-scale macro-behaviours. Digging ever deeper, we eventually reach the quantum level where all predictions are purely probabilistic. Sure, with enough computer power we can create simulations, but those simulations will always have assumptions and uncertainties.

Nonsense. All you have to do to be consistent with a purely physicalist, deterministic model of the mind is to regard consciousness as an emergent property of a sufficiently developed scale of cognition. Moreover, in your terminology, it would be an example of strong emergence. Because we have no idea how it happens, but it happens, and we’re pretty sure it’s not due to magic.

Of course they do! The whole point of introducing the lookup table paradigm is to show how any response to a given input by an intelligent entity, judged to be evidence of intelligence, can be simulated by a table lookup, and if the table can be arbitrarily large, then any deterministic behaviour can be implemented that way. Conversely, this does not open the door to possibilities of any sort of magic.

Not at all. The property of ‘fluidity’ does not exist in any sense at the level of an individual H₂O molecule; in fact, it would be a conceptual error to try and apply it there. But a large enough quantity of such molecules can (temperature, pressure etc. permitting) well be fluid. Yet the fact that this is so is perfectly derivable from the properties of single molecules in aggregate.

Which is, of course, a philosophical argument…

That’s not what’s asserted. The logical possibility (its consistency) of the lookup table is what proves you wrong; nobody needs to build anything for that. You’re trying to derive intelligence from intelligent behavior; showing the possibility of intelligent behavior without intelligence then demonstrates that to be a faulty implication. Material possibility has no bearing on logical implications. ‘If I hadn’t had so much pizza yesterday, I wouldn’t feel so queasy today’ may be perfectly true, even if it’s absolutely impossible to revert the universe to a prior state in time, not eat that much pizza, and demonstrate its truth.

I.e., magic would happen.

Again, all that’s needed there is just that a sufficiently accurate simulation could be possible.

Sure, which is why the distinction is usually made along meaningful lines, as given in my prior post.

If they are sufficiently intelligent and have access to the full theories and data governing these cases, then yes, physics tells you that this is exactly what they could do.
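
For what it’s worth, the hydrogen example isn’t even hypothetical: the collapse threshold follows from ordinary gravity plus thermodynamics. Schematically (the standard Jeans criterion, quoted up to order-one factors; $T$ is the cloud’s temperature, $\rho$ its density, $\mu m_H$ the mean particle mass):

$$
M_J \sim \left( \frac{k_B T}{G \mu m_H} \right)^{3/2} \rho^{-1/2},
$$

and any cloud more massive than $M_J$ collapses under its own gravity; whether fusion eventually ignites then follows from nuclear physics at the resulting central temperature.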

Not true. Quantum mechanical evolution laws are perfectly deterministic. It’s only upon measurement that probabilities come in. Many perfectly determinate properties of the macroscopic world can be inferred from the quantum description with perfect certainty, such as for instance that the extension and solidity of matter is due to the Pauli exclusion principle.

If it were the case that it ‘just happens’ without being in principle derivable from the components making up a conscious entity, then it would exactly be magic: there would be literally no rational reason that consciousness emerges when it does, yet it does.

But this wasn’t even the intended argument. A lookup table might replicate behaviors, but the claim that behavior implies consciousness is an even larger leap than the claim that it implies intelligence: because while intelligence can at least be analyzed in operational terms, no such analysis is forthcoming for consciousness. Think of a coma patient: are they conscious, or aren’t they? The lookup table for them would be extremely simple, and yet reproduce their behavior exactly. But locked-in syndrome exists, so equivalence of behavior doesn’t entail equivalence of consciousness.

The reason the lookup table is appealed to is that any apparently intelligent behavior can be demonstrated, without thereby giving any reason to infer actual intelligence (at least in the ‘with logical certainty’-sense). Strong emergence claims that intelligence appears without being entailed by the fundamental properties of the lookup table—thus, from our knowledge of the lookup table, it will never be reasonable to infer the presence of intelligence: the properties of the lookup table simply don’t support that inference. Otherwise, it wouldn’t be strong emergence!

We know exactly what the lookup table does: it compares the ‘shape’, the syntactical properties, of its input to stored values in its database. That’s it: there is no reasoning, no if-then-else, no computation on the properties of the input, no thought going on; it’s an entirely rote process. The semantics of what is being said, the meaning of words, what they refer to, just never play a role in the entire conversation.

Consider a much simpler lookup table, for the addition of two numbers up to some large sum N (which will have to be a sub-part of the humongous table somewhere). Its entries are lines of the form ‘1 + 1 → 2’, ‘1 + 2 → 3’, and so on. When it is fed the input ‘27 + 36’, it checks each of its entries in sequence until it finds one that matches, and returns ‘63’. A Turing test for addition would come out with perfect scores, provided our N is chosen large enough.

Yet, it never does any addition at all. The symbols it is fed are just an arbitrary string to it; it doesn’t perform any logical operations on them, save matching them to database entries. We could imagine this lookup table realized physically as a humongous device having shaped input slots, trying each of them in order, and if the shape fits the input (say, perhaps realized as a punch card or what have you, or a bas-relief tablet fitting into a stone depression), then the ‘answer’ is mechanically pushed up. No matter how big we make this thing, it will never do any addition at all. If it did (and I don’t even know what that would mean), then it would be exactly equivalent to that kind of magic where the right weed harvested at midnight on a blood moon makes somebody fall in love with you: just an entirely out of the blue effect for—literally—absolutely no rational reason at all.
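
To make that concrete, here’s a minimal sketch in Python (illustrative only; the names, the dict-based realization, and the cut-off N are my own). All the arithmetic happens when the table is built, by something that already knows how to add; at query time the device does nothing but match strings:

```python
# Minimal sketch of the 'addition lookup table' described above (illustrative only).
# The table is populated ahead of time by something that *can* add (Python's +);
# answering a query afterwards is pure string matching, with no arithmetic at all.

N = 100  # the table covers operands from 0 up to N

TABLE = {f"{a} + {b}": str(a + b) for a in range(N + 1) for b in range(N + 1)}

def lookup_adder(query: str) -> str:
    """Scan the stored entries until one matches the input string exactly.
    The query is treated as an opaque shape; its digits are never interpreted."""
    for key, answer in TABLE.items():
        if key == query:
            return answer
    return "no entry"

print(lookup_adder("27 + 36"))  # -> '63', yet nothing here ever added anything
```

Within its range it passes the ‘Turing test for addition’ perfectly, which is precisely the point.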

But that’s what you’re suggesting: make the lookup table big enough, and poof, some fairy waves its wand, and presto, intelligence! It’s not just that it opens the door to some sort of magic: it literally is magic.

In the interest of (relative) brevity, I’ll just focus on the most interesting questions, and if some aspects seem repetitious, it’s only to emphasize my core beliefs.

ETA: Well, I tried, but my attempt at brevity failed! :wink:

What do you mean “it never does any addition at all”? I give it any two numbers to add, and it gives me the correct answer. What more do you want?

The kind of argument you advance here seems obsessed with what’s going on “under the covers” and allergic to behaviorist interpretations, when behavior is ultimately all that matters, not implementation details. If it adds numbers, it’s an adder. Period. Why should I care how it does it?

Even if you wanted to take some unfathomable purist approach and show that this is just a table lookup machine and has no idea by itself how to even add 1+1, I would argue that clearly something at some point must have known how to do addition in order to populate the table. The table and the lookup machine now encapsulate that knowledge and constitute a functionally equivalent system.

Your argument is just Searle’s Chinese Room argument in a different form. And it bears directly on my assertion that a hypothetical humongous lookup table that encapsulates all possible responses of an intelligent entity to all possible inputs is functionally equivalent to its progenitor.

Indeed it does, and the absence of a “rational reason”, or predictable mechanism, if you will, is precisely why I described it as an example of strong emergence (but not magic).

The question you raise is: if you replicated the complete range of human behaviour in a humongous lookup table (HLT) the size of the universe, encapsulating all of a person’s knowledge, emotions, and sensory memories, would such a system be conscious? I think questions like that underscore the basically ill-defined and subjective nature of the whole concept. Is a mouse conscious? Is a fly that tries to evade a fly-swatter conscious? Can an AI ever be conscious? The answer I will give to the question regarding this hypothetical HLT the size of the universe is that whether or not it could be argued that it truly possessed consciousness, it would definitely behave as if it did.

I don’t know what you think of David Chalmers (I myself disagree with much of what he says in this paper [PDF] on emergent phenomena), but clearly the idea that consciousness is evidence of strong emergence is not entirely my own:

I think there is exactly one clear case of a strongly emergent phenomenon, and that is the phenomenon of consciousness. We can say that a system is conscious when there is something it is like to be that system; that is, when there is something it feels like from the system’s own perspective. It is a key fact about nature that it contains conscious systems; I am one such. And there is reason to believe that the facts about consciousness are not deducible from any number of physical facts.

I have argued this position at length elsewhere (Chalmers 1996; 2002) and will not repeat the case here. But I will mention two well-known avenues of support. First, it seems that a colourblind scientist given complete physical knowledge about brains could nevertheless not deduce what it is like to have a conscious experience of red. Secondly, it seems logically coherent in principle that there could be a world physically identical to this one, but lacking consciousness entirely, or containing conscious experiences different from our own. If these claims are correct, it appears to follow that facts about consciousness are not deducible from physical facts alone.

This is getting off topic and IANAP, but “deterministic” seems like a strange word to use given that the state of an isolated quantum particle is always described by a wave function, which is fundamentally a probability distribution. The effect of measurement is to collapse the wave function and fix any measured property, such as position or momentum, in classical terms within the probabilistic bounds of the original wave function. Even the notion of local “hidden variables”, implying some unknown deterministic mechanism inside a quantum system, has been ruled out by demonstrated violations of Bell’s Theorem. One can hardly imagine a more perfect example of true randomness.

Determinism in the quantum world seems to come not from isolated quantum states, but from certain relationships between them. As you say, the Pauli exclusion principle is one example. Another one is the deterministic nature of quantum entanglement.

To actually implement the calculation, of course. Suppose you were to add 27 and 36: chances are, you’d do something like adding the 7 to the 36, yielding 43, then adding the remaining 20, to get 63. If asked, you could detail how you did it. That’s thinking; that’s intelligence. It doesn’t matter what exact algorithm you followed, just that you followed some algorithm at all. Performing a sequence of logical steps to reach the desired outcome.

Now, the lookup table could never tell you how it came by its answers—because obviously, it simply didn’t. Even if we added a slot that output something like the above description when it is asked ‘How did you arrive by your answer?’, that would just be one more uninterpreted string of symbols; it didn’t actually do any of that, just because it could be considered to ‘say’ so.
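
For contrast, here’s a sketch of the kind of thing described above (again purely illustrative, and digit-by-digit with carries rather than the exact mental shortcut in the quote): an adder that actually performs a sequence of logical steps and can report them afterwards, which is exactly what the lookup table cannot do:

```python
# Illustrative contrast: an adder that carries out schoolbook digit-by-digit
# addition with carries, and keeps a trace of the steps it actually performed.

def explaining_adder(a: int, b: int) -> tuple[int, list[str]]:
    steps = []
    result, carry, place = 0, 0, 1
    while a > 0 or b > 0 or carry:
        da, db = a % 10, b % 10          # current digits of each operand
        s = da + db + carry
        steps.append(f"{da} + {db} + carry {carry} = {s}: "
                     f"write {s % 10}, carry {s // 10}")
        result += (s % 10) * place
        carry, place = s // 10, place * 10
        a, b = a // 10, b // 10
    return result, steps

total, trace = explaining_adder(27, 36)
print(total)        # 63
for line in trace:  # unlike the lookup table, it can say how it got there
    print(line)
```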

Yes. There’s a reason that behaviorism has been dead for more than 60 years—it’s simply an untenable philosophical position. Just because you can get a machine to say ‘ow’ when you hit it, doesn’t mean it feels pain; just because you can make it say, ‘hello, nice to see you’, doesn’t mean it thinks it’s nice to see you; just because you can make it give the correct rote solution for an addition problem, doesn’t mean it adds.

Consider the behavior of the Sphex wasp. It will paralyze its prey, drop it off near its nest, carry out an inspection of the nest, then if everything’s in order, drag the prey in. That’s pretty intelligent behavior, no? But then you can do the following: while it is inspecting the nest, drag the prey a little further away. The wasp will then go, fetch the prey, drop it off near the nest, carry out its inspection again, then haul the prey in. If you move the prey again, the cycle continues. Thus, what seems like intelligent behavior is really just a rote instinct, instilled by evolution, that the wasp replays again and again.

Or consider, for instance, if one of the rows in the table were changed, into, say ‘27 + 36 → I love you’. The machine, upon being asked the relevant question, would spit out ‘I love you’ without hesitation. No actual, functioning adder would behave like that! But then consider that all your testing simply might not yet have reached such a point, being necessarily finite: then you simply wouldn’t be able to conclude whether the device is an adder.

Yes, exactly! Something must have been able to actually add to enter these tables into the machine, because the machine doesn’t actually know how to add. That’s my point! In fact, it’s the classical reaction to the lookup table argument: an intelligence test on the device simply tests the intelligence of whomever created it. But even so (and note that the test even here is still vulnerable to issues of the ‘just not having reached the right line yet’-type above), then that just means that no intelligence inheres in the actual system whose behavior you are observing. So mere intelligent behavior does not imply that the system behaving that way actually is intelligent!

No, it’s certainly not functionally equivalent. Functionalism isn’t behaviorism; it does distinguish between different implementations. Its aim is to allow for multiple realizability, i.e. for substrate independence, but it doesn’t abstract away from how any given function is implemented. A system implementing addition according to algorithm A and one implementing it according to algorithm B are functionally different; two systems implementing algorithm A within different computational substrates—say, vacuum tubes versus silicon chips—are functionally equivalent. Functionalism allows for internal mental states making a genuine difference, behaviorism does not (and hence, fails).

Furthermore, whatever the machine has is certainly not knowledge—else, any mark left by a physical interaction, such as the scuff marks on your car after brushing a wall, would constitute knowledge. Knowledge, to a first approximation, refers to beliefs that are come by in a reasonable manner and happen to be true (justified true beliefs). There are no beliefs in the machine.

No, it’s not. Searle allowed for arbitrary computation to be performed on the input symbols; he gave the computationalist their choice, and argued (unsuccessfully, in my view) that there still can’t be any genuine understanding present. Here, no computation at all is performed on the input vehicles; they’re just matched. Nothing is done to the input, there is no data processing, nothing is being stored and recalled, no variables are being set or read—nothing at all. There’s no algorithm, hence there’s no intelligence.

So, you’re saying ‘there’s something unforeseeable that happens when the right things are arranged in the right way, that isn’t in any way explained by the properties of those things, and could never rationally follow from them; but that doesn’t mean it’s magic’. Well, that’s just like saying ‘water in large quantities at the right temperature has fluid qualities that make it splash around, soak textiles, and coat surfaces in a slippery sheen; but that doesn’t mean it’s wet’.

If new qualities can arise without any reason at all, then who’s to say that the right arrangement of chicken bones, candles and incantations does not produce a storm the next day? It’s the same principle.

No. The lookup table will only include behaviors that, in humans, might be caused by knowledge, emotions, and memories; whether there would be any of those things present is the question under discussion (and the answer is of course that, barring magic, there wouldn’t be).

Also, you’re conflating the subject under discussion: the question wasn’t whether the right behavior implies consciousness, but whether it implies intelligence.

Yes, certainly. But note that he limits it to the explanation of consciousness, which is to him the only such example—not intelligence. And furthermore, Chalmers fully accepts that, in order for there to be strong emergence, the world can’t be fully physical in nature. There’s no middle ground here: you can’t have both. Indeed, it has been argued that full substance dualism really is the only way to go here. In any case, the emergent facts are simply brute facts: additional properties that have to be stipulated over and above the basic physical facts, which don’t suffice to fix them. Just like, for instance, a further fact that fixes that whenever chicken bones are arranged correctly, and the right words are spoken, a storm rises up.

The thing you can’t do, on Chalmers’ or anyone’s construal, is just to appeal to strong emergence whenever you find that your basic facts fall short of what you’re trying to argue: that’s just admitting defeat. So if you’re saying that the facts about lookup tables are not sufficient to infer any intelligence, but intelligence happens anyway, then you’re saying that while you have no argument (couldn’t possibly be having an argument, because that requires exactly the sort of entailment strong emergence denies exists), you’re still right, because who or whatever fixed the additional facts about the world just happened to fix them in the way to make you right—i.e., magic, front to back.

A wave function isn’t really a probability distribution. Probabilities for certain observables having certain values can be derived from the wave function by means of a mathematical operation given by Born’s rule, but not all observables admit a simultaneous probability distribution in this way (because some can’t be measured together), yet the wave function can be used to derive probabilities for all sets of co-measurable observables. These probabilities tell you what to expect upon observation; the evolution of the wave function under the unitary dynamics, however, is always perfectly deterministic.
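
For concreteness, these are just the standard textbook statements (time-independent Hamiltonian assumed, so the closed-form solution applies): the state evolves deterministically under the Schrödinger equation, and probabilities enter only via the Born rule when a measurement of an observable with eigenstate $|a\rangle$ is made:

$$
i\hbar\,\partial_t\,|\psi(t)\rangle = H\,|\psi(t)\rangle
\;\;\Longrightarrow\;\;
|\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle,
\qquad
p(a) = |\langle a|\psi(t)\rangle|^2 .
$$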

Furthermore, the wave function will always be an eigenstate for some set of observables, to which it assigns simultaneous, definite values. Also, there are certain quantities for which no superposition is possible due to superselection rules—such as charges. So it’s at least heavily misleading to just wave at quantum mechanics and say, well, it’s all probabilistic there anyway.

I don’t really know what you mean by that. That measurements on entangled particles always agree/disagree? That’s just a property of very special, so-called ‘maximally’ entangled states. Most entangled states will have correlated, but not necessarily perfectly correlated outcomes.