The next page in the book of AI evolution is here, powered by GPT 3.5, and I am very, nay, extremely impressed

We can list all kinds of differences between brains and computers. No one disputes that. The question is, are they material to the question of machine intelligence?

You keep making assertions of fact that are just opinion, like “Computers are just adders, and adders can’t be intelligent”, or asserting that consciousness is impossible without some aspect of the brain that doesn’t exist in computers, without defining exactly why that would be the case.

Clearly, the AIs function differently than human brains. For example, it takes a huge amount of energy and computation cycles to do something a child can do on the energy of a cookie. But so what? How do we know that any of the differences preclude consciousness?

Perhaps, if you knew all the component parts of the brain, how each functions, and could draw an exact schematic, you could do a computational simulation of the entire brain. That simulation would function exactly like a flight simulator: it would present numbers proportional to all brain functions, which you could interpret as thinking. But just as the flight simulator is not flying, your simulator would be simulating, not thinking. It would, however, be performing some kind of machine thinking, to an arbitrary degree of fidelity, that would be interesting to study.

“The map is not the territory” (Alfred Korzybski); a simulation is not its subject.

I believe these machines are clarifying the definition of intelligence. The behavior of the machines indicates that intelligence - the ability to focus learned information on the solution to a problem - is tangible and separate from ill-defined concepts like awareness and consciousness.

Computers are just adders and if I said they cannot be intelligent then you have proven me to be wrong. But they are not aware or conscious. Those are different properties.

To be clear, my statements regarding adders are not opinion: computer cores are, in fact, adders.
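To make the “just adders” point concrete, here is a toy Python sketch (the function name is mine, purely illustrative) of how multiplication reduces to the adder via shift-and-add, which is essentially what a simple ALU does:

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative integers using only addition and bit
    shifts, the way a simple ALU multiplier reduces to repeated use of
    its adder."""
    product = 0
    while b:
        if b & 1:        # lowest bit of b set: add the shifted a
            product += a
        a += a           # doubling a is itself just an addition
        b >>= 1          # move to the next bit of b
    return product

print(shift_add_multiply(12, 34))  # 408
```

Whether chaining adders like this can ever amount to intelligence is, of course, exactly the question under dispute.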

As to the analog vs digital issue, yes, it is relevant. The two are dramatically different in what they do.

It’s funny you mention that, as just the other day I was thinking about it in the shower and concluded that it’s bullshit.

Let’s say you’re a traffic engineer. You’re given a normal paper map and want to make conclusions about the safety of different roads, etc. You can come up with some ideas, but the paper map is not a very good substitute for the real thing.

You upgrade to a topo map, which allows you to make better conclusions about visibility over hills, whether braking distance is increased on a slope, etc. Still not good enough.

You upgrade to a computer map, which has houses and other things that give you even better fidelity, but the map is still not the territory.

You add more and more things: day/night cycles, cars driving around with their performance and crashworthiness simulated, people inside these cars with their brains, and so on. You want the ability to inspect every single atom in your simulation, since it could affect the behavior of the people, the mechanical characteristics of the cars, and so on. Light is simulated perfectly, and eventually you realize that you have to simulate the entire universe, since the distribution of cosmic rays may influence the rate at which they cause vehicle computer failures.

Eventually, your “map” is perfect. Every experiment you run on it gives the same results as the real thing. The people inside it act like real people, and tell you as much. A sufficiently accurate map is the territory.

The Elvis in the wax museum is real (or he could be)

(A friend of mine is the director of a museum in NM. He is also a performer and sometimes gets into costume and poses as a museum exhibit. Visitors go nuts trying to figure out if he is real. Perhaps that’s a perfect simulation.)

Well, it’s those arguing for computationalism that insist that mind is separable from body and can be transferred between different vessels, that you, too can attain eternal life, that you can speak to the dead, that the world was created for a purpose by powerful beings beyond it, that the end times are near, and that, if you don’t do what you can to aid the coming of our AI overlords, the robot devil will torture you for eternity (the true believer should probably not click that last link).

Additionally, the arguments presented so far for understanding in ChatGPT and its ilk are just the same human tendency to see intent in ill-understood phenomena that brought us gods of thunder, rain, wind, and so on—to look at a complex creation or behavior and conclude that there must be design behind it.

Sure, these are all arguments that purport to proceed from mild generalizations of known science. But so, in their day, were the arguments for the existence of God by Descartes, Leibniz, or Newton. It’s simply a modern way to find something that fulfills all the needs fulfilled by religion, while retaining the spirit of the times and rejecting all that ‘magical’ stuff about gods and fairies and the like.

But that doesn’t automatically make it reasonable, or a foregone conclusion. Many phenomena in the world are strongly believed to not be computable, despite confident assertions to the contrary:

There are many formally undecidable phenomena in quantum mechanics, such as whether there is an energy gap between the ground and first excited states of certain quantum systems, determining exact phase diagrams, whether systems thermalize, whether a given measurement outcome ever occurs, and more. Indeed, if quantum randomness is algorithmic—and there are theorems to the effect that it must be, if we don’t want to admit faster than light information transfer—then the general evolution of a quantum system is uncomputable. And the phenomenon isn’t limited to quantum mechanics: there are problems in classical mechanics that, too, are uncomputable, including for instance whether a system’s state lies in the attractor of some chaotic dynamics (as that attractor is a fractal set, and the problem of membership of a fractal set is undecidable).
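As a loose illustration of the fractal-membership point (my own toy example, using the Mandelbrot set as a stand-in for the chaotic attractor, not the actual construction from those results): the standard escape-time test can certify that a point is *outside* the set, but no finite iteration budget can certify membership; the question is only semi-decidable.

```python
def escapes(c, max_iter=1000):
    """Escape-time test for the Mandelbrot set: iterate z -> z**2 + c.

    If |z| ever exceeds 2, the point provably escapes (it is NOT in the
    set). If we exhaust max_iter without escaping, we can only report
    'no escape seen so far' -- raising the cutoff never turns this into
    a proof of membership."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True   # definitely outside the set
    return False          # undetermined: not a proof of membership

print(escapes(1 + 0j))    # escapes quickly: True
print(escapes(0j))        # stays bounded forever: False
```

The asymmetry in the return values is the whole point: one answer is a certificate, the other is just a shrug.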

So the assertion that physics itself is almost certainly computable seems somewhat premature. But of course, this doesn’t really tell us anything about whether the relevant physics for the operation of the brain is. At this point, I think, the honest position is that we simply don’t know—there are certainly proposals for a non-computational element in the mind’s operation (mine among them), and again, I find the fact that the most straightforward realization of a general problem solving agent (the AIXI-agent referenced above) turns out to be uncomputable rather suggestive.

But arguments that the ‘soggy chemistry’ of the human body is definitely computable are just as premature. The undecidability may be in the formal, rather than the physical properties (as with AIXI); but even if not, there are proposals that quantum phenomena are relevant to cognition, just as they seem to be to photosynthesis and avian magnetoreception, among other biological phenomena. And ultimately, of course, everything is quantum—the fact that your coffee mug doesn’t fall through your desk is explained, in the last consequence, only by quantum effects.

I scanned through all your links. FWIW, the first one pertains to whole brain emulation, which is certainly a very speculative idea, but one which has reputable supporters and actual ongoing research, albeit at a relatively primitive level. Your subsequent links quite frankly look like an attempt to ridicule the concept by association – and also by association, to ridicule the whole idea of computationalism in human cognition – using hyperbolically labeled references to increasingly fantastical and ultimately silly ideas, culminating with the “robot devil” which is nothing more than a silly post on a discussion board. A discussion board which, incidentally, I have only a passing acquaintance with, but which seems to me to be inhabited by a disproportionate number of pretentious pseudo-intellectual twits, like one that we recently booted from this board.

I know, right?! But if you follow the references of the Wikipedia articles (or do some light googling), you’ll see that they’re each relatively widely publicized ideas (for a comparatively niche concern), with attention in both mainstream media and the scholarly literature. The AI devil may even have had a hand in bringing together Elon Musk and Grimes—I mean, talk about nefarious. (Again, click link at your own discretion, not just because of the possibility of dooming your immortal computational soul, but also because of the celebrity fluff.)

I love seeing stuff like this about Musk! :laughing: He may be physically on this planet but his brain is already orbiting Pluto!

I must say, the last two links in your prior post are fascinating, particularly the Matthew Fisher paper about the potential role of quantum computing in cognition. If it turns out to be true, it’s certainly revolutionary, and I have no quarrel with it, particularly as proponents of the computational theory of mind readily concede that, while CTM is an essential cornerstone of the theory of mind, it’s far from a complete explanation.

I would note, however (though I don’t claim to understand this stuff), that more recent papers seem to question one of Fisher’s basic premises, but they do propose an alternate mechanism for preserving qubit coherence.

I have the Fisher paper and will slog through it to the best of my limited understanding.


ARE they? It seems like they are arguing that the mind is an emergent property of physical processes, and that you can replicate a mind by replicating those physical processes.

But the mind cannot be separated from a physical process since it exists as the emergent property of a physical process.

Why someone would choose to upload their mind is a mystery to me, since I wouldn’t benefit from a new copy of my consciousness existing after my death - this consciousness would still end.

Again, the link doesn’t say what you claim it says.

Despite the headline, nobody is making that claim?

Now you’re just being silly. The fact that I do not believe that meat is magic does not mean I subscribe to the Simulation Hypothesis, any more than your apparent belief in the magical properties of gray matter implies that you sacrifice hummingbirds to Quetzalcoatl to ensure that the sun rises tomorrow.

That’s got nothing to do with the end times?

Oh no, a thought experiment! Now that I’ve read it I have no choice but to assume that people take it literally and are therefore being silly!

Hey, wise guy, riddle me this. If Musk and Grimes take the concept of the AI Devil so seriously, how come they met (according to your own link) after joking about it on Twitter?

It’s almost like they DON’T actually believe in it as a serious idea…

I mean, yes?

Uh, how so?

I’ve nowhere claimed that you do, but a substantial fraction of people concerned with the other issues do.

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

All arguments about AGI involve thought experiments, because AGI doesn’t exist yet.

That wasn’t to indicate that they take it seriously, but to show that it’s spread beyond being a ‘silly post on a discussion board’. Again, it’s been featured regularly on mainstream news outlets such as BBC.com, Slate, or Business Insider, and made its way into scholarly papers. There even was a recent thread right here, and it’s been brought up in the replies to two recent Straight Dope columns.

I guess I really don’t understand your point, then. There are some real whackjobs in the “Meat is Magic” club too, but you don’t see me saying that just because you and Osama Bin Laden agree that a computer cannot replicate a human consciousness, you must also believe that America should be destroyed.

The point is that despite your allegations, computationalism has lots of the trappings normally associated with religion, while contrary to your repeated appeals to ‘magic’, non-computable phenomena are a perfectly simple part of our best current theories of physics.

I’ll have to read through these in more detail later, but they largely appear to me to be about questions concerning these phenomena, not the phenomena themselves.

Computation itself is filled with undecidable problems, the most famous one being the halting problem. But the existence of the halting problem doesn’t mean that Turing machines don’t work. It doesn’t mean you can’t run an arbitrary program and just sit around waiting to see what happens. It just means that we can’t write a program to decide if a program halts or not.
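For anyone who hasn’t seen it, the classic diagonal argument behind the halting problem fits in a few lines of Python (a sketch of my own; the impossible oracle is left as a stub, since by construction it cannot be filled in):

```python
def halts(func, arg):
    """Hypothetical oracle: returns True iff func(arg) halts.
    No total, correct implementation of this can exist."""
    raise NotImplementedError("no such oracle is possible")

def paradox(f):
    """Do the opposite of whatever the oracle predicts about f(f)."""
    if halts(f, f):
        while True:      # oracle said we halt, so loop forever
            pass
    # oracle said we loop forever, so halt immediately

# paradox(paradox) halts iff halts(paradox, paradox) says it doesn't.
# Either answer contradicts itself, so halts() cannot exist.
```

Which is exactly the point made above: the machines run fine; it’s only the *question about* them that has no general algorithmic answer.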

This one in particular seems related to my comment. So what if that’s an undecidable problem? It doesn’t imply that we can’t actually run the physics. It just means there are questions about the outcome that we can’t get answers to.

That sounds easy to test. Just get an NMR machine tuned for phosphorus. We already know that the human brain can’t be using hydrogen spin, since people’s brains don’t get scrambled when they undergo an NMR. See if the same happens with other nuclei.

Of course. The atoms themselves would not be stable without quantum mechanics. But these things can be inserted by hand. Chemistry doesn’t depend on why the electron doesn’t spiral into the nucleus, only that it doesn’t. And while it gets more complicated as the chemistry gets more advanced, including some oddities like bond resonance, the fact that it’s all happening in a narrow temperature range and at non-relativistic speeds undoubtedly simplifies things.

Questions about their properties, for the most part. A system either has or doesn’t have an energy gap, or a phase transition, but whether that is the case isn’t computably decidable. For a system’s evolution, I agree that the existence of undecidable questions is unproblematic—as is for instance the question of whether a (sufficiently complex) dynamical system ever visits a given configuration, which is just the halting problem.

But I think the most clear-cut case is the simple randomness of quantum mechanics. Any given uncomputable function can be represented by a finite algorithm and an infinite, algorithmically random string; so, the normal quantum evolution plus the randomness of measurement outcomes will typically be an uncomputable function. For an argument along similar lines, see this paper by Klaas Landsman.

Well, it has been argued that it implies an absolute boundary to the predictability of a chaotic physical system, going beyond the usual sensitivity to initial conditions. But I don’t know enough about this point to press it.

But there’s no principled argument that conscious experience couldn’t depend on the non-computable properties of quantum mechanics. Whether there is experience associated with a certain system may be just as uncomputable as whether it has an energy gap. And, I mean, not to sound like a broken record, but I do have a published theory of conscious experience on which the process of making up your mind regularly encounters undecidable questions. Not that I’d bet the farm on it being correct, but it seems to me that at least there’s no compelling reason to exclude the possibility from the outset.

But still: does it matter? We don’t know if there actually is randomness in quantum mechanics. It may simply be highly scrambled. The most “random” thing in the universe is likely the Hawking radiation from a black hole, and yet the work so far suggests that it is not random, just highly scrambled. Perhaps scrambled so comprehensively that no realizable computer could ever replicate it.

But it probably doesn’t matter. Random vs. pseudo-random has never mattered in physical simulations so far (except when the pseudo-random generator was bad). I think the onus is on those claiming that the difference does matter.
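A quick illustration of the random vs. pseudo-random point (my own toy example, not anyone’s actual simulation): a Monte Carlo estimate of pi comes out essentially the same for any decent PRNG seed, just as it would with a “truly random” source.

```python
import random

def estimate_pi(n, seed):
    """Monte Carlo estimate of pi: sample points in the unit square and
    count the fraction landing inside the quarter circle. A deterministic
    PRNG stands in for 'real' randomness, and the physics doesn't care."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

for seed in (1, 2, 3):
    print(estimate_pi(100_000, seed))   # all land close to 3.14
```

Different seeds give different digits in the noise, but the physical answer is the same, which is the sense in which the random/pseudo-random distinction hasn’t mattered in practice.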

Could it? Sure. I just find it extremely improbable, given what we know of the brain. And if you’re talking specifically about the subjective experience, that’s not something that any experiment can ever determine. I “know” I have it, but I have no idea if others do. So it doesn’t matter in that sense; humans and computers are both in the same position.

Incidentally, it appears that in-vivo NMR imaging of human brain tissue has already been performed with phosphorus:

It neglects to say whether the subjects came out brain damaged, or if they were incapable of thought while in the machine, but I feel like they’d have mentioned that.

We know that if there isn’t, it’s in conflict with special relativity, so if we believe our best current theories of physics, then yes, it absolutely does matter. Sure: those theories could be false. But just throwing them out because of a prejudice towards computability seems premature.

You’re right that we don’t have proof that consciousness doesn’t rely on these quantum processes, but we also have no reason to think that they DO.

But that’s a far cry from the only alternative to computationalism being ‘magic meat’, isn’t it?