Let’s try to avoid Chinese rooms and solipsism, but why should anyone believe it does? Where should the burden of proof lie?
For other humans, I assume that since they are built like me and behave like me, they likely experience like me. Ultimately unprovable, but reasonable. Not accepting that assumption is a fool’s path.
If the pattern that creates the predictive output is completely other, then that assumption is not so reasonable. The output is not the interior experience, nor evidence of it.
I’d say that we should carefully test PaLM-E and what it can accomplish, to determine whether or not it exhibits capabilities that rely on true understanding. Computer scientists who are much smarter than me have come up with all kinds of tests, and at this time both the PaLM-E and ChatGPT models fail in a number of key ways.
Note that I’m not claiming that these models are conscious. I’m simply pointing out that while @Half_Man_Half_Wit claims that they CANNOT be conscious because they are missing something that I have access to, careful investigation of PaLM-E’s inputs, as well as our own, reveals no room for anything like this.
But scientific studies have shown that we cannot necessarily rely on our own experience. Our subjective experience lies to us all the time. We act, and then we make up a plausible-sounding reason for having done so, for our consciousness to chew on.
Your subjective experience exists for one reason: it made your ancestors more likely to survive. Our human consciousness specifically evolved to survive on the savanna as a tool-using species, but its roots stretch back much farther - all the way back to microbes with light-sensitive receptors.
Expecting an ape’s survival tool to reveal some deep mystery of the universe seems like the fool’s errand to me.
The interior experience IS an output. It has to be, if it isn’t magical.
ETA: aside from magical, consciousness could also be a fundamental force of the universe. But elevating your subjective experience to the same level as the strong and weak nuclear forces, electromagnetism, or gravity seems like the absolute height of hubris, especially when you have no evidence for doing so.
I’m vaguely aware of it; I’ll give it a closer look, thanks!
No, there’s no need for a replicator to be consciously aware of its own theory. Compare the usual von Neumann replicator design: it has its own blueprint available to itself, and creates a copy; but this doesn’t entail that it understands what it’s doing. It just follows instructions.
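To make that concrete, here’s a minimal sketch of blueprint-driven self-reproduction as a Python quine (my own toy example; any language would do). The program reproduces itself exactly by mechanically following its own description, and at no point does anything like understanding enter:

```python
# A minimal quine: a program whose only output is its own source code.
# The string s serves as the 'blueprint'; the program both copies the
# blueprint and follows its instructions, understanding none of it.
s = 's = %r\nprint(s %% s)'
print(s % s)
```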
By ‘transcending the structure given by the axioms’ I mean that there are things that are true of the natural numbers that nevertheless can’t be derived from any axiomatization. This is just the incompleteness phenomenon.
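For concreteness, here is the standard statement being gestured at, in my own rough rendering with the usual symbols: for any consistent, recursively axiomatizable theory T extending Peano arithmetic, there is a sentence G_T true of the natural numbers but not derivable in T:

```latex
% Goedel's first incompleteness theorem, schematically:
% for T consistent, recursively axiomatizable, T extending PA,
\exists\, G_T :\quad \mathbb{N} \models G_T
\quad\text{and}\quad T \nvdash G_T
```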
I’m not sure what you’re driving at with all those chairs. Basically, physical stuff has intrinsic properties, whether it’s arranged chair-wise or not. But the intrinsic properties of the chair, or its microscopic constituents, don’t actually figure in my model. Rather, it’s the intrinsic properties of the von Neumann replicator that are important. Essentially, they are the ‘clay’ from which models of the world are built: the colors to be filled into the structural outline of a paint-by-numbers picture.
Again, this isn’t unique to my model. I think the article at the Stanford Encyclopedia I posted above gives a good introduction.
…come again?
You have a certain sort of recursive, self-referential process that happens in your brain, the von Neumann process, that furnishes access to its own intrinsic properties. Thus, these properties are present to you in experience; consequently, the problem of structural underdetermination can be overcome, and, unlike ChatGPT, your words actually mean things. For this, it’s not necessary to have had any direct contact with a proboscis monkey; it’s all happening in your head, so to speak.
Again, think about the difference between the natural numbers and a theory of the natural numbers: the natural numbers themselves are not ‘extra’ to mathematics, yet can never be fully specified by any theory, any set of axioms giving their structure. Hence, there is something about the natural numbers that goes beyond that structure. Just translate that into physics, and that’s it.
The reason you should believe my model over, say, the existence of invisible pink unicorns on the far side of the moon, is that if the world is as my theory says it is, then the nature of experience is no longer mysterious. That is, my model adds explanatory power; it gives a (possible) answer to the question of why conscious experience is the way it is.
The difference is that the folk stories don’t add any explanatory power, whereas my model does. In such stories, entities are just cooked up to order, having whatever properties are needed to fill in the gap. But the intrinsic properties of my model are independently motivated, as providing a way to overcome the problem of structural underdetermination, and then also yield an explanation for conscious experience.
Am I correct in understanding that, according to your model, a brain has direct access to its mind (physicalism implies physical existence), and thus does not need to compute the validity of its mind and mental properties from basic axioms; while it does not have direct access to, say, the platonic natural numbers, the platonic chair, etc., and so must compute the validity of each suspected number, chair, etc., from axioms?
I am not sure what output necessarily relies on “true understanding,” even if I can agree that failure to produce some outputs is evidence that it is absent.
Irrelevant to the point. Of course it does. To some degree that’s its job. Inputs are noisy, but we experience signals; our brains impose those signals on the noise.
The point, however, is simply that “I think”, that I have a subjective experience, whether or not it reflects reality, and whether or not the reasons I believe I acted a specific way are accurate or were created after the action was already decided upon by something other than my conscious experience. I reasonably assume other humans do as well, given our similarities, but their actual subjective experiences are unknowable to me.
A basic method in science is to find less complex models to study. In neuroscience that begins even simpler than Aplysia and works up to more complex systems. That fool’s errand is how science often operates.
Human sentience and consciousness did not magically appear de novo; they were built upon the brain toolkits shared by our primate ancestors, which were built upon the brain tools shared by more primitive mammals before them, and so on down the line. To my way of thinking it is self-similar, just with extra levels of nesting and integration, with the results of different forms of processing serving as inputs for higher-order levels, and with expectations imposed top down upon the noisy inputs.
As an output, it is not observable to anyone other than the one experiencing it, and, as you point out, even they are an unreliable witness. We need proxies for it.
Your proposed proxy is task-related, basically an amped-up Turing test: if it sufficiently walks and quacks like a duck, or a self-aware mind, treat it as one.
I find Grossberg’s ART and Hofstadter’s Strange Loops models particularly appealing, as they imply that there are patterns of processing that may be correlates of consciousness, patterns that give rise to the emergent property. This is at least falsifiable. And we can at least hypothetically check whether such correlates exist in machines.
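As a rough illustration of the kind of processing pattern ART describes, here is a heavily simplified Python sketch of an ART-1-style matching cycle (the function name, the vigilance value, and the simplifications are mine, not Grossberg’s full model): bottom-up input is matched against top-down prototypes, learning happens only when the match clears a vigilance threshold, and otherwise a new category is recruited.

```python
import numpy as np

def art1_step(x, prototypes, vigilance=0.75):
    """One simplified ART-1 presentation: a binary input x (1-D 0/1 int
    array) is matched against stored binary prototypes (top-down
    expectations). Returns the prototype list and the winning index."""
    candidates = list(range(len(prototypes)))
    while candidates:
        # Bottom-up choice: the prototype with the best overlap relative
        # to its size (0.5 is a small tie-breaking constant).
        j = max(candidates,
                key=lambda k: (x & prototypes[k]).sum() / (0.5 + prototypes[k].sum()))
        # Top-down vigilance test: does the expectation match the input
        # closely enough to 'resonate'?
        if (x & prototypes[j]).sum() / x.sum() >= vigilance:
            prototypes[j] = x & prototypes[j]  # resonance: refine prototype
            return prototypes, j
        candidates.remove(j)  # reset: suppress this category, try the next
    prototypes.append(x.copy())  # no acceptable match: recruit new category
    return prototypes, len(prototypes) - 1

# Example: the second input refines the category learned from the first.
protos = []
protos, j1 = art1_step(np.array([1, 1, 0, 1]), protos)
protos, j2 = art1_step(np.array([1, 1, 0, 0]), protos)
print(j1, j2, protos)  # -> 0 0 [array([1, 1, 0, 0])]
```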
If I understand you correctly, then yes: it has only theoretically mediated access to anything beyond itself, but the way it is itself is shaped by what it encounters in the world. The von Neumann replicator is an evolving design, and essentially, the data encountered from the outside world sets up a ‘fitness landscape’ that the replicator adapts to. Thus, in a way similar to how the dolphin’s streamlined shape carries information about its aquatic surroundings, the replicator carries information about the outside world, and thus knows the outside world indirectly by knowing itself directly.
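To make the dolphin analogy concrete, here’s a toy Python sketch (everything in it, from the bit-string ‘environment’ to the mutation rate, is invented for illustration): replicators that merely copy themselves with occasional errors, under selection, end up with a ‘shape’ that mirrors the environment, so the surviving design carries information about the world without ever representing it explicitly.

```python
import random

random.seed(0)
ENVIRONMENT = [1, 0, 1, 1, 0, 0, 1, 0]  # a made-up 'fitness landscape'

def fitness(genome):
    # Survival depends on how well the design matches the environment.
    return sum(g == e for g, e in zip(genome, ENVIRONMENT))

# Start from random designs; replicate with occasional copying errors,
# keeping only the better-adapted half each generation.
population = [[random.randint(0, 1) for _ in ENVIRONMENT] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [
        [bit ^ (random.random() < 0.05) for bit in parent]  # mutate on copy
        for parent in survivors
    ]

best = max(population, key=fitness)
print(best)         # the surviving design has come to mirror ENVIRONMENT
print(ENVIRONMENT)  # it 'knows' its world only through its own shape
```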
Okay. I like this, then, as it fits with the requirement for a model to be able to bootstrap, as it nests one level of outputs as inputs for higher-order processing.
I don’t see though why you feel that such processing is only possible on a biological substrate.
Of course folk science and myth did. Occasionally their predictions were correct, even if for the wrong reasons.
You keep saying this, but you’ve provided no evidence for this as far as I can tell. You claim that I have this process, and that an AI doesn’t, and that this is self-evident because the AI is computational, but you haven’t convinced me of ANY of those claims.
How? What physical process exists in my wetware that cannot exist in a computer chip? Both my brain and the computer chip are presented with the exact same inputs; you’ve given me no reason to believe that I have access to anything more than a robot with cameras and microphones does. What part of my body gathers information on the proboscis monkey that is missing from PaLM-E? If the missing piece isn’t gathered from outside the body but is in my brain, how did I gain it in the first place? Even if it’s hardwired into my brain’s structure, that’s just kicking the can down the road, because you’d then need to explain what selection pressure made my brain evolve to be like this.
The number “four” doesn’t exist outside the context of number theory, other than as a description. Outside of number theory, the number “four” is a property - for example, four apples - but even here it is an arbitrary construct, because of course it would be equally valid to say that there aren’t four apples, there are approximately 200 million apple cells; or some enormous number of atoms; or one pile of apples.
Our minds privilege one perspective simply because it is the most beneficial to survival at our scale, not because it is inherently correct or meaningful.
When I was a kid, I didn’t think the nature of experience was mysterious; I thought it was a property of my immortal soul. Lack of mystery does not a good model make.
And maybe I’m dumb but I don’t really get what von Neumann replicators have to do with anything even after reading the article; I don’t follow why they are a good analogy for anything.
The processing pattern in biological substrate that you propose is computation. If it occurs, it isn’t uncomputable. It may be that current machine computation methods do not use similar patterns of processing ….
Was this claim made in the paper? I didn’t see it. He gave a PowerShell quine as an example of a self-replicating program, and asserted that while the program does not have direct access to its physical structure (the physical components of the machine are abstracted away; a program may exist on paper), a conscious mind must.
It follows, in my interpretation of his paper, that no artificial intelligence computer program can achieve consciousness, but an artificial intelligence machine may still do so.
Yes, this is a continuation of a discussion from another thread that became sidetracked, but I am specifically referring to a computational AI with access to vision and hearing, such as PaLM-E.
I reread the bit about von Neumann replicators, and I think I understand the issue: we diverge so widely here that I don’t really see the need to continue.
I don’t think this tracks. We don’t need a repeating cycle of decoding internal symbols.
Consciousness emerges out of the brain decoding inputs and making decisions. It emerges from the structure of the brain and the activity going on in it. It doesn’t need to be experienced by an internal homunculus; experience ARISES FROM this activity.
The subjective experiences being discussed are, to me, what wetness and fluid dynamics are to water: an emergent property hard to understand from the most basic molecule up, a different level of analysis, but still “physics”.
There is a pattern of processing that occurs in our brains as they interact with the outside world, forming and modifying models of that world, models that include the self as a member, forming archetypes from the bottom up and imposing prototypes top down, in constant revision, a Red Queen’s race. That processing occurs on biological substrate. Ultimately there is physics involved, but the information processing pattern is what matters, whatever the substrate for that processing may be.
Despite being unable to comprehend the underlying mathematical theory, at this point I feel like I have a somewhat solid understanding of the philosophical theory. I have no problem putting subjective information in an environmental state variable for the replicator agent to act on.
I’ll point out my major criticism: the monolithic nature of consciousness is itself an emergent property. The underlying physical brain is not a monolithic system. I suggest your model of it as a von Neumann replicator is fundamentally flawed. The human brain, at least, has asynchronous component systems with various degrees of interdependency. It cannot be modelled with a singleton pattern. I cannot reconcile your model with Sperry’s split-brain experiments, for example, because you require direct physical access that the subjects demonstrably lacked.
Why do you believe that consciousness is monolithic?
Most experts believe it is anything but: that there is a symphony of processes being coordinated that together result in the subjective experiences of qualia and of sentience, of mind, in service of predicting the world in a social environment.
Because that is how the word is used. The mind is monolithic (hence the currency of the phrase, ‘of two minds’) and consciousness is a property of the mind. There is soundness of mind, not soundnesses of mind. etc. One can be afflicted with illnesses, but one does not ask if an entity has consciousnesses.
A physicalist model purporting to generally relate physical structures to a mind must contend with the fact that a mind is monolithic, and human brains have multiple asynchronous component systems. Half_Man_Half_Wit’s model does not, so far as I understand it, hold for the human brain. Therefore it cannot be a general model to relate physical structures to a mind.
This is excellently phrased, @Max_S. I think this is in large part my issue with the von Neumann article; I don’t see how the example of a monolithic machine that requires blueprints to produce things, including itself, is in any way a meaningful analogy to conscious experience, with or without the postulation of these properties that we are discussing.
Your conscious experience as we think about it may appear monolithic like the machine, but as Max points out, it is in fact emergent from interconnected but distinct processes happening across the brain.