Sorry but that makes no sense to me. I’m not even of two minds about that. I have half a mind to just ignore the comment …
I do not see this as a strong objection. One can look at the top level of processing as one perceived mind, emergent from a process at that level, without having to explain it from the level of synapses integrating across dendrites.
I say that despite my finding appeal in seeing replication of information processing at different levels of abstraction and scales.
But this depends on the assumption that there is a coherent, unified, top level of processing. Much of the time there isn’t. So many mental processes are not conscious. Habits and dreams come to mind. You don’t have to dig down to the level of dendrites.
Consider a Freudian slip. You know it’s no longer correct to refer to Russians as Soviets. That information is somehow physically represented somewhere in your brain. But in conversation, “Soviet” slips out. I saw this happen to a U.S. General as he was testifying before Congress; once he heard himself say “Soviet” he realized his mistake and apologized.
It could be that the mental process which checks his speech couldn’t keep up with the process that determines his speech in the first place. How do you model such behavior with a single top-level process?
I play piano. There are parts for the right and left hand. Sometimes one hand will play faster or slower than the other. Furthermore, if I hear unexpected syncopation, I will self-correct. It seems to me there are at least four simultaneous processes involved: a process to mentally compose/recall music I want to play, a process for each hand, and a process to listen and compare that against my expectations. Again, how do you model such behavior with a single top-level process?
You exactly illustrate that there is no monolithic mind. Yet your piano analogy is apt and I will extend it: a symphony is a coherent piece made of many parts playing together. That performed piece is not any of the parts; it is not the mind of the conductor. It is all the parts working together to create a new unity that works. A piece that can be considered, understood, and critiqued as a whole, at that top level, even as one can consider the sections and the individual instruments in each section.
I agree with all this, but I don’t see why it requires special metaphysical constructs at the top (or any other) level to function as we each observe ourselves to function.
Well, convincing you isn’t really my central aim here. I have this model, I think it solves the problems of the mind, so I’m offering it up; if it’s not for you, it’s not for you. As far as evidence goes, the existence of the von Neumann process is the central hypothesis of the model—its existence is postulated because if there were such a process, then the homunculus regress could be avoided, making a representational theory of mental content possible.
The process can perfectly well exist on a computer chip, but it can’t be a result of computation—it can’t be a merely formal process (because that would amount to stipulating to compute the uncomputable).
In mathematical logic, one distinguishes between a theory and a model of that theory. The theory is given by a set of axioms; the model (a term which I don’t like to use, since it conflicts with my usage of ‘model’ elsewhere) is the mathematical object that the theory is about. The model ‘transcends’ the theory in the sense indicated. This isn’t a claim of Platonic existence of the numbers or anything; it’s entirely compatible with a formalist understanding of mathematics. My sole contention is that the physical world outruns any theory of itself in the same sense (and if you do follow that line of reasoning, what you arrive at is basically quantum mechanics).
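To make the distinction concrete with a standard example (my illustration here, nothing specific to the present model): take the theory to be the Peano axioms \mathrm{PA}, a list of sentences such as \forall x\, \neg(S(x)=0), and take the model to be the structure (\mathbb{N}, 0, S, +, \times) those sentences are about. Assuming \mathrm{PA} is consistent, Gödel’s first incompleteness theorem yields a sentence G with

\mathbb{N} \models G \quad \text{but} \quad \mathrm{PA} \nvdash G,

so the model already outruns that particular axiomatization, and the same holds for any consistent, computably axiomatized extension of it.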
Presence of explanation, however, does. Ultimately, any theory is accepted because it explains things: it helps us get a simpler, more powerful summary of the data.
Lots of things that occur are uncomputable. Take quantum randomness, for one. And the pattern can occur in any substrate, not just a biological one—it’s just a self-referential ‘loop’, which uses its access to its intrinsic properties to overcome a formally undecidable (uncomputable) question that otherwise stymies its replication.
And if I’m honest, I’m still kinda proud of myself for getting a PowerShell example published in a leading philosophy journal; somehow, that tickles my funny bone.
Exactly so.
That’s just a bald assertion, but the homunculus regress is a classic and widely-discussed problem of representational accounts of mental content. Basically, it occurs if you implicitly refer to a capacity in trying to explain that capacity. So, what happens if you try to understand a symbol, or a set thereof? Say, you’ve just learned French, and aren’t quite fluent: chances are that you’ll translate French words into their English equivalents, because those you understand. But how does the understanding of English words work? Some accounts (notably Jerry Fodor’s) appeal to a ‘language of the mind’: internally, English is translated into mentalese, which is then understood. But how, again, are the ‘symbols’ of mentalese understood? Clearly, something else than further translation is needed, or we’re stuck in a regress.
The same is perhaps more immediate for theories of vision. When you see an image, a particular pattern is projected onto your retinas. Those set up a pattern in your visual cortex, which is somehow translated into an internal representation. But what does this representation represent—and to whom? If there is some ‘internal viewer’ (a homunculus) that has to perceive that representation, then how do its perceptual faculties work? If they work in the same way, we’re off again to the regress.
If that’s still not intuitive, I try to clarify the subject in this article.
But that’s not saying anything. It’s gesturing at emergence, but gives no account of how this emergence is supposed to work. It’s there that theories run into problems.
I speak of ‘the replicator’, so this is partially my fault, but that’s just discussing it as the central object of my theory. In fact, the idea is for there to be a population of replicators in evolutionary competition, with distinct populations gaining the advantage at different points, and with individual replicators roughly corresponding to simple concepts that are bound together by a co-evolving process:
A single replicator, one might propose, corresponds to some appropriate ‘simple’ concept, with an agent’s state of mind being made up of a simultaneous population of such entities. What binds them together into a unified whole?
One should take care to distinguish this issue from the combination problem of panpsychism (Seager 1995): on the present proposal, there is no need to unify distinct elements of experience, associated, for instance, with individual intrinsic properties, into one larger consciousness, as conscious experience tout court only emerges upon the unification of intrinsic properties within a von Neumann replicator. However, it seems implausible (although logically possible) to have the state of mind of an agent given by a single replicator; economy, if nothing else, seems to suggest separate replicators for separate concepts, or thoughts, or whatever else the basic elements of experience might be considered to be.
One proposal might be to look towards structures that achieve self-reference only by referring to one another, such as the following pair of sentences:
(A) Sentence B is true
(B) Sentence A is false
Indeed, it is possible to construct simultaneous fixed points for agents accessing each other’s source code, for instance, to decide whether to collaborate or defect in a multi-agent prisoner’s dilemma (LaVictoire et al. 2014; Barasz et al. 2014).
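To give a flavour of what ‘accessing each other’s source code’ means in practice, here’s a toy sketch in Python (my simplification for this thread; the actual results in those papers use provability logic and Löb’s theorem to get robust mutual cooperation, whereas this naive variant only cooperates with exact copies of itself):

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate only if the opponent is running exactly this program."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Ignore the opponent's source and always defect."""
    return "D"

def play(agent_a, agent_b):
    """One-shot prisoner's dilemma where each agent reads the other's source."""
    source_a = inspect.getsource(agent_a)
    source_b = inspect.getsource(agent_b)
    return agent_a(source_b), agent_b(source_a)

if __name__ == "__main__":
    print(play(clique_bot, clique_bot))  # ('C', 'C'): mutual cooperation
    print(play(clique_bot, defect_bot))  # ('D', 'D'): no exploitation
```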
So something like a split brain situation could come about by the splitting of the environment the replicators evolve in, leading to distinct populations dominating without being able to compete with one another.
And as noted above, that’s exactly what my model proposes.
Note that by ‘environment’, I here mean the replicator’s environment—i.e. the pattern of excitations in the brain set up by the data originating from the senses. This is where the evolutionary competition shaping the replicators occurs.
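If it helps to picture the kind of dynamics I mean, here is a deliberately crude toy in Python (my illustration only, not machinery from the paper): a population of bit-string ‘replicators’ under fitness-proportional selection, where a shift in the environment lets a different population take over.

```python
import random

def step(population, environment, mutation_rate=0.01):
    """One generation of fitness-proportional replication with mutation."""
    def fitness(replicator):
        # Crude stand-in: fitness is similarity to the current environment.
        return 1 + sum(a == b for a, b in zip(replicator, environment))
    weights = [fitness(r) for r in population]
    offspring = random.choices(population, weights=weights, k=len(population))
    return [''.join(bit if random.random() > mutation_rate else random.choice('01')
                    for bit in child)
            for child in offspring]

if __name__ == "__main__":
    random.seed(0)
    population = [''.join(random.choice('01') for _ in range(8)) for _ in range(50)]
    for generation in range(200):
        # The 'environment' (the sensory excitation pattern, in the analogy) shifts halfway through.
        environment = '00000000' if generation < 100 else '11111111'
        population = step(population, environment)
    # After the shift, a different population of replicators typically dominates.
    print(max(set(population), key=population.count))
```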
I will try to respond to the rest when I have more time, but wanted to focus on this because it is an interesting point. There isn’t a “mentalese” without language. There are multiple documented cases of children who did not acquire language until later in life due to neglect or abuse, and one of the most common trends reported is that the formation of memory in the pre-language period is severely hampered or even impossible.
It seems fairly likely to me that “mentalese” simply does not exist, and without a “first language” cognitive capability is severely hampered precisely because crafting a narrative of self as you or I know it is impossible.
Then … assuming you have the cites … it is a big jump to attribute the damage done to developing minds by serious neglect, with social isolation so severe that no language of any sort develops, to the absence of language itself rather than to the holistic harms that such a lack of any interaction does to brain structures. Such cases are not proof that language is required for episodic memory, nor is the lack of long-term episodic memory from most of our preverbal periods proof that preverbal children do not have some form of “mentalese”, albeit not in the form of a verbal narrative.
A six to nine month old is developing and testing models and hypotheses of the world. To claim they have to “mentalese”, and anything that such implies, seems absurd. Language develops from that preverbal mentalese, in the context of normal social interactions.
I’ll freely admit that the research here is very lacking, and for good reason - I can hardly imagine an experiment less ethical than purposefully raising a child without language. Your own link there is a pretty good cite, though, and it addresses the child abuse issue by including examples like deaf children in Latin American families during a time period where sign language was not in widespread use, meaning these children were isolated from language but were in otherwise loving homes. I still find the findings very compelling - from the conclusion:
Here’s how they define mental synthesis:
That’s what I was trying (and probably failing) to get at.
Certainly, people aren’t braindead without language. They can still respond to stimuli, and learn to an extent. So can animals, who have no language skills at all.
But if we think about this evolutionarily for a moment - what is the purpose of consciousness? Why would a conscious creature (be that a hominid, or an earlier ancestor with a more rudimentary form of cognition, perhaps much further up the evolutionary tree) have an advantage over a non-conscious one?
The paper below provides what I consider the most reasonable answer - a ‘flexible response mechanism’, a method by which a creature can consider situations that its instincts haven’t directly prepared it for in order to decide on a course of action.
The human version of this skill - the bit that has allowed us to invent tools and conquer the world - is language. The ability to model yourself in completely novel situations (including completely imaginary ones), the ability to conceptualize abstract concepts or things you don’t have personal experience of - that’s what your paper calls ‘Mental Synthesis’, and it’s exactly what these languageless children struggle with even after gaining language.
I’m not claiming that they have to ‘mentalese’, not sure if that was a typo? I’m saying that mentalese doesn’t exist.
A 6-9 month old is developing and testing models, but so are animals. My daughter is older than that, but at those ages she certainly wasn’t better at problem solving than my dogs or cats. Developing babies are another great example; I can attest that in my anecdotal experience, the development of language and problem solving capacities are closely correlated (though of course, this doesn’t imply causation). And an actual study of stroke patients with and without language limitations found similar correlations between language and problem solving ability.
I think the idea that we think in ‘mentalese’ and then convert these thoughts into our native tongue is wrong. Our thought patterns depend on the language we speak.
Language even impacts the way we perceive color - not completely, there are some universal constraints based around how the human eye functions (and that’s also why unrelated languages tend to name colors in a similar order - i.e., if they have words for only 2, or 3, or 4, etc. colors, then they’re very often the same 2, 3, 4, etc. colors), but this is definitely tweaked by the language we speak:
If we had a universal mentalese that we were ‘truly’ thinking in, with our mother tongue merely serving as a conduit to this universal mentalese, I would not expect the absence of language or the terms it uses for color to have such a profound effect on our cognitive abilities.
So yes, exposure to syntax seems to be important for developing what they label as mental synthesis. I do not see that as supporting a claim that there is no “mentalese” as the phrase is being used in this thread. I do not see those as synonymous.
That’s a big claim. The arrow of causality may be the opposite in fact: the ability to invent tools may have helped drive language and the ability to create compound tools may have driven syntax development.
Agree though that “consciousness” has advantage. It is a top level integrating the reports in from all of the subunits and then sending the coordinated interpretations and orders back out and back up in a constantly revising dance. It includes models of the world as experienced that includes itself. To the point of the thread, these models are always incomplete and always under revision.
Yes. Should have been NO mentalese. Of course they do. As do many other critters I suspect. Even if their ability for mental synthesis is less. Language and symbols are how that mentalese is manifest in verbal humans. I do not expect there is any universal mentalese.
Perhaps I should clarify that I don’t think the mentalese or ‘Language of Thought’-hypothesis holds any water. I appealed to it merely to illustrate the ‘homunculus’ problem of having to appeal to ever new layers of interpretation to decipher the meaning of a symbol. My model is intended to circumvent exactly that, by furnishing mental representations out of symbols (the von Neumann replicators) that are essentially ‘self-reading’, that don’t have to appeal to any external faculty that understands them.
And perhaps I don’t understand your meaning when you use the phrase, but the issue raised of what sort of representational or symbolic tokens are sufficient to support consciousness, which is what I would understand as mentalese, is an interesting one. Is language required? Did primates, even hominids, without language have consciousness? Do preverbal children? Do non-verbal delayed adults?
My take is that language is a token used for consciousness but not required for it. Other tokens are also used.
I’m not alone in that position. And there is some evidence to support it.
I’ll WAG that language helps us think about ourselves thinking, but that non-verbal and preverbal humans, as well as other intelligent species that may not have language, have consciousness using other tokens. Mentalese.
I thought you were saying: a population of ‘daughter’ replicators is created (by U), one for each competing goal or thought (that is, each variation \varepsilon is appended to its own clone of T_N rather than modifying the original), but all except the dominant one are discarded (by S?). That way each von Neumann automaton has a single, unambiguous definition of ‘self’, including a definitive lineage of previous replicators. As each daughter is created independently from the parent N I assumed the self-referential markers remained exactly that - self-referential, not necessarily intrinsic and instantly accessible. Your example,
(A) Sentence B is true
(B) Sentence A is false
I (mis?)interpreted the markers as having a temporal and therefore extrinsic quality like so:
(Generation 1) von Neumann automaton: The next sentence will be true. (statement A)
(Generation 2) von Neumann automaton: The last sentence was false. (statement B)
In the first instance, properties of daughter automatons cannot be intrinsic. In the second instance, properties of parent automatons cannot be intrinsic. While self-referential, they technically reference future and previous self, and so wouldn’t be intrinsic to the current self. I (mis?)understood the fixed point in the former statement to become fixed (by U?) only once the daughter it referred to is actually created, and thus couldn’t be evaluated one way or another by the Gen 1 automaton. I thought this was a solution to the liar’s paradox, because the former statement was, when presented, of an unknown truth value (not a paradox for future to be unknown); however the latter statement is false. And with any given replicator only manifesting one ‘statement’ at a time, I thought these sorts of statements could only appear in the present tense when entirely self-contained:
(Generation 3) von Neumann automaton: The first sentence and the second sentence are consistent.
I thought each automaton tries to evaluate its own statement by substituting values for each fixed point:
(Generation 1) B and (B equals [intentionally left blank]) (not well formed)
(Generation 2) not-A and (A equals B) and (B equals not-A) (necessarily false)
(Generation 3) ([A and B] or [not-A and not-B]) and (A equals not-B) and (B equals not-A) (necessarily false)
The main idea of your paper, as I understand it, is that all intrinsic variables are accessible and therefore can be used for computation. This is what makes the third statement uniquely computable - it is the only one where all fixed points were defined in the theory (blueprint) passed to the universal constructor, and thus the first where all variables are intrinsic to the von Neumann automaton.
Similarly, I (mis?)interpreted your analogy to agents and source code such that the fixed points may exist simultaneously and have external referents. But again I add that the underlying math for Löb’s theorem is incomprehensible to me.
Now, you seem to be saying that multiple automatons will coexist within the same generation, and that their fixed points will ‘point’ to each other:
(Generation 1)
Left-side von Neumann automaton: The automaton on my right is saying something true.
B and (B equals [intentionally left blank*]) (not well formed)
Right-side von Neumann automaton: The automaton on my left is saying something false.
A and (A equals [intentionally left blank*]) (not well formed)
But since these aren’t self-references, it seems to me there’s no guarantee subsequent generations will have access to the underlying values. In the case of a split-brain, the two might not be communicating at all. Even in the case of a normally functioning brain, there’s a speed limit on the transfer of any information from one system to another. By the time the former system has knowledge of the state of the latter system, that knowledge is necessarily outdated.
This is most problematic because of what the split brain experiments teach us about normal brains - regional specialization of brain function. If the split brain experiments demonstrate, say, distinct physical systems for the articulation of language and sight from the left eye, and each distinct system or process is modelled by its own von Neumann machine, that means there is no von Neumann machine which has intrinsic access to both the articulation of language and sight from the left eye. There would have to be one homunculus that understands language, and a separate homunculus monitoring looking out the window of the left eye, who perhaps forwards a description in something analogous to Morse code; crucially, it seems no entity is modelled which can properly claim to see a written word and know what it means. It follows that reading (at least with one eye closed) is not a conscious experience, which seems absurd, so I’m probably not getting something.
Can you give an example of where you think science is the worse for unexamined metaphysics? As best I can tell, history suggests that metaphysics has almost always been a drag.
The Greeks of course made almost no real progress in science. They had lots of interesting, speculative ideas… none of which were really quite right, and none of which they bothered to test. They had lots of metaphysical ideas about what reality “should” be like that proved to be wrong.
Astronomy only made progress when its “metaphysics” was abandoned. The Earth is not the center of the universe; the heavenly bodies are not embedded in perfect crystal geometry; planets do not move on perfectly circular paths.
Physics has made significant progress with the explicit abandonment of metaphysics. Newtonian mechanics is basically the answer to “what if it’s impossible for an observer to distinguish between frames of linear motion?” Special relativity is the same, but with electromagnetism added. General relativity is the answer to “what if it’s impossible to distinguish ordinary acceleration from gravitational acceleration?” Quantum mechanics is similar, but with trickier observables.
Overall, it appears that assuming anything in advance of observation is a mistake, and is likely to lead one down a bad path. And at the same time, specifically looking for ways in which seemingly-different scenarios are really the same, because observation cannot distinguish them, appears to be a viable path toward discovering new physical law.
No, not really. The problem is to fashion an account of intentionality: how words (or thoughts) mean things, are about or directed at things in the world. This is solved, I claim, by how the von Neumann construction yields modal fixed points: self-referential formulae that are provably equivalent to formulae from which the self-reference has been eliminated, leaving only ‘other-reference’.
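A concrete instance of such a modal fixed point (just the standard textbook case, not anything specific to the replicator construction): the Gödel sentence G is built so that

\mathrm{PA} \vdash G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner),

yet G is provably equivalent to the self-reference-free statement \mathrm{Con}(\mathrm{PA}) = \neg\mathrm{Prov}(\ulcorner \bot \urcorner); in the provability logic GL, this is the fixed point of p \leftrightarrow \neg\Box p, namely \neg\Box\bot.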
The formula in particular that’s relevant is the ‘criterion of action’ for each replicator: to only engage in replication (or self-modification, which is equivalent) if it can establish that doing so advances some goal. But this can’t be established using merely a formal theory of itself, because to do so, in general, the replicator would have to assess whether its proofs yield truths, and it can’t do so—that’s an undecidable statement, by Löb’s theorem. It’s here that the intrinsic properties come into play: while the replicator can’t assess whether its proofs yield truths, it can still be true that they do, and only in this case will it carry out the given action—in the same way that a car can be red without needing to prove that it is red: it’s a property about itself it may not have theoretical access to, but because of the self-referential nature of this property, its actions may still depend on it.
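For reference, the result I’m appealing to, in my own paraphrase, is Löb’s theorem: if \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner P \urcorner) \rightarrow P, then \mathrm{PA} \vdash P, or in the provability logic GL,

\vdash \Box(\Box P \rightarrow P) \rightarrow \Box P.

So the reflection principle \Box P \rightarrow P (‘if I can prove P, then P is true’) is itself provable only for those P that are already provable outright, which is the precise sense in which the replicator can’t establish, within its own formal theory, that its proofs yield truths.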
Yes, that’s what I mean by a ‘simultaneous population’ of replicators.
Why ‘subsequent generations’? Only the generation for which it is a fixed point needs (and has, by its fixed-point nature) this access.
Sure, but that argument works for any model of the mind in which there is some distribution of processing—which means essentially every model that doesn’t include some sort of ‘central meaner’ or ‘finishing line’ for information to become conscious; and models that do include one invariably fall prey to the homunculus regress (Dennett has worked this out well).
No, but the co-evolving ‘ecology’ of replicators has this access.
In your recent two posts, you’re asking for contradictory things: first, that minds be pluralistic, not monolithic; and now, that there is some central authority. But the mind is more like a chorus, a society, a spontaneous emergence of harmony: many different voices coming together in unison. That this can be readily accommodated in the von Neumann model is a feature, not a bug.
This isn’t really on topic for the thread, but the recent slog in fundamental physics is much on point here. George Musser, in Spooky Action at a Distance, has traced this sort of dynamics well: progress in science is made by questioning philosophical assumptions; this yields a new paradigm; puzzling phenomena within this new paradigm prompt an instrumentalist reaction that assures us science is really just there to work out the observable consequences of things; then, scientists and philosophers come to grips with the background assumptions that make these phenomena seem puzzling, and new breakthroughs are made.
Take these two poles of the debate:
Those who do philosophy, who determine the proofs and the arguments … and are accustomed to enquiring, but take part in none of their practical functions, … even if they happen to be capable of handling something, they automatically do it worse, whereas those who have no knowledge of the arguments [of philosophy], if they are trained [in concrete sciences] and have correct opinions, are altogether superior for all practical purposes. Hence for sciences, philosophy is entirely useless.
And on the other hand:
A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is—in my opinion—the mark of distinction between a mere artisan or specialist and a real seeker after truth.
The former is by the 4th century BC orator Isocrates, and directed at Plato’s academy; the latter is due to Albert Einstein. There is a famous rebuttal to the former by Plato’s student Aristotle, who of course would go on to create the longest-standing system of physics to date (a system which is surprisingly accurate when put into its correct context, as an approximation to Newtonian dynamics immersed in a fluid, namely, air).
We’re right now coming out of the tail end of an ‘instrumentalist’ slog, where the discoveries of philosophically sophisticated scientists like Bohr, Einstein, Heisenberg (whose Physics and Philosophy is still very readable today), were worked out in technically brilliant fashion by the reactionary following generation who emphasized a ‘shut up and calculate’ approach—the ‘savages’ as Feyerabend has polemically called them:
The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrödinger, Boltzmann, Mach and so on. But they are uncivilized savages, they lack in philosophical depth
The rejection of philosophy is the outcrop of the philosophical project of logical positivism, about which one of its leading proponents, A. J. Ayer, later said that its only problem was that pretty much everything about it was wrong. This is unexamined metaphysical baggage, and there’s a good case to be made that it’s responsible (to one degree or another; there surely are distinct contributing factors) for the current slog in physics (a case made well by Carlo Rovelli, co-inventor of Loop Quantum Gravity).
Luckily, times seem to be changing. One example is the recent Nobel prize in physics, which was really the result of John Stewart Bell’s metaphysical project (whose book isn’t called Quantum Philosophy for nothing).
But again, while there’s a fascinating debate to be had, this isn’t really the thread for it.
I will disagree. The context is that you pulled your response to the following over here:
By claiming
To pull that discussion here and then claim it is the wrong place for it is, I believe, mistaken.
I will argue that Einstein was wrong, had it exactly opposite, and was the lesser for it.
His philosophical underpinnings made him not accept a perspective supported by the evidence that did not fit his philosophy, simplistically summarized: “god does not play dice.”
The “instrumentalists”, not being anchored to an extant philosophy, were able to describe a reality beyond their ability to fully imagine, beyond even Einstein’s imagination.
To bring this to your model and putative AI awareness … yeah I am on the side that making falsifiable predictions is important.
A model should at least be able to make falsifiable predictions about correlates of consciousness in humans.
It is also important to decide where the burden of proof for consciousness in an other (be it AI, other species, or completely alien) should lie. And what level of proof. I don’t have a good answer for that and have not heard any either.
On the contrary, instrumentalism was very strongly tied to the philosophy of logical positivism, which entails epistemological commitments whose internal tensions eventually caused its dissolution (to the extent anything in philosophy ever gets (dis-)solved). And it’s not in itself a problem to follow that particular philosophy: productive work was done under its aegis, even though certain outcrops, like behaviorism, arguably caused more harm than good, and had to be laboriously repealed.
It’s only once people take such a philosophy as given, as self-evident, and cease questioning their assumptions that progress grows stale. It’s no accident that the rise of quantum information theory, which has arguably made the greatest strides in fundamental physics over the past 20 years, is explicitly correlated with a renewed interest in metaphysical reasoning—I’ve already pointed out Bell, but likewise, Dieter Zeh’s decoherence programme, which originated in a deep examination of the measurement problem in quantum mechanics, and likewise David Deutsch’s search for support of the many worlds interpretation that spurred his proposal for quantum computation. Indeed, QBists have recently even started paying attention to the phenomenological tradition, Merleau-Ponty in particular…
Eh, falsificationism was never really more than a first approximation to the scientific process, and as with anything, slavishly hewing to dogma can only hinder progress. Still, for a scientific proposal, some relation to experimentally accessible data is a good thing. But of course, the question I sought to answer is a philosophical one, primarily at least—although of course, there are some general predictions, such as the existence of appropriately self-referential patterns, that can be drawn (and above, I briefly speculated about a possible realization in terms of David Mumford’s ‘active blackboards’). But that’s really not the issue here, which is why the question of falsifiability is beside the point.
Well even if we’re saying that falsifiability is not important (and I would disagree with that), testability is at least important. We cannot know whether a model or intuition bears any relationship to reality without that.
And a “test” of course means a useful or surprising inference; we cannot consider consistency with something we already knew as passing a test.