I do not believe any of Shakespeare’s works, let alone Stephen King’s, are encoded in the digits of Pi, and no number of monkeys on an infinite number of typewriters could type out the collected works of Shakespeare, let alone one page. A consciousness is required to put things together into meaningful communication. Randomness will not suffice for any of these endeavors.
Well, the basic reason is that structure, or syntax, underdetermines content, or semantics. The first expression of this was, I believe, Newman’s problem, which points out a difficulty for Russell’s structural realism—if the whole world is simply structure, i.e. relations between things rather than those things themselves, then any set of things of the right cardinality (i.e. of which there are enough) can be considered to implement that structure. In other words, if I give you merely the relations between things, you can’t uniquely conclude what those things themselves are.
But information, and information processing, is only sensitive to structure—roughly, a bit of information is a difference between two things (as in Bateson’s dictum, ‘information is any difference that makes a difference’). So specifying just some information processing fails to specify the underlying objects carrying out the information processing—and hence, you can view every object as carrying out that processing.
Well, an answer to how our brain interprets the signals would be nice. Think about it: when you posit that something in our brain interprets the signals, then there is the question of how it does so. Does it, in some sense, perceive the neuronal output, and carry out some computation to interpret it? But then, it merely again produces output—who perceives that, and how? You’re left with an infinite regress, and no actual perception ever occurs, because perception on the nth level depends on perception on the (n+1)st. This is exactly the homunculus problem I tried to explain earlier.
It’s also a huge problem for any functionalist account of consciousness, because you’re now effectively saying that function does not suffice for consciousness—which is really all I have been arguing for so far. Something carrying out all the functions our brains carry out (at least w.r.t. behaviour) can exist, without it being conscious. Basically, this is the zombie argument I mentioned earlier. But then, what is it that causes consciousness?
But this, too, could be passed without any attendant consciousness. Again, you just need some device matching CCD input to certain predefined expectations, and if they match, the robot sets the ‘that is me’-flag, perhaps by lighting up some diode. (In 2012, there was a bit of a kerfuffle about Yale’s NICO, which was then said to be able to ‘almost’ pass the mirror test; supposedly, full testing was to take place later that year, but some quick googling failed to produce any results.)
Also, I think you may have missed my prior post, because we simulposted.
It’s been a bit since I read this paper, but quickly skimming it now, I think that essentially what he objects to is that, while for any given computation the rock may follow the sequence of states that the FSA does under the right implementation relation, it nevertheless does not implement the FSA in the sense of implementing its transition rules. That is, while some sequence of states occurs in the rock that can be mapped to the sequence of states of the FSA, it is not the case that whenever the rock is in some state mapped to a state of the FSA, the next state of the rock is necessarily the one mandated by the FSA’s transition rules.
But I think this objection falls short, for two reasons. First of all, it merely says that in order to implement the FSA, we would need some physical system whose causal structure can be mapped to the transition rules of the FSA. It’s not clear to me that this really accomplishes such a great reduction of the possibilities—going back to Newman’s problem above, even causal structure does not suffice to single out physical systems uniquely.
More important, however, is the fact that given the implementation table and the rock, I can still implement any computation I desire. So, if I were interested in computing pi to n digits, there is a table such that I can look up the states of the rock, and generate the digits of pi as a result—that is, the outcome of the computation is independent of whether the rock faithfully implements the FSA in Chalmers’ sense. So this sort of implementation seems to be a metaphysical requirement without any grounding—if consciousness is the result of a computation in the same way pi is, then it does not matter if the causal structure of the rock matches the transition rules of the FSA.
In order to encode them, yes, but you always need the plaintext in order to generate the cipher. But the receiver only needs the digits of pi, the place where the encoded text starts, and how to decode it, and will then generate the works of Shakespeare from that, without knowing them beforehand.
Similarly, you don’t need to know the outcome of some computation in order to ‘encode’ it in the evolution of the rock; you merely need to know the program. With this knowledge, you can set up the implementation table, such that anyone in possession of the rock and the table can carry out the (arbitrary) program.
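To make that concrete, here’s a toy sketch (all names illustrative, nothing standard): given any program’s transition function and any sequence of distinguishable ‘rock’ states, you can always build a lookup table under which the rock counts as running the program:

```python
def build_implementation_table(step, initial_state, rock_states):
    """Pair each successive (distinct) physical state of the rock with
    the corresponding state of the computation, Putnam-style. `step` is
    the program's transition function, `rock_states` is any sequence of
    distinguishable physical states the rock passes through."""
    table = {}
    comp_state = initial_state
    for rock_state in rock_states:
        table[rock_state] = comp_state
        comp_state = step(comp_state)
    return table

# Anyone holding the rock plus this table can 'read off' the run of
# whatever program we chose. Note, though, that building the table
# already required stepping through the program--which is exactly the
# complexity-based objection below.
```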
A better way to resist the conclusion of the argument is, I think, to make use of the notion of computational complexity (this has been pointed out by Scott Aaronson): the implementation function f is itself a computation, and if the complexity of this implementation function is equivalent to the complexity of the computation to be carried out, then in a sense the rock does not contribute any computation to the solution of the problem; there would be an equivalent computation of the same complexity implementing the program without needing to look at the rock at all.
It’s sort of analogous to the question of where the information is in a code: you could set up a code that maps ‘a’ to the complete works of Shakespeare, but only at the cost of having other code words that must be longer than their plaintext, because there is a fixed amount up to which you can compress a message (on average), which is its information content. So in a sense, the information content of the message ‘a’ is contained mostly in the decryption table, not in the letter itself.
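In code, a toy version of that (with a placeholder standing in for the actual text):

```python
SHAKESPEARE = "..."  # placeholder: imagine ~5 million characters here

# Under this codebook, the single letter 'a' decodes to all of
# Shakespeare--but only for someone who already holds the table, which
# is as large as the text itself. The information lives in the table,
# not in the codeword.
codebook = {"a": SHAKESPEARE}

def decode(message):
    return "".join(codebook.get(ch, ch) for ch in message)
```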
That’s still not a completely satisfying answer, to me: there still may be an enormous wealth of systems that can be seen to implement a computation equivalent to a conscious mind (if such a thing exists), if the only requirement is a reduction in the complexity of the implementation, but it’s at least a start, and saves the implementation of computations from being completely trivial.
HMHW,
Then, by your hypothesis, we have the answer to the question “What is consciousness” and it requires only two pieces of information:
- The starting point of the answer in the digits of pi
- The nature of the encoding (ASCII, octal, decimal, hex, etc.)
Assuming the current algorithm for Pi is sufficient to calculate the required number of digits, I assume such a series could be found, but what is the significance of the result?
Unless the number and table are given to you by a Deity, you will have to have the answer in hand to recognize it, a fact pointed out by knowledgeable folks in posts above. Without the Deity, you are assigning significance to randomly selected states. This is the kind of magic indulged in by Numerologists.
We all know that ‘correlation does not imply causality’. You are trying to prove that ‘random association implies causality’.
Crane
I don’t see how you get any of that from what I wrote. Could you perhaps clarify?
HMHW,
I can, but to avoid tangents, I need to know the syllogism that underlies your hypothesis. As I understand it, you propose that a consequence of consciousness being a physical phenomenon is:
All predictable states are deterministic
All states of physical phenomena have a point of congruency with the states of a predictable series of digits (Pi)
∴ All physical phenomena are deterministic and can be predicted
Is that the case or do I misunderstand? Perhaps you can provide a better syllogism.
Crane
This… doesn’t have anything to do with the argument I was trying to make. I’m not sure how I explained myself so badly. Where did I talk about determinism, or prediction? What do you mean by point of congruency? I’m honestly a bit lost here. What does the question of whether all physical phenomena are deterministic have to do with consciousness?
Anyway, I’ll try one last time to make myself clear. The first premise of the argument is (i) computationalism: there exists a computation—meaning a finite algorithm—that, if executed, produces conscious experience (of some form). The second is (ii) implementation relativity: any physical system can be seen to implement any computation, provided we use the right implementation relation. I suppose a third premise is (iii) implementation nondisturbance: the choice of implementation relation does not change anything about the physical system.
From (ii) and (iii), it follows that (iv) any physical system implements every conceivable computation (it’s just a matter of how you look at it which computation it implements). Now, with (i) and (iv), we can conclude that any physical system implements the computation yielding conscious awareness, meaning that every physical system is conscious (in every possible way).
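Schematically, and only as a rough rendering of the skeleton (read $\mathrm{impl}_f(S,C)$ as ‘system $S$ implements computation $C$ under implementation relation $f$’):

$$
\begin{aligned}
&\text{(i)}\quad \exists C\,\forall S:\ \mathrm{impl}(S,C)\rightarrow\mathrm{conscious}(S)\\
&\text{(ii)}\quad \forall S\,\forall C\,\exists f:\ \mathrm{impl}_f(S,C)\\
&\text{(iii)}\quad \text{the choice of } f \text{ leaves } S \text{ physically unchanged}\\
&\text{(iv)}\quad \forall S\,\forall C:\ \mathrm{impl}(S,C) &&\text{from (ii), (iii)}\\
&\therefore\quad \forall S:\ \mathrm{conscious}(S) &&\text{from (i), (iv)}
\end{aligned}
$$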
Since this is an unpalatable conclusion, we are forced to reject some of our assumptions—Searle and Putnam, who originated the argument, want to reject assumption (i), that computation is sufficient for conscious experience. Chalmers, in the paper posted by eburacum45 above, wants to reject assumption (ii), arguing for a different account of implementation than the one I gave above (incidentally, if there’s still something not clear about the argument, I’d recommend taking a look at the Chalmers paper, which lays it out much more formally than I have).
Does this help clear things up somewhat?
HMHW,
Thanks, I will read Chalmers, but probably not today.
The premise:
All computational systems can implement all computations
is a universal positive statement that is not true. A summing amplifier cannot implement the exclusive ‘OR’ function. A Turing machine cannot implement time-dependent parallel processing.
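A quick brute-force illustration of the summing-amplifier point, modelling it as a weighted sum plus threshold (the search doesn’t prove impossibility by itself, but the standard linear-separability argument does):

```python
from itertools import product

def threshold_unit_for(truth_table, grid=range(-3, 4)):
    """Search for weights (w1, w2) and bias b such that the predicate
    w1*x + w2*y + b > 0 reproduces the given two-input truth table."""
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * x + w2 * y + b > 0) == out
               for (x, y), out in truth_table.items()):
            return (w1, w2, b)
    return None

XOR = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): False}
AND = {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}
print(threshold_unit_for(AND))  # (1, 1, -1): weights 1, 1 with bias -1
print(threshold_unit_for(XOR))  # None: XOR is not linearly separable
```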
The premise:
All physical systems are computational
is a universal positive statement that is not true. Physical systems have states that may sometimes coincide with the states in a computer, but that does not make them computational.
The syllogism:
All human consciousness is computational
My PC is computational
∴ My PC is conscious
commits the obvious fallacy of the undistributed middle term.
Crane
Well, I’m not actually saying that, because I don’t accept the definition of consciousness as solely subjective. I believe that consciousness is a function.
In the thought-experiment of people who are not conscious, but can’t be told apart from conscious people, I was only thinking of external behaviors. They might act the same as everyone else, and we can’t know what their subjective perceptions are.
I was envisioning a black-box situation.
I wasn’t thinking of closing off all possibilities of opening the box, and observing the workings directly. I think, in practice, we can open the box and examine the workings of consciousness directly.
I should have said, “If consciousness is purely subjective,” then it would be possible for one human to be conscious, and the next human not to be conscious, and no test of any kind could ever discern one from another.
Since I do not believe that consciousness is purely subjective, I should have been more rigorous, rather than allow myself to be drawn into a contradiction. Well-played, I suppose, and I’ll happily gnaw one leg off in the process of extracting myself.
Trinopus,
What do you mean by subjective?
Crane
But a universal Turing machine can implement any computation that can be performed, at all.
Regardless, it’s not a premise I need. You’ll note I have restricted the argument to finite state automata, which can implement any computation that a finite machine—that is, a machine with finite memory—can implement. They’re not capable of universal computation, but then again, neither are, strictly speaking, our PCs, or our minds—they have access to only finite resources. So all I need for the argument to work is for the rock, or whatever other physical system we consider, to have enough accessible physical states, which is guaranteed for every macroscopic system.
Again, it’s not a premise the argument needs. I merely need for any physical system to have distinct states that can be mapped to the states of the computation.
True, but it’s not an argument I’m making. I argue that if there exists some computation that implements consciousness, then any computer implementing this computation must be conscious.
Really, I think we have for some reason a failure of communication here. What you keep claiming that I say has little to no connection with what I actually am saying.
OK, then tell me what aspect of consciousness is objective.
As you seem to agree yourself, there is no in-principle way to tell, solely from behaviour, whether an entity is conscious. Consciousness is appreciable only to the conscious system itself; I don’t partake in your conscious experience, and you don’t partake in mine.
Certainly, there may be objective correlates of consciousness—certain patterns of neural activity that always go along with certain conscious experiences. But the problem is precisely how those objective correlates give rise to subjective experience. If you claim that ‘consciousness is not subjective’, then in a sense this just means you’re talking about a different problem—you’re using the word ‘consciousness’ in a way that’s unfamiliar to me (and that, I would argue, misses the real issue).
Well, yes, but this black box argument applies to every implementation—even if you open up the black box, and you look at the subcomponents, in order to find out what really happens, you can once again apply it to those subcomponents—they can fulfill their functions without there being any conscious experience. So, the whole person may be the black box, and produce all its behaviours without any attendant conscious experience. But if you look into its head, then all the various areas of the brain can likewise be replaced by black boxes, which have no conscious experience, but fulfill their functions. Likewise with the neurons, molecules, atoms—at no stage does function require conscious experience. And at some point, you have a physically identical duplicate of you, without any attendant conscious experience—i.e. a philosophical zombie.
But then, what is it that produces consciousness?
Things that only the individual experiencing them can ever understand.
“Nobody knows the trouble I’ve seen.” No one can know exactly how you, as an individual, experience pain. No one can ever have the exact same sense of appreciation that you do when listening to Mahler’s Sixth Symphony.
We can communicate some of our feelings, but there are other feelings we don’t have the tools (yet?) to share.
I blundered when I said to Half Man Half Wit that it could be “impossible” to know if another person was conscious. That’s only true if consciousness is so subjective that it can’t be detected by any test whatever.
But there are some things that (until we develop mind-reading technology) we simply cannot communicate to each other. Your sense of personal “selfness” is largely beyond your (or anyone’s) ability to describe.
Consciousness may be largely beyond our ability to communicate, but I erred in saying it might never be subject to objective analysis.
I believe that consciousness will succumb to scientific analysis, much the way that “life” has. There were times when “organic chemistry” was believed to be beyond the ordinary rules of chemical laws, and that there was an “elan vital,” a “spark of life” that chemistry could not explain. We know better today.
I believe that consciousness, intelligence, and mind are all subject to a very similar de-mystifying revolution.
(I first wrote that as “the exact same” de-mystifying revolution, but decided that was too strong.)
I’m doing the best I can!
Computer random processes have generated small snippets of Shakespeare. A full page, no, but entire lines of dialogue, yes. It isn’t a philosophical objection, merely a technical one.
Similarly, lines of Shakespeare have been found in the digits of pi, but, again, not full pages and certainly not his entire works. It’s only a technical problem.
If pi’s digits are infinite (which, in essence, they are) and non-repeating in a particularly rich kind of way (it is easy to construct non-repeating series that aren’t rich enough to encode useful information) then all of Shakespeare’s works are encoded in it…somewhere. It isn’t that hard to calculate how many digits are needed before the probability of such an encoding exceeds any given level of likelihood. If you want a 99.999% chance of capturing all of Shakespeare, you only need to look out to X digits of pi. Want to increase that? Easily done…
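A minimal sketch of that calculation, assuming pi’s digits behave like independent uniform random digits (i.e. that pi is normal, which is widely believed but unproven):

```python
import math

def log10_digits_to_scan(k, p):
    """log10 of roughly how many digits must be scanned before a given
    k-digit string appears with probability at least p, treating the
    digits as i.i.d. uniform. Returned as log10, since the answer is
    astronomically large for any interesting k."""
    # P(no hit in N positions) ~ (1 - 10**-k)**N ~ exp(-N / 10**k),
    # so N ~ -ln(1 - p) * 10**k.
    return k + math.log10(-math.log(1 - p))

# A 30-digit target at 99.999% confidence: about 10**31 digits.
print(log10_digits_to_scan(30, 0.99999))  # ~31.06
```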
The problem here is that you have just insisted on there being limits to what can be accomplished with infinite operational processing…and that (largely) contradicts what the word “infinite” means.
That we’ll never actually see those digits, or employ those monkeys, is only true because we live in a universe that is, for all working purposes, finite. In the ultimate ideal Platonic sense, yes, Shakespeare is embedded in pi. So is Debbie Does Dallas.
HMHW et al,
Thanks for the reference, Chalmers is excellent.
With regard to the ‘wall’ nonsense:
*“It is worth mentioning an argument by Searle for a thesis similar to Putnam’s (Searle 1990). He argues that any physical system can be seen to implement any computation: computational properties are not intrinsic to physics, so that computational descriptions are observer-relative. Under the right interpretation, for example, his wall might be seen to implement the Wordstar program.”*
*“From what we have seen here, this argument fails. Whether or not computational properties are intrinsic to physics, the implementation relation between abstract automata and physical systems is perfectly objective. Even if there is a correspondence between states of the wall and states of the Wordstar program, and even if we do not worry about inputs and outputs, it is almost certain that the wall states will not satisfy the relevant strong state-transition conditionals, as the wall will not possess the requisite causal organization. Implementation is a determinate matter, as we have seen, and it puts a very strong constraint on the relevant class of physical systems.”* (Chalmers)
The wall to WordStar argument is pure pseudoscience.
Chalmers concludes:
*“It is not implausible that minds arise in virtue of causal organization, but neither is it obvious. It is also plausible but not obvious that the discrete CSA framework can capture the precise causal organization (perhaps continuous, perhaps even non-computable) on which mentality depends.”*
Crane
HMHW,Trinopus,
"Consciousness is appreciable only to the conscious system itself"HMHW
That is the point. And because of that, it is likely we will begin to understand the mechanism but never be able to connect it to what we experience as being conscious. There is no external referent. We cannot use the phenomenon of consciousness to describe itself, just as you cannot take your temperature by sticking your thumb in your mouth.
Crane
It’s an interesting theory, but I still don’t buy it. It’s kind of like asking: if Macbeth in its entirety were encoded in the digits of Pi, but nobody had the time to discover it, would it matter? It would be unmanifested until someone discovers it and lets everyone else know. Unmanifested equals irrelevant. Otherwise, it’s like a tree in a forest that no one is around to hear. Does it make a sound? It doesn’t really matter. It is the one who watches who also makes the thing real, kind of like the Heisenberg Uncertainty Principle. The thing attains meaning when we recognize it. In any event, I’m not sure I buy into the meaning-from-randomness argument. It is impossible for us human beings to remove ourselves from that which we study.
Trinopus,
Hey, 314159 gives you Ca… as the first two letters, using simple single digit coding. That’s a good start on “Call me Ishmael”. Only 1,236,110 characters to go (plus spaces and punctuation).
You should get “Call me Ishmael” in the first 100,000,000,000,000,000,000,000,000,000,000 (10^30) digits. For Moby Dick, with spaces and punctuation, maybe 100^3,000,000 digits.
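For what it’s worth, a quick check of those figures, assuming a hypothetical fixed two-digits-per-character code (a=01 through z=26) and treating the digits as random:

```python
def log10_expected_position(num_characters):
    """log10 of the expected first occurrence of a text of the given
    length: each character matches with probability 1/100, so the
    expected position is about 100**num_characters."""
    return 2 * num_characters

print(log10_expected_position(15))         # 30: "Call me Ishmael" ~ 10**30
print(log10_expected_position(3_000_000))  # 6000000, i.e. 100**3,000,000
```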
Crane
Getting a one-off word or phrase is one thing, but you seriously think Pi will get you a connected series of statements intended to establish a proposition? Let alone a full and complete narrative? By chance? I say horse-puckey. If it did, that would be one magical number.
I certainly agree that we will never see those digits specified and discovered concretely. So, yes, it “doesn’t matter.”
Again, remember, we’re talking about infinity. It not only has to happen, it actually has to happen an infinite number of times.
Anything having to do with infinity will, inescapably, violate our concepts of reason. Infinity is intrinsically not reasonable. (Insert obvious pun on “irrational.”)
It’s a little like Euclid’s proof that there are an infinite number of prime numbers. “You multiply all of them together and add one.” No one can possibly do this. No one in existence today could conceivably even multiply together the first million prime numbers. The process is nonsensical.
Yet the proof is still valid. Euclid showed how, in abstract theory, one can always obtain a new prime: the product of any finite list of primes, plus one, need not itself be prime, but none of its prime factors can be on the list.
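The construction, for the curious, as a small sketch:

```python
def euclid_new_prime(primes):
    """Given a finite list of primes, return a prime not in the list:
    any prime factor of their product plus one leaves remainder 1 when
    divided by each listed prime, so it cannot be among them."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    d = 2
    while d * d <= n:      # smallest prime factor by trial division
        if n % d == 0:
            return d
        d += 1
    return n               # n itself is prime

print(euclid_new_prime([2, 3, 5, 7, 11, 13]))  # 59, since 30031 = 59 * 509
```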
So…you’re both right and wrong: right that it will never be done in concrete fact, and that it doesn’t matter an iota. But wrong to claim that pi’s infinite series of digits never encodes Shakespeare. For you to make that as a positive claim would require you to prove that this doesn’t happen – and that would require you to examine every single ten-million-digit substring of pi, a task that simply can’t be accomplished.
Remember, in the same overall string, there also exists every other conceivable ten-million-digit string. If you claim that Shakespeare isn’t encoded, you also have to claim that Cervantes isn’t encoded, and Spinoza isn’t encoded, and Virgil isn’t encoded, and Michener isn’t encoded, and also that every author who ever might have written a book isn’t encoded, etc. Also, that none of these books are represented by any system of encoding, of which there are an infinite number. By a painful process of reductio ad absurdum, this would demonstrate that pi cannot contain any ten-million-digit strings…but we know that it does.
Reason goes out the window when infinity comes innuendo.
And here…I don’t know. Maybe? Maybe we can learn what consciousness is, by observing how it breaks down. (We learned what atoms are by smashing them!) Oliver Sacks’ work with patients with brain damage has been (horribly) enlightening. Some day, we may encounter a stroke patient who is awake and volitional but not conscious. I don’t really know what this means, but it isn’t beyond the range of our imagination.
Maybe we’ll assemble AI systems that exhibit the traits of consciousness, and we can begin to understand it from that end.
If consciousness is only subjective – if that is how the word is defined – then maybe not. But at that point, it also stops being interesting.
“I have hidden a piece of paper with a secret password on it, on the far side of one of the moons of one of the planets of one of the stars of one of the more distant galaxies in the observable universe. Find the password, and I’ll give you $100.”
Not a very interesting challenge, is it?