No, it’s still accurate to say that a simulation of a thing is not identical to the thing.
Whether its behaviour, within its domain, could be identical to that of the thing it’s simulating is another matter. Note that a model doesn’t need that to make useful predictions: I can make predictions about how a ball will bounce without needing to model discrete atoms.
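As a toy illustration of that last point, here is a minimal Python sketch (the restitution value is an assumption I picked for the example, not a measured one): a coarse-grained bounce model predicts apex heights with no reference to atoms at all.

[code]
# Coarse-grained bounce model: no atoms, just a coefficient of restitution.
# e = 0.75 is an illustrative assumption, not a measured value.

def bounce_heights(h0, e=0.75, n=5):
    """Predicted apex height after each of n bounces from initial drop height h0."""
    heights = []
    h = h0
    for _ in range(n):
        h *= e ** 2          # rebound speed scales by e, so apex height scales by e squared
        heights.append(round(h, 3))
    return heights

print(bounce_heights(2.0))   # [1.125, 0.633, 0.356, 0.2, 0.113]
[/code]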
No. I define computer, for instance, as a Turing machine, or any other physical system with solely algorithmic capabilities. Using the same arguments you propose, you could also consider the steam of a steam engine to be part of the computation—but then, the word ‘computation’ becomes meaningless, as it can be applied to anything that happens physically at all.
No, that’s wrong. We can completely describe physical systems at a given level, without knowledge of what goes on at a different level. In complex systems, this is known as universality, and in the context of quantum theory, this is known as effective field theory (Sean Carroll here gives thanks for the concept, and succeeds in bringing it across very clearly, I think).
But really, as I said twice now, if the lookup table thing bothers you, let’s just drop it; the discussion really misses the point of the argument.
Well, the point is that if you allow true randomness, then the robber loses even if he knows the cop’s plan precisely—the game is simply a completely different one.
And regarding security, that’s actually one of the biggest real-world applications of true (and certified true) random numbers: in cryptographic protocols, you typically assume the worst, for instance that any eavesdropper has complete access to all your devices. With true randomness, you can still defeat the eavesdropper; without it, you generally can’t.
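To make that concrete, here is a rough one-time-pad-style Python sketch (the message, the seed, and the key handling are all invented for illustration): a key drawn from the OS entropy pool is unpredictable even to someone who knows the whole program, while a key from a seeded pseudorandom generator can be regenerated by any eavesdropper who knows or guesses the seed.

[code]
import os, random

msg = b"attack at dawn"

# Key from the operating system's entropy source: not reproducible from the program alone.
true_key = os.urandom(len(msg))
secure_ct = bytes(m ^ k for m, k in zip(msg, true_key))

# Key from a seeded pseudorandom generator: fully determined by the seed.
seed = 42                                   # suppose the eavesdropper learns or guesses this
rng = random.Random(seed)
weak_key = bytes(rng.randrange(256) for _ in msg)
weak_ct = bytes(m ^ k for m, k in zip(msg, weak_key))

# The eavesdropper simply reruns the same deterministic process and decrypts.
attacker_rng = random.Random(seed)
attacker_key = bytes(attacker_rng.randrange(256) for _ in msg)
print(bytes(c ^ k for c, k in zip(weak_ct, attacker_key)))   # b'attack at dawn'
[/code]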
If what you imagine is coherent, then yes, it does; and in general, it’s held that you can’t imagine anything incoherent, like for instance a round square. There may be hidden incoherencies, but if there are any, nobody has yet succeeded in pointing them out.
Well, if it can be so connected, then it is possible that it is (or might be) so connected.
This is question-begging. Why should a system with feedback give rise to consciousness any more than one without it? A thermostat includes feedback—is it conscious?
Basically, this sort of response is along the lines of ‘…and then things get complicated, and well, who knows what might happen? Perhaps then consciousness arises?’, but of course, that doesn’t explain anything. All responses so far, in fact, have an element of ‘and then maybe something happens’, without really even touching on what that something might be, much less how it might give rise to actual consciousness.
Think about explaining how pictures are produced on a screen. There’s a simple story that captures how a given pixel lights up at a given time, and this story completely explains how pictures are displayed on a screen. I simply want the same story for pictures in the mind; but there seem to be fundamental difficulties in providing it. I think we should take these difficulties seriously, rather than just comforting ourselves with vague handwaves that ‘something happens’ at some point that gives rise to consciousness.
Because I can coherently imagine the process of a system producing outputs from inputs without there being any subjective experience associated with it. The process is just as transparent as the one that makes pixels light up on screens; I don’t associate the latter with conscious experience, why ought I believe there is any in the former? And in what sort of processes then is there conscious experience, and what determines whether there is any, or not?
Well, we can make things in a simulation behave as if there were gravity, but they do not behave that way due to, or because of, the mass of the black hole, but because the simulation says so—in the same sense, we can make a toy ‘spaceship’ crash into a softball we declare to be a ‘black hole’ in accordance with the laws of gravity, but that doesn’t mean there is any gravity there.
And besides, if we did live in a simulation, we could not have any certifiable randomness, unless the simulation wasn’t closed.
Trinopus raised this point several days ago, but it was passed over. We look at our own consciousness from inside it - this is what it is like to be inside an ongoing program, looking out. The possibility that a thermostat, or Google, or a bacterium, or a mosquito, or a dog is conscious cannot be dismissed until we know what it is like to be one of these responsive entities from the inside.
Assuming that “program” is a reasonable description of the mind, which is essentially the point under dispute.
However, the burden of proof points the other way. I have no reason to suppose there is something it is like to be a thermostat, that it has an inner experience. Until I have a reason to suppose it does, I assume it does not.
While I can see the problem with over-generalization, it seems odd to throw away computer graphics, computer music, text-to-speech, optical character recognition, CAD, and connection to the internet as non-computational. Is a computer allowed to have a clock, so we can tell what time a process started and stopped? Even the monitor screen and mouse would be suspect.
I would urge a slightly broader definition, such that a peripheral device hooked up to a radioactive source and particle detector would be allowed, as a generator of random numbers. How much different is that from allowing a scanner, to input graphics?
ETA: I seriously don’t see the point anyway, other than a purely abstract philosophical one. How does the introduction of perfectly random numbers alter the working definition of a computational machine? It doesn’t really offer any dangerous concessions in the debate over consciousness – in either direction!
We know that organisms use quantum effects to gain maximum efficiency (e.g. photosynthesis) and that there was a recent discovery of quantum vibrations in microtubules (which may or may not be accurate and/or meaningful, who knows), so it would be a poor assumption to start at or above that level.
So if you are saying we can approximate quarks, etc., then I will trust you, but any level above that would require a lot of detailed work to prove that there was no impact, and based on what I have read about the various processes inside the neuron (and other cells) that are important, I don’t think you could go any higher.
The discussion is “is it possible to have the exact same physical structure without consciousness”.
I say no. All the evidence points to physical structure being critical to both consciousness and correct operation of the brain. (I can provide example after example, but I assume I don’t need to).
To think that we could have the exact same physical machinery churning away but suddenly consciousness is gone seems ridiculous and requires some form of argument to support it. But I went back and read through the Stanford Philosophy Encyc entry on p-zombie last night and nowhere is there a compelling argument.
Exactly. Whether you would count a randomness source as part of a “computer” or you consider it to be an external peripheral, so what?
Summing up my thoughts along the p-zombie idea, it comes down to this set of questions:
[ul]
[li]If you build something that seems to respond like a real person would, by implementing a gigantic lookup table of inputs to outputs, would this machine be conscious?[/li]
[li]If you have a software simulation of every neuron in a brain such that the computer (possibly with a randomness source attached) would respond just like the real person, is that conscious?[/li]
[li]If you built a hardware simulation of a brain’s neurons, but using transistors or logic gates that each behaved like a neuron, would that be conscious?[/li]
[li]If you built a hardware machine that was made out of proteins and cells and protoplasm and all the same stuff that a natural brain is made of, and gave it blood flow and nutrients, such that it’s identical to a natural brain but is just man-made, would that be conscious?[/li]
[/ul]
I think we would all agree that the answer to the last one is “yes.” As you go up, each step seems different by an insignificant amount, but where is the dividing line between what’s self-aware and what’s not?
If it is a giant static look-up table, one that could be printed on paper and the answers read in a single operation – I enter a value in one of the columns, and look up the answer in the corresponding row – then, no, because no processing is going on.
But if it is a dynamic look-up table, where there are lots of re-entries and recursions, and the answers come from vast numbers of loops in which data refers back to itself – then, maybe, because it is now more closely following the processes our brains perform.
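Here is a rough Python sketch of that distinction (the canned responses are invented for illustration, and nothing here is meant to settle whether either object is conscious): the first answers by a single read from a fixed table, while the second feeds its own previous outputs back into the computation of the next answer.

[code]
# Static table: a single read, no processing, no internal state.
STATIC_TABLE = {"hello": "hi there", "how are you?": "fine, thanks"}

def static_reply(prompt):
    return STATIC_TABLE.get(prompt, "...")

# "Dynamic" responder: each answer loops back through its own history,
# so the same prompt can produce different answers over time.
class DynamicResponder:
    def __init__(self):
        self.history = []

    def reply(self, prompt):
        base = STATIC_TABLE.get(prompt, "...")
        answer = f"{base} (exchange #{len(self.history) + 1})"
        self.history.append((prompt, answer))   # feedback: the output re-enters the state
        return answer

r = DynamicResponder()
print(static_reply("hello"))   # always "hi there"
print(r.reply("hello"))        # "hi there (exchange #1)"
print(r.reply("hello"))        # "hi there (exchange #2)"
[/code]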
Consciousness is a process, not just the end-results, or the answers to the questions.
However, in practice, I think that, if the process is absolutely “black box” and I don’t know which kind of table I’m dealing with, then I’m stuck saying, “Yes, it is conscious” because it always succeeds in giving the answers that a conscious entity would give.
If I’m denied any knowledge of the inner workings, I can only judge by the output. And I am not prepared to descend so far into solipsism as to deny consciousness to you or anyone else, just because you’re all “black boxes” to me.
I agree, but the position does not seem to yield any fruitful ground from which to continue our investigations. The fact that anything might or might not be conscious gives us precisely zero information to work with, so I think it is more promising to try to find some reasonable grounds from which to start. One such starting point would be: since we know of exactly one example of consciousness, we judge what is conscious by similarity to that example—at the risk of going wrong in that judgement. But if we do go wrong, then we will ultimately find out, and that will be an increase in knowledge in itself.
Well, the problem is that then the question just becomes one of definition. Say that there is some consciousness-producing module, the way that there is a randomness-producing module. Then, is consciousness computational, or not? It is, if we consider the consciousness module to be part of the computation; it isn’t, if we don’t. Again, there is no insight to be gained that way.
Besides, when Turing set out the definition of the modern computer, he was guided by an abstraction of what a human mathematician can produce with pen and paper (by the way, can we create consciousness using pen and paper?); I merely propose to stick to that definition, and have ‘computation’ essentially be synonymous with ‘algorithm’: anything that can be carried out by a fixed, finite sequence of symbol-manipulations.
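To make ‘a fixed, finite sequence of symbol-manipulations’ concrete, here is a minimal Turing-machine interpreter in Python (the rule table and tape are my own toy example; the machine shown just increments a binary number):

[code]
# A tiny Turing machine: (state, symbol) -> (symbol to write, head move, next state).
# This example increments a binary number; the head starts on its last digit.
RULES = {
    ("inc", "1"): ("0", -1, "inc"),    # carry: turn 1 into 0 and keep moving left
    ("inc", "0"): ("1",  0, "halt"),   # absorb the carry and stop
    ("inc", "_"): ("1",  0, "halt"),   # ran off the left edge: write a leading 1
}

def run(tape, head, state="inc"):
    cells = dict(enumerate(tape))                  # sparse tape; "_" is the blank symbol
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = RULES[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells.get(i, "_") for i in range(min(cells), max(cells) + 1))

print(run("1011", head=3))   # -> "1100" (binary 11 + 1 = 12)
[/code]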
A clock is certainly an algorithm—we can just implement a counter via some appropriate loop construction, for example. Monitor and mouse, however, are part of the periphery and interface. If you recall the discussion in the last thread, I held there that the monitor is essentially part of the implementation relation that maps the physical state of the hardware to the logical states of some computation, or at least to some human-intelligible representation thereof.
Well, I introduced it merely as an example of something a computation can’t produce, in response to your question, so the bearing it has on the debate is that there exist phenomena that can’t be reduced to computation, which opens up the possibility for consciousness to be such a phenomenon.
Furthermore, access to randomness allows us to do things we couldn’t do without it; it’s probably far-fetched, but not impossible, that producing conscious experience might be such a thing, which then could not be captured in a solely algorithmic paradigm.
I never fixed any level at which to start, but maintained from the start that one should use the first level whose dynamics have no impact on brain processes.
If you mean evidence of the form that, e.g., brain damage alters conscious experience, then that merely shows that brain processing and mental experience are correlated, but does not establish their identity. If I have a computer controlling the coolant circuit of a nuclear reactor, then damaging that computer will result in the coolant circuit malfunctioning, with potentially catastrophic consequences; but that does not establish that the computer cools the reactor, rather, the coolant does—the computer just says how it is supposed to be distributed.
Well, the p-zombie argument is an argument designed to support this idea. Putting my cards on the table, I myself think that p-zombies are probably not possible, but probably for some subtle reason; and I definitely think that this argument, and others like it, deserve to be taken seriously, rather than being handwaved away by vague statements about ‘feedback loops’ or ‘complexity’ or something else happening, which then somehow gives rise to conscious experience, just to attempt to shoehorn things into our favourite paradigm.
But where, exactly, do you believe the argument fails?
Well, a proponent of the zombie argument would perhaps agree with that, but nevertheless hold that it is metaphysically possible that such a human-identical brain might not be conscious—i.e. that there is a possible world (a ‘zombie world’) that is physically identical to ours, but without any conscious experience.
Consider the above example of a computer steering the cooling circuit: if you rebuild just the computer, it doesn’t mean that it is capable of cooling a nuclear reactor—it needs the appropriate physical resources to do this job. Just as, for example, in the case of randomness generation.
I know, I didn’t think you had, I was just pointing out why I had/have issues with a neuron level replacement.
First, let me say that I agree with the idea that exploring the argument helps refine our thinking. I also agree that complexity and feedback loops don’t answer the question about how complexity or feedback loops cause consciousness, it’s not an explanation, just a vague notion.
My problem with the p-zombie argument is that you must first assume physicalism is false to be able to state that a p-zombie is possible, and then the fact that it’s possible implies that physicalism is false.
The only way we can state whether a p-zombie is possible is if we can state whether physicalism is true or not, which is exactly the question. It doesn’t get us closer to answering the physicalism question.
But that’s not how we work, and we know it! We know full well that we are subject to stimuli which are random. To limit an algorithmic computer by refusing it such randomized input destroys the analogy. I concede that a purely algorithmic input-output machine with no access to the inputs we are privy to could not exhibit randomness in the way we do. This, however, proves essentially nothing.
I would say, absolutely, yes. You could even perform analog computation with pen and paper, something that is tricky using a digital machine, but might be vital for consciousness.
In practice, of course, no, as the operations for even ten seconds of consciousness would take more scribbling time than we have left in our universe, and more paper than we could fit in our galaxy…
But as an abstract ideal, I would say, yes, but only if you actually do perform the manipulations. If you stop for a couple of years, the consciousness is not being processed. The performance is the consciousness. I don’t think consciousness can exist “in amber.” The mere existence of a “conscious” lookup table is not consciousness, unless someone is actually using it and looking data up in it.
(Otherwise, one could claim that a text print-out of the DNA code for a bacterium – AGCT etc. – is “alive.” It’s only when the code is actualized that it can really exhibit life. I think consciousness is sufficiently abstract that it can be emulated in data processing, but life, as yet, depends too much on the chemicals.)
(That said, if we could build little “robot” atoms – objects about a millimeter in size – which obeyed all the laws of chemistry – the little O robot could merge with two H robots, and a bunch of NaCl robots would dissolve in a bunch of H2O robots – then a fully realized “living cell” robot would, in fact, be alive.)
I think that mischaracterizes the argument. Perhaps we should try to lay it out more formally in order to clear up what is being assumed, and what is being concluded. I propose the following formulation (a compact symbolic rendering follows the list):
[ol]
[li]Metaphysical identity is necessary—that is, whenever it is the case that x is A, then, if A and B are metaphysically identical, then x is B[/li]
[li]If something may be false, then it is not necessary—if there is a possible case of an x that is A, but not B, then ‘A is B’ is not necessary, and A and B are not metaphysically identical[/li]
[li]Physicalism proposes the metaphysical identity of (states/processes of) consciousness and (states/processes of) the brain[/li]
[li]If something can be coherently imagined, then it is logically possible—i.e. whatever does not contain a contradiction could actually be the case[/li]
[li]It can be coherently imagined that states/processes of the brain exist, without there being any attendant conscious experience[/li]
[li]By (5) and (4), it is then possible that there could be the relevant brain processes, without conscious experience[/li]
[li]By (6) and (2), ‘consciousness is brain processing’ is therefore not necessary[/li]
[li]By (7) and (1), there is thus no metaphysical identity between consciousness and brain processing[/li]
[li]Therefore, by (8) and (3), physicalism is false.[/li]
[/ol]
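For reference, here is my own rendering of the same steps in modal-logic notation (LaTeX source; P stands for ‘has the relevant brain processes’, C for ‘is conscious’, the box for necessity, the diamond for possibility):

[code]
(1)  (P = C) \rightarrow \Box\,\forall x\,(Px \leftrightarrow Cx)
(2)  \Diamond\,\exists x\,(Px \wedge \neg Cx) \rightarrow \neg\Box\,\forall x\,(Px \leftrightarrow Cx)
(3)  \text{Physicalism} \rightarrow (P = C)
(4)  \text{Conceivable}(\varphi) \rightarrow \Diamond\varphi
(5)  \text{Conceivable}(\exists x\,(Px \wedge \neg Cx))
(6)  \Diamond\,\exists x\,(Px \wedge \neg Cx)                    % from (4), (5)
(7)  \neg\Box\,\forall x\,(Px \leftrightarrow Cx)                % from (2), (6)
(8)  P \neq C                                                    % from (1), (7)
(9)  \neg\text{Physicalism}                                      % from (3), (8)
[/code]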
So you see that we need not assume the falsity of physicalism in order to get it out as a conclusion from the zombie argument—the only assumptions we need to make are (1) to (5), which are: (1) a definition of metaphysical identity, (2) a definition of necessity, (3) a definition of physicalism, (4) a proposition that coherence entails possibility, and (5) a proposition that a certain state of affairs is coherent. If you wish to resist the zombie argument, you need to attack one of those premises. You could, for instance, attempt to deny that the state of affairs in (5) is coherent—which would mean giving a mechanism by which states of the brain are, in fact, always accompanied by conscious experience, and we had just falsely believed that they needn’t be; this, then, would amount to a solution of the hard problem.
But note just how this works in other cases of metaphysical identity, as in the example of water and H[sub]2[/sub]O—as long as you know what water is, and what H[sub]2[/sub]O is, you know that the equivalent to (5) can’t hold, because just that knowledge entails that we can’t coherently imagine a situation in which something is water, but is not H[sub]2[/sub]O. But in the case we’re discussing, it seems that we know both the subjective experience and the brain—subjective experience is nothing but what we know about it, and the brain’s functioning is just a special case of physics we know well beyond any level that could conceivably be necessary to brain processes. And yet, we can coherently entertain (5)—so something seems to be very different to the water/H[sub]2[/sub]O case! But what?
The analogy is between the machine, and what we can do using symbols on paper, which are being manipulated according to fixed rules, i.e. math. The question is, does this manipulation of symbols suffice to give rise to consciousness?
I disallow randomness because you can’t create it in this way—you’d always have to refer to the real world, throw a coin, roll a die, something of that kind. For that sort of thing, you always need what’s known as an oracle, which, in the broadest sense, is a machine (or whatever) that solves a problem that can’t be solved algorithmically, and hands you the solution to use. Now there’s a lot of interesting results that you can obtain by enhancing computation with various oracles, but of course, if you allow arbitrary oracles, then the answer to ‘can x be done computationally’ will always be ‘yes’, since there is some oracle that does x (if it can be done at all). So the question becomes trivial.
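A rough Python sketch of the oracle idea (all names here are mine, purely for illustration): the algorithmic core is a fixed procedure that can only rearrange what it is given, and anything it cannot produce itself, such as genuine randomness, has to be handed to it by whatever device it queries.

[code]
import os

def algorithmic_core(n, oracle):
    """A fixed, deterministic procedure; any randomness must come from the oracle it queries."""
    raw = oracle(n)                       # ask the attached device for n bytes
    return bytes(b ^ 0xFF for b in raw)   # purely algorithmic post-processing

def pseudo_oracle(n, _state=[12345]):
    """Deterministic stand-in: a toy linear congruential generator."""
    out = bytearray()
    for _ in range(n):
        _state[0] = (1103515245 * _state[0] + 12345) % 2**31
        out.append(_state[0] % 256)
    return bytes(out)

def hardware_oracle(n):
    """Non-algorithmic source, here approximated by the OS entropy pool."""
    return os.urandom(n)

print(algorithmic_core(4, pseudo_oracle))     # the same on every fresh run of the program
print(algorithmic_core(4, hardware_oracle))   # different from run to run
[/code]

The core is identical in both cases; whether the combined system ‘computes’ its output depends entirely on whether you count the oracle as part of it, which is exactly the definitional point above.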
Could you elaborate on your reasons for thinking so?
This is tangential (anything that can be done analogically can be done digitally as well—as per the Nyquist-Shannon theorem, the full information content of an analog signal can be encoded in a digital one, at least as long as it uses only a finite spectrum of frequencies, no matter how much analog enthusiasts harp on about the superior ‘warmth’ of the sound or whatever else), but how would you do that?
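For reference, the sampling theorem being appealed to (the standard statement, nothing specific to this thread): a signal containing no frequencies above B is fully determined by samples taken every T seconds, provided T ≤ 1/(2B), and can be reconstructed exactly from them:

[code]
x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u},
\qquad T \le \frac{1}{2B}.
[/code]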
Well, with a pen and paper you could draw a series of curves, and use them to extrapolate data. This might help with some of those biological feedbacks that were mentioned upthread, where it was pointed out that the brain is affected by hormone levels in the body and other biological phenomena, which might be tricky to model digitally.
Note that I only say tricky, not impossible. Using vector graphics you can get some beautiful curves, much more accurate than anything you can draw by hand.
This makes me think about the quality of consciousness, and the way this might be affected if you attempt to digitise it. Leonardo’s wonderful drawings are far superior to any vectorised or rasterised version of them. If you try to digitise a human consciousness would you lose authenticity in a similar fashion?
Once you get beyond 1500 pixels to the inch, the unaided eye pretty much loses the ability to tell the difference between an original pen-and-ink drawing and a digitized scan.
With digitized minds, the difference would probably just “blur out” in our perceptions. After all, the pen-and-ink drawing is “digitized” in reality, at the level of individual ink molecules – at the rather insane level of billions of molecules per inch. This, at least, demonstrates that, in principle, if the pixel density is high enough, our senses are incapable of telling the difference.
What’s a bit sad is how very low that density needs to be. 300 dots per inch is actually pretty darn good repro quality.
#4
Assuming “coherently imagined” implies that we have not violated any of the laws of physics or any other relevant cause and effect rules, then there is a very low limit to what our brains are capable of coherently imagining.
#5
My problem is with #5. We do not have enough information to make that claim.
The information to make claim #5 is exactly the information we are seeking.
Water analogy:
Before we knew that water was H2O, and before we understood everything about chemistry/physics (we understood some but not everything):
Philosopher #1: I have this intuition that H2O is water
Philosopher #2: Hmmm, I can coherently imagine we could have H2O that is not water, therefore it is not
Just because someone says “I can coherently imagine X” doesn’t mean it’s accurate. I don’t think it’s valid to just assume that type of statement is correct given our limited knowledge with regard to the relationship between the physical and consciousness.
We know the underlying physics.
We only know some of how various physical properties are being used in the brain (e.g. quantum influences in biological systems, or the different ways particles are used to communicate, such as the continual flow of protons for inner-ear computations).
Either way, I personally don’t think there needs to be an undiscovered physical particle or attribute that leads to consciousness, my unproven intuition is that it is related to physical structure.
We know that a continuous (analog) space-time will have some different attributes than a discrete (digital) space-time.
If we live in a continuous space-time, would it be possible to simulate it with a discrete space-time? Wouldn’t there be inconsistencies somewhere? Meaning that people living in that simulation (if we can call it that) would detect a set of physical laws that do not match exactly the underlying continuous space-time?
I’m following this discussion, as AI is one of the first subjects I started with on this message board. I mostly agree with what you are reporting, and one comment should be added here just to get some clarification from others.
I would only add to this post that research shows the human mind can do a lot with even less than those resolutions; in fact, it is not a resolution or storage issue: the brain does not record everything, it just records the things that its models find to be different.
On the big picture, IIUC, what the new research means is that there is a lot that can be done once we begin to apply the new realizations and insights to artificial brains. “Consciousness is what it feels like to have a neocortex”, nothing unusual.
If the evolution of AI goes as this view reports, we will eventually make machines that can identify how consciousness works. The question I’m curious about is:
In his book and speeches, Hawkins points out that consciousness is not magic: there is evidence of what is going on, and once we figure out how to replicate that (and he is already showing many examples of research and tools following hierarchical principles), we should be able to build intelligent machines that work with the same principles. And because there is nothing magical about consciousness, the machine will get it too. Is there a lot of evidence against that idea?
The impression I get is that proposing the zombie is only an effort to keep claiming that there is something magical or religious about what consciousness is.