Anyone else who doesn't accept that they are conscious?

So what if they are chaotic systems? They are sensitive to initial conditions, yes. They are also still deterministic, and can be computed theoretically, though not practically.

Quantum indeterminacy adds a purely random element, yes. See this post. The fact that the added randomness is purely random implies that such processes, while not strictly deterministic, can be realistically simulated. With sufficient computational power, low-energy QM processes can be simulated to arbitrary accuracy, with the caveat that the world-line you are simulating may diverge stochastically from what is observed in your own world line.
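As a hedged sketch of what "simulated to arbitrary accuracy, but stochastically divergent world-lines" means: below, a fair-coin measurement is my stand-in for a generic low-energy quantum process, and a seeded PRNG stands in for true randomness.

```python
import random

def world_line(seed, shots=100_000):
    """One simulated history of repeated 50/50 quantum-style measurements."""
    rng = random.Random(seed)          # PRNG stands in for true randomness
    return [rng.randint(0, 1) for _ in range(shots)]

a = world_line(seed=1)
b = world_line(seed=2)

# The two simulated world-lines diverge outcome by outcome...
print(a != b)
# ...yet each reproduces the exact probabilistic envelope (p = 0.5)
# to within a sampling error that shrinks as shots grows.
print(abs(sum(a) / len(a) - 0.5) < 0.01)
print(abs(sum(b) / len(b) - 0.5) < 0.01)
```

No run matches any other run outcome-for-outcome, but every run is a faithful simulation of the same statistical law, and the accuracy of that law is limited only by sample size.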

See above. QM != magic. Quantum fluctuations are purely random within deterministic probabilistic envelopes. Though not technically deterministic, a realistic world-line of hormones can be simulated to arbitrary accuracy.

I just posted, before seeing your post, a similar argument using different language.

I didn’t say consciousness is the string of statements. I defined it as a process. My definition is in complete accord with “the *ongoing process* of selecting, updating and editing that string.” Though I also point out that, as per my definition, the specifics of the process are irrelevant:

Because consciousness is nothing more than some deterministic process with the ability to cause an entity to say it is conscious, the specifics of the deterministic process are irrelevant.

Hey look at that. I would just call a process that uses non-deterministic input a non-deterministic process, but that’s just definitional stuff.

[QUOTE=Mr. Dibble]
No. The memory locations could be, the dynamical system that creates the narrative couldn’t. Like I said earlier, it’s non-computational.
[/quote]

The dynamical system or whatever you want to call it can have its state copied to disk. If that state is used to start up the same dynamical system, or a different one, it won’t necessarily behave the same way as the original does, or would have. What difference does that make? It’s just the transporter problem. A copy of you is not the same as you. But there is no way to distinguish which copy is the original by examination.

And are not Turing machines, which is the computational model **iamnotbatman** keeps referencing, so I was running with that.

That’s a self-contradictory statement. If it can’t predict, it can’t model precisely.

Approximate predictions.

Again, I’m not getting how those two sentences aren’t contradictory.

That’s *exactly* what I’m getting from iamnotbatman’s lookup table analogy.

I agree.

I don’t think it’s irrelevant - that unpredictability lies at the core of the conscious experience.

No, they are not deterministic. They are sensitive to environmental conditions, not all of which are deterministic.

…for a sufficiently coarse level of approximation on a simplified model, yes. e.g. look at climate modelling. You can make systemic-level predictions, sure. But that doesn’t make it deterministic. You can’t predict with even 90% accuracy whether it will rain on my house in exactly one year.

And no, randomness ramifies in manifold ways in a dynamical system. No simplified simulation of such a system could be called “realistic enough”.

Making such a model pretty useless for predictive purposes.

Not really. Hormones in a lab rat, maybe. But not hormones in a human, not when you consider how environmental factors (including that weather!) affect hormone balances. To simulate realistic hormone balances, you’re going to have to simulate a realistic world along with it…good luck with that.

No, it can’t. Heisenberg, again.

MrDibble – you seem to be confusing the difference between theoretical and practical computability. The fact that the weather makes prediction difficult has nothing to do with whether something is computable in principle.

You are misunderstanding TriPolar’s point. Because of quantum indeterminacy, a virtual human brain will not act identically to a real human brain. But a quantum mechanical simulation would be complete and correct. It would be equivalent to exactly cloning a human (along with the brain state) – due to the probabilistic nature of quantum mechanical processes, the clone would not act identically to the other copy, even though they are both the same person with the same brain.

I’m not suggesting the model would be predictive. I am addressing your position on computability. If you accept that the human brain is nothing more than matter, then you are forced to accept that the human brain can in principle be simulated. This is because the relevant laws of physics can be simulated to arbitrary precision (proviso for pedants: at low energy, assuming our current theory is correct). If you try to simulate my brain, you will possibly not be able to predict my exact thoughts, due to quantum indeterminacy. But you are still fully and completely simulating an honest-to-goodness brain.

I’m not saying it makes it difficult, I’m saying it makes it impossible. There is no amount of computronium you could muster to make this prediction, even if you had multiple universes’ worth.

“complete and correct” how? In comparison to what?

No, what I’m saying is whatever you copy, won’t be the same. In copying, errors will creep in, not just in the states but in the physical matrix. It will never be “the same brain”, ever. Certainly not in the case of clones, where developmental differences would be pervasively manifest.

I’m fine with saying that, with the provisos that a) it’s not because the brain is nothing more than matter - all permutations of matter don’t reduce to “computable” i.e. universal determinism doesn’t exist.
b) that just simulating the brain, without a body in an environment, is IMO insufficient for simulating consciousness.
So yes, brain behaviour is (theoretically) computable. That’s (to use your idiom) trivial.

Complete and correct means exactly that: the full materialist description of the brain can be simulated without leaving anything out, using the correct equations to describe all physical properties (correct meaning the actual laws of physics, as opposed to using some model). The simulation would be an exact (meaning: to arbitrary precision) simulation of the human brain. The correct description also includes quantum mechanics, so of course there wouldn’t be absolute predictability. That means we would be doing it right. There is not absolute predictability in any physical system for which we have an exact description of the laws of physics. That doesn’t mean we can’t simulate those systems! That doesn’t mean their full quantum mechanical description is not computable! We have the rules, and the effect of applying those rules can be computed!

It won’t have the exact same state at time t, but yes, it will absolutely be the “same brain!” My brain isn’t in the exact same state it was in one second ago. It is still the “same brain”, hormones working and everything! In fact, that analogy is not an oversimplification. It describes precisely what the difference would be between the “real” and simulated brain: some quantum variability whose degree and scope is functionally equivalent to letting the “real” brain evolve in time. If you were to rewind and start the universe over again, the “real” brain would evolve slightly differently (due to QM). It would still be a “real” brain with “real” hormones.

So? Who is to say we can’t simulate a body too? We can simulate the physical world in general, remember?

Let me ask you something, TriPolar.

Do the claims of the “Subjectivists” make any sense to you? Are their claims well-defined?
I mean, perpetual motion is not real, yet the concept is well-defined to me. OTOH, the invisible pink unicorn is not real, and not well-defined.

Where do qualia fit on this scale for you?


It is often the case when discussing the philosophy of the mind with people that they begin from a position of “what hard problem of the mind?” or something like “Oh, that’s simple, that’s like how different cultures have different meanings for colour…”.

Then, after giving some examples – pain is a good one – the penny drops and they suddenly appreciate the magnitude of the problem.

Frankly, I don’t think you appreciate what the problem is yet.

Turing machines are theoretical, dependent on an infinite resource. So if you want to nit-pick, you are correct. That is irrelevant. The system is Turing complete. Calling it a Turing machine is a figure of speech.

It’s clear that you are not getting the point. Human brains do not precisely predict the behavior of other systems complex enough to be affected by uncertainty. Neither does a computer emulating or simulating a human brain. A model is not necessarily a predictor. Sometimes it is just a model, a unique and distinct system. You are claiming a relationship between predictability and computability which does not exist. I can make a Schroedinger’s box; I can’t predict whether the cat is alive or dead when I open the box. I can model a Schroedinger’s box also. I still can’t predict whether the virtual cat is virtually alive or virtually dead when I virtually open the virtual box. A model does not have to predict, and a model of something non-deterministic can’t predict, but it is still a model, and still computational.
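A minimal sketch of that distinction (the names here are mine, purely illustrative): the box model below is an ordinary, fully computable program, yet nothing, including the model itself, can predict its outcome in advance.

```python
import random

def open_box(rng):
    """Virtually open the virtual box: the virtual cat is alive or dead."""
    return "alive" if rng.random() < 0.5 else "dead"

# SystemRandom draws entropy from the OS, so the model's input is
# non-deterministic and no amount of computation predicts the outcome...
outcome = open_box(random.SystemRandom())
# ...but running the model is still plain, garden-variety computation.
print(outcome in ("alive", "dead"))
```

The program computes a valid outcome every time; what it cannot do, even in principle, is tell you which outcome before you run it. Computable and predictable are different properties.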

Computers now are subject to uncertainty. Radioactive decay in my laptop could cause the rest of this sentence to ao*2ol as;;do0 dap. So you are contending that computers are doing non-computational things. That is not a useful definition of computational IMHO; nevertheless, your definition would apply equally to the interaction of hormones or the interaction of charged particles in an electronic computer.

No, unpredictability lies at the core of all systems, as I have explained. You are assuming it is something necessary to consciousness. I disagree. Predictable consciousness might be horribly dull, but no different. Predictability does not mean non-variant, it just means predictable. It’s possible that your brain will function predictably for the next 5 minutes because no uncertainty affects the results of its operations. You aren’t unconscious for the next 5 minutes though. Your consciousness just hasn’t done anything random for 5 minutes.


Now you are talking to iamnotbatman. But I’ll point out again for reinforcement: predictability is not relevant. A model does not have to be predictable. All models are an approximation of something else. At the same time, all models are precise descriptions of themselves. Your brain is an approximate equivalent to iamnotbatman’s brain. Does that mean you are not conscious because you can’t make a prediction of what his brain will do? We are not modeling to predict. We are modeling to make a virtual conscious system, which is not intended to precisely predict anything complex, and is not contended to predict anything complex by anybody in this discussion.

I could answer more precisely if you better defined ‘subjectivist’. Are you talking about people who assert that perception of a ‘thing’ is the ‘thing’? I respond coito ergo sum.

Whether their claims are well defined is irrelevant. 1+1=3 can be a well defined claim, and wrong.

I don’t even know what you are asking. A concept is a definition. It can be a logical, consistent, complete definition or not. It can define something real or not. Nothing is complex about that. As far as I can tell, qualia could be subjective perceptions of definitions. What is the significance of that? A computer can make a subjective perception of something that is not well defined. And it will be no better at creating perpetual motion, or describing an invisible pink unicorn than you are.

Now I see, you are talking about the philosophy of the mind! I’ve been discussing the science of the mind! *Coito ergo sum*!

Try this: Humans exist. Humans are composed of matter and energy subject to the principles of physics. Computers can model anything made out of a well defined set of matter and energy, and well defined principles of physics. How is it that computers cannot form qualia in the model? Are we missing a physical principle? You seem to think not? Are we missing a precise definition of the matter and energy? Yes, there’s a gazillion bits of matter and energy in a human being, and we are incapable of defining all of them at the present. But the means of defining them is very simple. We make precise definitions of much smaller things all the time. We just don’t have the resources to do that with something on the scale of a human. If we had such resources, why could we not model a human, and have that model produce qualia the same way a human does? That is the question you still fail to answer.

And pain is a poor example for you. I explained to you several times how it works. You haven’t provided anything to refute my explanation. I’ll try again: pain is the description of the processing of stimuli (real or virtual) by a reflective system. It is described as subjective because each system is different. Yet each system can still describe its processing for the same stimulus. How does that not explain what you call qualia in relation to pain?

I was asking whether you thought that the concept of qualia is well-defined.

Here they overlap because there is so much left to be explained.

Also cogito ergo sum, which you keep using, probably doesn’t mean quite what you think it does, because Descartes thought it demonstrated mind-body duality.
Not to mention that it is based on a subjective description.

Simulating a phenomenon is not the same as the phenomenon itself. A simulated hurricane won’t blow my house down.
To be clear: it’s my opinion that the brain is a machine. However, there is no reason to suppose it is a computer.

The normal definition of pain is that it is an unpleasant sensation. Your “definition” is just an ad hoc attempt to avoid using those words.

And you’re still using subjective in the sense of “different for each individual”. Subjective in the context of philosophy of the mind relates more to phenomena being first-person and requiring a viewer. It’s far less important whether these phenomena are unique for each individual.

No I don’t.

Please tell me what the unexplained part is. A lack of detail is not a lack of explanation.

I did not say cogito ergo sum. I was not quoting Descartes. Read more carefully.

You are conscious, I am conscious, we are not the same thing. Simulating a hurricane with something that produces wind could blow down the house. The point here is not that a computer can be the same thing as you, it is that a computer can do the same things that you do, including forming the definitions that you are calling qualia.

When I explained what an unpleasant sensation is, you insisted my explanation was invalid because it is different for every person. So I explained the process which we all use when feeling pain. The words are just aliases for concepts. The concepts are simple.

Now that is the kind of subjectivism that makes me respond coito ergo sum! Phenomena exist whether there is any person to view them or not. The model created in your mind is not the phenomenon. It’s your unique model. Each of us can have such a model. The models are not phenomena because you do not detect them with your senses; they are just processes in your mind. You detect them by reflection, just like a computer does. I don’t care what the philosophy of it is.

I don’t want to sound like I know that a human brain can be simulated by known processes in a computer. That is what I believe is true because of a lack of evidence to the contrary. And the only thing I can find that stops me from implementation is a lack of resources, not a lack of understanding. Given the resources, one might discover an unexplainable process. But in that case, I doubt philosophy will be used to explain it.

I don’t want to make it sound like philosophy is non-scientific either. But philosophy presumes things that are undemonstrable. That’s fine with me, but it is of no value in explaining actual phenomena. And given an explanation based on demonstrable evidence, I will take that over the one that is not. Both philosophy and physical science are susceptible to error in fact or logic, so in that regard I wouldn’t hold one school to be superior to the other.

Qualia are the building blocks of subjective experience. In this thread, unless I’m mistaken, people are saying qualia don’t exist but that subjective consciousness does exist.

My head hurts.

This is the answer that I expected.

You’re being dismissive of a claim that does not make sense to you.
That’s fair enough: I’m dismissive of the claims of the Time Cube guy, and he doesn’t make a lot of sense to me either.

But the difference here is that this is not some crackpot theory.
The existence of qualia, of subjective experience, is the majority view among philosophers of the mind and damn near the unanimous view among high-profile neuroscientists.
When I studied cognitive neuroscience as part of my degree, the various (steps towards) theories of qualia were part of the course material.

Of course they could be wrong and you could be right.
But Step 1 for you should be at least trying to understand their claims.

I’ve been asking you to clarify the definition. I’ll look for more info on my own. I’m not calling the idea crackpot, and I don’t mean to say the concept of qualia is not well defined, rather that I haven’t seen it well defined. Maybe it’s too complex to explain in a thread post, so I’ll do some research. I’ve been asking you to refute my position, shared by many, that a simpler explanation works, but I couldn’t really say I’ve refuted yours if I haven’t given it sufficient study.