Question about AI and consciousness.

I never said it was the definition. I was objecting to your definition: “that’s what consciousness is, the awareness of self, not awareness of surroundings.” If you deny that at least some of the things of which people are conscious are the things in their surroundings then you are not talking about the phenomenon that everybody else is talking about.

Definition is not the issue here. You made an empirical claim: that a brain cut off from all input and output could remain conscious. You can’t back it up. The experiment has not been done, and almost certainly cannot be done. I make the contrary empirical claim. I do not have the empirical evidence either but I have theoretical reasons for believing it, some of which I have tried to explain in my posts in this thread. I think you must have theoretical reasons for your claim too (since I know you do not have empirical ones), and I tried to articulate what I think they would have to be:

I am not surprised to hear that you are not a Cartesian dualist and that you think that consciousness is physical (so do I), but I did not assert that you were. However, I think it is a reasonably safe bet that the theoretical reasons that lead you to think that a disconnected brain would be conscious amount to what I called Cartesian Materialism:

It does not matter whether you believe that that something might be a particular bunch of neurons, or a particular pattern of electrochemical activity in a bunch of neurons, or in the brain as a whole, or something else physical in there if you like. What makes it Cartesian, in the sense I intend, is that whatever it is, it is something inside you that is, or can give rise to, consciousness in its own right, regardless of how it may or may not be able to interact with anything outside it. If you do not believe that, then I do not see how your claim that consciousness is possible in an isolated brain could be justified.

I do not disagree with that. I do not think it is to the point, though.

I do not see any reason to think that consciousness requires those abilities. We have those abilities, but I do not see why there might not be animals that are unable to think abstractly or consider the past and future, but are nevertheless conscious (of the things around them that they are able to sense, for example).

Two points:
First of all, you could not think about the cosmos, or a ball game, or a to-do list if you had never experienced, or heard about, or otherwise indirectly learned about all of those things from some external source. (Maybe you accept that, since you seem to agree that a brain that has never interacted with the external world might not be conscious.)

Secondly, although I certainly do not deny that you can think about a ball game, or whatever, when you are not currently experiencing a ball game, I do in fact deny that you have ever thought about anything when you are not experiencing anything at all external to your brain, or, indeed, when your brain is not interacting (back and forth) with many, many things external to it. When you are thinking about the ball game you are still seeing things, and hearing things, and probably doing things, and your brain is still regulating your body in all sorts of complex ways. It does not follow from the fact that you can think about things that are not present to you, that you could think about them if nothing whatsoever (including your own body) was present to you. What I am suggesting is that it may be the case that a brain that is not interacting with anything outside itself (a very unnatural condition for a brain to be in) might not be able to be conscious of anything. It may be the case that brains alone (systems of interacting neurons, glia, etc.) are not sufficient to support consciousness, but that it actually requires systems consisting of brains, bodies, and their surrounding environments. There is certainly no empirical evidence to the contrary. Furthermore, even if it could be shown that disconnected brains can be conscious in some way, such consciousness would undoubtedly be very different indeed from the conscious experience we richly interconnected human beings actually enjoy.

Yes, that may be the closest we are likely to come, but it is not very close at all to what I am talking about. It may cut the level of external stimulation down a lot, but it certainly cannot block all external stimuli. Nor, I think, are you prevented from moving, and thus acting on the external world (even if it is only the warm water, or whatever, you are floating in). More importantly, it does not interfere at all with the brain’s continual, rich and multifarious interaction with the rest of the body. This is not the experiment you are looking for.

Well, it’s easy to create theoretical machines that are capable of hypercomputation (Turing himself did it with his O-machines), but as far as I know, it’s not been shown that such devices are actually physically implementable. If I’m not mistaken, Siegelmann’s proposal is basically a kind of analog/real computation, which typically depends on being able to measure a physical quantity to arbitrary precision, something that generally runs into problems with thermal noise (or, even if you could eliminate that, into noise generated by quantum fluctuations).

Exactly. Turing’s PhD thesis was on the subject of hypercomputation. We’ve known about models of hypercomputation for 60 years: just allow a UTM to call an oracle for the halting problem as a subroutine. But coming up with a model means nothing unless you can demonstrate that hypercomputers actually exist “in the wild”.
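As an aside, the classic diagonalization showing why that oracle can’t just be an ordinary program is short enough to sketch in Python (with the infinite loop simulated by a return value, since the point is the contradiction, not actually hanging the interpreter; `halts` here is a deliberately naive stand-in, not a real decider):

```python
def halts(f, x):
    # Hypothetical halting decider. Any actual implementation must be
    # wrong on some input; this naive stand-in just always says True.
    return True

def paradox(f):
    # Loops forever exactly when the decider claims f(f) halts.
    if halts(f, f):
        return "looping forever (simulated)"  # stands in for: while True: pass
    return "halted"

# Whatever halts() answers about paradox(paradox), it is wrong:
# if it says True ("halts"), paradox loops; if it says False, paradox halts.
print(paradox(paradox))
```

That contradiction is why an O-machine has to take the oracle as an unexplained primitive rather than as something it could compute.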

Yes, but that’s the path to solipsism. Philosophically, you can’t be sure of anything other than “I think, therefore I am”.

I’d put forth the counterargument that if it quacks, walks, and learns like a duck, you’ll be pretty safe assuming it’s a duck.

Here’s a thought experiment which I first read about in Discover Magazine, but with unclear origins. I have edited out asides that reference the technical feasibility of actually performing this experiment, but that’s not really germane to this point.

I don’t have an answer, but with the “duck test” you would have consciousness.

True, and I should have pointed out that it’s not necessarily physically implementable, but it did seem like an interesting result that could imply that simulating the brain requires a different physical structure for the computer.

I personally believe that once the process is completed (if done properly, not as described), then yes the resulting software/hardware is conscious.

However, while the thought experiment gets the idea across, in reality you would need to take into account everything that impacts the computation. The neurons are sitting in a vat of chemicals that affect communication, as do electric fields (which coordinate activity and serve as a medium for communication across large distances between physically unconnected neurons).

From the Wikipedia article on hypercomputation:

Given that I’m pretty sure the brain doesn’t go through an infinite succession of states, I would guess it’s not applicable.

People have made working calculators out of things like Tinkertoys or Legos, so, assuming a ridonkulous number of mechanical parts, you could do it.
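A minimal sketch of why that works in principle: NAND is universal, so a mechanical linkage that implements one NAND gate can be replicated into any Boolean circuit. Here’s the idea in Python, with every other gate wired purely from NAND (the half-adder is just an illustrative example):

```python
def nand(a: int, b: int) -> int:
    # The one gate a mechanical linkage would need to implement.
    return 0 if (a and b) else 1

# Everything else is wired from NAND alone:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a, b):
    # Returns (sum bit, carry bit), built entirely from NAND gates.
    return xor_(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Chain enough of these and you get an adder, then an ALU, then a (very slow) computer, whatever the parts are made of.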

The biggest problem, as some have mentioned, is that the reaction time to stimuli would be much, much slower, so you could only simulate a really slow brain lol.

The idea that neural calculations are somehow unique seems painfully easy to refute since you can simulate any neural net using standard computation.
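A minimal sketch of that point, assuming finite-precision weights: one update step of a recurrent net is just ordinary arithmetic, which any standard computer can run (the network size and weights here are made up):

```python
import math

def step(weights, biases, activations):
    """One update of a fully connected recurrent layer:
    new_i = sigmoid(sum_j weights[i][j] * activations[j] + biases[i])."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * a for w, a in zip(row, activations)) + b)))
        for row, b in zip(weights, biases)
    ]

# Tiny 3-neuron recurrent net with arbitrary (made-up) weights.
w = [[ 0.5, -1.0,  0.2],
     [ 1.5,  0.3, -0.7],
     [-0.2,  0.8,  0.1]]
b = [0.0, 0.1, -0.1]

state = [0.0, 1.0, 0.5]
for _ in range(5):
    state = step(w, b, state)
print(state)  # each activation stays in (0, 1)
```

The catch, relevant to Siegelmann’s result, is that the weights here are float64 values, not arbitrary reals.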

The original article I read about the proof talked about a small network with arbitrary real-number weights. It seemed to me that an analog system might be able to have arbitrary real weights instead of a set of discrete values. But maybe not; I’m not a mathematician or physicist, but it seemed like a pretty interesting result.

You can’t simulate an analog recurrent neural network with arbitrary real-numbered weights as described in her proof; it can perform computations that are beyond the Turing limit. But it looks like this is really more in theoretical-mathematics land than in our brains.
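For what it’s worth, the finite-precision half of that point is easy to see on an ordinary computer: a float64 weight carries only 53 bits of significand, so “arbitrary real” weights collapse onto a discrete grid. A quick Python illustration:

```python
import sys

# float64 has a 53-bit significand: roughly 15-16 decimal digits survive.
print(sys.float_info.mant_dig)  # prints 53

# Two "different" real weights that are indistinguishable in float64:
# 1e-20 is far below the spacing between representable values near 0.1,
# so the sum rounds straight back to 0.1.
a = 0.1 + 1e-20
print(a == 0.1)  # prints True
```

Any construction whose power depends on infinitely many digits of a weight is lost the moment the weight is stored digitally (and, per the earlier point, thermal and quantum noise impose an analogous cutoff on analog hardware).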

I’m with Dennett in that you can’t simulate consciousness without being in some way conscious. In other words, zombies don’t, and can’t, exist. Or we are all zombies, take your pick.

The first thing you need is a logic diagram showing what consciousness is. Once you know the process, you could try to program it into a computer, and you could succeed if you can say what the process is. I don’t think that can be done: has anyone ever made a diagram of just what it is to be self-aware, of all the processes needed? That knowledge has to come first, then the program for those processes.

That is why I have always thought consciousness is what is referred to in scripture as being in the image of God. If you can duplicate it and invent a program which actually understands itself, which knows it is a computer program, and has its own opinion about you, you will win every science prize on the planet. But to start, you need that logic-diagram flowchart first.

Clearly you don’t hang out on Usenet very much. :slight_smile: