Question about AI and conciseness.

OK, so I have a few questions, and if this ends up in GD or IMHO that’s fine; I believe this will turn into a philosophical debate at some point. But here is my question.

It is my understanding that there is some type of law of computers (can’t remember the name) which basically states that any computation can be preformed on any computer; some may just take longer to compute. Meaning in theory I could run the same calculations on my TI-83 that my computer does to run Windows 7, it would just take forever and ever. Now, assuming this is true, once our computing power gets powerful enough to simulate a human brain (I don’t think AI will ever be possible as long as computers just know if, then, else) I could simulate a human brain on a calculator, just not in real time. Now here is where my question comes in about computing AI. What about mechanical calculators? I am not talking about an abacus, but the more high-tech ones which use little balls dropping down holes or a winding gear. These are able to preform calculations, albeit at a very slow rate. Could these machines ever be used to calculate human conciseness, ignoring the lifetime it would take to compute?

In principle, yes: if there’s a program that can simulate human consciousness, then it can be run on any computer. However, the average pocket calculator doesn’t have the memory you’d need to run Windows 7, which probably means that it wouldn’t be suitable for simulating a human brain.

Given infinite memory, yes, you could do it.

There’s an xkcd strip about it.

I am not sure what law you’re thinking of. Turing determined that the halting problem was undecidable (you can’t know in advance whether a particular algorithm and data will run forever, or halt), but that sounds different from what you’re getting at.

I think you can build mechanical devices that implement logic gates, so I suppose you could design an arbitrarily complex mechanical computer that could do anything a modern computer could do (given enough time). But the limits of simulating a human brain on a computer are not simply the limits of computing power. Even if we had unlimited computing power, we just don’t understand how to do it. I took an AI course when I was studying computer science in the late 1970s, and though the field is still thriving, it has never delivered on its promises. The industry is getting better at solving fairly narrow problems, and neural network systems and genetic algorithms can mimic learning without explicitly programming it. But we don’t even really understand what consciousness is, much less know how to program a brain.
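To make the logic-gate point concrete, here’s a toy sketch (my own, not from any actual mechanical design): every Boolean function can be built from a single gate type such as NAND, and a NAND gate is exactly the kind of thing you can realize with levers, marbles, or gears. Compose enough of them and you get arithmetic.

```python
def nand(a: int, b: int) -> int:
    """The only primitive gate; everything else is built from it."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple:
    """Adds two bits using only NAND gates: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

A marble machine implementing `nand` would compute the same truth table, just much more slowly, which is the OP’s point.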


And examples of why not to rely on spellcheck:

preform => perform
conciseness => consciousness

You’re thinking of a Turing machine. Not really a law, so much as the foundation of the theory of computation.

It sounds more like the Church-Turing thesis to me.

Ah, yes, even better to reference the actual theory than the thought experiment. I retract my contribution in favor of yours.
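For anyone curious what the thought experiment actually looks like, here’s a minimal Turing machine simulator (an illustrative sketch, not anyone’s formal definition). The machine below just flips every bit on its tape and halts; the point is that a trivial device, a tape plus a table of rules, can in principle carry out any computation, however slowly.

```python
def run_tm(rules, tape, state="start", pos=0, max_steps=10_000):
    """Run a one-tape Turing machine until it halts (or gives up)."""
    tape = dict(enumerate(tape))          # sparse tape; blank cell = "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Rules: (state, symbol read) -> (next state, symbol to write, move)
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_tm(flip, "10110"))   # -> 01001
```

The Church-Turing thesis is the claim that anything effectively computable at all can be computed by some rule table like `flip`, which is why it covers marble machines and TI-83s alike.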

We don’t know how to simulate consciousness the way we know how to simulate flight or simulate 3D environments, so it’s difficult to say what kind of computing power you require.

There are some different schools of thought. One states that you can kinda figure out some algorithms that describe thought well enough to come up with a simulated intelligence, like ask.com. Another is that you need to do a simulation of all the neurons and their interactions. Another school thinks that you really need to simulate on the atomic level by simulating every atom.

Each of these approaches requires exponentially more computational power than the last.

Now, some of the research I’ve seen has been around simulating mouse brains, or parts of mouse brains. It’s important to know that they use the “neuron modeling” method. The larger issue here is that no one really knows if any of this works. We can take inputs and see outputs, but until it’s hardwired into a real mouse’s brain, we can’t be certain that what we are doing is right. The last thing I read about this was a mouse brain running in 1/10th time.

As far as humans go, we still don’t have the software. I guess you can extrapolate and say we need x amount of neurons with x amount of synapses and walk away with an answer, but it has yet to be proven that any of these models do anything related to what we consider consciousness. These simulations show that it’s computationally possible, but without the magic that is the structure of the brain, it’s not fair to call it AI or simulated AI. It may be that maintaining the proper structure makes things a bazillion times more complicated.
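For a sense of what the “neuron modeling” method means at its very simplest, here is a leaky integrate-and-fire neuron (my own toy example; the actual mouse-brain projects use far richer models than this). Charge leaks away each time step, and the cell fires and resets when its potential crosses a threshold.

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: potential decays by `leak` each step,
    accumulates input, and a spike fires (and resets it) at threshold."""
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v = leak * v + i
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

# A steady weak input never fires; a stronger one fires periodically.
print(lif_spikes([0.05] * 20))   # -> []
print(lif_spikes([0.5] * 10))    # -> [2, 5, 8]
```

Multiply units like this by tens of millions, wire the spikes into each other, and you have the flavor of those simulations, which is exactly why the structure (the wiring) is the hard part.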

Only if the computer is a Universal Turing Machine, but not every possible computer is a Turing machine, let alone a UTM.

Assuming you mean “consciousness”, then yes, if you think consciousness can be simulated by a fast enough computer (I do).

It appears that there are things that can be computed by a recurrent neural network that cannot be computed by a Turing machine that is allowed to complete an infinite number of steps.

This is the person that created the proof, I’m not sure if it’s been validated or not:

Given that the brain appears to be a recurrent neural network, it would seem to imply that we cannot simulate everything the brain does on a regular computer. Maybe.

The thing about the mouse brain model is that it absolutely does not model all of the functions of those same neurons and synapses in the real version. To the point of being functionally different (meaning that things left out of the simulation have been shown to influence brain function in various studies).

It’s interesting from the structure perspective but not from the function perspective.

Philosophically, there isn’t any way to know for sure whether something is experiencing consciousness or if it’s just dumbly but completely simulating all the outward signs of it.
This applies to you and me too - I know I have an internal thought life and sense of self, but I can’t be truly sure you have. Likewise you can’t be sure this wasn’t typed by a complex, but completely unthinking, non-experiencing automaton.

The issue is that these outputs don’t correlate to anything. I can write a two-line program that can output random things. The real issue is whether it solves any problem or provides any function. So far these simulations are academic exercises in the software engineering of simulations on supercomputers. The “mouse” has never shown anything like the ability to run through a maze to find cheese, or the ability to run from predators. When they can prove they are simulating something, then we can discuss how much energy it takes to simulate brains.

That is absolutely not a simulation of a mouse brain. At best it is a system with equivalent computational power* to a mouse brain, but it does not compute the algorithms that a mouse brain computes. (Figuring out what those algorithms are is the really hard part.)

*Or 1/10th of the power, I guess, since power is a function of time. My Dell can calculate something in the blink of an eye that would take a Turing Machine (I mean a machine with a tape and read-write head, such as Turing actually conceived) many hours. From that it does follow that my Dell is more powerful, even if the TM can eventually complete any computation that the Dell can.

ETA: OK, I guess you knew that.

Fair enough - it’s not even a Chinese Room mouse yet…

To return to the point of the OP, actually speed does matter when you are talking about simulating brains and what they do. Your brain does not exist in isolation. It is constantly taking in information, at high bandwidth, from the rest of the world and the rest of your body, and outputting (via muscles, glands, etc.). Furthermore, those outputs affect the inputs, and are often done specifically for that reason: every time you move your eyes (which you do several times per second) or scratch your knee, or sniff the air you are changing the inputs to your brain, usually because your brain is actively looking for certain environmental information (what is over there to the left? what chemicals are in the air?) that may be relevant to your ongoing behavioral purposes.

This continual, high bandwidth interaction between the brain and the environment and the rest of the body is essential to human cognition and consciousness. After all, that is why we have brains, to enable us to interact more adaptively with our environment (and to maintain bodily homeostasis). The trouble is that the world, and the body, runs on its own schedule. If the brain computes what it needs to compute too slowly, its output signals will be too late to be relevant. That smell will no longer be in the air (or the poison gas will already have killed you); the tiger whose movement you noticed in your peripheral vision will have eaten you before you are even able to look towards it to see that it is a tiger, let alone take any defensive measures; but, in any case, you won’t be able to see properly, because the iris of your eye will not be able to contract quickly enough to shut out the dazzle of bright light (or dilate quickly enough to let in sufficient light when you eventually turn your eye towards the shade where the tiger might be hiding).

In a way, though, the tiger is a misleading example, because the point is not just that a brain that works too slowly will get you killed (although it will), but that this high level of real-time interaction is constant (much of it below the level of consciousness, but essential to maintaining consciousness), and it is really what cognition in general, and consciousness in particular, are all about. Computation on its own (even if it is computation of the right form: even if the algorithms being computed are the one the brain actually computes) won’t get you consciousness.* Computation that controls rich, two-way interaction with the world probably will (indeed, probably does) get you consciousness, but if the computation is too slow the world will have moved on, and the interaction will break down.

*If, *per impossibile*, you could cut off someone's brain from all inputs and outputs, it may be that consciousness - thinking, remembering, imagination and dreaming - would continue for a short while. However, I am confident that the mind would soon fall apart. Try to imagine what it would be like: absolute nothingness. If you could somehow make a brain that had *never* had any input or output interaction with the rest of the world (note, even a foetal brain does have some degree of interaction) it would not be conscious in the first place. It would have nothing to be conscious *of*.

And experiments in sensory deprivation bear you out.

Aha, but that’s what consciousness is, the awareness of self, not awareness of surroundings. No external input required to establish consciousness, although a brain that was so deprived from birth probably couldn’t develop and mature. But if you cut off all sensory input from a fully developed conscious brain, it would still be conscious.

I think computers are already much better than humans at being concise :wink:

Awareness of surroundings most certainly is consciousness. I am conscious of the monitor screen in front of me, amongst many other things, right now. There is room for argument, perhaps, over whether we are also conscious of a self, but many philosophers and scientists hold that the self is no more than a grammatical fiction, a placeholder subject for sentences about experiences and thoughts. An alternative view is that your self is simply your body (including your brain), but consciousness of that still depends on high bandwidth information exchange between the brain and the rest of the body. (Brains are not conscious of themselves.)

What evidence do you have for those claims? I will tell you. None whatsoever. No brain has ever been cut off from all input (and output) and then brought back into contact so that it can be questioned about whether it was conscious in the meantime. (Note, the brains of people who are asleep, or have locked-in syndrome, are still in high bandwidth interaction with the external world and their own bodies, even though the bandwidth is not so massively high as it is in awake, healthy people.)

You (and, I am sure, many other people) believe these claims not because of any evidence, but because you explicitly or implicitly subscribe to a more-or-less Cartesian philosophical theory about the nature of consciousness. (I do not mean, necessarily, the idea that consciousness depends on something non-physical, but the idea that it depends on something inside you - a “self” if you like - that is real, but less than your physical organism as a whole. For Descartes that something was an immaterial soul. For modern Cartesian Materialists it is something-we-know-not-what in the brain.) There are arguments in support of this view, from which your claims would more or less follow, but I happen to think that there are better ones against it. What there certainly is not, is any empirical evidence for these claims, and, for both technical and ethical reasons, the experiment is unlikely ever to be performed.

Relatively intact humans and (arguably) animals are the only beings that we have good reason to believe to be conscious. Their brains are all in constant high bandwidth interaction with the world around them and with the rest of their bodies, and certainly most of the things of which they are conscious are either in this surrounding environment or in the body itself. It is therefore far from unreasonable to think that this interaction is actually necessary and important for consciousness.

Well, yes, a conscious mind is processing input from its surroundings, but that doesn’t mean it’s the very definition of consciousness.

I don’t believe that consciousness has a scientifically testable definition today. IMHO it is partly a matter of science and partly a matter of philosophy. Therefore I am not making a claim; I am putting forth one model.

You are putting words in my mouth. I’ve said nothing of the kind. In fact I reject wholesale the outmoded models of “the man in the driver’s seat” that require something separate from the physical self to establish consciousness. I believe that consciousness is a physical process, not a metaphysical entity.

Human consciousness requires something more than reactive processing of external stimuli; much of that processing has been shown to fly under the radar of consciousness. It requires the ability to think abstractly, and to consider the past and future, for example. It is not necessary to have external stimulus to do these things; I can think about the nature of the cosmos, yesterday’s ball game, or my to-do list for tomorrow’s work day without a whit of external stimulus to do so. That is the basis for my “claim” that you could cut off input and still remain conscious.

The closest we are likely to come is the sensory deprivation chamber, which for short-term use is reported to enhance meditation (certainly a conscious activity). (Longer use can have profound psychological effects but that in itself does not affect a theoretical discussion about the nature of consciousness.)