What causes consciousness?

It seems like the brain creates it, but how? How can inanimate material create something like consciousness? If it indeed can, there is nothing stopping us from building artificial intelligence, right?

It is clear that emotional and mental states have correlates in the brain, but neuroscience still isn’t anywhere close to explaining how awareness actually arises.

I would assume that consciousness is the result of the workings of the brain. Like a shadow is the result of something blocking a light source.

We can make intelligences already. It generally involves a special hug. :smiley:

As for whether we’ll ever have the engineering capability to scratch build one from parts, I don’t know. The laws of physics say that consciousness can exist, but that doesn’t mean we’ll ever have the capability to craft it.

Well, that’s the problem of consciousness in a nutshell. No one knows how it works or why we have it. But recognize that saying we are conscious is an assumption. Maybe we are actually philosophical zombies. That’s too reductionist for most people, and there have been 10+ page threads about it on this board, but I’d bank on that one. Kinda like the story about Napoleon and Laplace, who told him he had no need of a deity in his model. Laplace didn’t set out to do it that way; the question just never really came up.

Qualia (the stuff you’re probably talking about when you’re referring to “consciousness”; the “what it’s like” of an experience, the “redness of red,” etc) don’t seem to fit into any physical explanation of anything, and moreover don’t seem to be the plausible end result of any conceivable physical explanation.

This isn’t to say qualia don’t exist–but discussion of them, it appears to me, isn’t best undertaken in a scientific context.

This troubles me, in fact it troubles me a lot–because it appears to me to make some important-seeming questions impossible to answer–but still, it’s how things appear to stand.

It’s true, I think, that qualia are not physical in the sense of being described by our current laws of physics. The other side of that coin is that they can’t be directly tested for, since they are not defined physically. So we can’t test for consciousness to try and determine the neural correlate of consciousness. But once we have identified the neural correlate, we can test for it.

And it seems like we might have some pretty good info with which to determine a neural correlate. For instance, Giulio Tononi, a sleep psychiatrist, used this “definition” of consciousness: what you have when you’re awake and lose when you fall asleep. From his investigation, he produced the only theory of consciousness I know of. He identified consciousness with integrated information (which can be precisely measured, as he explains) and the type of neural activity that integrates information is the neural correlate of consciousness. His theory has the consequence that consciousness comes in degrees and that almost every system has “some” consciousness - even (or especially) parts of your brain that are not part of the largest complex which is responsible for the consciousness you’d call “your consciousness.”
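To make “integrated information can be precisely measured” a little more concrete, here is a toy sketch of my own (not Tononi’s actual phi measure, which is considerably more involved): it uses the mutual information between two halves of a system as a crude proxy for how much the whole carries information beyond its parts.

```python
# Toy sketch (my own illustration, not Tononi's actual phi): use mutual
# information between two halves of a system as a crude proxy for how
# "integrated" the system's state is.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits, estimated from a list of (a, b) observations."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), count in p_ab.items():
        p_joint = count / n
        mi += p_joint * log2(p_joint / ((p_a[a] / n) * (p_b[b] / n)))
    return mi

# Integrated system: half B always mirrors half A, so they share a full bit.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Unintegrated system: the halves vary with no relation to each other.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(mutual_information(coupled))      # → 1.0 bit of integration
print(mutual_information(independent))  # → 0.0 bits
```

Tononi’s actual measure minimizes a quantity like this over all ways of partitioning the system, which is what makes it hard to compute for anything brain-sized.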

He’s written multiple papers about it. One can be found here http://www.biomedcentral.com/1471-2202/5/42/ or by searching “An information integration theory of consciousness”

Right now, we are not certain, but another generation of neurobiology will get us very close I think, and perhaps make artificial consciousness and strong AI possible. The behavioral sciences are still very young as well in understanding cognition and other mental processes and are at least a generation away from developing a comprehensive theory.

Personally, I think Western philosophy is reaching a dead end in understanding consciousness due to incorrect assumptions on how it is developed. What little we know does make it appear that it is derivative of biological processes which in turn are derivative of chemical processes and not innate, or created. The separation of mind and matter created all sorts of false paths.

In this regard I find Eastern philosophy, specifically Buddhist and Hindu texts and the theory of dependent origination, more ‘enlightening’ about how our mind and the physical universe interact with one another. I am certainly no expert on those philosophies, but I find the layman explanations more compelling than Western explanations.

Yet the majority of Western scientific research is based on the Western paradigm. I think once Eastern philosophies are more fully understood and subjected to the same rigorous research, more breakthroughs will be made. And much of that depends on building a comparable institutional structure in countries such as India where those philosophies are prevalent. But I have seen promising neurological research on Buddhist monks and other ‘mystics’ who are what I would call experts in their own theories of consciousness, and that provides deeper insights than I think Western psychology does.

What makes it difficult is that it does require an interdisciplinary approach combining neurology, biochemistry, behavioral science, the computer sciences, psychology, philosophy, and others.

I can say this. Much progress has been made since I was a teenager twenty-odd years ago and first started asking the same question. Unfortunately, most of it was in just defining the question in a way that it could be tested through empirical methods, and not just philosophical inquiries.

I’ve thought about this some but haven’t done any reading on it, so here are my thoughts. It seems to me that consciousness is an emergent property of a bunch of brain cells with high computing power being packed together. That is–brain cells have the ability to make connections and retain memories and deduce etc., and once you get enough of these together there’s some “spark” that creates consciousness. The whole is somehow greater than the sum of its parts.

A couple of implications/observations based on this:

  1. Animals have some lesser consciousness than humans due to their smaller amount of brain power, and animals with higher brain power have more of a consciousness than animals with lower brain power. This seems to be supported by simple observation (e.g., a cat knows it’s alive and tries to remain so, an elephant will protect its young whereas a mouse sometimes eats them).

  2. Computers may become conscious once we can pack enough computing power into one. Defining “one computer” for this purpose is a bit tricky–perhaps the internet will become conscious once enough computers are connected to it. However, perhaps the spark is a phenomenon that can occur only with biologically derived computing power–it seems to me that the spark may need some temporal and spatial consistency to work, which wouldn’t be present in a computer (i.e., once you pulled the plug, there’d need to be a new spark).

  3. In teleporter hypotheticals that involve destroying a human and recreating him exactly on the other end, I believe that the recreated human is not the same person (i.e., the same consciousness) as the one that was destroyed (even though he will believe himself to be and it would be impossible for anyone else to tell he was not).

I would have to say that this unfortunately sounds a little wooish.

I do think this issue is related to the quest of artificial intelligence, and Jeff Hawkins has mentioned that one should be careful before going for philosophical explanations.


I agree that there has been a significant aspect of ‘woo’ in the initial research, far too much of it philosophically oriented. But Buddhists especially have a deep literature on consciousness and different mental states. And as neurological imaging has developed, particularly fMRI, there have been some very good studies mapping the minds of monks while in meditative states.

This article provides a decent overview of some of them. And there is a growing literature examining Buddhism and neuroscience.

There has also been very good research and empirical data on mindfulness therapy, which combines cognitive behaviour therapy with Buddhist meditation. I have been using it to control my own issues, and have found it more effective than traditional psychological techniques. Specifically this book - The Mindful Way through Depression. The authors are all clinical researchers and/or practitioners.

One major difference is that (most*) Buddhists never viewed the mind as a static entity separate unto itself, but as a dynamic process in continual development, very much subject to one’s environment, and an organic function - both literally and metaphorically. A major part of meditation and the monk lifestyle is creating an environment that allows one to engage in deep meditation and observe how one’s cognitive processes work. I think that paradigm will lead us to a better understanding of the brain and how consciousness arises.


*most, since any philosophy that has been around as long as Buddhism has had various sects, schools and teachers comprising a very wide range of beliefs, but I would say that mainstream Buddhism holds the above viewpoint.

From what you bolded in the quote you gave, it sounds like you’re equating “philosophical explanations of the mind” with “metaphysical dualism.” Is that what you intended? I ask because in fact metaphysical dualism is an extreme minority opinion in Philosophy of Mind.

To state my bias upfront, I’m a physicalist (or perhaps more accurately a ‘computationalist’, which is kinda the same thing plus the assumption that it’s impossible to realize physical processes that are capable of hypercomputation); thus, I believe that the cause of consciousness is the activity of the brain, and nothing else. To me, any different position amounts essentially to defeatism – saying that we can’t explain it now, and never will – which I don’t think can ever be a well justified stance; just because I can’t explain, say, the emergence of qualia from physical processes, doesn’t mean it’s inexplicable.

My reason for this position is mainly the problem of interaction – that if there is some sort of mind/body duality, that is, if mind and body are of fundamentally different substance, there seems to be no way for one to interact with the other. This is an old problem; Descartes, the architect of the modern dualist view, was well aware of it and had to weasel himself out of it by positing a special gateway, located within the pineal gland, through which the mind could influence the body, i.e. its physical state and motion.

There’s also the idea that mind and body are in a sense ‘synchronized’ – i.e. that while there is no actual interaction between the two, they evolve ‘in parallel’. Picture two clocks, one on your desk, the other in the hallway. You’ll notice that whenever the clock on your desk shows the full hour, you hear chimes from the hallway indicating the same time. However, there is no interaction between both clocks – they’re just synchronized. A similar kind of synchronization then might exist between the mind and the body, with both realms firmly separated. This explains why, when we think ‘grab the pot of coffee on the desk’, our body actually executes this action, without running into the trouble of having to explain the interaction taking place in order to inform the body of the mind’s will.

However, this runs into a problem – since everything we ever see, think, do, or more generally, experience, is, in fact, mental, there would be no difference to the realm of the mind if the realm of the body did not exist at all! We would still feel, think, do and experience the same things ‘in here’ whether or not the body ‘out there’ actually acted correspondingly. This kind of dualism thus abolishes itself.

So the only viable option, to me, seems to be monism – the idea that, whatever it may be, there is only one kind of substance, one kind of stuff for stuff to consist of, in the world. In particular, both mind and brain must consist of the same kind of stuff.

To this end, then, one might conduct a thought experiment. The behaviour of one neuron – roughly, how its outputs are correlated with its inputs – is relatively well understood, to a point that it seems feasible to construct something like an ‘artificial neuron’. If the behaviour of this artificial neuron can be made to be absolutely indistinguishable from the behaviour of a real neuron (and I see no reason why that shouldn’t be the case), one could imagine swapping this neuron in for a real one in one’s brain. There would then be no observable difference in the functioning of this brain, and, since brain and mind are of the same stuff, correspondingly no difference in the functioning of one’s mind – i.e. one wouldn’t notice any difference. The same goes for exchanging a second neuron. And a third. And so on, until, at some point, the whole brain has been replaced by artificial neurons (of course, there’s more to the brain’s functioning than just the neurons, but for the sake of simplicity, I’ll just pretend there wasn’t; at any rate, there does not seem any reason to assume that other parts could not similarly be duplicated and replaced). (I call this the ‘brain of Theseus’-experiment.)
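The input/output behaviour the thought experiment leans on is often abstracted in computational neuroscience as a leaky integrate-and-fire model. A minimal sketch of such an ‘artificial neuron’ (the threshold, leak, and reset parameters here are illustrative, not biological fits):

```python
# A minimal leaky integrate-and-fire neuron: one standard textbook
# abstraction of how a neuron's output spikes relate to its inputs.
# Parameter values are illustrative, not fitted to real neurons.

def lif_neuron(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return a spike train (0/1 per step) for a sequence of input currents."""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = v * leak + current   # potential decays, then integrates the input
        if v >= threshold:
            spikes.append(1)     # fire...
            v = reset            # ...and reset
        else:
            spikes.append(0)
    return spikes

# A weak steady input: the neuron integrates for a few steps, then fires.
print(lif_neuron([0.4] * 8))  # → [0, 0, 1, 0, 0, 1, 0, 0]
```

If an artificial component reproduces this input/output mapping exactly (plus whatever the abstraction leaves out), the swap in the thought experiment goes through unnoticed by the rest of the brain.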

So if, indeed, at no step there occurred a catastrophic loss of consciousness (for which there doesn’t seem to be any justification), then, after having my whole brain replaced, to me, I’d still be me – i.e. subjectively, there wouldn’t be any difference. But then, this must mean that the rejection of dualism – or the acceptance of monism – implies that there is no ‘mysterious element’ to consciousness; and conversely, that those claiming for there to be a fundamental mystery to consciousness are really dualists in disguise, whether or not they consider themselves to be. Thus, everybody that claims that, say, the apparent existence of qualia precludes an explanation of consciousness as arising from fundamental non-conscious, physical processes has to concern themselves with the contradiction inherent in proposing the existence of fundamentally distinct, yet interacting substances.

This has so far been a wholly negative account of the causes of consciousness, which has its roots in the fact that, while I think I have a good idea of what consciousness is not, my thoughts are rather fuzzy on the subject of what, then, consciousness is. But I think perhaps its central phenomenon is its reflexivity – as Douglas Hofstadter put it, ‘a self is a pattern perceived by a self’. So if I had to put my finger on a sloganized ‘cause for consciousness’, this self-reflexivity would probably be what I’d point to.

I think Dennett is right in his criticism of the thought experiments leading to the idea of qualia. How an (apparently) irreducibly subjective account might result from processes in which there is no room for a subject is hard to imagine, and it is easy to present the matter in a way such that it seems impossible to imagine – but I’m not sure that’s quite as clear cut as it is often made out to be.

The classic example is, of course, Mary the colour scientist. I believe that one reason it seems so convincing is the mismatch in bandwidth between visual and textual modes of information gathering: Our visual system gathers tons of bits effortlessly, while reading – implicitly the way Mary avails herself of ‘all the physical information about seeing colours’ – conveys relatively little information in an instant. Indeed, one could posit that the way information is processed in the brain depends on the speed that information is presented to it; then, it would indeed be possible for Mary to ‘read’ all about colour, yet learning something new about colour upon seeing it for the first time, without this having any deleterious consequences to physicalist explanations. But I don’t think even this is necessary.

Take a variation on Mary: Marv, who is, through unknown means, confined to a two-dimensional world. One could argue analogously that there is no way for him to imagine experiencing a three-dimensional world, as there is no way for a linear combination of two-dimensional vectors – which is all he has in his repository to build representations of the world from – to be three-dimensional. However, this conclusion we know to be wrong – it is hard, though not impossible, to teach yourself to visualize four dimensions – at least well enough to conceivably not be confronted with something ‘fundamentally new’ when suddenly exposed to a four-dimensional reality. (French mathematician Étienne Ghys has a page where you can learn to do visualizations of four dimensions.)
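For what it’s worth, the trick behind most such four-dimensional visualizations can be sketched in a few lines: rotate a tesseract’s sixteen vertices in a plane involving the fourth axis, then project down to three dimensions with a perspective divide on the w coordinate (the viewer distance here is a made-up illustrative value).

```python
# Sketch of the usual 4-d visualization trick: rotate a tesseract's
# vertices in the x-w plane, then perspective-project them into 3-d.
from itertools import product
from math import cos, sin

def rotate_xw(p, angle):
    """Rotate a 4-d point in the plane spanned by the x and w axes."""
    x, y, z, w = p
    return (x * cos(angle) - w * sin(angle), y, z,
            x * sin(angle) + w * cos(angle))

def project_to_3d(p, viewer_w=3.0):
    """Perspective projection: points nearer the viewer in w appear larger."""
    x, y, z, w = p
    s = viewer_w / (viewer_w - w)
    return (x * s, y * s, z * s)

vertices = list(product((-1, 1), repeat=4))   # the 16 corners of a tesseract
frame = [project_to_3d(rotate_xw(v, 0.3)) for v in vertices]
```

Animating the angle and drawing the edges between projected corners gives the familiar ‘cube turning inside out’ movies; it is a shadow of the 4-d object, not a 3-d slice of it.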

The difference between Mary and Marv is, I think, only one of quantity, not of quality. While it is just barely possible to imagine that Marv might succeed in visualizing the third dimension, the problem is just so much more intractable in Mary’s case that we are easily persuaded to call it impossible. But if Marv can succeed, Mary just might, too, though we perhaps can’t begin to imagine how.

The thought experiment of Mary succeeds due to the trick of presenting as a sharp divide what is actually a far fuzzier gradation – generating new experiences from old ones. If the way you need to go to produce a new experience is short, we do not think of it as difficult – you can, say, often predict how a dish you make for the first time will taste, or you can imagine a scenery from a description in a book well enough to recognize it when confronted with it in actuality. These are easy cases – you directly possess the building blocks necessary to ‘assemble’ the ‘new’ experience. It gets harder in Marv’s case, where the building blocks you possess – two dimensions – don’t suffice to generate a three-dimensional manifold. And in Mary’s case, it seems that there does not exist any way to reassemble the known to create or infer the unknown – colours can only be thought of in terms of colours. But of course, when it comes right down to the fundamentals, everything in the mind is composed of the same fundamental building blocks – information, encoded in neuron firings. It’s very hard to create the ‘what it is like’ of colour vision from these building blocks; but there is no reason to think it should be impossible.

Terms need to be used with some precision in these discussions.

“Intelligence” ≠ “Consciousness” or “Sentience”.

The first can be defined, subject to some debate, as observable problem solving abilities. The second cannot be directly observed and can only be studied by measuring proxies for it, and correlates of it.

The fact that consciousness cannot be directly observed or measured limits the manner in which it can be studied scientifically … but so long as we recognize the limits of our study, it does not mean that it is not able to be studied. After all, pain is an aspect of conscious experience that also cannot be directly measured, yet we study its perception with scientific tools by accepting various proxies for it, such as patient reports of pain and assumed correlates of pain such as heart rate and blood pressure changes.

Serious work is being done on finding a set of neural correlates of consciousness. The thought is that we can identify a pattern of information processing that occurs in brain tissue and correlates with reports of experienced aspects of consciousness. And so far Hofstadter’s basic idea of “strange loops”, independently expressed by Stephen Grossberg as “conscious states are resonant states”, seems to be how things are shaping up.

IF we can clearly identify the information processing characteristics that correlate with reported conscious perception and sentience in the human brain, THEN we are closer to determining if any future artificial intelligence may have it as well.

I think you are right; I should have concentrated on saying more “mystical” explanations, but even philosophically speaking, there are approaches that do not seem to fit the scientific method.

…even though they don’t.

I was just carefully noting that nothing I said implied they don’t exist.

I agree.

Consciousness = **Kozmik** or **DSeid** or donniedarko

One theory is that it derives from the structure of the human brain. Our brains are divided into two non-identical halves which perform different mental functions. The theory is that one half of your brain will be doing something while the other half of your brain is essentially sitting back and observing what the first half is doing. This process of simultaneous mental activity and observation of mental activity is what leads to consciousness.

This reminds me of Stephen Thaler’s ‘Creativity Machines’, neural networks in which there exists indeed a bipartite structure, divided between the ‘imagitron’, which, in a sense, comes up with ‘mental’ content, and the perceptron, which judges and filters this content; though I’m never quite sure how far to trust Thaler’s claims, as they verge on the far-out a fair bit. (I’ve asked about, and given a brief description of, his work here, but never got many takers.)
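The bipartite generate-and-filter idea itself is easy to caricature in code. This is a toy illustration of the structure only – not Thaler’s actual architecture – with the component names borrowed from his terminology and the scoring target entirely made up:

```python
# Toy caricature (not Thaler's actual system) of a bipartite
# generate-and-filter loop: one component proposes noisy variations,
# another judges them, and only the best-judged proposal survives.
import random

def imagitron(seed, n=20, rng=None):
    """Propose n noisy variations of a known pattern (a list of numbers)."""
    rng = rng or random.Random(0)   # seeded for reproducibility
    return [[x + rng.gauss(0, 1) for x in seed] for _ in range(n)]

def critic(candidate, target_sum=10.0):
    """Score a candidate: higher is better (closer to a desired property)."""
    return -abs(sum(candidate) - target_sum)

seed = [2.0, 3.0, 4.0]              # sums to 9, near the made-up target of 10
proposals = imagitron(seed)         # the 'imagining' half
best = max(proposals, key=critic)   # the 'judging' half
print(best, sum(best))
```

The interesting claim, in Thaler’s framing and in the two-hemispheres theory above, is that something consciousness-like emerges from the judging half continuously watching the generating half; the toy obviously shows only the plumbing, not the emergence.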

For some reason I can’t watch the 4-d visualization video.

But does it really help one visualize a four dimensional space? Or rather, does it show what a three dimensional slice of a four dimensional space would look like as things moved around in the 4-d space over time?

Consciousness is our ‘created essence’, an actual eternal piece of God given to us. Our mind is little more than a worldly processing center where we take input and translate it into things our consciousness can understand; our mind is basically a translation and observation module.

The Mars rovers give us an analogy: the rover doesn’t have a consciousness in itself, but the controllers, programmers and ‘drivers’ on Earth do.

Yes, the rover does have some processing ability, and our mind does also, but that is a result of the programmer’s input, which for us is in the spiritual.