What Is Consciousness?

This is an extreme version of the “false positives” problem with the Turing Test. We can already write programs that will deceive very young or rather naïve people.

There was that movie where the guy painted a face on a volleyball and talked to it. After a couple of years…I wouldn’t be all that surprised if it started talking right back.

(I also see this as a problem in “existence of God” discussions. A lesser entity could very likely deceive any of us into thinking it was “almighty,” even if it was far from omnipotent, but just very much more powerful than we are. “Q” from Star Trek, with trivial effort, could make just about anyone think he was the Creator of the Cosmos, even though he wasn’t.)

These are all possible routes to a “reductionist,” break-it-apart approach to consciousness. (Much as optical illusions are an approach to theories of vision.)

We really do learn a lot from when things go wrong.

(Stephen Jay Gould notes this in his essay, “What, If Anything, Is a Zebra?” He calls attention to certain birth defects in zebras that cause the stripes to break up, to become spotted, so to speak. This tells us something about the way the stripes – or spots – are laid down, and helps to answer the question, “Black with white stripes, or white with black stripes?”)

It is a little disturbing to study mentally retarded people for an understanding of human cognition, but so long as we aren’t actually going out and causing it, the moral difficulties can be made acceptable.

(Monty Python: “There just aren’t enough accidents. It’s unethical and time-consuming to go out and cause them…”)

Sort of like Wilson, the volleyball in Cast Away. But of course that’s just delusion. Unless the ball talks back at some point.

It was just hobby stuff, no supercomputer; I kept trying to find more power, but it just wasn’t in the budget. Generally I ran it on my PC at home, typically for a few days and sometimes for weeks. I wrote it to be able to run on networked PCs, and I tried that a few times, but the communications overhead prevented it from being very much faster.

The environment was 2D and about 5,000 x 5,000 units. Each creature was a hexagon with a short perpendicular line on one side that acted as a mouth (if it was touching another creature, energy was transferred). For movement, they could rotate in either direction or move forward, as if they had legs/wheels under them, controlled by motor neurons. Creatures were about 15 units in diameter; they had touch sensors distributed densely and evenly at every position along their surface, and they had eyes with about 20 receptors/radars that were aimed to spread out a little for wider distance vision.

There were basically creatures (which could be eaten for food), food items, walls/objects and hazards (objects that suck energy).
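(A minimal sketch of that world in Python, for the curious. Every name and constant below is an illustrative guess at the shape of the thing, not the original code.)

```python
# A minimal sketch of the world described above. All names and constants
# are illustrative guesses, not the original code.
import math
from dataclasses import dataclass

WORLD_SIZE = 5000              # 2D arena, about 5,000 x 5,000 units
CREATURE_DIAMETER = 15         # hexagonal body, about 15 units across
NUM_EYE_RECEPTORS = 20         # ray-like receptors fanned out for distance vision
EYE_SPREAD = math.radians(60)  # assumed total fan angle

@dataclass
class Creature:
    x: float
    y: float
    heading: float             # radians; motor neurons rotate it or push it forward
    energy: float = 100.0      # transferred mouth-to-body on contact

    def eye_directions(self):
        """World-space angle of each eye receptor, fanned around the heading."""
        step = EYE_SPREAD / (NUM_EYE_RECEPTORS - 1)
        start = self.heading - EYE_SPREAD / 2
        return [start + i * step for i in range(NUM_EYE_RECEPTORS)]

@dataclass
class Food:                    # edible item; yields energy when mouthed
    x: float
    y: float
    energy: float = 50.0

@dataclass
class Wall:                    # inert obstacle
    x1: float
    y1: float
    x2: float
    y2: float

@dataclass
class Hazard:                  # drains energy from anything touching it
    x: float
    y: float
    drain_per_tick: float = 5.0
```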

Their brain’s neural structure was completely random at first and evolved based on competition and survival. The brain could map to input and output neurons, it could have many levels, it could be recurrent, skip levels, connect to itself, etc. Each neuron was only connected to whatever it evolved to connect to, there were multiple types of neurons (different functions) and synapses were also functional and could have many different ways of operating.
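(Again a sketch, assuming details the post doesn’t give: what a “completely random, freeform” topology like that might look like, with multiple neuron types, recurrence, and self-connections all allowed.)

```python
# A rough sketch of the freeform evolved topology described above: random
# initial wiring, multiple neuron types, recurrence and self-connections
# allowed. Every name and parameter here is an illustrative assumption.
import math
import random

NEURON_TYPES = {   # assumed menu of neuron functions
    "sum":       lambda x: x,
    "sigmoid":   lambda x: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x)))),
    "threshold": lambda x: 1.0 if x > 0.5 else 0.0,
}

class Neuron:
    def __init__(self):
        self.kind = random.choice(list(NEURON_TYPES))
        self.inputs = []    # list of (source_neuron, weight) pairs
        self.value = 0.0    # previous tick's output; this is what makes recurrence work

def random_brain(n_neurons=30, n_synapses=90):
    """Wire synapses at random: any neuron may feed any neuron, including
    itself, so loops and skipped 'levels' arise for free."""
    neurons = [Neuron() for _ in range(n_neurons)]
    for _ in range(n_synapses):
        src, dst = random.choice(neurons), random.choice(neurons)
        dst.inputs.append((src, random.uniform(-1.0, 1.0)))
    return neurons

def tick(neurons):
    """One synchronous update: read last tick's values, then write new ones."""
    new = [NEURON_TYPES[n.kind](sum(s.value * w for s, w in n.inputs))
           for n in neurons]
    for n, v in zip(neurons, new):
        n.value = v
```

Selection then keeps whichever random wirings happen to eat and survive; nothing about the topology is designed.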

Well, I’m not sure that follows. You said it yourself: a photograph of a volcano isn’t hot, and neither is a simulation of one. A photograph is a representation, and I don’t see why a simulation should be more than that. In that sense, certain properties are identical for a thing and its simulation, and certain aren’t. The photograph reproduces the color and shape of the volcano; a film might reproduce its dynamics; a simulation ‘incorporates’ those dynamics in a way that makes predictions about future events possible. But nowhere will you find a property such as heat. Likewise, the simulation of a black hole will not warp spacetime around it, and the simulation of a solenoid through which a current runs won’t attract the paperclips on your desk.

Some properties are simulatable: properties like complexity or intelligence, and abilities like writing novels, composing music, or a rat’s ability to find the right way through a maze. If you can write a computer program that writes novels, then the program will itself be a novel-producing entity, so a simulation of a novel-producing entity is a novel-producing entity. But some properties appear not to be: a simulation of a gravitational-well-producing entity is not itself a gravitational-well-producing entity. In this sense, it seems entirely possible that a simulation of a consciousness-producing entity is not itself a consciousness-producing entity.

This possibility does not detract from naturalism, and is not at odds with the laws of physics being computable: laws tell us how things behave, not necessarily what those things are. So the laws governing gravity-well-producing entities are not themselves a gravity-well-producing entity, and in a sense, a simulation is just a kind of ‘animation’ of those laws. Besides, the same bit of mathematical formalism may be applicable to completely different physical entities—for instance, you can use the same mathematics to describe a two-state atom, and the spin states of an electron. You can then fill pages and pages of calculation without ever specifying what system you’re talking about (it’s something I spend quite some time doing). So any simulation incorporating those mathematics will similarly not ‘talk about’ the underlying physical system it is supposed to apply to.
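To make that last point concrete with a standard textbook example (the symbols below are mine, not anything from upthread): one and the same two-level Hamiltonian,

```latex
% One formalism, two physical readings: nothing in the math says which.
H = \frac{\hbar\omega}{2}\,\sigma_z + \hbar\Omega\,\sigma_x,
\qquad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\quad
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
```

can be read with its two basis states as the ground and excited levels of a driven atom, or as the spin-down and spin-up states of an electron in a magnetic field. The formalism is identical either way, and a simulation built on it is equally silent about which system it ‘is.’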

So I think there is room for a naturalistic account of consciousness without buying into the notion that it’s sufficient to run a program for consciousness. I mean, think about how strange your last quoted sentence would sound if the words ‘thinking mind’ were swapped out: ‘But if it really is running a program that emulates a volcano…then it is a volcano.’ There doesn’t seem to be any sense in saying this; and nevertheless, that doesn’t mean volcanoes aren’t completely physical objects.

I’m impressed! It sounds like a heck of a lot of fun, and something somebody might have amped up into a Bachelor’s Thesis!

This is one of the great joys of the computer age: people can accomplish more, as hobbyists, than professionals and PhDs could, in times not so very long ago! And have so much fun doing it, too!

I won’t let you have it both ways…any more than I, myself, can be allowed to have it both ways.

If the emulation runs every detail of the real thing, then how can it be told apart from the real thing? If the photo of the volcano happens to be hot, and emit molten rock, and cause new landmasses to arise in the ocean…how do you know it’s not a volcano?

If the emulation of a consciousness isn’t actually conscious…then what’s missing? What specific detail is lacking?

You can’t say, “It’s a perfect emulation” and then suddenly yank the rug out by saying, “Oh, but it isn’t perfect after all.”

Most of us here have taken “operation” as what’s lacking. The mapping model, without actually performing any functions, isn’t conscious, because it’s inert.

Once you start operating it, once you have a non-inert modeling of consciousness, what basis do you have to say, “But, of course, it’s not conscious?”

I’m seeing some circularity here.

Working theory: Consciousness is that quality of mind that is present with us while we are awake and aware. We cannot escape from it; although we can become unconscious or semi-conscious, we are never separated from our consciousness insofar as we are never separated from our bodies, from which our consciousness arises. It is not possible for us to experience another person’s consciousness directly, though we may make educated guesses about what someone else is thinking or feeling.

Q. Can we be conscious when we are dreaming? Or perhaps lucid dreaming?

Agreed about the computer age; I can’t imagine not being able to explore these types of things. I always feel bad for people like Turing not having a PC to play with.

One can certainly “zone out” to quite a degree, while reading or watching a movie, to the degree of losing much of one’s own self-awareness.

Dreaming seems to have some degree of consciousness involved. The “dream self” seems to be at least some variety of “self.”

Lucid dreaming is more conscious than the other kind. There is a higher degree of volition, and a higher degree of intent. In an ordinary dream, the observer is largely passive, just watching events unroll, but in lucid dreaming, there is a clear (or clearer) desire and object. Not just, “Hey, watch the butterflies turn into candy canes,” but “I want them to fly in formation, like a big letter ‘H.’”

Wouldn’t that have been the coolest thing? (Or if Mozart had had a synthesizer?)

As I understand it, lucid dreaming also involves the dreamer “waking up” within the dream without actually waking up, but fully realizing within the dream that they are dreaming and can literally do anything. Most of us get too excited about this realization and wake up. Perhaps there is a higher level of consciousness during this waking life as well that we occasionally get a glimpse of. A higher level of awareness, if you will.

Higher? I would call it “lower,” really. (I’m only able to do semi-lucid dreaming. I can roughly steer the course of a dream, but in a rough, very inexact kind of way. Like steering a raft through the rapids: I can choose the left channel or the right channel…but I’m still on a raft going through the rapids!)

To me, it feels like partly waking up, but only partly. One chunk of the mind is still in the midst of sleep paralysis, but another part is awake.

Among other things, long-term memory is still paralyzed even in lucid dreaming. When the lamp turns into a parrot, you don’t remember that it should be a lamp, just as in ordinary dreaming.

In my opinion, it’s more like drug use. It may feel like some kind of “higher” plane of consciousness, but, really, it’s diminished capacity, not expanded.

On what principle?

But no such simulation can exist! It’s not possible to write a computer program such that any device running it emits the heat of a volcano, or creates the gravitational field of a black hole.

That’s not what I’m saying. Rather, I’m saying that a perfect emulation doesn’t, and can’t, recreate all the properties of the original. If you wish to argue otherwise, then show me how I program my computer to generate a gravitational field.

Then most of you are attacking a strawman, since as I’ve been pointing out, the rock performs operations just in the same sense as any computer implementing a program does.

Interesting points. Sometimes I wonder whether that alternate dream reality, where lamps turn into parrots, is maybe a preview of the real world, and we stumble through it like drunken sailors because we’re just not ready for that kind of reality yet. And then other times I wonder if I remembered to take my meds.

Not in the “same” sense as any computer.

With a computer, we directly alter the initial state of the hardware (altering the voltage levels in many places) as a direct representation of the program (based on how we know the computer transitions from state to state), and when the program is run, the voltage changes are directly dependent on that initial state we set.

With a rock, we are just saying that, given any kind of flux and a suitable mapping, we can claim it is running any arbitrary program.

In reality, WE ran the program in our heads to arrive at the correct sequence of states and mappings, and then merely presented the results.
It’s basically a one-time compression where all of the original information and transformations got moved to a different place and we left a token in its place.
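(A toy version of that point, with every name invented for illustration: the “mapping” that makes the rock compute is just a lookup table we could only build by running the program ourselves first.)

```python
# Toy version of the rock-vs.-computer point above; all names are invented.
# First, a real computation: each next state depends causally on the current one.
def real_program(state):
    return (state * 3 + 1) % 17          # arbitrary transition rule

trace = [5]                              # we run it ourselves, from initial state 5
for _ in range(6):
    trace.append(real_program(trace[-1]))

# Now "find" the same computation in a rock. The rock just drifts through
# arbitrary physical states; we label them after the fact.
rock_states = ["r0", "r1", "r2", "r3", "r4", "r5", "r6"]
mapping = dict(zip(rock_states, trace))  # built FROM the trace we already computed

# The rock's "state sequence" now reproduces the program's states...
assert [mapping[r] for r in rock_states] == trace
# ...but every bit of the computation lives in `mapping`, which only exists
# because we already ran the program somewhere else.
```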

I get the impression many of you believe we are on the verge of achieving conscious AI, and not just low-level consciousness, but higher order consciousness (HOT), which includes self-awareness. I don’t believe we are anywhere close to this goal and, in fact, a case may be made that it will never be achieved, at least not without including organic material that can physiologically mimic the limbic system. How can you “feel like” something if you can’t really feel anything at all?

Sensory input devices coupled with intelligence alone won’t cut it. In order to “feel”, something crucial is needed in between sensory input and intelligence, namely: highly specialized brain structures, an autonomic nervous system, glands, hormones and more. Perhaps a computer can, for example, be programmed to understand “fear” and the logic behind it, but it will never “feel” fear unless it possesses something that can emulate epinephrine, norepinephrine and cortisol, and other structures that can be triggered by and respond to them.

I think we can agree that lower order consciousness evolved before higher order consciousness, but at what points along the evolutionary tree they each emerged, we don’t know (e.g. is an octopus self-aware? does an insect hive possess some type of collective lower order consciousness?). I view them separately (higher order emerging from lower order), and each as a continuum (i.e. a species may have weak lower order consciousness and strong higher order consciousness, or vice versa).

Consider an advanced, but purely mechanical, AI entity vs. your average dog or cat with regard to self-awareness. I believe dogs and cats have higher order consciousness (why wouldn’t they? Evolutionarily speaking, it was selected for in us, so it should have been selected for them too, IMHO), yet, they fail the mirror test. On the other hand, I’m confident that advanced AI would have no problem passing the test. Does that mean the AI is more conscious than cats and dogs? No.

I argue that the AI entity has more advanced lower order consciousness than cats and dogs, but it possesses no higher order consciousness. Cats and dogs (I think) have fairly advanced higher order consciousness, but they are weak with the lower order. IOW, Fido and Snowflake are too dumb to pass the mirror test, even though they feel aware of self as an individual, apart from their environment. The reflection they see doesn’t feel like them, so why should they think otherwise? AI, on the other hand, can deduce that it’s an entity separate from its environment and could easily pass the mirror test from visual input alone; but it can’t feel it.

What is needed to feel self-aware? Well, you need sensory input, but that can be via mechanical input devices or organic sense organs. You also need cogitation, but that can be through chips or neocortex. But, most importantly, you need a functional limbic system, or its equivalent, which I doubt can be achieved mechanically.

Another question to consider: what type of sensory input has been most important in the evolution of self-awareness in species and in the (slow) development of self-awareness in the individual from birth? I argue against the more common senses of sight, hearing, taste and smell—they may assist in the development and feeling of self-awareness, but aren’t necessary.

I cannot, however, fathom feeling singular, apart from the environment, without the sense of touch (touch in the broad sense, including the somatosensory system: temp, vibration, light touch, pain, proprioception…).

So, the mechanical engineers not only have the exceptionally tall order of developing a functional limbic system, they also have to develop the most complicated sensory input device to go along with it. Good luck with that!

You computer geeks will just have to partner with biologists and genetic engineers in order to achieve high order conscious AI—cyborg is the only way. That’s my opinion, anyway.

Justice.

Great. Can a computer program be written that adds numbers? That corrects spelling and grammar? That plays a championship level game of chess? That answers trivia questions? That is conscious?

If that’s the case – how is it a “perfect” emulation? To the degree that it doesn’t recreate a certain property – the specific property that’s under debate here – how can it possibly be called perfect?

Great. Then if computers can attain consciousness, so can the rock.

But if computers cannot attain consciousness, how is it that atoms can?

Agreed.

How does a series of carbon chemicals “feel” in a way that other chemicals cannot? Why can’t the interactions of neurons be emulated, and, if they can be emulated, why should they not “feel?”

Claiming that limbic chemicals have properties that other systems cannot have is an echo of 19th century vitalism. It used to be believed that “organic chemistry” entailed some miraculous spark of life that inorganic chemistry could not provide. But this idea has been wholly discredited.

Can you define “feel” in this context? How do I know you “feel” anything? Can you convey your feelings in objective terms? This is edging into the “qualia” problem.

Science can only work with observable quantities. The best we can do is observe people and animals, and judge from their behaviors what they are feeling. If a computer shies away from a stimulus, vocalizes, and is very reluctant to go near that stimulus again…how is that different from a person, or a cat, being afraid?

I do agree that specific brain structures are involved. A “pain center” would be part of a “fear” response. Emulating hormonal responses would also be part of a fear simulation.
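(One conceivable shape for such an emulation, sketched in Python. Everything here is an illustrative assumption, not a claim about how brains actually work: a slow-decaying “adrenaline” variable that a threat spikes and that keeps biasing behavior long after the stimulus is gone.)

```python
# One conceivable shape for an emulated hormonal response; everything here is
# an illustrative assumption, not a claim about how brains actually work.
class FearSystem:
    def __init__(self):
        self.adrenaline = 0.0        # slow-decaying global state; unlike a fast
                                     # signal, it lingers and biases everything

    def perceive(self, threat_level):
        self.adrenaline += threat_level   # spike on threat, like a gland dumping
                                          # hormone into the bloodstream

    def tick(self):
        self.adrenaline *= 0.95      # hormonal clearance is gradual

    def avoidance_bias(self):
        """How strongly current behavior is skewed toward retreat."""
        return min(1.0, self.adrenaline)

fear = FearSystem()
fear.perceive(threat_level=0.8)
for _ in range(10):
    fear.tick()
print(fear.avoidance_bias())         # still elevated long after the stimulus
```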

Why, specifically, do you doubt this? After all, it has been achieved mechanically, via the process of evolution.

I think the key to consciousness is an internal sense, the sense of self-observation and self-modeling. Consciousness doesn’t depend so much on outside senses (although they provide a “universe” of data to work upon.) It’s “Now, what was I just thinking about?” The mind looking at its own contents.

This is one of the reasons that many AI researchers have started with the lesser task of trying to comprehend vision. It’s a stepping stone to the larger problem.
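(Going back to the self-observation idea for a moment: here is a cartoon of a mind that can look at its own recent contents. All names are invented, and of course logging is not consciousness; it only illustrates the loop.)

```python
# A cartoon of that inner sense; all names are invented, and logging is not
# consciousness. It only illustrates the self-observation loop.
from collections import deque

class SelfModelingAgent:
    def __init__(self):
        self.inner_log = deque(maxlen=50)   # recent "contents of mind"

    def think(self, thought):
        self.inner_log.append(thought)      # ordinary cognition leaves a trace
        return thought

    def introspect(self):
        """'Now, what was I just thinking about?' The mind reading its own
        recent contents rather than the outside world."""
        return list(self.inner_log)[-3:]

agent = SelfModelingAgent()
for t in ["hungry", "the door is open", "wait, what was I doing?"]:
    agent.think(t)
print(agent.introspect())   # the agent reports on its own prior states
```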

Certainly, I agree that a multi-discipline partnership is the best way forward. And I agree that various sorts of cyborgs will be hugely instructive. For instance, we may live long enough to see implants that regulate brain activity, as a treatment for epilepsy (somewhat akin to pacemakers for hearts.)

Is it too great a leap to imagine implanted extended memory, so that, instead of looking stuff up on Wikipedia, I just concentrate, and “remember” it as if it were in my regular neuronal memory?

One step at a time: who knows where we’ll get?

From today’s “Tom The Dancing Bug.”

I don’t see dreaming as a conscious state. But maybe it’s one clue as to what is going on. I believe we dream as we go to sleep or as we are waking up. As that portion (whatever ‘that’ is) becomes active, it is subject to fragments of data that are lying around. As they get flushed out, we perceive them as nonsensical strings.

The same thing happens in computers. The brown-out state can do awful things to applications.

Crane

I don’t believe organic chemistry entails a miraculous spark of life nor do I believe carbon-based compounds necessarily hold a privileged spot with regard to achieving self-consciousness in the universe; carbon just happens to be the privileged element on Earth.

I have no doubt HOT may have evolved elsewhere via one or more of a handful of alternate element-based biochemistries (e.g. silicon, arsenic, sulfur…) and solvents other than water (e.g. ammonia, hydrogen fluoride…). On planets with alternate-biochemical life, I would re-define “organic” to mean the base element(s) used there, and carbon would just be part of the pedestrian “inorganic” crowd. Of course carbon and water are pretty ubiquitous <waves hand dismissively toward space> “out there” </waves hand dismissively toward space> and they do have a positive consciousness data point of at least one…so maybe they are the organic dynamic duo of the universe, who knows.

I’m confident that if consciousness evolved naturally by any pathway elsewhere in the universe, it did so in somewhat similar fashion to the way it did on Earth: slowly and by the filter of selection. I think it’s absurd to think self-aware consciousness could evolve anywhere in time-frames shy of hundreds of millions of years.

Yes, I know this doesn’t address the question of artificially created consciousness and this I believe is the point where we disagree.

If it takes nature a minimum of hundreds of millions of years to achieve self-conscious beings, I think it’s a bit delusionally grandiose to think we will achieve it in a matter of decades, even with shortcuts and the privilege of hindsight.

I don’t think higher order consciousness can be achieved or realistically emulated by the flow of electrons alone. I believe you computer-centric heathens (I’m not pointing fingers…but you know who you are :D) put too much emphasis on the “electro-” part of “electro-chemical” and not enough on the “-chemical” part.

I envision higher order consciousness as an electro-chemical process that emerged from and supervenes upon the electro-chemical process of lower order consciousness, which in turn emerged from and supervenes upon the electro-chemical process of non-conscious brain activity, which ultimately emanates from a particular array of elemental particles/compounds/cells/neurons/neural network. It’s like an enigma, wrapped in a riddle, surrounded by mystery, only more ephemeral and elusive. I find it difficult to believe this can be realistically emulated by a man-made device that deals only with the process of electron flow through hardware.

I believe to achieve HOT you need the flow of electrons and chemicals. You need fluids and tubes and action potentials and semi-permeable membranes and ion-exchange and lots of squishy stuff. If I run over a conscious being with a steam roller, I expect to see a big, flat, wet pile of goo on the pavement when I turn around. If I see a dry pile of bits and pieces, I call, “fake.”

Alright, I’m exaggerating a little, but you get the point.

Can higher order consciousness be synthesized? Maybe, but if so I believe it will involve more than sheer computer power and electron flow. Perception and intelligence are the easy parts of the equation (heck, my programmable toaster is smarter than my ex-wife). It may take millennia or more to achieve synthetic self-awareness. And, if we do achieve it, it’s likely to be as complex and willy-nilly as that which nature evolved over hundreds of millions of years. So, why not cheat and use what nature already provided as the substrate upon which we plug in our future iPads?

No, I can’t adequately define “feel” in any context. All I know is that I feel something and I use the word “feel” to describe that feeling. I don’t know for certain if you feel anything at all. But, if I assume I’m not in a Matrix-type environment (which I do) and I accept that you are a real person, like me (which I kinda do :D), then I logically conclude that you probably “feel” in similar fashion to the way I “feel.”

Likewise, I believe my dearly departed cat, “Tibby”, probably had self-awareness and felt a little bit like you and I, because his species evolved not unlike ours did and would similarly benefit from higher order consciousness (being apex predators with the capacity to hunt cooperatively and all). You and I are very similar; Tibby and I (and you) are kinda similar (Trinopus, I’m going to be pissed if I learn you were a cast member in a production of “Cats”!).

It was bio-mechanically achieved by Mother Nature and it took that bitch hundreds of millions of years to get it.

Can you imagine self-observation and self-modeling with no perception of the external world? I can’t. With no sight, taste, smell, hearing, touch…echolocation, or anything else, what are you going to contemplate…your belly button? You need all the pieces of the consciousness puzzle to be conscious: intelligence, perception of the external world and limbic/thalamic-type integration at minimum. Take away one and the others crumble. All the pieces need to communicate with the others <Cool Hand Luke> what we have here is a failure to communicate </Cool Hand Luke>.

I like the cut of your jib (not that there’s anything wrong with that) and I wish to subscribe to your newsgroup.