Consciousness

{{quote:

{{The answer to all is “None save man.”}}
Sorry, wrong. Indeed, it is one of the brightest red flags there is, to go about stating that such-and-such is what separates man from “the animals.” The difference is that we’re smarter – period.

Well… some of us, anyway. }}
What do you mean “us,” white man?
{{quote:

Nonsense. Of course animals can choose what they are going to do. Where do you get this stuff?

I could tell you, but then you might get it, too. Uh… on 2nd thought.}}
In other words, you’re fulla shit and can’t even begin to support the asinine assertions you’ve made.

{{quote:

Someone who goes around prattling about this and that being “what separates man from the animals” really ought to be careful about labeling those who disagree “ridiculous.” Capisce?

Go away, kid, you bother me.}}
You’re no W.C. Fields, you conceited punk. Yeah, you’re “special.”

Nickrz writes:

I cannot refute what Adler says; he’s right, only man exhibits these characteristics. What I can refute is the inference that these characteristics demonstrate that only humans are capable of intellectual thought. Adler has arbitrarily defined “intellectual thought” to mean “human thought”.

I’ve read these refutations from a number of sources, both philosophers and linguists. Adler’s points were nothing original. For every expert that I can find that says animals communicate, you can find some other expert that says my expert didn’t use the proper controls and I can find yet another expert that says your expert overlooked critical data, and so on, in a constantly escalating battle of science and philosophy… You use the word “proof” rather carelessly, I think. If it were proven, there would be no discussion.

Nano,

Regardless of how many times you try to force the “AI using neural net” argument down my throat, I refuse to swallow it. The fact remains that, at some beginning point, the methods of data collection must be defined in ultimate terms for the computer we are talking about.

In a human or animal, perception happens automatically, often without the subject realizing it.

Programming a computer to notice something that is there is much different from a living entity recognizing stimuli and objects and reacting to them.

In short, you obviously believe that AIs are conscious, but this does not change the fact that in a clinical, definitive, or philosophical sense they are nothing more advanced than an unlimited dictionary.


To deal with men by force is as impractical as to deal with nature by persuasion.
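
The “unlimited dictionary” claim can be tested in miniature. Here is a minimal Python sketch (mine, not the poster’s; the data and the dictionary_ai / fitted_ai names are purely illustrative assumptions): a lookup table returns nothing for inputs it never stored, while even a trivially fitted model answers for novel inputs.

    # A lookup table can only return what was stored; even the simplest
    # fitted model generalizes to inputs it has never seen.

    lookup = {0.0: 0.0, 1.0: 2.0, 2.0: 4.0}   # the "unlimited dictionary"

    def dictionary_ai(x):
        return lookup.get(x)                   # unseen input -> no answer

    def fitted_ai(x, slope=2.0):
        return slope * x                       # slope "learned" from the same pairs

    print(dictionary_ai(1.5))                  # None: 1.5 was never stored
    print(fitted_ai(1.5))                      # 3.0: interpolated from structure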

JoeyBlades says:

“I believe a fundamental element of the brain’s organization is the ability to reorganize.”

Well, I don’t understand your mapping of this thinking, I guess. To me, the very essence of ANNs is reorganization: they can reorganize for certain tasks while maintaining their organizations/responses to others. Beyond that, they don’t need to reorganize to do brain-type tasks. Insofar as a mature brain is still able to branch neurons and synapse at extended locations, that only adds an extended degree to the basic NN methodology. I don’t know where the basic science/technology of sprouting new topology beyond the initial setup stands, but I assume you could do rudimentary such things in ionic solutions. Of course, the crude method of overkill is to start out with more nodes than you think you’ll ever need, and set all those unnecessary at any point in the evolution of your development to zero weight.
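
To make that overkill method concrete, here is a minimal numpy sketch (my own construction, not NanoByte’s; the layer sizes are arbitrary assumptions): surplus nodes are carried at zero weight, contributing nothing until training “grows” into them.

    import numpy as np

    # Over-provision a hidden layer, then zero the surplus rows so the net
    # can later reorganize into them without disturbing existing responses.

    rng = np.random.default_rng(0)
    n_in, n_used, n_max = 4, 8, 64            # assumed sizes

    W = rng.normal(size=(n_max, n_in))        # over-provisioned hidden layer
    W[n_used:, :] = 0.0                       # surplus nodes start at zero weight

    def hidden(x):
        return np.tanh(W @ x)                 # zeroed rows contribute tanh(0) = 0

    x = rng.normal(size=n_in)
    out = hidden(x)
    print(out[:n_used])                       # active nodes respond
    print(np.all(out[n_used:] == 0.0))        # True: inert until "grown into"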

“Today, while we technically know how to build ANNs that could reorganize, we don’t have a clue how to do this in a predictable and productive way.”

We do certainly have clues. E.g., the first thing I bring up on the Web, Boston U, is this (check out item 3 here):

http://cns-web.bu.edu/muri/year5-report.html

“Therefore, as a model to the human brain, ANNs fall short of the mark.”

I didn’t postulate ANNs as models of the brain as a whole. I assume, even within a brain cortex peculiar to a given function, the structure would be more like a modular hierarchy or grid of ANNs. This is why I couldn’t understand why, in a discussion comparing sophistication of information processing subject to correlation to a subjective notion such as ‘consciousness’, you would complain about limitations of individual ANNs. Hell, in the task to replace Kasparov (or whoever), we’re certainly allowed to dump in whatever electronics or other inanimate junk we can find in whatever amount we can fit into whatever space the rules say we can use.

“quote:
Are “procedural computations” different from the ‘computational procedures’ that software performs all the time? Define this term of yours – in operational terms, not mystical ones.
End quote.
No, but we’re talking ANNs here, which are not procedural.”

You refused to give the clarification I asked for here. You apparently have equated a narrow technical software definition of ‘procedural’ with what you’re thinking of as “procedural” in a subjective outlook on human thought. I think you’re again very stiffly mapping everything one to one.

“No. ANNs require new inputs and expected outputs in order to be trained. They never hypothesize and test theories to learn new things of their own volition.”

Entirely too stiff! ‘Hypothesize’ and ‘volition’ are subjective or teleological terms. It’s inappropriate to try to dump them onto individual ANNs. Say you have a black box and you put whatever kind of presently available hardware/software/whatever-inorganic into it with appropriate feedback on whatever environment it is set to attack, given any goal or absence thereof that you want to associate with what it does, and you set it running like the Energizer Bunny (which apparently died a few weeks ago). Next to it you put a sea slug or something else organic with a very limited CNS or equivalent, but which exhibits behavior just complex enough that you attribute “hypothesiz[ation]” and “volition” to it. (These attributes are in the empathetic/intersubjective aspect.) On a level this simple, I believe you can get comparable behavior, which would bring you to reexamine exactly why you attribute these two subjective characteristics to such things as sea slugs but not to inanimate black boxes. (Note that this may all be done in any room without writing any Chinese – thus making the CIA less suspicious. :wink: )
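
For concreteness about the training regime under dispute, a minimal supervised-learning sketch (my construction; the data, learning rate, and single linear node are illustrative assumptions). It shows the point JoeyBlades is quoted making: the weight update is driven entirely by externally supplied expected outputs.

    import numpy as np

    # Delta-rule training of one linear node: no target, no learning.

    rng = np.random.default_rng(1)
    w = rng.normal(size=2)

    pairs = [(np.array([0.0, 1.0]), 1.0),        # (input, expected output)
             (np.array([1.0, 0.0]), 0.0)]        # targets come from outside

    for _ in range(100):
        for x, target in pairs:
            y = w @ x
            w += 0.1 * (target - y) * x          # update requires the target

    print([round(float(w @ x), 3) for x, _ in pairs])   # converges to ~[1.0, 0.0]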

“I agree that this is feasible, from a technical standpoint. However, today we don’t have a grasp of the mechanics or logic required to do this.”

No, Sir, think with black-box objectivity, not in subjective anthropomorphic-biased concepts.

“Again, reminding you that I’m talking about neural nets here. The memory in a neural net has no discernible organization that would allow you to extract conceptual data. Sure you can address the data, but to use this data to try and make an observation about why an ANN made a particular choice is not practical.”

A decade ago they had feedback indicating why ANNs make some of their choices, but not the why of others. Again, your quest to “extract conceptual data” is just a ploy to objectify a subjective notion – ‘concepts’. You can likewise stare at the very hunks of gray matter of a human brain and never see a “concept”, other than in the way associative synapsing is pathed to known areas, even with instrumentation not yet available.

“ANNs do not provide the total solution.”

They seem to model the bare basics of what modules of neural assemblages do in the human brain. What constitutes recognition of achievement of the “total solution”?

“I don’t maintain (or even believe) that it is inconceivable that machines might, someday, become sentient (I don’t think it’s likely, but it’s not inconceivable).”

I don’t recall your stating where, as one descends in the animal kingdom, you consider “sentience”/consciousness to cease. Is a sea slug conscious? (I believe it’s a California variety they generally fiddle with. Californians aren’t always necessarily conscious. I know; I am one – third generation.) I think you’re doing basically the same thing, on the instant issue, as Nickrz; you just have a much subtler way of raising the bar.

“Certainly, if we could combine many of the modern computing technologies in just the right way, we could probably fool most people into thinking that we had an intelligent machine.”

How do you think each human being convinces the next one he is an (intelligent) human being? He hangs around his fellow critters from the day he’s born and crams a lot of junk into his head that is familiar to his cohorts. Heck, tabulae rasae are available at the nearest quarry.

I thought the issue was supposed to be over more than just degree of sentience. So our automatons are down somewhere in the sea-slug realm, but I think the Web page referenced above indicates more knowledge than that. I doubt sea slugs have six layers of cells in the cerebral cortices they don’t even have.

“Hey, it must be intelligent!”

Only if it refuses to waste its immortal life arguing in a forum on the Internet.

“I see your point. I’ve been limiting my assumptions to the observable universe as we know it (and that’s not sarcasm). If you and I were connected in some psychic manner, then you could possibly KNOW my consciousness.”

Hey, it sounds like I’m getting close to fooling this knave into believing I’m an intelligent machine after all. But I’ve got to get that “psychic” stuff out of there. Look, they’re wiring up various parallel sensory inputs and ‘effectory’ outputs to humans. You just wire enough of such connections between the same two and you get an intersubjective coupling between processors that could approach the degree of coupling an individual’s corpus callosum achieves. . .and you get merged consciousnesses. So much for individuality in the march of technology.

“Model was, perhaps, a poor choice of words. Do you prefer “parallel” or “analogue”? I’m open.”

I wasn’t quarreling over the particular word used, but rather over the restriction of the second member of the comparison from being subsumed within the generic designation of the first member. A simulation is a construct that replicates a certain range of those attributes which define what it simulates. If this range covers the full extent of those attributes, you are no longer merely simulating.

[[And quivering before Big Iron, I would dare to ask whether he’s questioning my neuron count or the squid’s. ]]
Actually, I said I WASN’T necessarily challenging your count of squid neurons.
[[Beyond that, I would ask exactly which “higher mammals” he claims are stupider than the squid or cuttlefish and would also ask for his authority behind this ordering of intelligence. I wouldn’t want to improperly characterize his cousins, but. . . ]]
Perhaps I phrased it clumsily, but what I meant was that cephalopods are a LOT smarter than most people think, and that their intelligence appears to approach the level of higher mammals.

As for my authority for the proposition that cephalopods are rather intelligent creatures, well, I have read about this in a number of news articles and a feature on the Discovery Channel. My understanding is that cephalopods’ general intelligence is not a matter of dispute.

Here’s a good web site discussing the issue (among many other issues relating to cephalopods).

http://is.dal.ca/~ceph/TCP/faq.html#Smarts
[[. . .I wonder if Big Iron has ever contemplated how his cousins might construct an excremeditation chamber under so much H2O. Perhaps they would have it flushed with (hot) air (which is plentiful in all however-inhabited environments).]]

I wonder what your fascination is with my cousins – perhaps another misreading on your part?

Big Iron said:

“Perhaps I phrased it clumsily, but what I meant was that cephalopods are a LOT smarter than most people think, and that their intelligence appears to approach the level of higher mammals.”

Well, I’m about as far as you can get from a marine biologist, and I guess, although I had memory of references to the nervous systems of cephalopods as well as gastropods, my rough quantitation of neurons no doubt referred to the latter, not to cephalopods. I don’t know the range of neuron counts for cephalopods; however, when you compare their intelligence to “higher mammals”, you’re making quite a jump. The claim of the intelligence level of cephalopods, if you look at the Web page you reference, is only that of being at the top of the range of smarts of invertebrates, probably below that of essentially any mammal or even any vertebrate. You wouldn’t be confusing the intelligence of your invertebrate friends with that of cetaceans, would you (which are up there a ways on the mammalian scale)? Of course, intelligence is not a unidimensional concept; an octopus may be just great at copying activities, like removing corks after watching others do it.

“I wonder what your fascination is with my cousins – perhaps another misreading on your part?”

I was just referring to your assimilation, as a “higher mammal”, with cephalopods.


I don’t know how long threads here remain open, but I wonder if a sort of summary of any agreement, among posters here, could be stated at this point – in regard to the relationship of human mental capacity, and whatever be implied by the notion of human “consciousness”, to similar things that may be seen in, or in association with, other organisms and inanimate artifacts. I still think that those who hold an innate feeling (I’d label ‘religious’) of personal loss in the context of thinking that places them or their species in merely a high position in the continuum of the complecting of matter-energy – rather than in a select, distinct category divorced in basic essence from all the rest – will concede to little commonality in their thinking with that of those accepting themselves as merely part of such a continuum.

I choose to break down the issues of this discussion, on the highest level, as:

  1. The relation of consciousness to any degree of objective centralized control of complex functionality – the age-old philosophical arguments of free will / determinism, mind/brain, subjectivity/objectivity, or however you wish to tag it; and

  2. The degree of complexity and similarity of such centralized control, as expressed in objective terms – arguments basable on modern science, a more limited discussion space.

Having no “religious” constraints, within the full range of philosophical outlook, I may be accused of imposing the limits of the scientific domain on my outlook as to item 1 above, when I claim that the notion of ‘consciousness’ in other material assemblages arises from an individual’s (human’s, in the first instance) neurobiological formative development in the context of such assemblages of sufficient complexity and similarity to themselves. Proof of this might be seen to rely on studies of children raised in the wild by other species and children raised in the context of treatment on a par with playmates of other species. Data on such situations is very limited and contaminated, given the ethical problems seen in involvement with them. As a result, humans have homed in on recognizing consciousness only in such assemblages as constitute other members of their own species, and any attribution by them of consciousness to other assemblages in the universe comes only as a result of secondary, limited or adult-stage coexperience with such. Given the state of the art of our inanimate artifacts, they come out on the short end of such attribution, with only higher animals being considered to date to have any degree of consciousness.

As to item 2 above, in regard to organisms – down near the simplest level of those having central nervous systems or more primitive systems of equivalent function – there has been complete systemic diagramming and functional and biochemical analysis of how such entities relate to their environments. Of course, up at the other end of the neurological-complexity spectrum, where humans are likely to attribute the notion of consciousness to the animals with such informational systems, the corresponding such detailed knowledge of systemic and electrochemical functioning is very limited with respect to the full range of such knowledge needed to objectively explain the complex behaviors of these animals.

When it comes to objectively describing the behaviors of man-made entities which include centralized information-processing systems, we, of course, having all the design plans, can explain their behaviors at the lowest levels, although some emergent behaviors of the most complex of such entities may remain sometimes inexplicable or dependent on the solution of as-yet incompletely solvable mathematical modeling.

As to my thinking on the present state of the art of programmable bus-oriented hardware incorporating very large conventional memory banks, operating at very high speeds, and accommodating I/O from/to all kinds of energy sensors and effectors: Given various modern programming techniques, including artificial-neural-net (ANN) and fuzzy-logic (FL) structures, these human-artifactual entities really are capable of replicating, at least up to a fairly sophisticated degree, depending on the logistics of doped silicon and software nerds, the behaviors of higher organisms. One might claim that the use of this methodology of an underlying digital, von Neumann architectural substrate could limit carrying this scheme feasibly to the level of human neural complexity, given limits on access to sufficient materials and manpower, but less-kluged schemes of producing, at the upper level, the same sort of centralized-threshold-noded-logic-generated behaviors will become available in the near future. While the present sort of hardware may not be such as to extend its complement in network form as a result of training, given a sufficient initial outlay of hardware, it can certainly accomplish the same sort of previously organismic-attributed behavior, given connection to the proper sensors and effectors and given the proper training environment. The future ought also to see actual expansion of the physical extent of such networks according as the applied problem-solving should demand.
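
As a concrete taste of the fuzzy-logic (FL) structures mentioned, here is a toy Python fragment (illustrative only; the membership ramps and names like muggy are my assumptions, not anything from the thread): truth comes in degrees rather than hard bits, with min() as the usual fuzzy AND.

    # Graded membership functions instead of hard thresholds.

    def warm(t_celsius):                 # membership ramps 0 -> 1 over 15C..30C
        return min(max((t_celsius - 15.0) / 15.0, 0.0), 1.0)

    def humid(rel_humidity):             # membership ramps 0 -> 1 over 40%..80%
        return min(max((rel_humidity - 40.0) / 40.0, 0.0), 1.0)

    def muggy(t, rh):                    # fuzzy AND: degree to which both hold
        return min(warm(t), humid(rh))

    print(muggy(27.0, 70.0))             # 0.75: partly true, not a yes/no bit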

My position, though perhaps a little tenuous, is that I believe the basic scheme of ANNs – disregarding their present rudimentary node designs compared to those of natural neurons, and the restriction of reorganizability to the software level only – reveals the essence of general brain information-processing, and that all the humanoid specialties of the human brain are the result of the less-universally-exciting suborgans of that organ, which attend to its specifically biological needs of sustenance, competition, reproduction, etc. The effects of hormones on slanting logic to various ends, or on distorting reality, constitute the sources of behavior that turn on the noisy spokespersons of the humanities, but scientifically, I might liken such to simply throwing monkey wrenches into your computers. . .and, if you really dote on such things, you can program them in as simulation – before you’ve found a way to design any special hardware for their functions.

I also don’t recognize any real line between simulation and true implementation of anything. If you fully achieve whatever is contained within your conceptualization of any given established generic entity, you are no longer simulating that entity. If you conceive of the thing as including something beyond what you’ve managed to implement, then, of course, you’re still just simulating that thing.

How much do others here disagree with these positions, or else organize the components of the argument differently?

ZZZzzzz…

NanoByte,

You wrote:

“One might claim that the use of this methodology of an underlying digital, von Neumann architectural substrate could limit carrying this scheme feasibly to the level of human neural complexity…”

FYI, there are already non Von Neumann implementations of simple ANNs on silicon. Unlike the more common computer simulations, these chips don’t use an ALU to process each node serially. The inputs are filtered through the net in true parallel. The architecture is, of course, fixed by the design.

Here’s one area where our agreement diverges slightly. I contend that ANNs emulate only one of the general brain ‘modes’. I believe there are other ‘modes’ of brain function that ANNs cannot emulate.

You also wrote:

“I also don’t recognize any real line between simulation and true implementation of anything.”

This is a silly position to take. In any simulation, compromises and assumptions are made that differentiate the simulation from the thing that is simulated. I think it’s a huge leap of faith to believe that just because your hypothetical simulation of a human brain passes all of your tests, that it, in fact, is a brain (ignoring implementation details).

So the trick is to show that your hypothetical simulation encapsulates all of the attributes of the human brain… including consciousness. But before you can implement consciousness, you must be able to specify it, and to specify it you must be able to define and measure its attributes… which brings us right back to the original question… which I think we’ve all failed to address in this thread.

I don’t feel too bad, however. Some of the greatest thinkers in the history of the world, when given the challenge to define consciousness, have taken some wild tangents rather than answer the question.

“FYI, there are already non Von Neumann implementations of simple ANNs on silicon. Unlike the more common computer simulations, these chips don’t use an ALU to process each node serially. The inputs are filtered through the net in true parallel. The architecture is, of course, fixed by the design.”

Yes, I knew there were some time ago, but I haven’t followed whether they made any large enough, or reasonably aggregable enough, to handle problems of the size that could be handled with available software ANNs.
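
A rough software analogue of the serial/parallel contrast being discussed (my own sketch; the layer sizes are arbitrary assumptions): a von Neumann simulation pushes each node through one ALU in turn, while the silicon nets answer as a whole layer at once. The vectorized call below stands in for that parallelism.

    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.normal(size=(8, 4))              # one fixed layer, as on the chips
    x = rng.normal(size=4)

    def forward_serial(W, x):                # one node at a time, one "ALU"
        return np.array([np.tanh(row @ x) for row in W])

    def forward_parallel(W, x):              # all nodes in a single sweep
        return np.tanh(W @ x)

    print(np.allclose(forward_serial(W, x), forward_parallel(W, x)))   # True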

“Here’s one area where our agreement diverges slightly. I contend that ANNs emulate only one of the general brain ‘modes’. I believe there are other ‘modes’ of brain function that ANNs cannot emulate.”

Well, to reduce this to nuts and bolts, you’d have to explain what you consider to be a “mode”. I would see the large variety of NNs, using many kinds of nodes/neurons, which use, in turn, many kinds of neurotransmitters and neuroreceptors (but all the same basic idea), as forming the human-brain implementation of intellective capability, as well as of non-cognitive capabilities requiring logic and memory. Beyond that, as I mentioned, there are the various suborgans of the brain, under both neural and hormonal control, that may contribute to what you refer to as ‘modes’, but I suspect that most of what you include in this category would also use NNs in cortices.

It might be relevant to speculate on the differences in congenital kernel NNs that would account for such variant human mentalities as idiot savants.

If you consider analogic thinking as being exclusive of implementation via NNs, I would disagree. I think the NN schema is ideal for this.

I would see emotions only modifying NN activity through the special suborgans of the limbic system and somewhat under control of hormonal and other non-neural chemistry. (Note that I am not formally trained in a biological field and may make some statements regarding the brain that need improvement for accuracy.)

Any useful discussion on any of this would certainly require that you define your “modes”.

“quote:
I also don’t recognize any real line between simulation and true implementation of anything.
End quote.
This is a silly position to take. In any simulation, compromises and assumptions are made that differentiate the simulation from the thing that is simulated. I think it’s a huge leap of faith to believe that just because your hypothetical simulation of a human brain passes all of your tests, that it, in fact, is a brain (ignoring implementation details).”

This, of course, is the basic enigma: If it quacks like a duck, is it a duck? Well, you have to decide first what a duck is. If Donald quacks, or even if he doesn’t, in a Disney cartoon, by a useful definition he’s a duck. Referring to your statement here, if we agree ahead of time what a “brain-ignoring-implementation-details” is, and then at least one of us builds something that fits that definition (which I take to mean something that produces behavior within the range we agree upon, when connected to whatever we agree upon is necessary for the demonstration, and fits within physical limits we agree upon, and takes power within limits we agree upon, etc.), then it fills the bill, so to speak. . .er, quacks up to be a “brain-ignoring-implementation-details” in the, well, not exactly flesh. You did agree it didn’t have to be made of flesh. What’s silly about that, or do you think I ducked the issue?
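
As it happens, NanoByte’s duck test is exactly how “duck typing” works in Python: membership in the category is decided by agreed-upon behavior, never by inspecting what the thing is made of. A minimal sketch (the classes are my own invention, not anything from the thread):

    class Mallard:
        def quack(self):
            return "quack"

    class DonaldSimulator:                   # "ignoring implementation details"
        def quack(self):
            return "quack"

    def is_duck_enough(thing):
        # The agreed-upon test IS the definition; flesh is not required.
        return thing.quack() == "quack"

    print(is_duck_enough(Mallard()))         # True
    print(is_duck_enough(DonaldSimulator())) # True: fills the bill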

“So the trick is to show that your hypothetical simulation encapsulates all of the attributes of the human brain… including consciousness.”

Whoa, now! Once we start talking about consciousness, I claim we’re dealing with the second side of the bifurcation of the issue set up in this thread. I claim this portion of the issue extends beyond science into philosophy. I claim science deals only with objective entities, and that consciousness per se is only subjective. I then say that I take only a dualistic viewpoint (without necessarily clarifying whether this be substance dualism or what). You haven’t told me whether you agree with this modeling of the overall issue or not. I can’t conceive of opening a brain and pulling out a consciousness. I can conceive of correlating (whatever one takes that term to mean) some indication of the activity in a brain, such as higher energy levels in areas which have feedback to some integrating location, to what people appear to use the term ‘consciousness’ for when posing issues of the instant nature.

“to specify it you must be able to define and measure its attributes… which brings us right back to the original question… which I think we’ve all failed to address in this thread.”

I simply say that you cannot “define and measure” consciousness per se within science. DO YOU DISAGREE? I do say you can define and measure activities in a brain such as the example I give above, and claim that they correlate to consciousness – across a philosophical gap of aspect. Many, far too many, have expounded upon that gap. . .and gotten nowhere. That is no doubt the basis of your final statement:

“Some of the greatest thinkers in the history of the world, when given the challenge to define consciousness, have taken some wild tangents rather than answer the question.”

NOW HEAR THIS!: JoeyBlades is about to answer this question that no one has heretofore. Right?

I maintain that you can only impute consciousness. . .in whatever you “feel” (read, ‘sense’ (?)) you empathize with. And what do I mean by empathize? Well, I maintain that you can’t jump outside and look at this snake biting its tail. All brains are finite, and furthermore, they are designed as pragmatic organs, as are all one’s organs, for assuring propagation of the species, not determining the absolute nature of reality as seen while perched on a mountain top or anywhere else. And even all the aggregated knowledge of mankind up to any given date is finite. . .and always will be. I mean, what can you expect of NNs, anyway? If your brain works on a very different basic principle, then hey, maybe Your Omniscience can set down The Absolute Answer here,. . . but we feeble NN-types wouldn’t be able to comprehend it, so I guess you’d be wasting your time.

Some philosophers look at the objective and claim it is the only real thing and the subjective emerges from it, and others (like the one they named this crazy town I live in, Berkeley, after) look at the subjective and claim it is the only real thing and the objective emerges from it. I say they are both part of something that bites its tail, but I do wonder if, at some point, one might not be able to locate some structure in the brain that would correlate to this bifurcation in the correlate mind’s manner of dealing with the world, according as the problem before it be best solved bottom-up, built upon simple entities requiring an objective approach, or be one of very complex entities requiring a subjective/empathetic approach. More primitive men appear to have put even what we view as inanimate entities, such as mountains and forests, into the subjective bucket rather than the objective one. But bits are undeclared, so they have their own bucket, so maybe that’s the one you use. :wink:

Ray (only a bit confused)

[[I don’t know the range of neuron counts for cephalopods;]] Nanobyte
That was my main point – you short-changed them.
[[ however, when you compare their intelligence to “higher mammals”, you’re making quite a jump. The claim of the intelligence level of cephalopods, if you look on the Web page you reference, is only that of being at the top of the range of smarts of invertebrates, probably below that of essentially any mammal or even any vertebrate. ]]
Well, my understanding is that they’re considerably “smarter” than fish, toads, or mice, for instance, but I may be overestimating them.
[[You wouldn’t be confusing the intelligence of your invertebrate friends with that of cetaceans, would you (which are up there a ways on the mammalian scale)? ]]
No, as should be obvious to you (I mentioned squid and cuttlefish and linked a cephalopod site).

“Well, my understanding is that they’re considerably “smarter” than fish, toads, or mice, for instance, but I may be overestimating them.”

FYI, fish and toads are NOT mammals, at least they weren’t the last I knew! OK, I’ll start training a mouse on that plugged-bottle trick right away. :wink:

Ray

[[“Well, my understanding is that they’re considerably “smarter” than fish, toads, or mice, for instance, but I may be overestimating them.”

FYI, fish and toads are NOT mammals, at least they weren’t the last I knew! ]] Nanobyte
Thanks for the illuminating science lesson! Meanwhile, I recall that it was you who brought into play the notion that cephalopods don’t compare well with non-mammal vertebrates in general. But hey, clam, octopus, what’s the diff, right? :wink:

NanoByte, we all know that fish and toads aren’t mammals. But in the post to which Big Iron was responding, you wrote:

“The claim of the intelligence level of cephalopods… is only that of being at the top of the range of smarts of invertebrates, probably below that of essentially any mammal or even ANY VERTEBRATE.”

The emphasis there is mine. You say that any vertebrate is probably smarter than a cephalopod; B.I. offered fish and toads as possible counterexamples. Last time I checked, both those animals were vertebrates.

BTW, can we continue this in a new thread? This one is extremely long and is taking a while to load.

Consider it done.

Look for “Consciousness, Part Two” appearing soon!

your humble TubaDiva/SDStaffDiv
for the Straight Dope

The Maharishi Mahesh Yogi deals quite a bit with the heart of this question: What is consciousness? He separates the states of consciousness into consciousness (awareness); unconsciousness (sleep or any lack of conscious awareness); and transcendental consciousness (awareness of the self as observer, separate from “normal” consciousness). If one answers the question posed by the poster, consciousness can be simply answered as awareness of the experience at each moment. Yet the difficulty of such momentary awareness is quite apparent. I’ll get back to that in a moment.

Unconsciousness can just as easily be described and documented in the few words used above. The real crux of the question rests in the reality of transcendental consciousness. If a conscious being can simultaneously be constantly aware of himself, then as the observer that person is both a sentient (conscious) being and one aware of himself as an actor in the world, which is quite different from what one normally describes as consciousness.

Since most of us normally operate at the conscious level, this true self-awareness posed by TM and the Maharishi describes a state of being separate from what we normally perceive as an individual being. Much the same as Jung’s notion of a collective unconscious, this reservoir of knowledge truly defines the mind as a separate entity from the biological self.

Those who try to define consciousness in a more limited, biological or computer-analog fashion will always fall into the reductionist trap. Answer this question: How can the observer processing an event be sure of ever understanding it, being both observer and interpreter, yet using the same brain to process that information and reach any conclusion? In other words, the very mind that reasons, processes, concludes and acts on that information is no more objective in that process than it is able to recognize the inherent limitations of what we normally call consciousness.

In short, experience transcendental consciousness and we all begin to realize the inherent limitations of what we call objective, rational scientific explanations of consciousness.

It is those aha moments – flashes of clarity, inspiration – that define true consciousness. Without the “silent witness,” our thoughts and ideas are nothing more than rambling assumptions based on perceived experience. Think about those times when you are emerging from sleep or even daydreaming. Remember the times when you are aware of your body, but that awareness is not quite yet a part of the body. That’s closer to true consciousness than these didactic, feeble attempts to explain.

Oh dear.

MediaPro, we’re gonna assume you meant well – but this thread is so old we can’t even tell who some of the original posters are.

I’m going to close this. It makes no sense for it to be open.

If you want to continue this, start another thread, please.

Thank you.