What is consciousness?

I am not sure exactly what you mean by this. The cores in a multi-core CPU can indeed operate in parallel, plus the GPU(s) can be working at the same time on their own clock cycles.
Now if we want to use multiple cores to solve a single problem, then some synchronisation has to happen, where we wait for all the cores’ results to form some final answer. But of course something like that needs to happen with brains too.
For all the parallel processing going on in my brain, at some point it needs to form the single decision of whether to go to the gym or continue posting on the Dope :slight_smile:
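Just to make that synchronisation point concrete, here's a minimal sketch (Python, with made-up numbers) of several workers producing partial results in parallel and a single join point combining them into one decision:

```python
# Minimal sketch of parallel streams plus one synchronisation point.
from concurrent.futures import ThreadPoolExecutor

def partial_evidence(signal):
    # stand-in for one "core" or one parallel stream of processing
    return sum(signal) / len(signal)

streams = [[0.2, 0.4, 0.9], [0.7, 0.1], [0.5, 0.5, 0.6, 0.8]]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(partial_evidence, streams))  # streams processed concurrently

# the synchronisation point: wait for all results, then form the single decision
decision = "go to the gym" if sum(partials) / len(partials) > 0.5 else "keep posting"
print(decision)
```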

But do you acknowledge that this kind of language is misleading?
When you say “instruction” it sounds like we’re saying that, e.g., for a program that spots tumors in MRI images, we are inputting instructions like “IF <see discrete high contrast delineation around a structure that is roughly oval> THEN <likely tumor>”, but in deep learning algorithms no human ever writes such an instruction.
All the programmer does is implement a system that can self-organize and self-learn.
The instructions that tell it how to do its job, it learned itself.
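To illustrate (a toy sketch, not a real tumor detector, and the features are invented): the programmer writes only the learning rule below, never any IF/THEN about blobs and contrast. The weights that end up doing the classifying are produced by the training loop itself.

```python
# Toy perceptron: the human writes the update rule; the "instructions" (weights) are learned.
def train_perceptron(examples, labels, epochs=50, lr=0.1):
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # the only rule a human wrote: nudge the weights toward fewer mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# invented stand-in features: [contrast, roundness]; label 1 = "looks like a tumor"
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
weights, bias = train_perceptron(X, y)
print(weights, bias)  # the learned "instructions" -- nobody typed these in
```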

And indeed, a human radiologist could potentially learn things from the computer. We’re already at that level.

Now, this actually has little to do with consciousness… It’s just an interesting tangent in this thread.

[QUOTE=Crane;22245078]
Could you elaborate on the above? As you point out the brain is concurrent, the computer is not. What part of the computer is aware of what it is doing. How is a computer more aware of what it is doing than the brain. How is the computer aware if it is not concurrent?[/QUOTE]

Repeating this, a computer can have concurrent processing. Whether or not it does is irrelevant. Concurrency is not an aspect of consciousness unless you define it to be; there’s no reason that a non-concurrent machine can’t produce the same results as a concurrent one. As far as I am concerned, if the output appears to come from a conscious entity, then the entity is conscious, no matter how it arrived at the output.

A computer has the ability to be far more aware of what it is doing than a human brain is. Computers can monitor every instruction executed and can look at the source code of those instructions; brains can’t do that. In another thread I pointed out specifically that our brains appear to have one-way interfaces between processes, so we can’t, through reflection, figure out in great detail how our brains work. Computers can more readily (for the time being) be aware of everything they do in comparison to human brains.

Consider this computer:
Memory bits are pieces of paper lying on the floor; one side of the paper has a 1 and the other side has a 0. The CPU is a person who follows the rules of a typical basic CPU: read the next instruction and operands, manipulate the appropriate bits according to the rules, etc. A portion of the memory (the pieces of paper lying on the floor) is considered the input area, and another the output area (e.g. a display screen).

You set the pieces of paper according to a sequence of bits that you saw on the internet and you try it out:
You set the input bits according to the ASCII representation of this question: “what is your name”
Then you perform the bit manipulations according to the CPU rules.
After doing this for 3 hours, the system spits out some output; when you translate it to ASCII, it says: “I’m a computer, I don’t have a name like humans do”
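For what it’s worth, here’s a toy version of that machine, just to keep the example concrete. The instruction set and encoding are invented for illustration; any real CPU is vastly more elaborate, but the point stands that a person flipping paper could execute it:

```python
# Toy paper-on-the-floor machine: memory is a list of bits, the "person" executes simple rules.
def run(program, memory):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == "FLIP":       # turn one piece of paper over
            memory[a] ^= 1
        elif op == "COPY":     # read one bit, write its value somewhere else
            memory[b] = memory[a]
        elif op == "HALT":
            break
        pc += 1
    return memory

floor = [0, 1, 1, 0, 0, 0]   # bits 0-2 are the input area, bits 3-5 the output area
program = [("COPY", 0, 3), ("COPY", 1, 4), ("FLIP", 5, 0), ("HALT", 0, 0)]
print(run(program, floor))   # the output area now holds the "answer"
```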

Are you convinced that consciousness is contained somehow within that sequence of flipping pieces of paper on the floor?

No, it’s clearly too simple to determine that.

Do something even simpler though, take a bag of Scrabble tiles and randomly select them one at a time from the bag.

Write down each letter you select and throw the tile back in the bag. If the tiles spell out “I am a conscious being existing in this bag of Scrabble tiles”, and then you say to the bag (because you are alone and no one will see you talking to a bag of Scrabble tiles) “Tell me something that will convince me that you are conscious”, and then you start pulling tiles again and it spells out “What on earth do you expect me to do that will prove that to you?”, are you convinced one way or another about the consciousness of the bag?
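For scale, a rough back-of-the-envelope calculation (ignoring actual Scrabble tile frequencies and treating each draw as one chance in 27, for 26 letters plus a space) of the odds of pulling that first sentence by chance:

```python
# Crude odds estimate; the exact number doesn't matter, only how absurdly small it is.
sentence = "i am a conscious being existing in this bag of scrabble tiles"
p = (1 / 27) ** len(sentence)
print(f"chance of drawing that exact sequence: about {p:.1e}")
```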

If the entity can communicate on complex matters and you can’t find a difference between the entity and a human, then the entity is conscious. Maybe it’s just a really good lookup table that is conscious for a limited amount of time, just as humans are, but I don’t see what difference it makes how it got that way.

Let’s stick with the paper bits example so we can close it out, then we can move to scrabble.

You say “no, it’s clearly too simple…”, but in your other responses you say that only the output matters. Maybe you meant “no” because I only provided one question-answer combo?

If you repeatedly provided input and did the paper flipping and the output seemed like good human answers, would you think there is consciousness in there somewhere?

Do it repeatedly and yes, I would say that there was consciousness ‘there’ for some period of time. I think some inanimate objects can behave in a way indistinguishable from a conscious being. I don’t know whether people would still consider it conscious if they know it’s inanimate; I don’t care. It’s not important to me until someone finds bits of paper or Scrabble tiles behaving in these ways.

I don’t think lookup tables/inanimate objects/coincidences satisfy a reasonable test of subjectivity, so in knowing their construction and not just their output we can eliminate their consciousness.

I would be inclined to break this question up into two separate questions.

a) How do you define “consciousness” from the outside? At what point or according to what criteria would you consider a given entity or presence to be conscious?

b) Let’s posit from the outset that YOU are conscious. Otherwise “you” are not here “considering” this question. Some mechanical question-processing algorithm that doesn’t really qualify as a “you” is doing so, but not “consciously”, in which case we don’t care about your input, which you don’t really have, nor “feelings about it” or “attitudes” or “opinions”, and we don’t care to read the output that the program that is you produces when this question is provided as input, ok? So… what does it mean to you to be conscious? How do you know that you are? Have you ever questioned that you are, in fact, conscious? What would it mean to you if you were to conclude that you were not, in fact, conscious, and how would it differ from concluding that you don’t exist?

For my own part, I’m uninclined to play “gatekeeper” and be picky about defining consciousness from the outside. If someone or something insists it/they are conscious, I’ll accept that. There are ramifications. One can discard a program, erase a routine. One should not kill a consciousness. One should probably allow a consciousness to cast a vote in November. I recognize those considerations in opting to be pretty open in accepting others as conscious.
I do consider myself conscious and I can’t imagine any meaningful sense in which I could exist and not be conscious.

Tripolar #42,

Allow me to define an aspect of consciousness as a point of discussion:

Consciousness includes the instantaneous act of summing immediate sensory input to evaluate the availability of resources and the level of threats. The result of this conscious act is a gradient of feelings from urgency -> complacency -> satisfaction.

To some degree a computer can simulate these acts. And the computer can invoke an annunciator to indicate the position it has calculated on the result gradient. But at what point does the computer’s adder ever see (know) more than the two values at its inputs and the value at its output? Does it ‘know’ the difference between a value and an address? Does it know whether the number will go to a memory location, a timer, or a peripheral? They are all addresses in the memory map, and the adder never ‘sees’ the map. The adder has no means to be aware of its context.

An analog circuit comes closer. An operational amplifier with a bank of input resistors, each connected to a sensor, and a single feedback resistor will sum the inputs and provide an output for the result gradient. The operation is continuous and the inputs are concurrent. This is a small part of the operation of a neuron. An argument could be made that the operational amplifier is conscious of its input and operation, because the op amp is a single component that ‘sees’ the entire operation concurrently. The same argument cannot be made for the adder. It sees nothing but arbitrary bits.
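For concreteness, here’s a sketch of the summing configuration being described, with made-up component values: one feedback resistor Rf, one input resistor per sensor, and a single continuous output Vout = -Rf * (V1/R1 + V2/R2 + ... + Vn/Rn):

```python
# Inverting summing amplifier (ideal op amp): the output is one weighted sum of all inputs at once.
def summing_amp(v_in, r_in, r_f):
    return -r_f * sum(v / r for v, r in zip(v_in, r_in))

sensors = [0.2, 0.7, 0.4]      # instantaneous sensor voltages (volts), invented values
r_in = [10e3, 10e3, 20e3]      # input resistors (ohms) -- these set the relative weights
print(summing_amp(sensors, r_in, r_f=10e3))   # -> roughly -1.1 V
```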

Your concept of the working of computers is too far from reality. The computer is a machine running a process which does much more than the basic operations of a computer. Just as an automobile is much more than a single piston in its engine.

Nonsense. There is nothing an analog circuit can do that a digital one cannot do with a high enough degree of resolution that the outputs are indistinguishable, insofar as consciousness is concerned.

Missed the edit window:

The continuous nature of a changing analog signal does not magically produce consciousness.

Tripolar,

You are correct. I am using the term computer to mean only the Central Processing Unit (CPU).

“The computer is a machine running a process which does much more than the basic operations of a computer. Just as an automobile is much more than a single piston in its engine.”-Tripolar#51

Computer-based systems accomplish complex tasks using the basic operations of a CPU. The complex tasks to be accomplished are not defined at the time of manufacture of the CPU. The CPU is only capable of manipulating binary numbers in the memory map. Some of the memory addresses are used for input devices and some for output. The CPU doesn’t make any distinction between sensors, memory and output devices. It’s all memory and it’s all numbers. The CPU may be attached to a complex system that drives a car. The system generates numbers describing the status of the car and requires numbers in response that result in driving the car. The CPU reads from and writes to the memory locations under control of the program. Any significance assigned to the values in memory is provided only by the program and the attached system. The CPU executes the program in memory. The complex system drives the car. The same CPU does exactly the same thing in a toy or appliance. The attached complex systems are different - the CPUs are not.
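As a toy illustration of that point (addresses and meanings invented here), the sensor, the RAM cell, and the output device all look the same from the CPU’s side: numbers at addresses.

```python
# Toy memory map: the CPU just reads and writes numbers at addresses.
memory = {}

SPEED_SENSOR = 0x1000   # the attached system writes the car's current speed here
SETPOINT     = 0x1004   # ordinary RAM holding the cruise-control target
THROTTLE_OUT = 0x2000   # the attached system reads this and moves the throttle

memory[SPEED_SENSOR] = 55
memory[SETPOINT] = 60

# the CPU's part: read numbers, compute a number, write a number --
# it neither knows nor cares which addresses are "really" hardware
memory[THROTTLE_OUT] = max(0, memory[SETPOINT] - memory[SPEED_SENSOR])
print(memory[THROTTLE_OUT])
```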

You are correct. The CPU can serially simulate the operational amplifier. But the CPU never sees all of the data simultaneously - in context. The continuous nature of the analogue signal is a component of consciousness. One could make an argument for consciousness in the op amp. The same is not true of the CPU.

You have stated this but not made any argument for it. Why would an analog be any more conscious than a digital circuit that produces the same result?

  1. The operational amplifier has some structural similarity to a neuron.

  2. The operational amplifier is a single component that possesses neuronal characteristics.

  3. The operational amplifier handles all instantaneous values simultaneously.

  4. The operational amplifier operates on the data stream continuously

a. The CPU has no structural similarity to a neuron.

b. The CPU does not possess any neuronal characteristics

c. The CPU never ‘sees’ all of the data and does not distinguish between input and output

d. The CPU operates on the data in serial pieces. It may be interrupted, causing the operation to be completely discontinuous.

I do not believe, and have not argued, that an operational amplifier is conscious. It does however have some characteristics of a conscious system (as defined above). The CPU has none.

The illusion of consciousness created by computers is impressive. My printer constantly communicates with its manufacturer. When it is low on ink it orders more and bills my credit card. I never see the transaction. I just get ink in the mail. Is my printer conscious?

I’m stopping at this point in my response because that’s another claim without explanation. I can simplify this.

Is a single neuron conscious? If you think it is you have a lot of explaining to do, please be precise and detailed in what you mean.

If a single neuron is not conscious then explain how multiple neurons can form a conscious entity. When you do that you will also have the explanation for how a digital computer can form a conscious entity.

I wonder if this really is true.

I remember reading that some mathematician proved that for the n-body problem with 5 bodies, it’s possible to accelerate one of the bodies to infinity in a finite amount of time without a collision.

If this was your analog circuit, it seems like it has the ability to get into a state that can’t be represented by a digital computer.

Here’s another angle on the digital simulation of analog:
What if the analog circuit can solve some problems in polynomial time while the digital simulation takes exponential time? It’s possible that the digital version can’t be effectively used for the size of some of the problems our brain solves, and maybe some of those are the ones that result in consciousness.

I added that part about “insofar as consciousness is concerned”. I can’t rule it out, but I’m not convinced it’s a factor in consciousness. You’ve commented on neural processing before; do you think there’s something special about analog processing in regard to consciousness?

As a practical matter that may turn out to be true without some advances in technology. I don’t think it affects the definition of consciousness though.

Ed,

Thanks for the thoughts.

“The operational amplifier has some structural similarity to a neuron.”

The statement is true and I explained it above. The inputs from the sensors are similar to dendrites, the input resistor values act like the synapses, and the transfer function acts like the body of the neuron. I have built simple neural nets from op amps. Operational amplifiers are not intelligent and they are not neurons, but they are structurally similar. The CPU is not, and that is the topic under discussion.

If you are considering a PC in a box rather than just the CPU, then you have the problem of the program structure. The PC is driven by an interrupt stack. Its ‘thoughts’ are brief and chaotic.

So, my printer engages in conscious acts. Is my printer conscious?