What is consciousness?

If you know that what you are seeing is a hallucination, then that, to me, is a key aspect of being conscious. It’s not whether you have hallucinations or not, but knowing them for what they are.

  1. We are told by folks who spend their lives studying such things that consciousness is produced by a physical organ. It is a tangible thing with dimensions and shape. Those wise folks know what it does but not how it works.

  2. The animal brain is a self-organizing system. It creates and organizes its structure and the information contained in that structure. It has absorbed language from its environment. Only it ‘knows’ what it perceives as red.

  3. The computer ‘brain’ is a list of instructions created by one or more programmers. Its structure is not modified by experience. ‘Learned’ weights in a neural net are just more bits in memory. However, it can be made to appear conscious. The more skilled the programmer, the more conscious the computer will appear.

So, where in the structure of a computer could consciousness exist? If it is like animal consciousness there must be an organ, a dedicated active area of the computer that results in consciousness. An organ that can divide between real and imaginary. Where is it? What is it?

Do you contend that a computer that perfectly emulates a conscious human brain is not itself conscious? Why must there be some dedicated active area of the computer, or an organ? I don’t really know what that means, although I think there must be some persistence of state to maintain consciousness. Why couldn’t consciousness be spread around different parts of an organ or computer that are multi-purpose and only sometimes used for consciousness?

This summary is quite out of date. Modern neural nets can do far more than just weight inputs and outputs; they can indeed self-organize.

And the instructions that the programmers are writing concern only how the program self-organizes, not details of how it solves problems within a particular domain.
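To sketch what “the programmer writes only the learning rule” means at its simplest, here is a toy single-weight learner in Python. Everything here is illustrative (the names, the target function, the learning rate); the point is only that the final weight is produced by exposure to data, not typed in by anyone.

```python
import random

# Toy sketch: the programmer writes only the update rule, not the answer.
# A single-weight neuron learns y = 2x purely from examples it is shown.
w = random.uniform(-1, 1)   # starts as an arbitrary bit pattern in memory
lr = 0.05                   # learning rate

for _ in range(2000):
    x = random.uniform(-1, 1)
    target = 2 * x               # the "environment" the learner is exposed to
    y = w * x                    # the learner's current prediction
    w += lr * (target - y) * x   # delta rule: experience reshapes the weight

print(round(w, 2))  # converges near 2.0, a value nobody wrote into the program
```

Whether that weight is “just more bits in memory” is exactly the point of contention above; the sketch only shows that which bits they end up being is determined by experience.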

In the chess example I mentioned, only the rules of chess were taught to Alpha Zero, no strategy whatsoever (not even the piece values, which are of course not part of the actual rules). It became so good through self-learning, and in such a novel way, that we have had to do very deep analysis even to understand why certain moves are so effective (this was not the case with Deep Blue II, which simply played like a human with enhanced calculating ability).
It’s genuinely teaching us chess, not the other way round.

We don’t know this. We don’t know to what degree consciousness is spread throughout the brain or localized. It may be a meaningless question.

Thanks for the comments

Mijin,

Good point, the software can create paths and interconnections to fit an application. However, it does not alter the structure of the computer. The software neural net is not the equivalent of a biological neuron. It is a method of doing numerical regressions adaptively. Even though the process is stochastic, the input is structured by the programmer and the result is evaluated by the programmer.

I believe computers have taught us that chess is not a game of strategy. It is a matter of memorizing board patterns. That’s why chess masters can simultaneously play numerous games. They are responding to the immediate board pattern. It’s a table lookup.
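Taken literally, the “table lookup” picture would amount to something like the following sketch (hypothetical Python; the position strings and moves are illustrative stand-ins, not real engine notation):

```python
# Hypothetical illustration of chess-as-lookup: map a board pattern
# directly to a stored reply, with no search and no plan.
opening_book = {
    "start":           "e2e4",
    "start e2e4 e7e5": "g1f3",
    "start e2e4 c7c5": "g1f3",
}

def instant_move(position):
    # Respond to the immediate board pattern only.
    return opening_book.get(position)

print(instant_move("start e2e4 e7e5"))  # g1f3
```

Whether real play (human or machine) actually reduces to this is disputed in the replies below; the table would also have to cover an astronomical number of positions.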

Tripolar,

Excellent point. Could consciousness arise as an emergent property of computer complexity? If the computer appears to be conscious, is it? Do we need to identify the source of consciousness for it to exist?

Consciousness is either the product of physical properties within the computer or it is an ethereal property without specific instrumentation, i.e. a ‘soul’. Take your pick.

Some thoughts:
1 - As you say, neural nets and programs themselves can be evolved instead of written. I’ve done it myself and it works, but it leads to an interesting question about what conditions in the environment lead/push the evolution of the artificial brains towards specific functions, which is not an easy question to answer. In nature there are multiple strategies for genetic/species survival, and most of them don’t involve human-like intelligence.

2 - Everything in #1 is referring to things that have a known output, meaning we know what it looks like to win at go, we know what it looks like for an artificial creature to survive. We can describe those things using some amount of math and logic. We can’t really describe consciousness so it’s tough to figure out how to get there.
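Point #2 can be made concrete with a minimal genetic-algorithm sketch (hypothetical Python; the target bit string stands in for any output we know how to describe and score, which is exactly what we lack for consciousness):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # a known, scoreable goal

def fitness(genome):
    # Evolution can only be pushed toward outputs we can measure.
    return sum(g == t for g, t in zip(genome, TARGET))

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break                        # a perfect genome has evolved
    parents = pop[:10]               # selection: keep the fittest half
    children = [[1 - g if random.random() < 0.05 else g   # 5% mutation
                 for g in random.choice(parents)]
                for _ in range(10)]
    pop = parents + children         # elitism: parents survive unchanged

print(fitness(pop[0]), "of", len(TARGET))
```

Remove the `fitness` function and the whole process goes nowhere, which is the difficulty with evolving toward something we can’t define.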

RaftPeople,

It’s the old ‘defining the problem implies the solution’.

You’re most welcome.

Not really. Table look up happens during the opening, but then games diverge into something novel.

Now it’s true chess players look for certain patterns, you’re quite right about that, but I think it’s misleading to put that in contrast to strategy.
Because the patterns are not “look for a ring of pawns” they are abstract heuristics like “set up a pin that will ultimately force the weakening of the king side pawns”. Awareness of certain patterns like this and then coming up with a plan is the strategy.

Or, to put it another way, what’s the difference between this and coming up with a strategy to solve any kind of problem out there in the real world?

And, incidentally, this is why I, as a chess player, find the new deep learning chess programs so exciting. Because, for example, they will sacrifice material in a position where it would not be possible even for a computer to calculate the position out to checkmate, or to the winning back of material. They somehow have their own chess “gut” – that we didn’t program – that tells them that cramping the opponent is worth a knight here.

Mijin,

I’m an engineer with minuscule chess experience. The romance of the game and its strategies is immense. However, from an engineering standpoint each board situation is unique. The next move does not depend on some strategic history. There is a statistical best move that the computer will make. If there is a choice among moves of equal value then a ‘randomized’ selection process will take place. An expert observer may label the resulting game brilliantly strategic, but any strategy was chance combined with the skill of the programmer. The computer did not ‘know’ what it was ‘doing’.

In the instant when a computer is active, the only thing a computer can do is change the state of the bits in a single memory location. The only thing it can decide is the condition of its status register. That is the only time and place there is anything happening in a digital computer. How can that result in consciousness? If consciousness is not located in the adder then where might it be in the computer architecture? If it is a ‘thing’ then we should be able to locate it.
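For concreteness, the single-cycle picture being described looks something like this toy machine (a hypothetical Python sketch, not any real instruction set): each cycle alters one memory cell or consults the status flag, and only the run as a whole “means” anything to an observer.

```python
# Toy machine: each "cycle" changes one memory cell or tests one flag,
# yet the sequence of cycles as a whole computes 3 * 4 by repeated addition.
mem = {"a": 3, "b": 4, "acc": 0}
zero_flag = False

program = [("add", "acc", "a"), ("dec", "b"), ("jnz", 0)]
pc = 0
while pc < len(program):
    op = program[pc]
    if op[0] == "add":            # one cell changes this cycle
        mem[op[1]] += mem[op[2]]
    elif op[0] == "dec":          # one cell changes, flag is set
        mem[op[1]] -= 1
        zero_flag = (mem[op[1]] == 0)
    elif op[0] == "jnz":          # only the status flag is consulted
        if not zero_flag:
            pc = op[1]
            continue
    pc += 1

print(mem["acc"])  # 12
```

That the result is “a multiplication” is, as the posts below debate, an interpretation imposed by the observer, not anything any single cycle contains.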

That’s not true at all, but even if it were it would be irrelevant. A computer can, through a single thread of steps, still emulate anything that a brain can do with multiple processes running at the same time.

So let’s say that our computer is not incredibly fast and takes longer to do the same things a brain does. Does that then make the machine any less conscious? Is a person who thinks slowly less conscious than someone quicker of wit?

Why would consciousness be in an ‘adder’ or any other single part of the machine or brain? Does consciousness reside in just one neuron in a brain? Or just some distinct part of a brain?

Tripolar,

Excellent points.

During the execution of a single cycle what can a computer do besides alter the state of memory or test the status register?

Things that are seen as a result of the execution of a sequence of cycles are interpretations made by an observer. These may be wonderful and momentous, but the computer is not aware of them. The only time the computer is active is during the execution of a single cycle.

What active element exists in a computer outside of the adder? What decision is made outside of the status register? What else could be conscious?

Computers can have multiple processors. Multiple general CPUs and plenty of other specialized processing units. So there can be a lot of things happening at once.

Again, this single cycle concept is not a constraint. Everything a brain does is the result of numerous sequential and concurrent processes, and is interpreted by observers in the same way. And the computer is far more aware of what it is doing than a brain is, though I think you mean the computer is not aware of the interpretations. The computer, just like you, is not aware of the interpretations of observers unless they communicate them back.

The status register decides what happens to the current code address. Just as a single neuron doesn’t make the decision of who you vote for neither does the status register make complex decisions. The brain has a system for making decisions and a computer can perform the same task.

You seem to be looking for a homunculus to explain consciousness.

No, not homunculus. As I said above, I am following the current view that consciousness occurs in an identifiable area of the brain. I will not presume to present any evidence as though I understand it. It is a brain activity that has a location.

The brain is a massively interconnected, parallel, electro-chemical computer. Electromagnetic signals provide evidence that brain activity occurs in waves. It is an analog computer.
You are correct. If a program could be written to model all types of neurons and all neuronal processes and to model all of the chemical processes that take place in the synapses and around the neuron and if we had a schematic of their interconnection then, when a computer large enough and fast enough is built, we could simulate the brain.
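As a sense of what the building block of such a simulation would look like, here is a leaky integrate-and-fire neuron, a standard (much-simplified) model, sketched in Python. The parameters are illustrative, not physiological, and a brain simulation would need billions of these with modeled synapses between them.

```python
# Leaky integrate-and-fire neuron: membrane voltage integrates input
# current, leaks back toward rest, and fires/resets at a threshold.
dt, tau = 1.0, 10.0                  # time step, membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

v = v_rest
spikes = []
for t in range(100):
    i_in = 0.15                              # constant input current
    v += dt * (-(v - v_rest) / tau + i_in)   # leaky integration
    if v >= v_thresh:                        # threshold crossing
        spikes.append(t)
        v = v_reset                          # fire and reset

print(len(spikes))  # fires 9 times over this run
```

Even this leaves out the chemistry at the synapses, which is part of why the “large enough and fast enough” qualifier above is doing so much work.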

“Again, this single cycle concept is not a constraint. Everything a brain does is the result of numerous sequential and concurrent processes and interpreted by observers in the same way. And the computer is far more aware of what it is doing than a brain”

Could you elaborate on the above? As you point out, the brain is concurrent; the computer is not. What part of the computer is aware of what it is doing? How is a computer more aware of what it is doing than the brain? How is the computer aware if it is not concurrent?

Chess against a human is very much a game of strategy.

If a given move has a 49% chance of winning but another move has a 47% chance of winning in theory but a 51% chance of winning against your particular opponent, you go with the latter.

This is why chess masters study the past games of their opponents. I mean, in a championship tournament they study and study and study. The state of the board is only part of the equation.

(Even a computer vs. computer match could involve one computer being fed a history of the past games of the other to give it an edge.)

Furthermore, no computer, not even Deep Blue, knows what the best next move is all the time. There are formulas and searching and such, but those are limited and not perfect. If a computer could play the game perfectly right now, then we would already know whether White could always win. I.e., the formulas would say “This opening move has a 100% chance of winning.”
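The “limited and not perfect” point comes down to the depth cutoff: positions at the search horizon get a heuristic guess rather than a proven value. A sketch with a trivial take-1-or-2-stones game (hypothetical Python, not an actual chess engine):

```python
# Game: players alternately take 1 or 2 stones; taking the last stone wins.
# A deep enough search proves the result; a shallow one can only guess.

def negamax(stones, depth):
    if stones == 0:
        return -1    # previous player took the last stone: we have lost
    if depth == 0:
        return 0     # search horizon reached: heuristic guess of "unclear"
    return max(-negamax(stones - take, depth - 1)
               for take in (1, 2) if take <= stones)

print(negamax(7, 10))  # deep search: +1, a proven win for the player to move
print(negamax(7, 1))   # shallow search: 0, no idea either way
```

Chess has vastly more positions than any search can exhaust, so real engines live permanently in the shallow-search regime and lean on imperfect evaluation at the horizon.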

Thanks for the enlightenment.

What else is there besides the state of the board? Can any board state be reached by only one sequence of moves?

If chess is a game of strategy, then how can a chess master play many games at once and make the moves instantly?

Be careful of the “anything” statement. If consciousness is real, and the brain creates it, we still don’t know which attributes of the brain are the ones that create consciousness. There is a thread in GD started by HMHW with arguments (and a paper) about why consciousness is not created by computation.

Even the best statistical move is still dependent on a future sequence of events, and as ftg pointed out, the opponent has biases which will influence their play. The best move for a specific match when the board is in state X may not be the same best move for state X when the players are different.

If two computers are playing, each with the same complete set of statistical data about board states and best moves, and they are not programmed to try to trick the opponent, then you are probably right: they should both always choose the best move and there is no strategy.

I guess this kind of implies strategy is dependent on incomplete or imperfect information.

Again, this is an extremely misleading characterization. Deep learning algorithms are generic learning algorithms, and when the computer makes a very novel move, we – all humans – don’t initially understand that move.
To say that’s the skill of the programmer…you may as well say everything I do is the skill of my mom.

As for whether the computer itself knows what it’s doing…that’s debatable, that’s why we’re at an interesting point in AI. Though we can’t reverse engineer Alpha Zero’s decisions, it’s conceivable that we could one day plug in a back end that can describe why it favors particular moves (and people are working on this of course. If deep learning algorithms could articulate their decisions that would have a massive impact on many fields, not just games).

I wouldn’t read too much into the play style of chess players during “simuls”.

They are normally pitched against a number of significantly weaker players. Playing someone 200-300 Elo points lower than me, I can normally play basically instantly too, because I basically know what they’re thinking and can predict very well what their next move is likely to be.
(In fact, playing someone much worse than me (500 points down say) is in some senses trickier because their moves look almost random)

Playing opponents within 100 rating points is much slower, and involves studying the position and trying to make smart plans, for computers and humans.

If it were just a lookup table (as you suggested earlier), then why does the AI need thinking time?

But we don’t know the answers to these questions for human brains.
So the logic of implying that because computers today are sequential (which…isn’t actually true, but anyway) they can’t be conscious, doesn’t follow. We don’t know if parallel processing is a requirement.

RaftPeople,

“I guess this kind of implies strategy is dependent on incomplete or imperfect information.”

Agreed, it’s fuzzy logic - an uncertain conclusion based on imprecise data.

Mijin,

I greatly admire your chess skills!

All current numerical computers are serial by instruction. Some have significant concurrent processing, but they are not truly parallel. What minuscule knowledge I have of quantum computers indicates they may fill the void. They seem to be an overlap between analog and digital.

Observing what computer programs produce today is awe inspiring. But I believe there is value in remembering that the computer does not recognize faces, drive a car or search for recipes online. All the computer does is get something from memory, do something to it, put it back and test the status register. It’s amazing what you can accomplish with a not very bright but tireless slave who follows instructions exactly.

I’m on the third read of **HMHW**'s paper. I don’t disagree with his conclusion and I love the methodology he used to come to his conclusion.

Link to a thread with a great paper by Half Man Half Wit on the subject on consciousness and computing.

I’m still not finding non-computable processes to be a problem. I mentioned in his thread that the introduction of randomness into computable processes makes them non-computable, but I think that’s a distraction. The non-computable processes can just be hardwired rules used for encoding and decoding within a model system. They would be polymorphic archetypes for new models that are then refined by trial and error to form new rules. Some reasonable number of different encoding/decoding rules can be combined in different ways to evolve a very large set of more complex rules that are used to create new models without a problem of regression.

I should get back to this in the other thread at some point.