What Is Consciousness?

Well, fine, then define computation in terms of Turing machines, partial recursive functions, lambda calculus, what have you—they’re all equivalent.

Not all computations have an input—a program that does nothing but print ‘hello world’ when executed is a perfectly fine computation (and of course, all inputs can always be considered part of the program, which is why one can use inputless finite state automata as a model for all computations that can be carried out in the physical world).

Not all computations have outputs, either—just take a program that enters into an infinite loop.

Finally, the information is not necessarily transformed—as I said, the identity function is also a computation, albeit a trivial one.

OK

Crane

G. L. Clapper (IBM) made a unique voice recognition system around 1960. The system consisted of a microphone and amplifier followed by a large number of measurement devices that processed the signal in various ways. The output of each processor, if great enough, turned on a lamp. The lamps were mounted on a board, with no evident arrangement. When you spoke a word from a limited vocabulary (the digits 0-9) into the microphone, some of the lights went on in response. After going through the vocabulary a few times, you could easily recognize the pattern associated with each digit. It was quite reliable, but the machine did not have a single output for each digit.

We made an identical system and substituted a many-channeled chart recorder for the lamps on the output. The result was the same. Anyone familiar with the system could look at the charts and tell which words were spoken.

All we had to do was come up with a Boolean equation that converted the states on the chart to discrete outputs for each digit input. The problem was that such an equation did not exist. We pondered this problem with some very high-priced help, but couldn’t solve it. We could see the correct answer, but no logical system could.

The patterns output by the Clapper machine for a given digit were always characteristic of each other, but they were not identical. They were shifted, offset, incomplete, or noisy, but it was still easy to identify the digit. Some of the measurements were always high or low for every digit. We removed these because they carried no information. That ruined the patterns. The background information somehow affected the output.

This may give some insight into consciousness. What we perceive is not an array of discrete states, like the pixels on a screen - an array that can be resolved by a logical system. What we perceive may be a blurred array of data from which we somehow filter out reality.

Crane

I can see the argument that an inert object could hold a close one-to-one correspondence with a non-inert object. For instance, a computer programming textbook will have a listing of code.

x = 0
for n = 1 to 10
x = x + n
next n
print x

This listing, here, is inert, but it is very similar to the actual code in the computer that actually runs. I can accept that this inert listing implies the result (“55”); the listing contains the same information that an actual run would contain.
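For comparison, here is a direct Python rendering of that BASIC listing (my own translation, not from the thread) — any interpreter, mechanical or mental, evaluates it the same way:

```python
# Direct translation of the BASIC listing: sum the integers 1 through 10.
x = 0
for n in range(1, 11):
    x = x + n
print(x)  # prints 55
```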

But…at that point I agree with you. The listing doesn’t “run.” (Or compile or even interpret.) It’s a “picture” of an actual operation.

In my opinion (and I’m guessing you’re in agreement) the inert representation of a conscious process is not conscious, any more than a photograph of a volcano is hot.

You’re very kind, but, to be brutally honest, no, it wasn’t. I was thrashing around, not unlike a fish out of water. I don’t really have anything to contribute, not at Half Man Half Wit’s level. At this point, the best I can do is describe what I think, but cannot actually put it together properly or formally.

Trinopus,

You are too modest.

As to the listing, agreed - it’s the old thing about “the map is not the territory.”

Crane

What’s cool is how close the map and the territory can become. You could easily implement the code I listed in your head. So…you are emulating a BASIC interpreter, at least for such an easy listing.

My belief is that consciousness could be “listed” in an (insanely long!) BASIC program. But it doesn’t actually become conscious until someone starts to run it.

It doesn’t matter (to me) how they run it. It could be a formal BASIC interpreter…or it could be some hyper-intelligent entity reading the listing and “running it” in his mind. The only real requirement is that they run it, and run it properly, not leaving out any steps.

Away over in another corner of human existence entirely… Many writers have said how eerie it is, sometimes, when you create fictional characters. They sometimes seem to develop “minds of their own.” I can attest to this sensation! A character will assert his role in the story, very much as if he were an actual independent person.

In practice, this is (I think) a runaway implementation of our empathic sense. We spend so much of our time trying to see the world the way others do (“Why is he looking at me like that? Do I have a spot of mustard on my shirt?”) that we exercise this talent even when not appropriate. This is how we see pictures in clouds, and why we sometimes think, “My computer hates me!”

Ultimately, I think this is what consciousness is: we study ourselves with the same attention to detail that we study others. Being right on top of the subject, we have a pretty good view.

But the most astonishing thing about consciousness is that we don’t have perfect awareness of ourselves. We can surprise ourselves!

That part is going to be hard to emulate computationally!

It’s fun to think about this problem, but when I step back and look at the amount of analysis that has been performed by very smart people with no clear answer, I’m left wondering if one of two things is true:

1 - This problem is beyond our brains (see the thread in IMHO about alien intelligence, maybe this is an example of a next step up in intelligence)

or

2 - The problem is quite solvable, but only by doing; it’s too big and complex to just think about and solve. Kind of like how fully grasping the IRS’s computer systems would be too big for our brains.

But the point of the rock argument is that the rock does run the program, in exactly the same sense that any computer implementing it does. It’s not, as I have been at pains to point out, an inert object, so focusing on how inert objects don’t ‘run’ the program misses the mark.

A film of a volcano isn’t hot, either. Neither is a simulation of one.

This is one of the reasons I have suggested the problem might be reduced to simpler problems. It’s why much of AI research focused on image recognition as a lesser problem. If we can work out how a mind sees patterns, we might have a better idea of how it does more complicated cognitive tasks.

No one can possibly grasp the IRS’s computer systems…but we can look at parts of the massive system, and get a good idea of what they’re doing.

I wish I could say why this idea is so incredibly unsatisfying. I’m in the same position as Biffster rejecting the idea that Shakespeare is encoded in pi. I just can’t buy into it, even as, at the same time, I can’t rebut it.

However, if the rock actually is truly “running the program,” then the rock is conscious. There isn’t any way around this. I think the rock isn’t doing jack shit, and is either wholly inert or quietly and randomly eroding. But if it really is running a program that emulates a thinking mind…then it is a thinking mind.

(For this to work, the rock also has to emulate the environment the mind operates within. It has to have simmed inputs to the mind, as analogues of sensory data, and it has to have some means of storing data, as an analogue of memory. A mind, alone in a vacuum, isn’t much of a mind: it has nothing worth thinking about. But, why not? If the rock can emulate a mind, it can also emulate a mind’s environment. And those two, together, make up conscious thought.)

The Chinese Room does understand Chinese.

I can’t have it both ways…but neither can anyone else.

Consciousness is the appearance of consciousness in a thing to some other conscious thing, or itself. The appearance of consciousness must include the appearance of determining that some other thing is conscious.

Pi can’t be conscious because it’s not a thing. An expression of pi can be conscious, though. It can appear to be conscious to some other thing that tries to determine its consciousness by examining the expression and finding what appears to be conscious. Some people would say it’s static in that it’s invariant in its total set of digits, but something else that doesn’t know that and examines the expression might find it to be conscious.

The result is that anything can be conscious, because something else conscious can’t tell if it’s just a rock, and can’t tell that it’s not determining if something else is conscious.

So anything can be conscious.

So now I am extremely tired, and I want to go lose consciousness.

Trinopus,

Yes, I believe we run Basic programs.

On NPR a while back there was a segment on a doctor who was studying idiot savants. The subjects could tell him the day name for any date, past or future. He got the algorithm for calculating the day names of dates and checked their accuracy. After working with the algorithm for two or three years, he suddenly did not require it. His brain ran the algorithm outside of his consciousness, just like his subjects’.
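The segment doesn’t name the algorithm, but the classic hand method for this trick is Zeller’s congruence. A minimal sketch:

```python
def day_name(year, month, day):
    """Zeller's congruence for the Gregorian calendar.

    January and February are treated as months 13 and 14
    of the previous year.
    """
    if month < 3:
        month += 12
        year -= 1
    K = year % 100   # year within the century
    J = year // 100  # zero-based century
    h = (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]
```

For example, `day_name(1776, 7, 4)` returns `"Thursday"`. With a couple of years of practice, the whole calculation fits comfortably in the head.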

We probably all do this as a byproduct of our work or hobbies. When I was working with digit sums, I could look at a row of numbers and the digit sum just ‘came to mind’.
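For reference, the digit sum is just the sum of a number’s decimal digits; its repeated form is the digital root, the basis of the old “casting out nines” check. A minimal sketch:

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(abs(n)))

def digital_root(n):
    """Repeated digit sum until a single digit remains.
    For positive n this equals 1 + (n - 1) % 9."""
    n = abs(n)
    while n > 9:
        n = digit_sum(n)
    return n
```

So `digit_sum(1234)` is 10, and `digital_root(1234)` is 1.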

Crane

I don’t know if this is good science, or if it is good philosophy…but it is actually pretty good poetry. (Not meant as snark; I’m actually half joshin’ and half serious.)

If one were the only conscious thing in one’s entire environment – say, a guy lives in a big garden with no animals – would his own consciousness serve any purpose? He can’t use it to model the behavior of anything else: plants don’t have behaviors complex enough to be modeled. Over time, wouldn’t he just “go to sleep” in his own mind, going through more and more hours of each day in a kind of trance?

The extreme case of this is total sensory deprivation.

A more commonplace instance is a long weekend in Fresno.

There’s some science, and some philosophy, and maybe some poetry in there. It’s just one possible way to look at the question. I’ll have more later on, the same kind of mix of genres.

I don’t think the purpose of consciousness needs to be a factor. Just my opinion, but the self-references can be removed and I think it still holds up, for what it is.

Been to Fresno, good example.

ETA: But the question of whether a consciousness can determine its own consciousness is interesting.

If one is conscious, but no other being exists to recognize its consciousness, is it still conscious? E.g., if you were the last man alive on the planet, would your consciousness matter?

I concur. The rock isn’t doing jack shit. Maybe that’s how the old people felt about the rock music I grew up with. They couldn’t relate to it, even if it was inspiring to me. Different consciousness.

Good point - if we cannot define it, we probably cannot recognize it.

Consciousness in a computer would have to develop just as it does in humans (animals):

  1. The system would have to develop its neuronal interconnections and weights in response to environmental stimuli.
  2. It would require a Boolean system as a subset that drives it to seek stimuli from its environment.
  3. It would have to receive rewards, passivity, and punishment in response to its exploration.
  4. It would have to learn to communicate with humans the same as a child - face to face conversation.
  5. As it learned language it would be taught to associate the words with text.
  6. The project would require 2 machines - a subject and a teacher.
  7. It would require 2 forms of visual input - binocular vision video and direct electronic video feed.
  8. It would require sensory receptors for environmental perception and for sensual pleasure.

There are two key points: you cannot design a machine to be conscious, and it will become conscious over time, in tiny steps.

A starting point is Michie’s matchbox computer that plays Tic-Tac-Toe. The machine has a matchbox for each possible board situation (~300). Each matchbox starts with the same number of colored beads, one color for each possible move.

For the machine to play, the matchbox corresponding to the board situation is opened and a bead randomly selected. The color of the bead determines the move. At the end of the game, a loss results in the beads that were used being removed, a win causes a duplicate bead to be added to each matchbox used.

It does not take long for the machine to become very skilled. This operation is easy to program which allows the machine to play both sides. It can run through thousands of games in minutes after which it ‘knows’ how to play at a high level of skill.

It would be interesting to see how far you could go with such a machine using an input system like Arduino.

Crane

Speaking as a Vivaldi man…agreement!

I’d say, however, that we could probably emulate consciousness in a designed machine.

I remember when Martin Gardner introduced something like that in his “Mathematical Games” column in Scientific American. It was the “Hexapawn Educable Robot,” or “HER.” Even simpler, as it required only about twenty matchboxes. A very interesting simulation of “learning.”

I definitely agree with you that this is a very good way to “engineer” consciousness. We simply set up a large-scale “learning machine” and…let it learn.

Is it the only way? I don’t think so…but it is definitely an approach, and I think one that would (and will!) work.

I tried this (not consciousness, just some level of smarts) with an artificial life/neural network simulation (continuous evolution of neural network).

It works, but it’s not easy. The creatures kept reaching some level of being able to find food and avoid being eaten, but then plateaued. It seems like the next step is to keep shifting the environment to trigger new skills/attributes, but it takes time to fine-tune the selection criteria; survival alone isn’t enough (at least at the scale I was working).

Frequently, less “intelligent” actions would win out over what appeared to be good sense/reaction. For example, the creatures had eyes that were essentially X little radars: each shot out a beam and triggered a value in an eye neuron based on the distance before it intersected with something (farther caused a smaller value) and on what type of item it contacted (food, creature, inorganic object like a wall, etc.).
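An eye like that can be sketched as a simple ray-caster; the geometry and names here are my own illustration, not the original simulation:

```python
import math

def eye_signals(pos, heading, objects,
                num_beams=5, fov=math.pi / 2, max_range=100.0):
    """Cast num_beams rays across a field of view.

    Each beam returns (signal, kind): signal is in [0, 1], larger for
    closer hits (0.0 means nothing in range); kind is the type of the
    nearest object hit. Objects are (x, y, radius, kind) circles.
    """
    signals = []
    for i in range(num_beams):
        angle = heading - fov / 2 + fov * i / (num_beams - 1)
        dx, dy = math.cos(angle), math.sin(angle)
        best = (0.0, None)
        for (ox, oy, r, kind) in objects:
            # Project the object centre onto the ray, then check the
            # perpendicular distance against the object's radius.
            t = (ox - pos[0]) * dx + (oy - pos[1]) * dy
            if t < 0 or t > max_range:
                continue
            px, py = pos[0] + t * dx, pos[1] + t * dy
            if math.hypot(ox - px, oy - py) <= r:
                signal = 1.0 - t / max_range  # nearer hit => larger value
                if signal > best[0]:
                    best = (signal, kind)
        signals.append(best)
    return signals
```

A bundle of such per-beam values is what feeds the eye neurons, and it is easy to see how a behavior like “rove in a circle” can score well on those inputs without anything resembling pursuit.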

The creatures would learn to follow other moving creatures to get an opportunity to eat them, but then weird successful behaviors would start to take over, like just roving in a large circle. There were enough creatures that a circular motion was pretty likely to randomly intersect other creatures and allow it to take a bite/get some food.

So, to counter that, I had to tweak things to try to make that behavior less likely to be successful. Fewer creatures, more obstacles, etc. But then some other “unintelligent” approach would tend to win out over time. So more tweaks to try to require some level of intelligence.

My takeaway was that there is a fine balance in the environmental conditions that allow more complex/advanced attributes to evolve, and it’s harder than I assumed going in.

It might matter to me. And there could be other conscious things in the universe. I already think computers can be conscious, though at a low level compared to humans. I consider animals to be conscious in that way also.

I think if I am alone and isolated for long enough, without even a low level consciousness to communicate with I might start seeing consciousness in things I wouldn’t consider conscious now, but more likely I’d just kill myself.

That sounds like a really nifty experiment! Were you using “supercomputers” or something a little less data-intensive? i.e., how big was the environment, and (in rough terms) how much data did each individual critter take up?

(I never got beyond A.K. Dewdney’s “Wa-Tor” sims, of the sort any jasper might run on a PC using BASIC. But even those are fun and educational.)

My suspicion is that even the biggest supercomputers, today, aren’t enough to simulate/emulate a working ecology, even a very stripped-down, simplified one. But…we’re getting closer all the time!