Artificial Intelligence and Consciousness (and dreams)

So guys, this might be a bit dense, but I’ve been wrestling with several issues. When I was a teenager, I was interested in lucid dreaming. I read what was available on the internet at the time (1996 or so, so it was pretty slim pickings) and looked for anything interesting. I also ran across this…
Conscious Dreams and Controlled Hallucinations by Claude de Contrecoeur.

This guy’s deal is that he used lucid dreaming to study the subconscious. I have had the pleasure of a few lucid dreams in my time, and I also have moments of lucidity in normal dreams where I can notice the things going on in his treatise.

To save you guys the trouble of reading it (though I’d recommend that you do), here is his basic premise. I’m trying the best I can to recount it here, but it’s a bit hard to condense in a way that’s easy to understand.

We store memories in what he calls the MHV. I am not sure what it stands for, but I think he probably detailed it in French at some earlier point. Anyway, the way he describes the concept is pretty clear.

Imagine you’re in a dream. You look at your watch, then look away; when you look back, your watch has changed. During dreams these memory areas are in a high metabolic state and “radiate,” meaning they spread along commonalities they share with other memories. It can be thought of like a domain. For instance, every watch you have ever seen is in the “watch” domain, and from any one watch the activity radiates not only into every other watch you’ve seen but possibly into ones you haven’t seen before.

Have you ever noticed this in dreams? The way things can sort of “morph” into other things? Well, according to him, there is always some connection between the two objects, and our minds jump from one to the next through some sort of common characteristic. He gives another example of how he once saw an opening door morph into a crab opening its claw, which he later realized was linked by the speed at which it happened.

Well, his subconscious explorations are one thing, but it really does provide a good model for consciousness and for what we as humans are all about. He uses this model to describe a lot of things. For instance, that during exposure to cannabinoids we don’t suffer from amnesia, but rather hypermnesia. Remembering more than we normally do isn’t an advantage, because hypermnesia leads to confusion and an inability to process what is going on.

So essentially what we as humans do is take our internal world and match it against the external world. The thing that stops us from hallucinating is the fact that when we look at real-life imagery we attenuate it to a specific MHV where it becomes solid. That is recognition. It is essentially what consciousness is.

So in this concept of an MHV domain again, going back to watches, we have every watch that we could ever see, and also watches that don’t exist. This ability to mix and match objects to create new ones is creativity in itself.

So that’s as best as I can do for the summation of his ideas. It’s not a very good one, in reality, but it’s the best I can do. Here’s what I got out of it:

Human intelligence is simply all about pattern recognition, large and small. Consciousness is best described as a lack of confusion over what we are sensing. In every single thing that we do, we are comparing the real world to what is in what Claude calls MHV domains. Every time you see something and recognize it, it is because it is linking up correctly with your memory areas. You can get new ideas by synthesizing two older ideas, which then become a new slice of an MHV, or maybe even a new one entirely.
Problems with machine-based pattern recognition:
The main problem as I see it is that we have no good way to reconcile the difference between human pattern recognition and machine pattern recognition. Even if you imagine that the only possible sensory input is a 100x100 black-and-white picture, you can still come up with problems that would trip up a machine. Imagine this: a large triangle, and one half its size, compared in successive slides. The human would instantly say, “It’s bigger,” yet the machine (comparing basic pixel differences) would not notice that the two shapes are similar.
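To make the triangle example concrete, here’s a minimal sketch in plain Python (all function names are mine, purely illustrative): a raw pixel-by-pixel comparison sees the two triangles as very different, while first cropping each shape to its bounding box and resampling to a common size reveals that they are the same shape.

```python
# Hypothetical sketch: naive pixel comparison vs. scale-normalized comparison.

def triangle(size, grid=100):
    """Binary grid x grid image containing a filled right triangle of given height."""
    img = [[0] * grid for _ in range(grid)]
    for r in range(size):
        for c in range(r + 1):  # row r of the triangle is r+1 pixels wide
            img[r][c] = 1
    return img

def pixel_difference(a, b):
    """Fraction of pixels that differ -- the naive machine comparison."""
    total = len(a) * len(a[0])
    diff = sum(a[r][c] != b[r][c] for r in range(len(a)) for c in range(len(a[0])))
    return diff / total

def normalized(img, out=20):
    """Crop to the shape's bounding box and resample to a fixed out x out grid,
    so that shapes are compared independently of their size."""
    rows = [r for r in range(len(img)) if any(img[r])]
    cols = [c for c in range(len(img[0])) if any(img[r][c] for r in range(len(img)))]
    r0, r1, c0, c1 = min(rows), max(rows), min(cols), max(cols)
    h, w = r1 - r0 + 1, c1 - c0 + 1
    return [[img[r0 + r * h // out][c0 + c * w // out] for c in range(out)]
            for r in range(out)]

big, small = triangle(80), triangle(40)
raw = pixel_difference(big, small)                          # large: many pixels differ
shape = pixel_difference(normalized(big), normalized(small))  # near zero: same shape
print(raw, shape)
```

Of course, the interesting question in the post stands: here the “normalize for scale” step is hand-coded, which is exactly the kind of hard-wired machinery the graphics-card analogy below suggests evolution may have given us.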

We could of course go in and code the computer to recognize differences in size between objects, but wouldn’t that be the wrong idea? On the other hand, maybe that’s how humans do it. Perhaps there is a hard-coded area of our brain (like graphics hardware in a computer) that translates imagery into useful ideas. So when we view a sheet of paper lying on the ground 10 feet away, this graphics card (handed down by evolution) can say: wait a minute, that’s not actually some weird parallelogram, but a rectangle.

There has to be some mix of genetic and learned behavior, and any genetic component would have to be hard-coded. This spatial-converter thing could be an example.

As far as AI is concerned, I think this would be an interesting approach. I’m not entirely sure whether it’s even true, but it sounds like a good model to try.

What is lacking is a good way for a computer to not only detect patterns, but also know where to even look for them. Here’s an example. Imagine an even more basic data stream.

0000 1111 0000 1111

vs.

0000 0000 1111 1111

A computer can be programmed to determine when this string is on or off, but can a computer take this information and get something significant out of it? A human can look at the second string and say, “Well, the first half is turned off while the second half is turned on.” How can you even begin to get a computer to recognize something like this on its own? I have no idea where to start. What seems so natural in a human is so wound up in our biology and nature that the hard-coded part of our brain (the pattern-recognition part) is what makes it so difficult. So the difference would be not only that we have far more RAM than other animals, but also that we have a lot more pattern recognition built into our genetics.
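One very rough way a program might pull structure out of those bit streams without being told what to look for is to summarize them as runs and then describe the simplest regularity it finds. This is only a sketch under my own assumptions (the `describe` heuristics are arbitrary), not a real answer to the “where to look” problem:

```python
# Hedged sketch: summarize a bit string as runs, then describe any simple structure.

def runs(bits):
    """Run-length encode a bit string: '00001111' -> [('0', 4), ('1', 4)]."""
    out = []
    for b in bits:
        if out and out[-1][0] == b:
            out[-1] = (b, out[-1][1] + 1)
        else:
            out.append((b, 1))
    return out

def describe(bits):
    """Report the simplest run structure found, if any (heuristics are arbitrary)."""
    r = runs(bits.replace(" ", ""))
    lengths = {n for _, n in r}
    state = lambda b: "off" if b == "0" else "on"
    if len(r) == 2 and len(lengths) == 1:
        return "first half is %s, second half is %s" % (state(r[0][0]), state(r[1][0]))
    if len(lengths) == 1:
        return "alternates on/off in blocks of %d" % r[0][1]
    return "no simple run structure found"

print(describe("0000 1111 0000 1111"))  # alternates on/off in blocks of 4
print(describe("0000 0000 1111 1111"))  # first half is off, second half is on
```

The catch, of course, is that I hand-picked “runs” as the feature to look at, which just pushes the original question back a level.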

One problem with machine pattern recognition is that it has very little room for error. For machine pattern recognition to be as versatile as human pattern recognition, I think it would need to become precisely that… human. This is also one of the reasons I believe that superhuman AI isn’t possible. The very ability that makes man a good pattern recognizer is the fuzziness of the comparison. What makes us creative and able to adapt is precisely what makes us unable to find exact differences in strings of numbers.

Compare these two strings:

00000000001111111111

00000000011111111111

You probably knew they would be different, but our minds think, “Close enough,” unless the strings are right next to each other, which makes the one-digit difference stand out.

A computer would figure out the difference through a bitwise XOR or whatnot and notice that one bit near the middle was different.
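The contrast between the two styles of comparison is easy to show in a few lines of Python. The exact machine answer comes from XORing the two values; the “close enough” human-style answer is a similarity fraction against an arbitrary threshold (the 0.9 below is purely my assumption):

```python
# Exact vs. fuzzy comparison of the two strings from the post.
a = "00000000001111111111"
b = "00000000011111111111"

# Machine-style exact answer: XOR the values; the single set bit in the result
# is exactly where they differ.
diff = int(a, 2) ^ int(b, 2)
position = diff.bit_length() - 1  # differing bit, counted from the right
print(position)  # -> 10

# Human-style fuzzy answer: overall fraction of agreement against an arbitrary
# "close enough" threshold.
similarity = sum(x == y for x, y in zip(a, b)) / len(a)
print("close enough" if similarity >= 0.9 else "different")  # -> close enough
```

The same data, two verdicts: “bit 10 differs” versus “close enough,” which is exactly the precision-versus-fuzziness trade-off described above.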

So I guess I’m trying to say that I think the most difficult part of understanding human intelligence is understanding our ability to recognize and compare patterns. This is probably aided by some heavy-duty hardware (probably evolved) which essentially helps us along the way. It’s probably also so difficult to be aware of because it sits outside our normal consciousness. We can only be aware of our memories and how they interact with our senses to provide a picture of the world.

Any comments or insight about this?

I’d love to hear more about this line of thought. I realize that this Claude fellow seems odd, but I can’t help but appreciate his insight, which in a lot of ways makes a lot of sense. What he discusses is essentially just a way of logically understanding some functions of the brain here and there, but I find it fascinating. He also delves into various differences between schizophrenics and dreamers, etc.

I think that you may find Stephen Grossberg’s work very interesting. This article (warning: PDF) in particular addresses some of what you touch upon in a highly developed and disciplined manner, from both a machine-computational point of view and a neurobiological one. I personally believe that he nails it amazingly well. Conscious states are circumstances in which there is a resonance, a positive feedback loop, between a top-down pattern and a bottom-up input. The article is no easy read but is well worth it.

Great! Thanks a lot for the info. I’m going to tear into it.

I’ll give an example of something I think would work pretty well if I could code it right.



Imagine a 4x4 grid of digits like so..

0000
0000
0000
0000

Just try to envision it as pixels, where a 1 means the pixel is black. So let’s imagine that our program is an animal. He wants to identify food that is edible.

0000
0110          this is completely edible. Any time that one of the outer squares is changed to a 1
0110          it is poisonous to some degree...
0000

1111
1001             this is completely poisonous. It represents what the animal is trying to 
1001             avoid
1111


So the thought exercise for me here was to try to figure out a way to write a program that could somehow figure out that the outer squares are poison, without having to explicitly tell it. I have some ideas of how I might begin, but it’s still really difficult.

Can you think of a simpler example that still gets to the core of the problem? I guess I’m saying that I’d like the program to recognize that coordinate 1,4 is poison even if it has never tried to eat food with that coordinate active before. Eventually, with enough combinations, you could probably get it to identify the good vs. bad squares, but a human would figure it out very easily.

Thanks again for the article though…
ETA: The point of the program would be to try to operate in a manner similar to human memory: store multiple iterations of past morsels of food and use them to make future judgements. The method of judgement is obviously the hard part!
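The “store past morsels and judge new ones” idea can be sketched as a tiny nearest-neighbor learner over the 4x4 grid (all names here are mine, purely hypothetical). Interestingly, the sketch demonstrates exactly the difficulty raised above: when it meets a morsel with a never-before-tasted outer square, the closest stored memory is the safe pattern, so it wrongly judges the poison edible.

```python
# Hypothetical memory-based learner for the 4x4 food grid (row-major tuples).

SAFE = (0, 0, 0, 0,
        0, 1, 1, 0,
        0, 1, 1, 0,
        0, 0, 0, 0)  # the "completely edible" pattern from the post

def hamming(a, b):
    """Number of cells where two food patterns differ."""
    return sum(x != y for x, y in zip(a, b))

class Animal:
    def __init__(self):
        self.memory = []  # stored (pattern, was_poisonous) experiences

    def taste(self, food):
        """Eat the food, learn the true outcome, and remember it."""
        poisonous = any(f and not s for f, s in zip(food, SAFE))  # any outer cell set
        self.memory.append((food, poisonous))
        return poisonous

    def judge(self, food):
        """Predict from memory alone: copy the verdict of the closest stored morsel."""
        best = min(self.memory, key=lambda m: hamming(m[0], food))
        return best[1]

animal = Animal()
animal.taste(SAFE)                                  # remembered as safe
bad1 = list(SAFE); bad1[0] = 1; animal.taste(tuple(bad1))    # outer cell 1,1 -> poison
bad2 = list(SAFE); bad2[15] = 1; animal.taste(tuple(bad2))   # outer cell 4,4 -> poison

# A morsel with a never-seen poisonous square (index 3, i.e. coordinate 1,4):
novel = list(SAFE); novel[3] = 1
print(animal.judge(tuple(novel)))  # -> False: nearest memory is SAFE, so it fails
```

Getting it right would need something more like aggregating evidence per cell, or an inductive bias about “outer vs. inner”, which is the part a human brings for free.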

Yup. You’ll like the article.

Well, it may be easy for a human of a certain age (not sure what age, but it’s probably measured in years), which means the brain has been going through years of “programming” or learning.

In addition, consider that evolution has been working on our brain for 700 million years (I think I remember that as earliest nervous system) or more.

My point is that machine-based learning and pattern recognition is only about 50 years old; evolution has a significant head start. I think we’ll get there, but we shouldn’t expect it to be quick and easy.

While on the one hand you’re absolutely correct that humans are pattern recognizers beyond compare, on the other you’re overlooking vast expanses (and related issues) of what you’ve termed “consciousness.” As you know, we are more than our sensory perceptions and our memories of them. The “magic” of consciousness is not simply that we’re amazing pattern recognizers: there’s “higher” thought, emotion, ethics, etc. For a model to be useful, it must approximate some phenomenon to a satisfactory degree, and nothing approaching a comprehensive model has been established. (You might want to look at Marvin Minsky’s most recent book; last I looked, he had it posted on his website for casual reading. He often says that there are only two or three people in the world doing useful AI research.)

It’s not at all clear to me what you’re looking for here. Both the breadth and the depth of any corner of this topic are overwhelming. As DSeid points out, ART neural networks are pretty neat (I haven’t followed his link, but being somewhat familiar with his interests, I’m guessing the paper concerns ART). (Warning: lots of Amazon.com links follow.) I should think you’d find anything by Douglas Hofstadter very interesting. That last link is relatively obscure and concerns his attempts to program a computer to work with metaphor. Along similar lines (in that it deals with metaphor and concept formation), but broader-reaching in the area of philosophy, you might enjoy Lakoff and Johnson’s Philosophy in the Flesh. Or, if you want a cursory overview of a broad range of topics and techniques in AI, you’d do well to go with Russell and Norvig’s AI: A Modern Approach textbook.