The difficult part of that is building the robot, though, rather than producing the behaviour.
Not that we’re there on the behaviour side, but we do have deep learning algorithms that can produce constructible, effective web designs, for example. I don’t think we’re decades away from the cognitive part.
It’s not the vagueness of this definition, it’s the triviality that’s the issue.
Arguably AI already models other people and animals. Certainly there is AI that can succeed in games where guessing what other players know, and giving them false clues to throw them off, are necessary for optimal play. Whether it is internally modelling other agents is hard to say – generally deep learning algorithms are hard to reverse engineer once trained. But something equivalent to that must be happening.
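To make that concrete, here is a minimal sketch of the kind of belief updating such an agent has to perform, implicitly or otherwise. The game, the hand categories and the probabilities are all invented for illustration; real systems learn something like this end-to-end rather than having it hand-coded:

```python
# Minimal sketch: Bayesian opponent modelling in a hidden-information game.
# All names and probabilities here are illustrative, not from any real system.

# Prior belief over the opponent's hidden hand strength.
belief = {"weak": 0.5, "medium": 0.3, "strong": 0.2}

# Assumed likelihood of observing a big raise, given each hand strength.
likelihood_of_raise = {"weak": 0.4, "medium": 0.2, "strong": 0.7}

def update_belief(belief, likelihood):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    posterior = {h: belief[h] * likelihood[h] for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# After seeing the opponent raise, revise the model of what they hold.
belief = update_belief(belief, likelihood_of_raise)
print(belief)  # strong hands become more probable after a raise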
I don’t think such algorithms are conscious, though, because they don’t encapsulate many of the other, more difficult-to-understand aspects of consciousness.
There is a parallel in the first commercial computers. When you pressed start on an IBM 704 it selected the card reader, read 24 cards and transferred control to address 0000. That was the extent of its ability. The 24 cards had to create a program that would load another program. That program would load a program that loads programs. Then you could load a program.
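For the curious, here is a toy sketch of that chain in Python; the stage names are invented and this is obviously nothing like real 704 card code, but the hand-off structure is the same:

```python
# Toy illustration of chained bootstrapping (not actual IBM 704 card code;
# the stage names are invented). Each stage knows only enough to load and
# hand control to the next, more capable stage.

def stage1_card_loader():
    """The bootstrap cards: just enough code to load stage 2."""
    print("stage 1: reading the loader program from cards")
    return stage2_program_loader

def stage2_program_loader():
    """Loaded by stage 1: knows how to load a general-purpose loader."""
    print("stage 2: loading the general-purpose loader")
    return stage3_general_loader

def stage3_general_loader():
    """The loader that can finally load an arbitrary user program."""
    print("stage 3: loading the user's program")
    return lambda: print("user program running")

# Press start: control passes down the chain, each stage handing off
# to the one it just loaded, until the user program itself runs.
stage = stage1_card_loader
while callable(stage):
    stage = stage()
```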
The computer reached out, learned to learn, then learned. The brain is a self-organizing system. It is a pattern-matching system, not a logic engine. Like the cards in the computer, the brain contains only the patterns it gains from experience. Like the computer, it must stimulate its environment and store the result. An infant stimulates its muscles and watches its fingers move. When I make a drawing I am doing the same thing. My brain is stimulating the environment and observing the result. My ability to do this results from the patterns I have experienced over my lifetime.
What is the intent that drives the system? I believe it is best termed ‘desire’, an insatiable sense of need. The infant has no knowledge, but it senses need and feeds that need with information.
So, my point above is that the brain is not observing in order to learn. It is actively projecting in order to create experiential patterns. The engine we are born with equates to the microcode that defines the instruction set in a computer. There is a mechanism for acquiring, storing and retrieving information. When the start button is pushed it begins building a brain.
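Here is a minimal sketch of that loop, with the environment model and actions purely invented: the system stimulates its environment, observes the result, and stores the pattern, so that what it ‘knows’ is nothing but accumulated action/outcome pairs:

```python
import random

# Toy sketch of the "stimulate the environment, store the result" loop.
# The environment model and the actions are invented for illustration.

def environment(action):
    """A stand-in world: responds to an action with a noisy outcome."""
    return action * 2 + random.uniform(-0.1, 0.1)

memory = {}  # the "brain": nothing but stored action -> outcome patterns

# Like an infant flexing muscles and watching fingers: act, observe, store.
for _ in range(100):
    action = random.choice([1, 2, 3, 4])
    outcome = environment(action)
    memory.setdefault(action, []).append(outcome)

# Prediction is just retrieval of the stored pattern, not logic.
def predict(action):
    outcomes = memory.get(action, [0.0])
    return sum(outcomes) / len(outcomes)

print(predict(3))  # roughly 6: learned from experience, not built in
```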
This is a problem, and may not be completely solvable. We can never really be sure that other people, or other animals, or even robots experience qualia in the same way that we do. However, I am confident that we will eventually achieve some sort of ‘qualia-engineering technology’ where we can use, manipulate and manufacture qualia at will, even if we don’t fully understand them.
My thought experiment is this: we know that the human brain is constructed of two hemispheres, which are connected by a very-high-bandwidth connection known as the corpus callosum. The corpus callosum allows these two hemispheres to act almost seamlessly as a single individual.
We also have examples where the corpus callosum is damaged, and this separates the brain into two entities which have only limited and degraded links between them, and the individual displays unusual dichotomies in behavior between the two hemispheres. My thought experiment is that it may (and probably will) become possible to repair or reconnect the two hemispheres of a split-brain patient using artificial methods.
Once that level of connectivity is possible, it is just a conceptual step to joining two or more individual human connectomes together - and in theory this would allow us to experience each other’s qualia directly. By experiencing another’s point of view directly, we will be able to detect and differentiate between the different ways various individuals perceive the world. We will be able to directly feel other people’s pain, see their impressions of colours, experience their proprioception, and so on. There is some evidence that certain (but not all) conjoined twins share sensations, if not the same qualia. But this method would allow our universe of experience to be potentially shared with other individuals.
One probable difficulty is that we may all have completely different experiences of the world; at least, in a split-brain patient, both halves of the cortex have developed from the same genetic template, so they may be expected to be moderately compatible with each other. We all construct a new biological ‘operating system’ from the von Neumann replicating plan encoded in our zygote form, and these biological-OS systems may be widely different from person to person, and mutually unintelligible.
But I think this is just another problem for information technology to solve, and eventually we will all be sharing qualia and various levels of consciousness with each other, and with the machines that we construct. This will not really solve the problem of qualia entirely, but it will demonstrate that we all share the capacity to experience qualia, and also indicate the extent to which these experiences are different.
Who knows; maybe some people are true p-zombies, and don’t experience qualia. That would be interesting.
It’s obviously fine to talk that way metaphorically, but if you take it too literally, it quickly becomes nonsensical. If we observe the world by means of observing an internal screen that (imperfectly) represents it, then how does observing that internal screen work?
That’s why there are theories of mind that don’t feature any sort of internal representations at all, like enactivism, where cognition is a part of the organism’s action within the world, or Dennett’s intentional stance, where having mental content is simply a useful approximation to explain an entity’s behavior.
This also doesn’t really make sense, if you drill down a bit. For whose benefit are these details ‘filled in’? Why would the brain go through the trouble of telling itself a story it must already know in order to tell it?
A parallel would be if, in order to perform a computation, a computer were to construct a computer to perform the computation for it; which then, to perform the computation, constructs another computer… And so on. The point is that homuncular theories of ‘internal screens’ or some functional equivalent attempt to explain a capacity by appealing to that same capacity—say, explain viewing a certain scene by constructing an internal representation of it to be viewed in turn.
I agree with the rest of your post but arguably this part goes a bit far.
It’s not so much that the brain is adding details to a story it already has; it’s that it needs to extrapolate and embellish to make a coherent story in the first place.
The simplest examples include things like our vision, where the brain has to infer movement from a set of static images, and indeed construct a simultaneous snapshot of the world, when our eyes only take in a little of the world at a time.
(Aside: I once experienced tunnel vision. And it was not that I only saw a circle of vision surrounded by black. In fact, I could still see everything… but everything in my peripheral vision was static. If I looked at myself in the mirror doing “jazz hands”, I could only see one hand moving at a time: the one I was looking at. So my brain was “filling in” part of my view from memory.)
All our perception works like this. The brain must not merely anticipate the future but also infer the present.
But a lot of what seems like filling in really isn’t. Dennett makes the point regarding the ‘color phi’ phenomenon: two points separated by a certain distance flashing in succession seem, to us, like a single point moving. Fine: the brain is ‘extrapolating and embellishing’ movement where there is none, because typically, things have spatiotemporal continuity, so this sort of ‘best guess’ makes sense as a completion of the incomplete information we have received—it’s a safe-bet heuristic.
But when the second dot changes color, what we see is a single dot moving and appearing to change color midway through. This isn’t something the brain can anticipate and fill in: the information isn’t yet present when the color change is ‘extrapolated’. What happens here is not that we represent, in the brain, a blob changing color; we just represent that there is a blob of changing color—in the same way the previous sentence does so, without being anything like a representation of a blob of changing color.
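For concreteness, the color-phi stimulus itself is trivial to write down; in this sketch (timings, positions and colors all invented) notice that nothing whatsoever exists between the two flashes, so there is simply no input from which intermediate positions or colors could be filled in:

```python
# The color-phi stimulus as discrete frames (all parameters invented).
# There is literally nothing between the two dots in the input: no
# intermediate positions, no intermediate colors. The perceived motion
# and mid-path color change are the brain's best-guess completion.

frames = [
    {"t_ms": 0,   "x": 0,    "color": "red"},   # first dot flashes
    {"t_ms": 150, "x": None, "color": None},    # blank interval
    {"t_ms": 300, "x": 100,  "color": "green"}, # second dot, new color
]

for frame in frames:
    print(frame)
```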
It’s the same with blind spots: you don’t see a discontinuity in your field of vision simply because there’s nothing there that represents such a discontinuity. Or with one of those ‘10 differences’ images: they look alike not because, somehow, you build up an internal representation of both as being the same, but because you don’t have any representation of their differences (before you spot them). There’s simply no need for anything beyond that: no internally edited picture (or movie) to be shown to some internal audience.
I’m not sure if you are disagreeing with me? Your response contains “but” a couple of times, but I would have no dispute with the brain performing different kinds of representation, nor with the brain not always, in every context, attempting to fill in data.
The point remains that the brain does indeed frequently extrapolate and infer, and contrary to post #105, this absolutely makes sense.
That post doesn’t argue against the brain extrapolating, but against ‘filling in’ some particular kind of representation with self-generated data to ‘paper over’ any holes you’d otherwise perceive. That story is erroneous.
Hmm; anyone who has studied the history of UFO/UAP sightings (and the current drone flap) will know that the human mind fills in gaps in experience with plenty of spurious data in everyday life.
Bandwidth. Your eyes can’t look at everything all at once; your brain forms a representation of the world that seems like it’s a complete picture.
Some of what it fills in is based on fragments seen in previous moments, some of it is just made up. That’s how we can be looking for our keys and not see them when they are in plain sight. We’re not looking at the world directly; we’re looking at a collage that exists in our heads.
In fact, bandwidth—or cognitive capacity—would rather be a reason that none of this happens: it would consume mental effort in creating a fake representation when it could just not do that at all. Again, the two pictures with 10 differences don’t initially look the same because the brain creates some spurious internal representation to then compare, but because the differences are not (yet) represented. It’s not that anything missing is filled in, it’s that it’s never represented as missing in the first place.
I have pondered heavily on why I know who I am, and my awareness as Linda So-and-so (my real name), and what is within me that is programmed to be this Linda person… and not someone else. Are all of us just very complicated results of zillions of years of evolution, concentrated into one person, because it is like in cooking, when you boil things down into the flavor you want? (I know that sounds simpleminded; I am just of above-average intelligence.) I am so unable to express my meaning here… but I will read this thread with great interest. I have thought about this since I was quite young, and it would just make my brain spin.
Either way, we’re not just looking out of a window at actual reality. We’re perceiving a representation of the world that is curated according to the limitations, optimisations and whatever else of the perceptual system.
Indeed. I’m not sure that we could ever claim to have access to actual reality, only a representation of it, which may be extremely different from the real base reality. But that is all we can ever hope to get. So long as our representation is useful, it doesn’t really matter if it is accurate.
OK, just because I think it’s really an important issue (see the avatar), I’ll be a stickler one more time and then I swear I’ll shut up about it: you don’t perceive a representation; the representation is the means by which you perceive the world. There can’t be any further perceiving of the representation going on without the whole thing lapsing into incoherent circularity. I disagree with Dennett philosophically more than I agree, but I think he was right on track in decrying the perniciousness of such ‘Cartesian theater’ imagery as regards the philosophy of mind.
The computer mechanism has none of the abilities of the program. It provides the adder, structure and storage. In order to function, a computer must populate its memory with data.
An animal embryo has the unpopulated structure of a brain. It populates itself by recursively creating experiences of incrementally increasing complexity, something you and I have done.
The unpopulated brain structure is built by RNA molecules following a DNA plan. This looks like intelligence to me, but I am told that it is just chemistry. That breaks the link to the homunculus fallacy. It does not require a brain to build one.
Because our perception is not reality. It can’t be. Your brain doesn’t see the light hitting your eyes. The light strikes the retina, which converts the signals into nerve pulses. Those nerve signals are what the brain receives to create the image you perceive.
Projected on a screen? A constructed reality that is then perceived? Clumsy wording, but I grasp your objection. The brain constructs an interpretation. That interpretation is the brain perceiving and processing the sensory signals.
But I have visual imagery that doesn’t come from reality. Some may be memories of past images, but it’s put together in new ways that don’t reflect anything I’ve ever seen. Dreams do this regularly. Imagination creates visual impressions of characters and settings I’ve never seen from the written word, i.e. stories. Those aren’t as clear as fresh data from my eyes, but they can be as sharp as memories.
Those representations occur in some manner. The process of thinking them may be visualizing them directly rather than some projection that is then perceived by my inner self, but either way, my inner self is accessing them.
Does that accurately represent your concern with the projection terminology?
That’s actually a fair representation. Evolution has shaped the genetic code that creates the biology that makes each of us. Somewhere in that process, an identity emerges. Each of us appears to have a unique identity. Why and how that is so is the zillion-dollar question.
We can think about brain wiring as being a unique construct that arises from specific experiences mapping onto a specific neural configuration. Neural network theory with computers gives us some insight into that process. But nothing really explains unique identity yet.
Thinking of cooking a dish is a reasonable metaphor. The recipe may be the same, but each time you create a dish, the individual differences in small quantities of ingredients or cooking time or heat can affect the outcome of the dish to be different.
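To push the neural-network point a little further, here is a toy sketch, with every detail invented: two networks with the same architecture, fed the same data, but started from different random weights, end up with visibly different internal wiring. Same recipe, cooked twice, two different dishes:

```python
import math
import random

# Toy sketch (all details invented): the same tiny network architecture,
# trained on the same data, but from different random starting weights,
# ends up with different internal wiring even as its outward behaviour
# converges. A loose analogue of one recipe yielding distinct individuals.

data = [(x / 10.0, math.sin(2 * x / 10.0)) for x in range(-10, 11)]

def train(seed, steps=5000, lr=0.1):
    rng = random.Random(seed)
    # 1 input -> 3 hidden tanh units -> 1 linear output (no biases)
    w1 = [rng.uniform(-1, 1) for _ in range(3)]
    w2 = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(steps):
        x, y = data[rng.randrange(len(data))]
        h = [math.tanh(w * x) for w in w1]
        pred = sum(wo * hi for wo, hi in zip(w2, h))
        err = pred - y
        for i in range(3):
            grad_w1 = err * w2[i] * (1 - h[i] ** 2) * x
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * grad_w1
    return w1, w2

print(train(seed=1)[0])  # hidden weights from one "individual"...
print(train(seed=2)[0])  # ...differ from another's: same recipe, same diet
```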
I think you both mean the same thing to be honest, and the same is true for me.
I think we all mean: “what we call perception, what we experience as perception, is the brain’s representation of the world”
I was reluctant to give Dennett even this one, as it strikes me not so much as a misconception that needs correction as just people speaking sloppily because of the imprecision of the English language.
But…his point is correct, and I just discovered that he passed away this year. So…good point, Daniel, RIP.