Is Computer Self-Awareness Possible?

No. Text is a placeholder for some portion of our internal model, which is in turn a placeholder for our interactions with the real world.

Do you really think we can have text that just skips that chain of reference and goes straight to the real world?

They are only the same if you drop out their differences.

If I move my arm, will I receive text in the 3 languages that represent the arm’s interaction with the real world?

If not, then you can’t say they are the same.

You imply that model building relies on understanding - I would say understanding is the RESULT of successful model building.

And success is defined by your ability to make predictions about interacting with the real world that generally match actual interactions with the real world.

There exists a mapping T(x) = t that associates a set of events x with a string of text t in some language – the description of those events. The same is true for sensory inputs.
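A minimal sketch of the parallel being claimed here, with the event names, strings and numbers invented purely for illustration: both T (events to text) and a sensory mapping are just encodings of events into symbols, and neither code carries its own key.

    # Hypothetical illustration only; the events, strings and numbers are made up.
    # Both mappings turn events into uninterpreted symbols for the receiver.

    def T(event):
        """Map an event to a string of text in some language."""
        text_code = {
            "arm_raised": "asjgsh kjljadlk",     # invented string in an unknown language
            "arm_lowered": "ADajhh LAKADJ",
        }
        return text_code[event]

    def S(event):
        """Map the same event to a 'sensory' signal (an arbitrary numeric code)."""
        sensor_code = {
            "arm_raised": (0.91, 0.12, 0.40),    # invented activation pattern
            "arm_lowered": (0.05, 0.88, 0.33),
        }
        return sensor_code[event]

    # From the receiver's point of view, both arrive as uninterpreted symbols:
    for event in ("arm_raised", "arm_lowered"):
        print(event, "->", T(event), "|", S(event))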

Well, not in those precise languages, and not in three of them, but that’s the general idea, yes – different sensors pick up on different changes of the arm’s state, code that into an electrochemical signal (where the signal s and the state of the arm a are connected by a mapping S(a) = s), and relay it, via your neural pathways, to the brain.

And if you can, using these messages, successfully build up a model of the outside world, then somebody in a room, being given text in unknown languages, can do so just as well; and then, he can find out how the text relates to this model – and thereby, understand it.

This is what your languages A, B and C are doing?

If your languages A, B and C are identical to sensory input and map the real world, then of course we can build a model from your 3 languages.

You seemed to be implying that they weren’t arriving exactly like sensory input, but rather as a conversation of some sort that we were listening in on, and whose initial meaning we had to determine in terms of our own language or internal model.

“Being given text” is very different from interaction in which the person initiates actions and gets feedback.

A sequence of text that just arrives, with no relation to anything – including his arm movements, etc. – can’t be deciphered.

Interaction is key.

You’re making differences where none exist, and getting tangled up in the resulting complications. So, how do you make a model out of my three languages? And why can’t the guy in the Chinese room do the same thing?

And what’s the difference between being given text and being given sensory input, exactly? Of course, you can send out probes, and have some different input come back; same as the guy in the room, he can (if you think this is important) send out strings of symbols, and get other strings of symbols back. Do you think he can now learn the language(s) he is presented with?

@Chronos, who said “Of course it’s possible. There’s nothing a meat brain can do that’s inherently impossible for a silicon brain.” That is a statement of faith. Quantum effects may be at play, various near- and possibly far-field effects, possible hologram-like data storage and handling, so many synergistic combinations of discrete and non-discrete processes – and however many things we’ve not yet imagined or stumbled onto – the fact is, you don’t know this; nobody does yet. Further, there is no evidence that our brains operate as a Turing machine, nor that a Turing machine can operate as our brains do. I’m not saying that there is no chance that you will turn out to be right, merely that there’s not a hint of “of course” that’s warranted at this stage.

It’s unfortunate that “what is self-awareness” is such a huge problem, and that the proof of the existence of self-awareness is not philosophically tenable, let alone practicable. But this doesn’t have to stop us. OP, I think we get the idea of the question, I do. We can stipulate, as an imperfect field expedient, “that the simulated mind is as self aware as it is convinced that it is self aware in the sense that we recognize ourselves to be” and also, if you like, “to the extent that it can convince others in the sense that we are generally convinced by others that they are self aware”.

Currently, what you suggest is impossible. And, all in all, it’s not yet possible to determine for sure whether or not it’s going to be possible to do what you suggest. There’s little or no evidence that suggests it will become possible, but many scientists have faith in the notion that it will. There’s little or no evidence to suggest it can never become possible, but many have equal faith that it never will. It’s all pure faith among the “certain” of both sides. Sometimes we just have to admit that these are very, very early days for science. At some point, one presumes, either someone will prove that it can’t ever be done along theoretical lines (until which time it’s sort of provisionally possible in a weak sense) or they will do it in the lab and get it working.

Until we have radically increased our knowledge in this field, Dissonance’s “Divine Spark” notion and terminology is just as good or bad as anything we so far have on tap scientifically.

Similarities:

  1. They are both input
  2. They both represent something (they aren’t random)

Differences:

  1. Text is a placeholder that refers to information gathered and stored through sensory input
    The foundation of our language is a model of the world around us based on sensory input and interaction with our environment. Language is built on top of other information. We don’t start with TEXT and then, after we’ve built a model of the world with TEXT, learn how sensory information maps to that TEXT-based model.
    This is a critical section:
    When we build a model based on sensory input, we don’t need to go any further than that for it to be useful. The model based on our interaction with the environment and sensory responses is valuable information. This information allows us to accomplish something and that is to navigate our way through our world. “Meaning” and “understanding” are simply this ability to successfully navigate.

When we learn a language, say our primary language, we are learning how spoken and written words map to this model of the world built on sensory interaction. “Meaning” and “understanding” from the perspective of the language relates to the accuracy with which we map spoken and written words to our previously built model.

When we try to learn a new language (A, B or C), the success with which we can say we “understand” the language is the extent to which it also maps into our model, just like our primary language.

Because language and TEXT, as we use them, refer to our internal model of the world, if there is no method of connecting them – either to our primary language, or, through trial and error and feedback, to our internal model – then the new language cannot be “understood” (meaning it cannot be mapped to our internal model).

Summary:
We start with a somewhat blank slate, but with a mechanism for building/storing an internal model based on sensory input.
Interaction with the world and its feedback in the form of sensory input allow us to build out this model.
“Text” contains words that refer to portions of the internal model.
“Understanding” the “text” relates to mapping the text to the internal model.
If the mechanism to discover the mapping from “text” to the EXISTING internal model does not exist, then the “text” cannot be “understood”.
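A toy sketch of that summary, with the world dynamics, action names and vocabulary all invented for illustration: an agent first builds a predictive model from interaction, and a word counts as “understood” only if a mapping from it into that existing model can be found.

    # Toy illustration of the summary above; the world, actions and words
    # are all invented. 'Understanding' a word here just means finding a
    # mapping from the word to something in the previously built model.

    import random

    def world(position, action):
        """Hidden real world: acting changes position, which changes what is sensed."""
        return position + (1 if action == "step_forward" else -1)

    # 1. Build an internal model by acting and observing the sensory feedback.
    model = {}                      # (position, action) -> observed next position
    position = 0
    for _ in range(50):
        action = random.choice(["step_forward", "step_back"])
        new_position = world(position, action)
        model[(position, action)] = new_position
        position = new_position

    # 2. 'Understanding' text = mapping words onto the existing model.
    word_map = {"ghalok": "step_forward"}   # learned by pairing the word with experience

    def understand(word):
        return word_map.get(word, None)     # None: no route into the model

    print(understand("ghalok"))    # maps to a modelled action -> 'understood'
    print(understand("xyrthgd"))   # no mapping available -> not 'understood'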

The only way that “text” and sensory input can be considered the same is if the text acted exactly like sensory input based on our interaction with the real world.

Because “text” is based on and refers to our internal model, the only way “text” can be understood is if there is some mechanism for testing/learning the mapping from “text” to internal model.

You simply can’t ignore the fact that “text” is based on our internal model and then say “they are the same because they are both inputs and they both represent something.” That is such a gross simplification that it would be like me saying an apple and a car are the same thing because they both involve atoms.

Text isn’t based on any internal model. If that were the case, for instance, computers couldn’t generate text, and neither could the participant in the Chinese room experiment. Text is a code with some content, exactly as sensory experiences are.

Your usage of the term “text” seemed to coincide with your usage of “language A, B and C”, which led me to believe “text” was the written representation of one of those 3 languages. Is this not what you meant when you posted about “understanding” those languages?

If you remember, you gave an example of one of the languages kind of like “asjgsh kjljadlk ADajhhLAKADJ” which I assumed you meant was a language, like human languages, in which the words refer to things we have experienced, like “dog”.

Did I misunderstand you? When you said “understand” language A, did you not intend the language to be similar to something like English?

That looks right to me, but I still don’t see how this is supposed to depend on any internal ‘model’. It’s just a symbolic codification of a certain state of affairs. It’s ‘caused by’ this state of affairs in the same sense that a sensory input is ‘caused by’ the external world.

In any case, what’s relevant is that in both cases, we have a string of symbols that stand for something else, and it is the task of the recipient to figure out what; the genesis of the string of symbols is of no significance.

It would be, for instance, also perfectly legitimate to call an apple similar to a car based on their both being made out of atoms when that is the defining characteristic relative to the property that is under scrutiny – if we want to determine whether or not something has mass, and being comprised of atoms is the necessary requirement for this, then an apple and a car are indeed sufficiently ‘the same thing’. You couldn’t argue, for instance, that the apple is also a fruit, and hence might not have mass – it’s a distinction, but an irrelevant one.

Let’s get back to basics: the man in the room is given, through various conduits, slips on which text in several – to him – unknown languages is written. He can himself write text on slips – i.e. copy the requisite symbols – and supply it to the outside world, and observe the reaction, presented to him again in the form of text on slips. He thus can build up connections between text bits, and observe causal relations – supplying string ‘x’ to the outside yields string ‘y’, string ‘z’ is correlated with string ‘a’, and so on. Over time, it is entirely probable that the man can acquire the same proficiency at ‘communicating’ with the outside world as the Chinese room has – yet I do not think that you would believe that he has any kind of understanding of what he is saying, and what is being said to him.
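A rough sketch of what the man’s accumulated proficiency would amount to on this description, with all the strings invented: he records only which slips have followed which, and can produce competent-looking replies without any key to what the symbols stand for.

    # Rough sketch of the man-in-the-room's record keeping; all strings invented.
    # He stores only observed correlations between slips, never their meanings.

    from collections import defaultdict, Counter

    followed_by = defaultdict(Counter)   # string -> counts of strings seen to follow it

    exchanges = [
        ("gha lok", "mip"),              # supplying 'gha lok' was followed by 'mip'
        ("gha lok", "mip"),
        ("tzur", "velk ana"),
    ]
    for earlier, later in exchanges:
        followed_by[earlier][later] += 1

    def respond(incoming):
        """Reply with whatever has most often followed this string so far."""
        if incoming in followed_by:
            return followed_by[incoming].most_common(1)[0][0]
        return "???"                     # no recorded correlation yet

    print(respond("gha lok"))            # 'mip' - competent-looking, but meaning-free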

This is a completely general analogy to the self interacting with the outside world. In the place of strings of text in an unknown language, it is supplied with sensory data in an unknown code. It, too, can send out symbols – neural activation patterns – to the outside, and receive feedback, in the form of, again, coded sensory data. So it can build up the same connections the man in the room can, and only those. Again, over time, the self will be able to hold a reasonable ‘conversation’ with the outside world, which includes, for instance, moving the body around in some manner appropriate to the circumstances as well as taking part in a conversation.

Both situations are exactly analogous along the relevant dimensions (which again means mainly that in both cases, information is exchanged in the form of strings of symbols equipped with some unknown mapping – to what, exactly, is of no importance, as in both cases, the code is equally hard to crack). Yet in the latter, you still seem to believe in some transparency the self can exploit to sneak a glimpse at how the symbols are connected to their ‘meaning’ – by variously storing them to create a ‘model’ from them, or being given some special symbol string that wears its meaning on its sleeve, so to speak, such as a painful stimulus. But the model that can be created is just a huge tangle of symbol strings and interrelations, which is just as available in the man-in-the-room case; and special symbols don’t exist, even though to us the process of decoding the interactions with the outer world is so effortless as to not be noticed at all (but think of Magritte!).

The difference between the two scenarios is how “understanding” is judged.

In the case of sensory input, “understanding” is judged by us humans as to the effectiveness of the mapping to allow the human to navigate through the world. When we say “understand” in this context, this is what is meant, that is it and nothing more.
In the case of the man that successfully determines which output should follow which input, we humans judge “understanding” based on what the text refers to. When we bring the man outside and say “ok, now point to a xyrthgd” and he doesn’t point to a tree, then we humans say he doesn’t “understand” because, in this case, we expect “understanding” to mean that he has accurately mapped the words to his internal model built up from experience, which in turn allows him to point to a tree with his arm. The term “understanding” means something different in this context. We agree on a word mapping to something substantially similar in each of our brains, and if the person doesn’t have that same mapping, then we say he doesn’t “understand”.
Note #1: Please keep in mind that I am going along with the discussion of the man here even though you have altered the original setup of the Chinese room. The original setup was whether there could be computation in the room that understood Chinese and I say “yes”, but how that system was created is an entirely different story. You have been discussing whether text-based interaction is enough to “understand” the content of that text even when the basis for the text is an internal model built on sensory input. That is very different from the Chinese room thought experiment.

Note #2: You said: “you still seem to believe in some transparency the self can exploit to sneak a glimpse at how the symbols are connected to their ‘meaning’”.

I have been repeatedly clear that there is no special meaning in the sensory input. In fact, it was you that initially mentioned the “real” meaning of the tree and I pointed out that it doesn’t make any sense to talk about that.

But this completely fails to provide for awareness – if understanding were defined in this way, then an unconscious automaton possesses understanding, as well. Then the Chinese room possesses understanding in this sense.

This isn’t something I would call ‘understanding’ at all – it’s simple stimulus-reaction mapping. I think we agree that this doesn’t subsume a human being’s mental content. We also understand – or think we do, at any rate – what the stimulus is, what it is about; when we experience pain, we understand that it is pain that we experience, we don’t have the sensation of some stimulus followed by some reaction, where some reaction here would amount to a cry of pain, and recoiling from the stimulus, perhaps. We have the sensation of feeling pain – the equivalent of knowing the meaning of a sentence we are presented with, rather than just the symbols it is coded in.

Perhaps feeling pain is too abstract an example. Consider witnessing a scene: a child plays ball in the street, and the ball rolls onto the road. The child runs blindly after it, and you see a car approaching. You yell: “Watch out!”

Now, this is explicable on a purely syntactic level, i.e. without you understanding what happens. You react to the stimuli you receive according to your programming. If your kind of ‘understanding’ were all there is, then this would be the complete picture.

But this isn’t a picture of our inner experience of what happens. Rather, we understand what’s going on – and this understanding is the same ‘kind’ of understanding the man in the room, well, lacks. You can see this by realizing that it can be turned into a bit of text for you to understand, a narrative – as I just have done. The scene being witnessed corresponds to a text being given to the man from the outside; the cry ‘Watch out!’ corresponds to the symbols he produces in response. Only if he understands the language, however, does he also get to know what happens; and only if the self understands the sensory input in the same way, does it get to know what happens. We doubtlessly do know what happens – but this knowledge is the same kind of knowledge the man in the room acquires only if he is able to translate the texts he gets.

In any case, I guess it’s already progress that you agree that understanding in the same sense that we would ask of the man if he were to claim he understood the sentences he is presented with is impossible in this case.

And yet, you referred to ‘pain’ as apparently being a special sort of ‘don’t do that’ input, by which the self can learn which interactions are appropriate towards the outside world.

Stimulus-response is what a jellyfish does.

We certainly have some of that in us, but the “understanding” I am referring to, as I have said multiple times, relates to our ability to accurately predict our interactions with the environment and simulate alternatives regarding the world around us. We do this using our built-in capabilities, coupled with our model built up over the years due to interaction with the environment.

Whether that is enough for “awareness” or even “understanding” from your perspective, I don’t know, but from my perspective I think it certainly represents “understanding” - “awareness” is a separate question and I don’t even know my position on it.

Maybe, maybe not. You are assuming more than I would assume, but I wouldn’t argue there is no way you could be correct - from my perspective the answer is not very knowable so I try to stick with what I think can be supported.

I certainly agree there is an impression of the sensory data being more than what I have described, but I’m not convinced that we can be accurate with our introspection, because those “feelings” influence every view we can get of the low-level operation. This is the exact same way in which you argue that we don’t actually draw a picture in our mind.

It was my position all along.

  1. Text mapping alone doesn’t = understanding
  2. Some form of computation does = understanding
  3. The “system” that understands Chinese will, most likely, have been built in the same way humans learn Chinese: interacting with the environment, modeling, and then learning a language that maps to the model

You assumed that meaning.

The input wasn’t special, what is special in the case of pain is that we have some internal mechanisms that are built in that help us avoid pain - this is very much in the stimulus-response arena, but like everything in our messy brain, this built-in reaction is also input to other processing that can override it, etc.

In a simple form, yes. But in a more refined version, it’s also what a computer does (‘input-output’), and all that’s necessary to generate responses based on data, sensory input, or what have you. It’s sufficient to enable a human to navigate through the world, but – which is very much the point of the Chinese room argument – appears insufficient to enable him to know what he’s doing, or what’s in the world, as well.

But in order to ‘simulate’ alternatives, first you’d have to know the state of the world as it is, no? And without an understanding of sensory input equivalent to the understanding of a sentence, or a text, or any general message, there’s no way to come to know about this state.

In fact, perhaps ‘message’ provides a more intuitive picture. The man in the room receives messages, the same way the self receives messages. Both messages have content: some conversational content, perhaps, in a case like the Chinese room, or content concerning the state of the world, in the case of the self receiving data from the senses.

In both cases, appropriate reactions to those messages appear in principle possible without knowing the messages’ content. This is the case of the Chinese room, generating responses based on a set of rules; or your picture of ‘understanding’ as judged by an agent’s capacity to navigate the world, react appropriately to certain stimuli, in other words.

But this isn’t the way things appear to us – we certainly do know the content of the messages we receive: we do know that there is a kid running into the street after his ball, and we do know there is a car approaching (or at least, things seem that way to us). Whether or not that knowledge is instrumental to our yelling a warning is of no importance; nevertheless, the existence of this knowledge demands an explanation. The model of understanding you provide fails to account for this, as in this case, we would be like the man in the Chinese room, shunting unknown symbols around. But if you agree that the kind of understanding necessary to access the content, rather than the mere form, of the messages is impossible to achieve in the representational ‘self getting data’ picture, then it seems that the model does not provide an adequate account of how we interact with the world.

You did not seem to like this much when I first brought it up, though – but in any case, this is kinda sorta what I was driving at. My actual belief (as I think I’ve already hinted at) is that both the self and the (appearance of) understanding (or modelling) emerge in a sense as two sides of the same ‘illusion’ (where one must not confuse illusion with irreality: all that counts in the mind are appearances after all, so that which merely appears to be and that which is are impossible to tell apart), in the sense that understanding/the existence of a model implies the existence of someone (or something) who understands, or who ‘views’ the model, and conversely, the existence of conscious states implies that there is something these states are about.

I disagree that you can say “it’s also what a computer does”.

Stimulus-response is a reaction to input that doesn’t allow for the higher level processing of interpreting the input based on model and state and then using that to predict the future and choose a course of action based on predictions.

I think a computer can do the higher-level processing, but a jellyfish can’t (as far as we know).

I can’t say it enough times:

  1. Yes there is a way to understand the world.
  2. It’s not the same as understanding text.

Understanding of the world means we attempted to make predictions in the past, tested them and adjusted our model based on the results. Each of us has done this countless times. We generally know how our sensory input will change due to our actions.

Text is something we humans made up. We made up the “understanding”, and it is based on and refers to our model of the world. We can’t establish that “understanding” of the text because there is nothing that allows us to test the connection to our model of the world.

We’ve added a level of indirection that makes the two examples different.

If you want to change the generally accepted definition of “understanding the text” to mean “understanding the text merely means understanding the relationship between the input of the text and the output of the text” then yes we understand it just like we understand the world.

You appeared to be saying that access to the “real” attributes of the tree was a requirement to “understand” the world, and I disagree with that, we understand it based on our successful interaction and that’s it.

Gotta go, can’t respond yet.

It’s precisely what a computer does – given an input, it maps it to an output (see the definitions I gave earlier). The input is the stimulus, the output the response.

That’s why I tried to introduce the ‘message-content’ metaphor as maybe a more immediate illustration. In both cases, there are messages that have a certain content; understanding, to me, is getting to know this content. So in your idea of understanding the world, does the self know the state of the world, i.e. that there is a kid running out into the street etc.? If so, it must be able to access the content of the messages it gets from the sensory organs. But, you seem to agree that it can’t. Yet it certainly (subjectively) seems to me that I do.

This isn’t relevant. Text is a particular kind of code, and coding is not something we made up. Coding is just an operation that, well, encodes something into a string of symbols; knowledge of the coding is necessary to access what it is that has been encoded. There’s no reference to human or human-derived concepts anywhere.

No. In both examples, you have a code that stands for something, a message with some content. In both examples, you need to know the coding scheme in order to access the content. In both examples, all you have available to you to discover the coding scheme are further examples of code, and certain relations between them (plus the ability to send out bits of ‘code’ yourself to gauge the reaction, which comes of course in the form of further code).

But understanding the relationship between sensory input and output does not equate to, or provide means of, getting to know the state of the world. If that were all your understanding consisted of, then you would never know about the child running into the road – rather, you would only know about symbols being met with appropriate other symbols, their meaning remaining wholly obscure to you.

No, I merely meant that in the outside world, we can point at a tree and go ‘tree’, to establish a new code – a verbal one – based on the code we already know, how to interpret the signals that arrive in our mind from our eyes and ears. That is how new codes are typically defined, with reference to old ones; this however isn’t a possibility available to the self in establishing the ‘ur-code’ that enables it to abstract knowledge about the state of the world from sensory data, since obviously, there you can’t rely on a pre-existing code.

I fully understand what you mean when you say it maps input to output. On one level of examination I would completely agree with you, and that is when you ignore how it arrives at its output.

However, for some reason, even from a debating standpoint, you seem unable to even consider or allow that there could be qualitative differences between the following two methods of computation:

  1. A jellyfish that contains little state and simply responds to external input according to a fairly hard-coded set of rules that can’t take into account much state, because it doesn’t have the mechanism for storing and manipulating it.

  2. A human or a computer that contains massive amounts of state, allowing that system to model the environment it finds itself in and make predictions that are far beyond the capabilities of the jellyfish (a rough sketch of this contrast follows below).

Normal discussions of stimulus-response (at least from my perspective) make a distinction between an immediate unthinking reaction and a more complex computed reaction (even if deterministic). If you want to say “but in both cases there is a stimulus and there is a response”, great, I can go along with that.

But don’t tell me that there is no computational difference - they aren’t the same thing and both styles of computation have pros and cons.
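A minimal sketch of the contrast being drawn, with the reflex table, the toy states and the outcomes all invented: the first agent maps each stimulus straight to a response with no stored state, while the second keeps a learned model and consults it to predict outcomes before acting.

    # Invented toy contrast between the two styles of computation described above.

    # 1. Stateless stimulus-response: a fixed table, no memory, no prediction.
    REFLEXES = {"touch": "contract", "light": "drift_up"}

    def jellyfish(stimulus):
        return REFLEXES.get(stimulus, "do_nothing")

    # 2. Stateful, model-based: remembers outcomes and predicts before acting.
    class ModelAgent:
        def __init__(self):
            self.model = {}                 # (state, action) -> observed outcome

        def learn(self, state, action, outcome):
            self.model[(state, action)] = outcome

        def act(self, state, candidate_actions):
            # Simulate the alternatives against the stored model; prefer a known-good outcome.
            for action in candidate_actions:
                if self.model.get((state, action)) == "good":
                    return action
            return candidate_actions[0]     # nothing known yet: just try something

    agent = ModelAgent()
    agent.learn("near_ledge", "step_forward", "bad")
    agent.learn("near_ledge", "step_back", "good")
    print(jellyfish("touch"))                                       # contract (pure reflex)
    print(agent.act("near_ledge", ["step_forward", "step_back"]))   # step_back (predicted)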

Of course we do.

The reason why we do is because we defined “kid” and “running” and “street” in a way that is consistent with our sensory interaction with the environment and as a group, we substantially refer to the same types of experiences when we use those words.

And again - you want to raise “understanding” to some level beyond that which I think is appropriate. That is the problem - you keep insisting there is some “deeper” meaning or understanding going on.

I don’t.

And you haven’t supplied any arguments to make me think there is.

You have tried to argue by analogy to understanding text, but that argument doesn’t work because our definition of “understanding text” implies that when we read a specific word, we access/refer to the storage of our sensory information that is associated with the same thing that the people that defined the text were referring to.

It may seem like it, but is there anything you can say that would argue for it other than a subjective feeling? I know what you mean, that the world feels real and my description of our interaction with it sounds very flat. I get that.

But what arguments can you provide that we do actually get some content that is beyond sensory interpretation?

Forget the text because it doesn’t argue anything about the senses or the statement you listed here. It’s an attempt at an analogy that you thought might expose a hole in my description, but it simply does not do that because I don’t believe in the hole you think you are exposing, I don’t think we are lacking something, I don’t see evidence for it.

You need to focus on the actual “stuff” that you think we are getting about the real world and show how my description creates problems for interacting with the world.

You don’t need to belabor the point about an abstract view of the situation.

I understood instantly what you meant in your first post regarding the analogy. I have thought the exact same type of thought about countless situations over the years and how you can simplify the intermediate operations and say the net effect is that they are the same.

The problem is that, despite the fact that I fully and completely understand the point you are trying to make, I disagree with a key aspect and I have explicitly stated why multiple times and you have seemed to gloss over my responses.

What do we mean when we say we “understand” sensory input?
We mean that we have learned how our interactions with the environment result in different sensory input and have built a model around this and can make accurate predictions.

What do we mean when we say we “understand” a language?
We mean that we have learned how that language maps to our internal world model which was built based on sensory input.

Again, I completely disagree and would ask you to show any evidence that this is correct.

So, again, forget the language analogy because it doesn’t show how there is additional data from the world that we have that isn’t just a sensory representation.
Summary:
I have the easy position - I think our sensory data is all we need to “understand” the world

You have the tough position - You think that’s not enough

Your challenge is to find some way in which my model is a problem when dealing with the real world. Don’t argue from analogy to text because it’s not working. If my model is wrong, where is there a contradiction, or something that humans do that isn’t possible if my view is correct?
Note: I am not emotionally tied to my view, I really am not, and I enjoy the debate and I would be excited if you did find an example of something that is a problem with a simple sensory model.

The simple reason for that is that mathematically, they are wholly equivalent. Well, a jellyfish may not be computationally universal, but any two systems that are perform the same kinds of computation – they have the same computational strength, no matter their internal states, primitive operations, programming, architecture, etc. The reason is that for any two computationally universal systems, each can implement the other, i.e. become functionally indistinguishable from it. This holds even if one is a Turing machine equivalent in complexity to a human brain, and the other is a table that contains inputs in one row, and outputs in the other (of course, the table would have to be infinite in principle, though for any concrete computation – any halting computation, at least – a finite subset of it suffices).
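A deliberately trivial sketch of that equivalence claim: the same input-output behaviour realised once as a procedure that actually computes, and once as a finite lookup table, indistinguishable on the covered domain (the function and the domain bound are arbitrary choices for the example).

    # Trivial illustration of the table-vs-procedure equivalence claim above.

    def computed(n):
        """'Procedural' implementation: actually carries out the computation."""
        return n * n + 1

    # Finite lookup table covering the inputs we happen to care about.
    TABLE = {n: n * n + 1 for n in range(1000)}

    def looked_up(n):
        """'Table' implementation: no computation, just retrieval."""
        return TABLE[n]

    # On the covered domain, the two are functionally indistinguishable.
    assert all(computed(n) == looked_up(n) for n in range(1000))
    print(computed(12), looked_up(12))    # 145 145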

No. I have clearly stated what I mean when I say ‘understanding’ – getting to know the content of a message. This is nothing mysterious. It’s exactly the same thing you do when you translate a sentence. It’s the same thing you do when you read these sentences, and know what is meant by them, rather than perceiving them as meaningless symbols. It’s simple, and clearly defined. This is the absolute least I could demand of the meaning of the word ‘understanding’ without doing violence to the concept.

I have tried to argue by analogy to understanding text because in that context it’s easy to define what understanding means. Yes, ultimately, contact needs to be made to our understanding of the world, but this level needs not be discussed at that point – if English is a language you understand, all I require of you to understand a bit of text is to be able to translate it to English. Find a mapping. Crack the code. This is the simple case.

The much harder case is the self understanding the sensory reports from the outside world. Even if it were simply the case of translating to a language the self speaks – ‘mentalese’ – it would be a challenge equivalent to cracking the code from being just given coded bits, equivalent to the challenge of translating into English what you are given in unknown languages.

But there is no mentalese. Or even if there were, at some point, the self must have acquired it somehow, just as you must have acquired English (which was only possible because your mind was already able to understand the data from the outside world). The problem thus gets harder – but paradoxically, you seem to be arguing that it gets simpler!

You seem to think that all these bits of code the self acquires – and fails to understand – can be assembled into some model; and in a sense, that’s true, just as much as it is true that one can acquire a collection of bits of language-code which one can use to determine an appropriate coded message to send out, in principle. But the understanding – which still has the simple meaning I gave it, of being able to decipher a message – is completely missing in this case. You could create a collection of data related to a particular structure and call it ‘tree’, but the word ‘tree’ would have no meaning to you – it would just be a string of letters, pure syntax, completely meaningless. You would not know what you mean when you say ‘tree’, just as the Chinese room proband does not know the meanings of the strings he arranges.

Even if there is ‘just’ the subjective feeling – which is the only option I consider viable – it still demands explanation! The problem doesn’t go away just by calling it subjective, as it is still a matter of fact that we feel as if it were real – and whether we only feel that way or it really is that way, any putative theory of the mind should be able to address it.

Well, all the argument I can provide is that we do actually know that there is a kid running out into the street – this is an objective fact (that the kid exists, not that we know of it, of course). If we merely had access to the ‘ununderstood’, uninterpreted sensory data, its syntax, rather than its semantics, we would not know that – we would not know anything beyond having received and as a response sent some symbols (arguably, we would not be conscious at all). We would be in the situation of the man in the Chinese room.

I’m not saying we’re lacking something (and I don’t quite see how you figure I did), I’m saying we have something – access to the content, rather than the form of the message – that the model doesn’t provide for!

How much more clearly can I state it? The stuff we get about the real world is a knowledge of objective facts, which can’t be transferred via just the form, just the syntax of the message, and requires access to the semantic content.

No (or at least that’s not what I mean). We mean that we have knowledge about the factual content of the outside world.

Again, I mean that I have knowledge about the factual content of the message (which may, for example, correspond to a description of the objective content of the outside world).

Well, you are claiming that from mere symbol-manipulations, the content of a message can be gleaned, which is kind of the more extraordinary claim here; my only claim is that such symbol manipulation does not suffice (which, again, you agree with in the Chinese room case).

Well, I think you just oversimplify things… I’m not choosing the ‘tough’ position just out of want for a challenge, in fact I used to hold your position, more or less, and thought that all those thought experiments, Mary and her qualia, Chinese rooms, nations, blockheads, bats, swampmen, zombies, and whatever else, were trivialities or overcomplicated ways to look at fundamentally simple issues – which is a common error when coming newly into a field and failing to appreciate its depth, like this guy.

I think I’ve done enough of that. I believe it’s on you to tell me how, in your model, I can ever come to know an objective fact about the world – say, that there’s a kid, running into the street.

If you are saying I am new to this issue or I fail to appreciate its depth, I will be polite and just say that you are making some assumptions.

I never said you could “know objective facts” in my model.

Why is it my job to try to argue for your completely unsupported claim within my model?

If you think we can “know objective facts” about the world, then you need to prove it, or at the very least make some sort of compelling argument that would be difficult to just dismiss. The only argument you have made is: “I think there are objective facts that we can know about the real world, therefore I am correct”.

If the knowledge of objective facts about the real world is required to successfully interact in some ways with the real world - what are those activities? And what would happen when we try to perform them if we only had my subjective sensory information and not the objective facts?

Would we not be able to bake a cake? Would our taxes come out incorrect? Give me something concrete I can at least consider instead of just asserting it.

You are right, that was uncalled for. I apologize. In my defence, both in this thread wrt Searle’s argument and in your earlier ‘Mary’ thread, it did seem to me that you expressed unfamiliarity with the arguments – which is of course nothing damnable, everybody encounters everything for the first time at some point.

But I still shouldn’t have said that.

Then I think your model fails. For one, it has to confront the basic problem of every theory that asserts the impossibility of knowledge in one way or the other: if it’s true that we can’t know anything objective, then on what grounds could one ever believe it? Because if true, it is certainly an objective fact of the world that we can’t know objective facts of the world; thus, any argument purporting to establish its truth (or falsity) ought to be regarded as spurious.

Well, truth be told, it is possibly a not entirely indefensible concept. We could certainly react appropriately, e.g. to dangers present in the outside world, if reaction merely involves knowledge of the syntactic level, as I’ve argued myself (though I don’t think I really believe this – it implies the possibility of philosophical zombies, and as I also argued earlier, I don’t think those are a coherent concept).

But it leads to a hyperbolic form of epiphenomenalism: not only are conscious states without effect on the world, they are also (at least in their content) without connection to it – everything I experience is just an elaborate hallucination of a very strong sort: it might not only be the case that I’m not right now sitting at a computer trying to wrap my head around the consequences of your philosophy, but rather being chewed on by a tiger in India, no, it might just as well be the case that there are no computers, tigers, India, or even philosophy, but rather things utterly alien to me, as distinct from concepts I am familiar with as 0 and 1 are from the planet Mars and HP Lovecraft’s fiction.

This, I think, is a bit too close to the brain in the vat and other forms of solipsism to me, and while it arguably does solve the problem of connecting inner and outer world – by effectively getting rid of the latter, and placing the sole source of experience (well, the source of the content of experience, at least) within the mind, within the inner world – it does so at too high a price. It necessitates a complication that beggars belief: the origination of the whole conceptual frame of the outside world solely by the mind, from atoms to galaxies (or whatever concepts your ‘inner world’ might contain). Just from a scientific viewpoint – whose applicability is not immediately clear – this seems to disfavour the theory compared to those within which there is just an outside world, and the inside world is a more-or-less faithful reproduction of it, rather than an entirely original, complex creation in itself.

It also seems that it must be stupendously hard to maintain consistency – while I can imagine possible ‘non-contential’ mappings (i.e. mappings that work in some way so that there is not a 1:1 correspondence between concepts on the inside and concepts on the outside, as the existence of such a correspondence would merely mean that the inside is effectively the same as the outside, but in ‘another language’ so to speak – yet some mapping must exist, as the causal origin of inner concepts presumably are sensory inputs derived from the outside world) from outside to inside world that work (i.e. maintain self-consistency) for any finite amount of time, the proportion of them that work indefinitely ought to be vanishingly small, and thus, for any mapping that’s worked up to time t, one ought to expect that it ceases working – i.e. that what happens next does not fit anymore with the previously established context – at time t + 1, generically.

This would, I think, make our ability of successful prediction somewhat miraculous, or at least, I can’t immediately see how, if our concepts so far had no relation to the objective facts of the world, what happens next could be expected to have any relation to what happened before, and hence, to our conceptual context.

Another problem would be, why bother with consciousness at all? Is it actually necessary for anything? What’s the use of all that imagined inner baggage? Of course, nothing needs to have a use, but it’s in general a pretty good heuristic, in order to find out why something is there, to ask what it is good for.

Also, one could point to the ‘continuity of truth’ as a possible stumbling point. This is somewhat related to the problem of prediction, I think. Typically, something being true of something means that there is an x and a predicate P such that P(x) (is true). If we have a faithful mapping of the outside world on the inside, then this translates to there being a y and a predicate P’ such that P’(y), where both P’ and y follow from P and x via said mapping. The objective character of the truth ‘outside’ and the faithfulness of the mapping ensure the truth ‘inside’, and especially that if something is true at any one time, it remains true. But if there is no faithful mapping – i.e. no clearly delineated x and P for y and P’ to map to – then there is no reason to expect continuity. What is true today may not necessarily be true tomorrow – and if it is, it is only incidentally so. Yet this isn’t our experience – things that were established as true tend to stay true. (Or appear to do so! One could solve this and the prediction problem by introducing a kind of mental ‘last Thursdayism’: Our state of mind changes continuously, but we don’t notice, since our memory changes accordingly in a consistent manner, so if P’(y) is now false, it will seem to us as if it always was – rather Orwellian!)

And of course, if a correct response to the outside ever necessitates knowing the meaning rather than merely knowing the syntax – which I do think is the case, as I think p-zombies don’t work – then the model falls short, as well. Maybe one can make (for once, good!) use of a Lucas-like Gödelian argument here: If some agent is described by a formal system F (which he is, if his reactions derive purely from the syntactical level), and if F is consistent (and suitably expressive), then there exists a sentence G, the truth or falsity of which the agent can’t decide. He could then encounter a decision that he is unable to make, something like ‘if G is true, go left; if G is false, go right’. Yet, a conscious being with access to G’s meaning – which is in the end nothing else but (something like) ‘I can’t be derived in system F’ – can easily see the truth of G, and thus make the correct decision to go left. Since we can see the truth of G, we have access to the semantic level, not just to the syntactic one – we see meanings, objective truths about the world, and not just symbols.
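For reference, the standard incompleteness machinery this argument leans on (nothing here is specific to the thread): by the diagonal lemma there is a sentence G equivalent, within F, to its own unprovability; a consistent F cannot prove G, and adding its negation as an axiom still leaves a consistent system.

    F \vdash G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner),
    \qquad \mathrm{Con}(F) \;\Rightarrow\; F \nvdash G,
    \qquad \mathrm{Con}(F) \;\Rightarrow\; \mathrm{Con}(F + \neg G).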

This has some issues, though. One is that it is easy to craft G-like sentences for people, such as ‘RaftPeople can’t consistently assert this sentence’. Another is that it is unclear to what extent we can actually see the ‘truth’ of G, since given F and G, one can construct a new system F + ~G, where the negation of G is added as an axiom (and thus, G is trivially false), but the new system is consistent if F is consistent. But since this argument is still under discussion in philosophy, I’ll leave it at that for the moment…

I’m sorry, this has gotten somewhat rambling, but I don’t think I’ve ever seen anybody earnestly proposing something along these lines – that there exists an objective outside world (I take it you’re not a full-fledged solipsist?), yet that we can have no knowledge about it; so I have to collect my thoughts a little.

I have not been exposed to many arguments by philosophers and academics in many of these areas, like Mary and qualia. Some areas I have done some reading in, but not necessarily tracked who said what, and some of it has been forgotten over time.

But the issue of our interaction with the environment, and whether/how we might construct something on a computer that performs similarly – that topic I have spent a considerable amount of time analyzing, and, for a very focused little area, working on/testing (from a hobby perspective). When I have read others’ analyses in the same realm, my own conclusions have either been similar or very defensible – and it’s from that perspective that I feel this isn’t a new topic to me.

I can’t respond yet to your post because I have to go, but I will return to it. I do think there is an objective outside world, I don’t think a tiger is eating my leg, and there are logical reasons for thinking that, but I’m not sure I could prove either.