You are right, that was uncalled for. I apologize. In my defence, both in this thread wrt Searle’s argument and in your earlier ‘Mary’ thread, it did seem to me that you expressed unfamiliarity with the arguments – which is of course nothing damnable, everybody encounters everything for the first time at some point.
But I still shouldn’t have said that.
Then I think your model fails. For one, it has to confront the basic problem of every theory that asserts the impossibility of knowledge in one way or another: if it's true that we can't know anything objective, then on what grounds could one ever believe it? For if it is true, then it is itself an objective fact about the world that we can't know objective facts about the world; thus, any argument purporting to establish its truth (or falsity) ought to be regarded as spurious.
Well, truth be told, it is possibly a not entirely indefensible concept. We could certainly react appropriately, e.g. to dangers present in the outside world, if reacting merely involves knowledge at the syntactic level, as I've argued myself (though I don't think I really believe this – it implies the possibility of philosophical zombies, and as I also argued earlier, I don't think those are a coherent concept).
But it leads to a hyperbolic form of epiphenomenalism: not only are conscious states without effect on the world, they are also (at least in their content) without connection to it. Everything I experience is then just an elaborate hallucination of a very strong sort. It might not only be the case that I'm not actually sitting at a computer right now, trying to wrap my head around the consequences of your philosophy, but am instead being chewed on by a tiger in India; it might just as well be the case that there are no computers, tigers, India, or even philosophy at all, but rather things utterly alien to me, as distinct from the concepts I am familiar with as 0 and 1 are from the planet Mars and HP Lovecraft's fiction.
This, I think, comes a bit too close to the brain in a vat and other forms of solipsism. While it arguably does solve the problem of connecting inner and outer world – by effectively getting rid of the latter, and placing the sole source of experience (well, the source of the content of experience, at least) within the mind, within the inner world – it does so at too high a price. It necessitates a complication that beggars belief: the origination of the whole conceptual frame of the outside world solely by the mind, from atoms to galaxies (or whatever concepts your 'inner world' might contain). Just from a scientific viewpoint (whose applicability here is admittedly not immediately clear), this seems to disfavour the theory compared to those in which there is just an outside world, and the inside world is a more-or-less faithful reproduction of it, rather than an entirely original, complex creation in itself.
It also seems that it must be stupendously hard to maintain consistency. I can imagine possible 'non-contential' mappings from the outside world to the inside one – that is, mappings under which there is no 1:1 correspondence between concepts on the inside and concepts on the outside (such a correspondence would merely mean that the inside is effectively the same as the outside, just in 'another language', so to speak; yet some mapping must exist, as the causal origin of inner concepts presumably lies in sensory inputs derived from the outside world) – that work, i.e. maintain self-consistency, for any finite amount of time. But the proportion of such mappings that work indefinitely ought to be vanishingly small, and thus, for any mapping that has worked up to time t, one ought generically to expect that it ceases working at time t + 1 – i.e. that what happens next no longer fits with the previously established context.
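To put a rough number on that intuition (this is just a toy sketch of my own, not anything you've committed to): suppose a non-faithful mapping survives each new experience, i.e. stays consistent with everything established so far, with some probability p < 1, roughly independently at each step. Then the chance that it has survived the first t experiences is

    Pr[consistent up to time t] = p^t,

which shrinks towards zero as t grows. Surviving indefinitely would be an exceedingly unlikely accident, whereas a faithful mapping survives automatically, since consistency 'outside' carries over to consistency 'inside'.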
This would, I think, make our capacity for successful prediction somewhat miraculous; at least, I can't immediately see how, if our concepts so far have had no relation to the objective facts of the world, what happens next could be expected to bear any relation to what happened before, and hence to our conceptual context.
Another problem would be: why bother with consciousness at all? Is it actually necessary for anything? What's the use of all that imagined inner baggage? Of course, nothing needs to have a use, but asking what something is good for is in general a pretty good heuristic for finding out why it is there.
Also, one could point to the 'continuity of truth' as another possible stumbling block. This is somewhat related to the problem of prediction, I think. Typically, something being true of something means that there is an x and a predicate P such that P(x) (is true). If we have a faithful mapping of the outside world onto the inside, then this translates into there being a y and a predicate P' such that P'(y), where both P' and y follow from P and x via said mapping. The objective character of the truth 'outside' and the faithfulness of the mapping ensure the truth 'inside', and especially that if something is true at any one time, it remains true. But if there is no faithful mapping – i.e. no clearly delineated x and P for y and P' to map to – then there is no reason to expect continuity. What is true today may not necessarily be true tomorrow – and if it is, it is only incidentally so. Yet this isn't our experience – things that were established as true tend to stay true. (Or appear to do so! One could solve this and the prediction problem by introducing a kind of mental 'last Thursdayism': our state of mind changes continuously, but we don't notice, since our memory changes accordingly in a consistent manner, so if P'(y) is now false, it will seem to us as if it always had been – rather Orwellian!)
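To spell out what I mean by 'faithful' a little more explicitly (my notation, just a sketch): write f for the mapping from outside to inside, so that y = f(x) and P' is the inside counterpart of P. Faithfulness would then amount to something like

    for all x and P: P(x) holds if and only if P'(f(x)) holds.

Under that condition, the truth of P'(y) simply inherits the stability of the objective truth of P(x): as long as P(x) stays true outside, P'(f(x)) stays true inside. Drop that biconditional, and nothing ties the truth value of P'(y) at one moment to its truth value at the next.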
And of course, if a correct response to the outside ever necessitates knowing the meaning rather than merely knowing the syntax – which I do think is the case, as I think p-zombies don't work – then the model falls short as well. Maybe one can make (for once, good!) use of a Lucas-like Gödelian argument here: if some agent is described by a formal system F (which he is, if his reactions derive purely from the syntactic level), and if F is consistent (and suitably expressive), then there exists a sentence G, the truth or falsity of which the agent can't decide. He could then encounter a decision that he is unable to make, something like 'if G is true, go left; if G is false, go right'. Yet a conscious being with access to G's meaning – which is in the end nothing else but (something like) 'I can't be derived in system F' – can easily see the truth of G, and thus make the correct decision to turn left. Since we can see the truth of G, we have access to the semantic level, not just to the syntactic one – we see meanings, objective truths about the world, and not just symbols.
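Schematically, and glossing over a lot of technical detail (my gloss, not a precise statement): if F is consistent and expressive enough to do basic arithmetic, Gödel's construction yields a sentence G that is, within F, equivalent to 'G is not provable in F'. If F proved G, it would also prove that G is provable, contradicting what G itself says; so, given consistency, F does not prove G, and since G asserts precisely its own unprovability, G is true. (That F also fails to prove ~G needs a slightly stronger assumption, but that's a technicality here.) The Lucas-style point is that running through this reasoning requires stepping outside F and using what G means, together with the assumption that F is consistent – exactly what a purely syntactic agent, by hypothesis, cannot do.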
This has some issues, though. One is that it is easy to craft G-like sentences for people, such as 'RaftPeople can't consistently assert this sentence'. Another is that it is unclear to what extent we can actually see the 'truth' of G, since given F and G, one can construct a new system F + ~G, where the negation of G is added as an axiom (so that G is trivially refutable there), and the new system is consistent if F is. But since this argument is still under discussion in philosophy, I'll leave it at that for the moment…
I’m sorry, this has gotten somewhat rambling, but I don’t think I’ve ever seen anybody earnestly proposing something along these lines – that there exists an objective outside world (I take it you’re not a full-fledged solipsist?), yet that we can have no knowledge about it; so I have to collect my thoughts a little.