I certainly concur that it’s far from obvious that an AI can’t do these things. But if it’s true, and if I understand you correctly, this would not be an argument about emergence but an argument about computability. Which is what I mean by ruling out “specific and limited” capabilities; enumerating certain functions that aren’t amenable to computational solutions tells us nothing at all about the potential emergence of high-level cognitive functions like self-awareness in computational systems.
I would look at it differently. Syntax is strictly about the structure of the symbols in some logical system. “2 + 2” is syntactically valid in arithmetic. “2 +& B” is not. There is a rudimentary semantics in that the symbols represent quantities and have positionally determined values, and operators represent certain well-defined operations on those values.
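To make the syntax/semantics split concrete, here is a minimal sketch in Python, using its parser as a stand-in for the grammar of arithmetic (an assumption for illustration only; Python’s grammar is broader than pure arithmetic, but it agrees on these examples). Syntactic validity is a question about form alone; the semantics is a separate mapping from well-formed expressions to values.

```python
import ast

def is_syntactically_valid(expr: str) -> bool:
    """Syntax only: does the string parse as a well-formed expression?"""
    try:
        ast.parse(expr, mode="eval")
        return True
    except SyntaxError:
        return False

def evaluate(expr: str) -> int:
    """The rudimentary semantics: map a valid expression to its value."""
    tree = ast.parse(expr, mode="eval")
    return eval(compile(tree, "<expr>", "eval"))

print(is_syntactically_valid("2 + 2"))   # True  (well-formed)
print(is_syntactically_valid("2 +& B"))  # False (malformed)
print(evaluate("2 + 2"))                 # 4     (the semantics)
```

The point of separating the two functions is that the first never consults what the symbols mean, only how they are arranged; the second is where meaning enters, and in arithmetic that meaning is exhausted by a few simple rules.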
The point here is that the difference between understanding arithmetic and understanding the full scope of the semantics of language is one of degree, not of kind. Arithmetic is a simple closed domain whose syntax and semantics are completely defined by a few simple rules. The semantics of language is far more complex because it involves an open-ended relationship with the real world. It is false, however, to say that AI lacks any such understanding, as shown for example in @Sam_Stone’s illustration of Bing Chat distinguishing the different context-dependent meanings of “bank”.
I gave similar examples earlier of sentences with technically ambiguous meanings that confounded early efforts at machine translation, but that a human would always interpret correctly based on real-world understanding. Early AI pioneers were almost despondent in concluding that such problems could only be solved by somehow imbuing the machines with human-level semantic understanding. ChatGPT handles such translations flawlessly. It’s clear that its model of the world is very incomplete, to put it mildly, but to imply that it operates only on syntax strikes me as an example of the goalpost-moving that we see so often in AI debates.