No, it is not. Complexity and obfuscation/obscurity are not the same thing. You seem to have lumped them into a single category of stuff that makes things hard to understand.
That's a valid category (both things do make stuff hard to understand), but it's not useful for this discussion in particular, because you're trying to address arguments about complexity with responses about obfuscation.
Complexity is about richness of function: the capacity of a thing to support a sufficient range of functions and interactions that a wide range of behaviours becomes possible. DNA is complex, and that is what has made it possible for biological systems to do such a wide range of different things with it.
DNA could also be described as obfuscated (because it uses its own language and instruction set) - but here's the point: even if we eventually break down that language barrier and come to understand how DNA works in every minute detail, it will no longer be obscure, yet it will still be complex.
The human brain is complex in a way that the nervous system of an earthworm is not. It is an entirely reasonable argument that complexity is required for sentience (though of course it does not follow that any and every complex toolkit will give rise to it, so please do not think I am asserting that).
We already have computer systems (artificial neural networks) that are modelled, in simplified form, on biological brains, and they are capable of functions that the programmer did not explicitly anticipate or predefine. These systems are hard to understand because their internal state is very complex, not because it is deliberately hidden.
The point is, though, that they do things we didn't pre-set them to do. Banks use neural-net algorithms to detect and flag 'suspicious' transactions; these systems have learned what a suspicious transaction 'feels' like, in a strikingly similar fashion to how a human would learn to get a 'feel' for them.
But for any given flagged transaction, nobody can point to a line of code that says 'if X > 100 then print "suspicious transaction"'. It isn't like that at all. The judgment is embodied in what can only reasonably be described as the experience of the system.
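To make that concrete, here's a minimal sketch in Python (using scikit-learn; the feature names, toy data and labelling rule are all invented for illustration, and no real bank system looks this simple): a small neural network is trained on labelled examples, and afterwards the 'judgment' lives in its learned weights, not in any rule you could read off.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy transactions: [amount, hour_of_day, distance_from_home_km]
# (feature names are invented for illustration)
X = np.column_stack([
    rng.exponential(scale=80, size=2000),  # transaction amount
    rng.integers(0, 24, size=2000),        # hour of day
    rng.exponential(scale=10, size=2000),  # distance from home, km
])

# For the toy data only, generate labels from a rule the model never
# sees directly -- it only ever sees the labelled examples.
y = ((X[:, 0] > 250) & ((X[:, 1] < 5) | (X[:, 2] > 40))).astype(int)

# A small multi-layer network 'learns a feel' for the pattern.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
clf.fit(X, y)

# A new transaction is flagged (or not), but the reason is distributed
# across the learned weights -- there is no readable 'if X > 100' rule
# anywhere in the fitted model.
new_txn = np.array([[900.0, 3.0, 55.0]])
print("flagged as suspicious:", bool(clf.predict(new_txn)[0]))
print("learned weights:", sum(w.size for w in clf.coefs_))
```

The hand-written rule here exists only to label the toy data; the fitted model never contains it in readable form, only a few hundred numeric weights whose combined effect approximates it.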
This is only one of several different approaches to AI (but I happen to think it's the one most likely to result in machines that truly have a 'self' in the way we believe we do - although we will never know that for certain, because it's philosophically impossible to verify).
But the thrust of all this was this: if a machine acts in ways that appear sentient, and that behaviour was not explicitly coded, predefined or hidden by the programmer - that is, if it learned to behave that way by itself - then (IMO) that is a reason to consider that it could be the real deal (or else, where did it come from?).