An operating systems professor of mine once made the comment, “There are no problems in the field of computing that aren’t amenable to another level of indirection.” To avoid overexplaining, consider that to mean replacing programmatic structure with (changeable) data. A simple example to get the point across: static webpages vs. those generated from data stored in a database. Bear with me here; hopefully the relevance will become clear shortly…
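To make that concrete, here’s a rough sketch in Python (all the names are made up, just to illustrate the contrast): the first version hard-codes the page, the second pulls the same content out of data that can change without touching the code.

    # Hypothetical illustration: the static version hard-codes both structure
    # and content...
    def static_page():
        return "<h1>Widgets</h1><p>Price: $10</p>"

    # ...while the indirect version pulls the same content from data that can
    # change without touching the code (a dict standing in for a database).
    PAGE_DATA = {"title": "Widgets", "price": 10}

    def generated_page(data=PAGE_DATA):
        return "<h1>{title}</h1><p>Price: ${price}</p>".format(**data)

Same output, but in the second case the “structure” of what gets produced lives in data rather than in the program itself.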
So, one difference between humans and the (theoretical) Turing machine of which you speak is, as Der Trihs puts it, that humans have almost no awareness of the workings of [their] own brain, while a machine can (theoretically) be designed with that “awareness”. In other words, we can’t know why we make the decisions we do, as that level of introspection is closed to us. Not so for a machine; its decision process might itself be an object of study, evaluation, and even replacement. In some sense, that would allow such an AI to be “super” human, as no part of its operation need be opaque to itself.
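A toy sketch of what I mean, again in Python and again with made-up names: the agent’s decision rule is held as just another piece of data, so it can read its own rule and even swap it for a different one — the same indirection trick as above.

    import inspect

    # Hypothetical sketch: the decision rule is held as data, so the agent
    # can inspect it and swap it out while running.
    def cautious_policy(options):
        return min(options, key=lambda o: o["risk"])

    def bold_policy(options):
        return max(options, key=lambda o: o["payoff"])

    class Agent:
        def __init__(self, policy):
            self.policy = policy                    # the decision process, as data

        def decide(self, options):
            return self.policy(options)

        def introspect(self):
            return inspect.getsource(self.policy)   # read its own decision rule

        def replace_policy(self, new_policy):
            self.policy = new_policy                # ...or replace it outright

    agent = Agent(cautious_policy)
    print(agent.introspect())          # the agent can see exactly how it decides
    agent.replace_policy(bold_policy)  # and change that very process

Nothing deep is going on here, but it shows the sense in which a machine’s decision-making could be open to itself in a way ours isn’t.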
I have difficulty wrapping my mind around this; even given the above, some state machine must still be in operation underneath, and so it’s “turtles all the way down”. Intriguing as it may be, I don’t really see the idea changing the “free will” debate. And I further expect that “free will” will consistently fail as a measure for determining “humanity”, even if one could define it adequately. The same goes for “self-awareness”, “consciousness”, “sentience”, and a host of other terms.
Instead, there are other aspects of being human implied by the term “humanity”, not the least of which are emotion, culture, and morphology. ISTM that whether one considers a machine “human-level” depends greatly on how much credence one gives to those aspects.
At its root, (non-)acceptance comes down to one’s formulation of “other”; as history shows, people are all too ready to classify some individual (or even an entire group) regarded as “other” as “inhuman”. For the most part, a functionalist (or behaviorist) definition seems most appropriate to me, so yes, I think such a machine should be considered “human” for most purposes implied by the term “humanity”.
Of course, the most obvious counter to that is to use a biologist’s definition of “human”: a machine cannot mate with a human, therefore they are different species, therefore the machine isn’t human. And round and round we go… establish a defining set of characteristics for “humanity” and perhaps it would be a fruitful conversation. But I doubt there’ll ever be a consensus…