No, I’m really not missing the point. You’re trying to use current AI methods and limitations as a stand-in for Asimovian AI. Asimovian AI is capable of advanced planning and foresight. That is quite difficult for current AI, but the scientists in “I, Robot” have clearly solved it. Having solved it, they can map a set of inputs to a plan of action, along with some prediction of that action’s probable outcome. Unless the solution somehow eliminates the ability to set exclusions, there is no reason to suppose they cannot forbid any plan whose predicted outcome maps to “harms a human due to action or inaction.”

If such exclusions are hard coded (and if I recall correctly they are baked into the hardware of the Asimovian AIs) and the AI cannot modify them, then, within the limits of its ability to predict outcomes, the AI will not violate the Three Laws. It might do so accidentally, just as a human might fail to foresee some complex chain of events that harms a person.
Unless you’re saying it would be hard for a modern AI to follow the Three Laws, in which case I agree, but only because modern AI lacks the level of foresight available to humans. To say it could not be done, though, is not correct. Given an AI with that level of foresight, it quite likely could be done (although we’re talking science fiction, so it’s possible that giving an AI that degree of foresight would somehow preclude modifying the cost function).
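To make the exclusion idea concrete, here is a minimal sketch. It is entirely my own toy construction, not anything from the book or any real system: `Outcome`, `choose_action`, and the lookup-table “foresight model” are all hypothetical names. The point it illustrates is that the harm predicate acts as a hard filter applied before the cost function ever sees a candidate action, so no amount of utility can buy a forbidden plan.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass(frozen=True)
class Outcome:
    harms_human: bool  # would this action harm a human, per the foresight model?
    utility: float     # everything else the robot is optimizing for

Action = str  # toy stand-in for a real plan representation

def choose_action(
    candidates: Iterable[Action],
    predict: Callable[[Action], Outcome],
) -> Optional[Action]:
    """Pick the highest-utility action whose predicted outcome harms no human.

    The harm check is a hard exclusion applied before the utility
    comparison, so no amount of utility can buy a forbidden action;
    this is the software analogue of baking the Laws into the hardware.
    """
    scored = [(a, predict(a)) for a in candidates]
    permitted = [(a, o) for a, o in scored if not o.harms_human]
    if not permitted:
        return None  # fall back to inaction
    best_action, _ = max(permitted, key=lambda pair: pair[1].utility)
    return best_action

# Hypothetical foresight model: a lookup table standing in for the
# advanced outcome predictor the story's scientists have already built.
PREDICTIONS = {
    "push_human_out_of_danger": Outcome(harms_human=False, utility=9.0),
    "maximize_factory_output":  Outcome(harms_human=True,  utility=50.0),
    "do_routine_maintenance":   Outcome(harms_human=False, utility=2.0),
}

print(choose_action(PREDICTIONS, PREDICTIONS.__getitem__))
# -> push_human_out_of_danger, despite the higher-utility harmful option
```

One caveat the sketch glosses over: when every candidate is excluded it falls back to inaction, but a real First Law would have to Law-check inaction too, since “through inaction” forbids standing by while a human comes to harm.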