In terms of the real numbers and the standard interpretation of decimal strings into them, both 0.999… and 1.000… are representations of the same quantity, 1. One wouldn’t normally say either is any worse or less precise or less legitimate than the other. An algorithm which produces 0.999… is just as legitimate as any other. The decimal system comes with no claims that every real is given a unique representation; those reals which are nonzero multiples of powers of 10 (i.e., the ones with terminating expansions) have two representations, and the rest have one.
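(For concreteness: under the standard interpretation, a decimal string denotes the sum of its digit series, and the geometric series formula settles the equality: $0.999\ldots = \sum_{k=1}^{\infty} 9 \cdot 10^{-k} = \frac{9/10}{1 - 1/10} = 1$.)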
In a sense, this is topologically unavoidable: any continuous map from sequences of digits onto the real numbers must involve some doubling-up, or else there would be a discontinuity at the digit-changing border.
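(One standard way to make this precise, sticking to fractional parts: give the space of digit sequences $\{0, \ldots, 9\}^{\mathbb{N}}$ the product topology, under which the evaluation map onto $[0, 1]$ is continuous and surjective. That space is compact and totally disconnected, while $[0, 1]$ is connected; since a continuous bijection from a compact space to a Hausdorff space is a homeomorphism, an injective evaluation map would make a connected space homeomorphic to a disconnected one. So the map cannot be injective: some real must receive two representations.)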
We can see the shadow of this phenomenon in another way: it’s not actually possible to perform arithmetic on infinite decimal strings in a total, computable fashion (in the normal sense in which this claim would be interpreted), precisely because the computation gets hung up at the border where a digit changes its value, which is to say, in the space of ambiguity between the D999… and (D + 1)000… representations.
For example, consider the problem of adding a string starting with 0.444… with lots of 4s (not known ahead of time to be infinitely many, just observed so far to be a lot) to a string starting with 0.555… with lots of 5s. If there ever comes a point where the two strings simultaneously spit out a 9, the result will be greater than 1. If there ever comes a point where both strings spit out a 0, the result will be less than 1. But so long as you keep seeing 4s on the one and 5s on the other, you can’t rule out either of these scenarios as future possibilities. So you can’t, in finite time, decide even the first digit of your output (whether it should be 0 or 1).
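Here is a minimal sketch of this stall in Python (my own illustration, with invented names, not anything from a standard library): it tracks, with exact rationals, the interval of sums consistent with the digit pairs seen so far, and only commits to a leading digit once that interval falls entirely on one side of 1.

    from fractions import Fraction
    from itertools import islice, repeat

    def first_digit_of_sum(digits_a, digits_b, max_steps=50):
        """Try to decide the units digit of 0.a1a2... + 0.b1b2...
        Returns 0 or 1 once it is safe to commit, or None if still
        undecided after max_steps digit pairs."""
        partial = Fraction(0)    # exact value of the digits read so far
        place = Fraction(1, 10)  # place value of the next digit pair
        for a, b in islice(zip(digits_a, digits_b), max_steps):
            partial += (a + b) * place
            # Each unread tail lies in [0, place], so the true sum lies
            # in the closed interval [partial, partial + 2 * place].
            low, high = partial, partial + 2 * place
            place /= 10
            if high <= 1:
                return 0  # sum <= 1: safe to emit "0." (worst case 0.999...)
            if low >= 1:
                return 1  # sum >= 1: safe to emit "1." (worst case 1.000...)
        return None  # interval still straddles 1

    print(first_digit_of_sum(repeat(4), repeat(5)))              # None, forever
    print(first_digit_of_sum(iter([4, 4, 0]), iter([5, 5, 0])))  # 0
    print(first_digit_of_sum(iter([4, 4, 9]), iter([5, 5, 9])))  # 1

With constant streams of 4s and 5s, after $n$ digit pairs the interval is pinned at $[1 - 10^{-n},\ 1 + 10^{-n}]$, which straddles 1 forever, so no finite prefix ever licenses a first output digit.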
[Other representations of infinite-precision real arithmetic get around this problem (as computable analogues of Dedekind cuts, or as Cauchy sequences with more freedom in representation); it’s just that decimal strings, with the obligation to carry, are actually horrendously awkward for defining arithmetic.]
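For instance, here is a sketch of the Cauchy-sequence style (a standard device in computable analysis, though the names below are invented for illustration): represent a real as any function that, given $n$, returns a rational within $2^{-n}$ of the true value. Addition then just queries each summand at one higher precision, and there is no carrying to get stuck on.

    from fractions import Fraction

    def const(q):
        """Represent the rational q exactly: every approximation is q itself."""
        q = Fraction(q)
        return lambda n: q

    def add(x, y):
        """Addition is computable: each summand queried at precision n + 1
        contributes error at most 2**-(n+1), totalling at most 2**-n."""
        return lambda n: x(n + 1) + y(n + 1)

    def point_nine_repeating(n):
        """0.999... in this representation: 1 - 10**-(n+1) is within
        2**-n of the true value, 1. (Returning exactly 1 at every n
        would be an equally valid representation of the same real.)"""
        return 1 - Fraction(1, 10) ** (n + 1)

    s = add(point_nine_repeating, const(Fraction(1, 2)))
    print(s(10), float(s(10)))  # a rational within 2**-10 of 1.5

The freedom that rescues computability is exactly the freedom decimal strings lack: an approximation may overshoot or undershoot, so no single digit ever has to be irrevocably committed.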
All this addresses the standard mathematical interpretation. As always, there are other ways of interpreting the notation, under which 0.999… might be taken to denote some quantity actually strictly smaller than 1. But this would not be the standard.