Say rather that such a strange parsing algorithm, or something similar, might occur in any number of other places throughout the code. So fixing this one instance fixes just that one instance, while vaguely similar problems could still arise separately throughout the program.
Here’s my story: I did first- and second-tier customer support for a company with a software app. Very few programmers I’ve ever met, including the ones here, understood that decimal fractions typically can’t be represented precisely in a floating-point variable, no matter what precision you use. Throughout the app there were cases of comparing floating-point numbers and getting the wrong comparison. Their solution was to convert each float to a character string, rounded to the relevant number of decimal places (the convert-to-string library function does the rounding), and then compare the character strings.
But this was happening in many, many places throughout the code, and they only fixed one instance at a time, as the bug reports trickled in over the years. And I wonder if the OP will end up with a situation something like this.
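For what it’s worth, their workaround looked roughly like this (a minimal C sketch; the function name roughlyEqual and the fixed two-decimal-place precision are my own illustration, not their actual code):

#include <stdio.h>
#include <string.h>

/* Sketch of the string-comparison workaround: snprintf rounds each
   value to two decimal places, then the resulting strings are compared.
   The name and the "%.2f" precision are illustrative assumptions. */
static int roughlyEqual(double a, double b)
{
    char bufA[32], bufB[32];
    snprintf(bufA, sizeof bufA, "%.2f", a);
    snprintf(bufB, sizeof bufB, "%.2f", b);
    return strcmp(bufA, bufB) == 0;
}

int main(void)
{
    double total = 0.1 + 0.2;  /* actually 0.30000000000000004... in a double */
    printf("direct  == 0.3 ? %d\n", total == 0.3);             /* usually 0 */
    printf("rounded == 0.3 ? %d\n", roughlyEqual(total, 0.3)); /* 1 */
    return 0;
}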
Here’s a programming exercise every programmer should try at least once in his/her career:
Code and run this program in as many languages as you can, correcting the syntax as necessary for each language (C/C++, Java, JavaScript, PHP, T-SQL, bash, Perl, Fortran, Algol, COBOL, Pascal, Visual Basic, even bare-bones assembly languages). Change the print statements to display the output any way that’s convenient. Compare the results. Do all compilers and interpreters produce the same result? Can you explain what is happening?
#include <stdio.h>

int main(void)
{
    float tenth, unity, addEmUp;

    tenth = 0.1;
    unity = 1.0;
    /* add 0.1 to itself ten times */
    addEmUp = tenth + tenth + tenth + tenth + tenth
            + tenth + tenth + tenth + tenth + tenth;

    if ( addEmUp == unity ) {
        printf("There is sanity in the world.\n");
    }
    else {
        printf("The gods must be crazy.\n");
    }
    return 0;
}