Gray code is a way of writing numbers that resembles the standard positional base systems, except that consecutive values in a Gray code sequence always differ in exactly one digit. Gray codes are typically base 2, consisting of 1s and 0s. There are actually several different Gray codes, but one in particular (the binary-reflected Gray code) is by far the most common. Gray codes are often used in rotary encoders (devices that measure the angular position of a rotating shaft) because they fix a problem with plain binary: when several digits are supposed to change at the same instant, a reader can catch them mid-transition and see an intermediate value that is very different from both the old and the new correct value.
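Just for concreteness, here is a minimal sketch of that common binary-reflected variant (the function names `to_gray` and `from_gray` are only my own placeholders, not any established API):

```c
#include <stdio.h>

/* Binary-reflected Gray code: consecutive values differ in exactly one bit. */
static unsigned to_gray(unsigned n)   { return n ^ (n >> 1); }

/* Inverse: fold the shifted bits back down to recover the plain binary value. */
static unsigned from_gray(unsigned g) {
    unsigned b = g;
    while (g >>= 1)
        b ^= g;
    return b;
}

int main(void) {
    /* Gray codes for 0..7: each line differs from its neighbour in one bit. */
    for (unsigned n = 0; n < 8; n++) {
        unsigned g = to_gray(n);
        printf("%u -> gray %u%u%u (decodes back to %u)\n",
               n, (g >> 2) & 1, (g >> 1) & 1, g & 1, from_gray(g));
    }
    return 0;
}
```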
Now, then:
Are there any standards or conventions for how Gray code values should be passed around inside a computer program? For example, should a Gray code value simply be stored as an integer and displayed in binary? That could be misleading, because ordinary binary integers can be treated in ways Gray codes can't: two ordinary binary numbers can be converted to base 10, added, and the result converted back to binary, and the answer is the same as if the addition had been done entirely in binary from the start (forgive the odd example, but it isn't that odd; many pocket calculators do, or at least once did, their addition in base 10 internally). Or should a Gray code value be stored as a one-dimensional array of single bits? Or perhaps as a string of "1" and "0" characters?
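To illustrate the kind of mismatch I mean, here is a rough sketch (reusing the hypothetical `to_gray`/`from_gray` helpers from above) showing that adding two Gray-coded values as if they were ordinary integers does not give the Gray code of the sum:

```c
#include <stdio.h>

static unsigned to_gray(unsigned n)   { return n ^ (n >> 1); }
static unsigned from_gray(unsigned g) { unsigned b = g; while (g >>= 1) b ^= g; return b; }

int main(void) {
    unsigned a = 3, b = 1;

    /* Correct: decode to plain binary, add, then re-encode. */
    unsigned ok = to_gray(from_gray(to_gray(a)) + from_gray(to_gray(b)));

    /* Trap: add the raw Gray representations as if they were plain integers. */
    unsigned bad = to_gray(a) + to_gray(b);

    /* gray(4) is 6, but gray(3) + gray(1) = 2 + 1 = 3, which decodes to 2, not 4. */
    printf("gray(3+1) = %u, but gray(3) + gray(1) = %u\n", ok, bad);
    return 0;
}
```

So if a Gray value sits in a plain integer variable, nothing stops someone from doing arithmetic on it that silently produces garbage.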
My question isn't how to interpret or process Gray code values, or how to convert into or out of Gray code. It's specifically about conventions for representing them, because it's a good idea to make program code as unsurprising as possible, and I don't want to leave any traps for future programmers reading this code.
Thanks to anybody with insight on this dusty little corner!