Conventions for form of graycode values in programming?

Gray code is a way of writing numbers that is similar to the standard base system form, except that sequential values in a list of Gray codes always differ from one another in exactly one digit. Gray code numbers are typically base 2, consisting of 1s and 0s. There are actually several different versions of Gray code, but one in particular is most common. Gray codes are often used for digital encoders (devices that measure the angular position of a rotating shaft) because they fix a problem with standard binary numbers: when several digits are supposed to change simultaneously, there can be transient intermediate values that are very different from either the old or the new correct value.
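
For anybody who hasn’t run into it, the most common version (the binary-reflected one) is easy to sketch; here it is in Python, with function names that are purely illustrative:

```
def binary_to_gray(n):
    """Convert an ordinary binary integer to its reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Convert a reflected Gray code back to an ordinary binary integer."""
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

# Consecutive values differ in exactly one bit:
for i in range(8):
    print(f"{i} -> {binary_to_gray(i):03b}")
assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count("1") == 1
           for i in range(1000))
```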

Now, then-

Are there any standards about how Gray code numbers are to be moved around inside a computer program? For example, should a Gray code value just be written as an integer and displayed in binary? This could be misleading, because standard binary integers can be treated in ways Gray codes can’t; for example, two standard binary values can be converted to base 10, added, and the result converted back to binary, and it will be the same as if the operation had been done wholly in binary from the start (forgive the odd example, but it isn’t that odd, as many pocket calculators do, or at least once did, their addition in base 10 internally). Or should a Gray code value be written as a one-dimensional array of single bits? Or perhaps as a string of “1” and “0” characters?

My question isn’t how to interpret or process Gray code values, or how to convert into or out of Gray code. It’s specifically about conventions for representing them, because it is a good idea to make computer program code as unsurprising as possible. I don’t want to create any traps for future programmers looking at this code.

Thanks to anybody with insight on this dusty little corner!

I’ve never heard of Grey codes being used outside of optical scanning; IOW, they are scanned, interpreted, and treated like any other number, i.e. as a string of bits. Perhaps you could tell us a little more about the program you are writing?

FWIW,
Rob

I think storing them as an array of bits would be sufficient. You shouldn’t be doing any arithmetic using greycode (at least, I can’t think of a reason you would be). I also doubt they would make sense as integers.

Is there a reason you cannot simply use integers? I’ve only ever used greycode in building state machines in school. Are you getting an input from somewhere that is in greycode, or just using it in your code?

Several encoders in our facility provide Grey code back to the drives. The ones with onboard programming have the feedback as a binary string and it’s tagged as Grey code in the comment line.

Thanks, folks!

I’m reading Gray code encoder data through a data acquisition system, sometimes one read at a time and sometimes reading based on a sampling clock, synchronized with other reads of other signals. Typically the Gray code values are on some but not all of the digital lines in several ports - that is, at the moment anyway, I read 4 ports of 8 bits each, and some of the 32 total bits are a 24 bit Gray code while the other 8 bits do other jobs. Not surprisingly, what makes logical sense as a functional grouping isn’t exactly the same as what makes electrical sense for aiming into one or another terminal block or cable bundle. My program has various modules doing different things, calling each other, passing values and references to values, etc. One module’s only function is converting from Gray code to a meaningful integer. So, the Gray codes actually appear in a variety of places throughout the program, especially where things fit together and the codes are passed as arguments or results.

We did have a spot of unpleasantness some time ago where a programmer broke 24 bit Gray code into three 8 bit words, converted these one at a time into decimal values, and concatenated the written version of the decimal values together. It is clear that Gray code is unanticipated by some.

So, I don’t want to make matters any worse than need be, and want to take advantage of any convention that clarifies things.

Right now I bring in 32 input bits, 24 of which are the Gray code version of a distance. I assemble Drygon’s array of bits from the portion that is Gray code, and send the remaining 8 bits elsewhere. I pass the Gray code version around as an array of bits that is labeled as Gray code. At the earliest place it makes sense, I convert it to an unsigned 32 bit integer, and never convert back to Gray code. As Drygon says, I shouldn’t be doing any arithmetic with them, or at least I can’t think of a reason why either. The fact that they originate as Gray code is what drives their only use in that form.
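
To make that concrete, here is roughly what my conversion module boils down to, sketched in Python. The names, and the assumption that the Gray code occupies the low 24 of the 32 bits, are just for illustration; the real code is organized differently:

```
GRAY_MASK = 0x00FFFFFF  # assumption for this sketch: Gray code in the low 24 bits

def extract_gray_bits(raw32):
    """Pull the 24 Gray code bits out of a 32 bit port read, most significant first."""
    gray = raw32 & GRAY_MASK
    return [(gray >> i) & 1 for i in range(23, -1, -1)]

def gray_bits_to_int(bits):
    """Convert an MSB-first list of Gray code bits to an ordinary unsigned integer."""
    result = 0
    acc = 0
    for b in bits:
        acc ^= b                      # running XOR gives each plain binary bit
        result = (result << 1) | acc
    return result

def extract_other_bits(raw32):
    """The remaining 8 bits go off to do their other jobs."""
    return (raw32 >> 24) & 0xFF
```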

>they are scanned, interpreted and treated like any other number, i.e. as a string of bits
Rob: Where in this process do they change from Gray code to ordinary binary code? Is the conversion what you mean by “interpreted”? They are a string or series of bits at all stages, after all.

>have the feedback as a binary string
HongKongFooey, do you mean a text string of ASCII characters for the digits one and zero? like "3130303130"x?

It’s striking you all call them “Grey code” - aren’t they named after Frank Gray?

0’s and 1’s. I only see it when troubleshooting servomotors on some machines, when checking the encoders.

Can’t speak for others but around here the colour is spelled grEy so I just spelled it wrong. I had long forgotten who it was named after. (has it really been 20 years since I learned this stuff? :eek: )

>0’s and 1’s

HongKong, I still don’t follow. Exactly what form are these 0’s and 1’s in? If you had 24 of them and were declaring a variable or variables to store them, what would you declare? Is it a text string containing the numeral characters? Is it an array of single Boolean values? Is it a 32 bit integer, with eight of the bits ignored (because nobody supports 24 bit integers AFAIK)? Is it a short range of memory in which you are looking up Booleans at 24 successive addresses? Not to bug you, but more because I’m curious…

Assuming you’re using a modern language I would never pass a graycode as anything except a dedicated graycode-typed object defined in my base library of application-specific classes.

Inside that class, it doesn’t matter whether you use a bit array or what.

Externally the class should expose a method to retrieve the corresponding conventional integer value and to do whatever other things you need. If, for example, you have a need to be able to add graycode values & return a graycode as a result, then define an Add() method that does the job correctly. That way nobody will ever try to add them as if they were ordinary integers.

What you don’t ever want to do is the kind of thing it seems your people have been doing, passing the value around as some other ordinary datatype and just hoping everybody knows to always apply the special graycode semantics.

30 years ago that’d have been the best we could do. But nowadays, use object oriented programming where possible. And this is a classic example of a situation where objects are uber-useful.
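
Something along these lines, sketched in Python since you haven’t said what language you’re in; every name here is made up, and the internal representation could just as well be a bit array:

```
class GrayCode:
    """A Gray code value. Internally just an int holding the Gray-coded bits;
    nothing outside the class needs to know or care."""

    def __init__(self, gray_bits, width=24):
        self._gray = gray_bits
        self._width = width

    @classmethod
    def from_int(cls, value, width=24):
        """Build a GrayCode from an ordinary integer."""
        return cls(value ^ (value >> 1), width)

    def to_int(self):
        """The corresponding conventional integer value."""
        g = n = self._gray
        while g:
            g >>= 1
            n ^= g
        return n

    def __add__(self, other):
        # Addition is defined via the underlying integers, and the result comes
        # back as a GrayCode, so nobody ever adds the raw bit patterns by mistake.
        return GrayCode.from_int(self.to_int() + other.to_int(), self._width)

    def __repr__(self):
        return f"GrayCode({self._gray:0{self._width}b})"
```

Then the data acquisition module hands out GrayCode objects, and everything downstream calls to_int() the moment it wants an ordinary number.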

They’re also used in genetic algorithms so that incremental changes to genotype due to bit flipping (mutations etc.) represent incremental changes to phenotype.

I’m not a programmer so my input may not be much help to you but as an end user I see them more in this form. Our encoders are usually the 10-bit type described here, not 24, so maybe too different from your application to be helpful. In troubleshooting the programs I just see 10 0’s and 1’s, how the programmer gets them there I don’t know but they are always clearly labeled as Gray code in the comment field so even if I didn’t look at the encoder I would know what it was representing.

I’m 100% with LSLGuy. Unless you are for some really good reason restricted to a way-old environment, a custom object is the way to go. Build your GreyCode class, slap a big comment in there copy/pasted from Wikipedia, overload your operations, and done.

You seem to want to prevent novices from mishandling your data, and implementing all foreseeable operations yourself in advance is the best way.

HongKongFooey, thanks for humoring me. The discussion you link doesn’t say what form it is using, and doesn’t have to. It’s really aiming at a higher level discussion of principles and algorithms.

Nanoda and LSLGuy, thanks for making your point. I think you are right. I am going to do it this way. If I were better at this, I’d have thought of it myself!

Raftpeople, that is fascinating.

Speaking of other uses of Gray code, I’ve long thought that counters should use Gray code internally, so that they can latch results while counting continues, and not have fancy timing or sequencing to avoid funny values. But I don’t know enough about how they work - it’s just a rumination.

Quoth Napier:

>We did have a spot of unpleasantness some time ago where a programmer broke 24 bit Gray code into three 8 bit words, converted these one at a time into decimal values, and concatenated the written version of the decimal values together.

But that wouldn’t have worked in plain old place-value binary, either. In fact, I’m hard-pressed to think of any sort of numerical encoding for which that would work… Possibly some sort of binary-coded decimal, but who actually uses that? One must wonder what exactly that programmer thought he was doing.

>One must wonder what exactly that programmer thought he was doing.

You’re absolutely right, of course. One must wonder. And, for a while, did. And asked even.

I remain hard pressed to find any thread of plausibility stretched however tenuously even partway through the process.

This is the sort of thing that makes me want to label it “Scary” rather than “Gray Code”.

BTW in posting on this question I realized I don’t know what to call the binary values that are not in Gray code. “Plain old place-value binary”? “Unreflected binary”? “Black and white code?”

I meant to say thanks for posting this question BTW. The Wikipedia article was a really interesting read. :slight_smile:

IBM added BCD support to the Power6. Used extensively in financial computing (mainframes, etc.).

>IBM added BCD support to the Power6. Used extensively in financial computing (mainframes, etc.).

Raft, what is this? Do you mean a hardware interface with IO lines for 1, 2, 4, 8, 10, 20, 40, 80, etc.? Or is this a software thing like the (new?) 128 bit format that C# provides as a data type aimed at financial data processing, which is a version of base-10 scientific notation where the exponent specifically means a power of ten (as opposed to ordinary floating point numbers, where the IEEE spec describes a power-of-two exponent even though the number is usually displayed in base 10)?

It’s actually a decimal floating point ALU for performing decimal coded operations in hardware. I assume they added it so the mainframe can eventually use the Power processor like the other servers IBM makes. The C# thing is the same thing in software instead of at the hardware level.

Mainframes and other business servers (midrange, etc.) are typically used in financial environments (banking, etc.) where numbers have traditionally been stored and operated on in some sort of packed decimal format, for accuracy reasons related to rounding (binary FP rounding produces different results than rounding with decimal FP or packed/zoned decimal).
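
The usual illustration of the rounding issue, sketched with Python’s decimal module rather than on real mainframe hardware:

```
from decimal import Decimal

# Binary floating point can't represent most decimal fractions exactly,
# so repeated operations drift in ways accountants won't accept:
print(0.1 + 0.2)                       # 0.30000000000000004
print(sum([0.1] * 10) == 1.0)          # False

# Decimal arithmetic (packed decimal in hardware, or a software decimal type)
# keeps the base-10 values exact:
print(Decimal("0.1") + Decimal("0.2"))             # 0.3
print(sum([Decimal("0.1")] * 10) == Decimal("1"))  # True
```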

BCD in mainframes goes back to the 1950s. I don’t know anything about the Power6 implementation.

Summarizing like a madman, each 4-bit nibble was constrained to the values 0-9 and the ALU was able to perform the math directly on values stored that way & output another value in the same format.

For IO, numbers could be stored natively in the format just described.

There were also conversion instructions to take ordinary character data like “1234”, as punched on 4 card columns or read from 4 bytes on disk, and convert it into a two byte, four nibble value that was really 0x1234 but as a BCD value was still interpreted as one thousand two hundred thirty-four when it came time to do math.

And after, say, you multiplied it by two, there were conversion instructions to take the result 0x2468 and convert that to the character string “2468” for writing to tape, disk, or punched card.
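
In rough Python terms, the pack and unpack steps looked like this (a sketch only; the real machine did each of these with a single instruction, and I’m ignoring the sign nibble here):

```
def pack_bcd(text):
    """Pack a decimal digit string like "1234" into BCD bytes (0x12, 0x34)."""
    if len(text) % 2:
        text = "0" + text            # pad to an even number of digits
    return bytes((int(hi) << 4) | int(lo)
                 for hi, lo in zip(text[::2], text[1::2]))

def unpack_bcd(data):
    """Unpack BCD bytes back into a decimal digit string."""
    return "".join(f"{b >> 4}{b & 0x0F}" for b in data)

assert pack_bcd("1234") == bytes([0x12, 0x34])
assert unpack_bcd(bytes([0x24, 0x68])) == "2468"
```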

The sign was handled pretty weirdly as well.

Aaah, the memories. Sometimes there’s a warm glow, other times a burning pain.

BCD was very important when I/O devices were dumb and CPU cycles were expensive, before the availability of cheap microprocessors. Many computers did not have hardware for integer multiply and divide instructions; it was done in (slow) software.