In programming, is it good style to use the same name for function arguments and parameters?

Sure, I was just simplifying to make a point.

It may be bad, but it is very common practice for things that are arbitrary ids to just use ints or some such (e.g. uint64_t). Using individual types is better, but the problems of not doing that aren’t so frequent or high-profile that teams enforce the practice.

Our IDs are Guids. They’re all Guids. Because they’re IDs, not objects, and IDs all come straight out of the database with type Guid. I admit this does open up the possibility that one might pass in a roadway ID where they meant to pass in a car ID, but this is the sort of thing you’ll notice pretty quickly since there’s no possible way it’d even pretend to work. We’d never bother making an entire object to wrap an ID, because passing variables representing freeways into functions that operate on cars just isn’t something people try to do often enough to be worth taking extra steps to prevent.

And while I can understand accidentally using one ID in place of another, adding ids together? That’s not a dumb error, that’s a “you have to be kidding me” error. There is never any reason to do math on any variable named ‘id’, and if there is a reason I have serious questions about your system design.

I like this, but do most of the actual work in SI, so I only append units when they’re non-SI units. This has the pleasing effect of making some code read like fundamental laws (if it’s an algebraic language of course, not stack based).
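
Something like this, to illustrate (made-up names, just a sketch):
double mass = 1500.0;                // kg (SI, so no suffix)
double acceleration = 9.80665;       // m/s^2 (SI, no suffix; standard gravity)
double force = mass * acceleration;  // reads like F = m * a
double altitude_ft = 35000.0;        // feet are non-SI, so the unit goes in the name
double pressure_psi = 14.7;          // likewise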

Another thing I like to do is keep all the available digits in hardcoded parameters, even when many of them go beyond the precision used elsewhere in the system. Those surplus digits wind up being a fingerprint that, more than once, has helped me figure out later where I got the value from.
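
For example (hypothetical constants, not from any real system):
const double EARTH_RADIUS_M = 6378137.0;           // WGS84 semi-major axis, exact
const float  DEG_TO_RAD = 0.017453292519943295f;   // pi/180 with full double digits,
                                                   // more than a float can hold; the
                                                   // surplus digits say where it came from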

If you’re using the Windows definition, then a GUID is a struct containing 128 bits of ID data. That actually solves half the problem; it prevents you from doing anything dumb like accidentally applying arithmetic ops or assigning one to an int. You have to go out of your way to misuse them.
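
For reference, the Windows definition (from guiddef.h) looks roughly like this:
typedef struct _GUID {
    unsigned long  Data1;
    unsigned short Data2;
    unsigned short Data3;
    unsigned char  Data4[8];
} GUID;
Since it’s a struct with no conversions defined, things like myGuid + 1 or int x = myGuid simply won’t compile.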

Typos happen. Suppose your ID is a plain int and you reserve 0 for the null case. Then, this is a perfectly sensible thing to write:
if (vehicleId && roadwayId) { // do stuff }

But that could lead to typos like the following:
if (vehicleId & roadwayId) { // do stuff }

Oops: now you’ve done a bitwise AND instead of a logical AND. And it’s the kind of bug that may not be apparent immediately, since it sort of works much of the time.

Using the standard Windows GUID struct prevents this particular bug. You could still run into issues where you mix up IDs, but maybe that’s not too likely in your case.

You’re sort of implying here that making an object is a heavyweight operation. At least in C/C++, it isn’t. Structs and classes are exactly the same thing internally (the only difference is in default access permissions). If you have an int that you want to prevent int-like operations on, there is no overhead at all in just wrapping it with a struct.
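
A minimal sketch of what I mean (hypothetical names, and assuming the raw key is a 64-bit int):
#include <cstdint>

struct VehicleId {
    explicit VehicleId(std::uint64_t v) : value(v) {}
    std::uint64_t value;   // same size and codegen as a bare uint64_t
};

struct RoadwayId {
    explicit RoadwayId(std::uint64_t v) : value(v) {}
    std::uint64_t value;
};

void Repaint(VehicleId id);   // Repaint(someRoadwayId) no longer compiles,
                              // and neither does someVehicleId + someRoadwayId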

At work, we had a need to deal with 64-bit addresses on a different device. We had just been using a 64-bit unsigned int, but it allowed too much freedom. I wrapped it in a class and overloaded only those operators that make sense. For instance, you can add an offset to an address, but you can’t add two addresses together. You can subtract two addresses, but you get an offset back, not an address. And so on.

Worked well and uncovered several bugs in the process of writing it. Good design often constrains what you can do, limiting the available operations to only those that make sense. As for performance and memory usage, it was absolutely identical to the 64-bit unsigned int. All the math boiled down to a single assembly instruction, just as before.
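
A rough sketch of the shape of it (names invented, not the actual code):
#include <cstdint>

struct Offset { std::int64_t bytes; };

class DeviceAddress {
public:
    explicit DeviceAddress(std::uint64_t raw) : raw_(raw) {}

    // address + offset -> address
    DeviceAddress operator+(Offset o) const {
        return DeviceAddress(raw_ + static_cast<std::uint64_t>(o.bytes));
    }

    // address - address -> offset, not an address
    Offset operator-(DeviceAddress rhs) const {
        return Offset{ static_cast<std::int64_t>(raw_ - rhs.raw_) };
    }

    std::uint64_t raw() const { return raw_; }

private:
    std::uint64_t raw_;   // no operator+(DeviceAddress), so addr1 + addr2 won't compile
};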

Yeah, pretty much. Though I lied a bit; we do have a few tables that use integers as keys. Still haven’t had any problem with them.

In my experience we tend to gate for bad IDs pretty quickly, so before we get to the point where you’d be writing that, you’ve already bailed. (Basically we pull the data for a thing and then immediately check whether we got good data back, because if you didn’t, you’re usually done.)

Which is not to say that typos are impossible, just that the risk here seems quite low, even when you have integer keys.

It’s not heavyweight from a processing standpoint, it just leads to additional code, either to overload the operation or to extract the data. Maybe not much additional code, but again, the risk here also seems very low. Low cost, lower benefit. We don’t bother.

If your shop was having problems with gross key abuse, then by all means, use a little extra code to mitigate the problem. Each shop is different, and each group of programmers is different. Me personally, I see the mere naming of it as an ID as a pretty good deterrent to misuse, because it’s a frikking key, don’t misuse it. And, of course, all the Guids we have around make this even less of an issue for us. But, as noted, mileage may vary.

(It also should be noted that none of our keys are addresses, in the sense you’re talking about. We’re a C++ shop mostly, at least the stuff I’m working on. So you might do math on a memory address, but not on a key. So there are virtually no circumstances where you get even close to a type that could cause a problem - you pretty much listed the only example, which can be mitigated by checking for bad data early and often.)

Blast, brain fart. I meant to say that we’re a C# shop, not a C++ shop. We left direct memory manipulation behind a while ago.

As an aside, one convention I always use is to separate logical operations with spaces, and bitwise or arithmetic operations without:

if (x == y)
vs
x=y

Res=a&b;
vs
Res=(a && b);

I think I first came across this style in Code Complete from Microsoft Press. It has helped me avoid a lot of typos.

C# isn’t my primary language, but IIRC it would prevent this error anyhow by not treating ints as bools. You would have to write “(a & b) != 0”, which isn’t likely to happen by accident. Well, if not C# then at least some languages impose this constraint.

Yes, in our case the risk is much higher and can lead to a hardware fault. Really, the idea isn’t much different from how pointers already work: you can’t add two pointers together. It’s just that these pointers end up on a different set of hardware with a totally different architecture. We can’t use the built-in pointer stuff, so we have to write our own.

I probably should have been a bit more gentle in exclaiming “bad design!” I do think it’s important to, as much as is reasonable, architect things such that it’s impossible to make certain errors, but there are limits. Simplicity is a virtue as well.

I can think of one context. In some online games, there exist ways of exploiting bugs for “duping” (creating copies of) valuable in-game items. This is obviously something that the game companies don’t like, as it messes with the in-game economy. One way to combat it is by giving every item a unique “fingerprint”, a bitstring long enough that it’s unlikely that any two would ever be identical by chance, and then every so often running a process that compares the fingerprints of all of the items in the game economy, deleting any matches (which are presumed to be illicit dupes).

But in many games, there exist ways of combining two or more items to create some other item. For instance, in Diablo II, there exist valuable items called “Gul runes” and “Vex runes”, and it’s possible to combine two Guls to make one Vex. How should the fingerprint of that new Vex be established? If it’s randomly-assigned, then a duper could take a pair of duped Gul runes (of different parentage), and combine them into a new, “clean” Vex rune (whose fingerprint won’t match any other item). So you want the new fingerprint to be generated deterministically in some way from the fingerprints of the ingredients. It could just be a concatenation, but in some games, very long such crafting chains can exist (there’s something like 40 different runes in Diablo II that can be combined into each other), which would quickly lead to fingerprints long enough to be impractical. So you need some other sort of deterministic combination, which is likely to involve some sort of arithmetic on the fingerprints.
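
A hedged sketch of one way such a deterministic combination could look (purely illustrative, not how any actual game does it): sort the ingredient fingerprints so the result is order-independent, then fold them into a fixed-width hash seeded by the recipe:
#include <algorithm>
#include <cstdint>
#include <vector>

std::uint64_t Mix(std::uint64_t h, std::uint64_t x) {
    // splitmix64-style mixing step
    h ^= x + 0x9E3779B97F4A7C15ULL + (h << 6) + (h >> 2);
    h ^= h >> 30; h *= 0xBF58476D1CE4E5B9ULL;
    h ^= h >> 27; h *= 0x94D049BB133111EBULL;
    return h ^ (h >> 31);
}

std::uint64_t CombineFingerprints(std::uint64_t recipeId,
                                  std::vector<std::uint64_t> ingredients) {
    std::sort(ingredients.begin(), ingredients.end());  // order-independent
    std::uint64_t fp = recipeId;                         // same recipe, same seed
    for (std::uint64_t f : ingredients) fp = Mix(fp, f);
    return fp;  // stays 64 bits no matter how long the crafting chain gets
}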

I used a similar technique on a project a while back. I wanted to determine if a given program had internal segments that produced identical intermediate values.

The program was not expressed as a nice graph; it was just a sequence of instructions. And besides, even if you do have a graph, it’s not that easy to determine if two graphs are identical.

So instead of that, I generated a fingerprint for the value each instruction produced. If the instruction was “ADD dest, A, B”, I would “mix” the fingerprints of the two source args A and B with an extra fingerprint based on the operation. The mixing wasn’t a plain arithmetic op, but it was composed of them (mostly bitshifts and XORs). In addition, I would sort the fingerprints for A and B, so that the result of “ADD dest, A, B” would be the same as “ADD dest, B, A”. This gave a nice deterministic result regardless of ordering, the actual locations of A and B, and so on. In some cases, I could just copy the values, such as if the op was “ADD dest, A, 0” or “MUL dest, A, 1”.

Once the fingerprints after each instruction were determined, it was easy to match them up with others at different points in the program, and store the duplicate values in temporaries for reuse.
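
A simplified sketch of the mixing step (types and names invented; the real thing was more involved):
#include <cstdint>
#include <utility>

// Fingerprint of the value produced by a commutative op like "ADD dest, A, B".
std::uint64_t InstructionFingerprint(std::uint64_t opcode,
                                     std::uint64_t fpA, std::uint64_t fpB) {
    if (fpA > fpB) std::swap(fpA, fpB);                // sort so ADD d,A,B == ADD d,B,A
    std::uint64_t h = opcode * 0x9E3779B97F4A7C15ULL;  // op-specific seed
    h ^= fpA + (h << 13) + (h >> 7);                   // shifts and xors, not plain arithmetic
    h ^= fpB + (h << 13) + (h >> 7);
    return h;
}
// Matching equal fingerprints at different points in the program is then just a
// hash-map lookup from fingerprint to the first instruction that produced it.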

Worked well for a time, but eventually the programs got too complex with branching and stuff and the technique broke down.

Your game example is pretty much the same thing, where you have some inputs that are combined in a particular way, and then the results combined further, etc.