Plus, think of the time savings in using just two operations rather than three!
I know the three-XOR version works; not sure if it works any other way.
But, aside from being something that should be avoided in the first place, this “a=a+b, b=a-b, a=a-b” fails in some cases.
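For concreteness, here is a minimal sketch of both tricks in Python (where integers are unbounded, so the arithmetic version happens to work; in a fixed-width language like VBA or C, `a + b` can overflow):

```python
a, b = 12345, 67890

# Three XORs: works for any two distinct integer variables.
a ^= b
b ^= a   # b now holds the original a
a ^= b   # a now holds the original b
assert (a, b) == (67890, 12345)

# Arithmetic version: fine with unbounded or wrapping integers,
# but a + b can overflow where integers are fixed-width.
a = a + b
b = a - b   # b now holds the original a
a = a - b   # a now holds the original b
print(a, b)  # 12345 67890
```

One caveat worth remembering: the XOR version silently zeroes the value if `a` and `b` are aliases for the same storage location, since `a ^= a` is 0.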
That having been said, here is how I would approach the problem:
A) Express the integers A and B in base 9
B) Generate and solve a random Sudoku board
C) For every i, replace the i-th digit of A by the value of the Sudoku board at column given by the i-th digit of A and row given by the i-th digit of B
D) For every i, replace the i-th digit of B by the column of the Sudoku board whose value at the row given by the i-th digit of B is the i-th digit of A
E) For every i, replace the i-th digit of A by the row of the Sudoku board whose value at the column given by the i-th digit of B is the i-th digit of A
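The steps above can be sketched in Python as follows. This is a hypothetical rendition, not the poster's actual code: it hardcodes the simplest 9x9 Latin square in place of a solved Sudoku board (the only property used is that every row and column contains each digit 0–8 exactly once), and treats "digits" as base-9 digits padded to a common length:

```python
# Hardcoded Latin square standing in for a solved Sudoku board
# (digits 0-8; indexed BOARD[row][col]).
BOARD = [[(r + c) % 9 for c in range(9)] for r in range(9)]

def digits_base9(n, width):
    """Little-endian base-9 digits of n, padded to `width` digits."""
    return [(n // 9**i) % 9 for i in range(width)]

def from_base9(ds):
    return sum(d * 9**i for i, d in enumerate(ds))

def sudoku_swap(A, B):
    width = 1
    while max(A, B) >= 9**width:
        width += 1
    a, b = digits_base9(A, width), digits_base9(B, width)
    for i in range(width):
        # C) value of the board at column a[i], row b[i]
        a[i] = BOARD[b[i]][a[i]]
        # D) column whose value at row b[i] is a[i] -> original A digit
        b[i] = BOARD[b[i]].index(a[i])
        # E) row whose value at column b[i] is a[i] -> original B digit
        a[i] = next(r for r in range(9) if BOARD[r][b[i]] == a[i])
    return from_base9(a), from_base9(b)

print(sudoku_swap(1234, 98765))  # (98765, 1234)
```

With this particular board, steps C through E reduce to digit-wise addition and subtraction mod 9; any other Latin square (or solved Sudoku) works the same way, since the row/column lookups are uniquely invertible.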
No, it doesn’t (at least for integer arithmetic with either unbounded integers or modular wraparound).
For what it’s worth, there is essentially just one form a solution can take, and that is “Pass from (a, b) to (F(a, b), b) to (F(a, b), a) to (b, a)”, for any binary operation F which is injective in each argument individually. [The only variant is the mirror image (a, b) -> (a, F(a, b)) -> …]
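That general pattern can be sketched in Python like so; the names `solve_left` and `solve_right` for the two one-sided inverses are invented here for illustration, not standard terminology:

```python
def swap_via(F, solve_left, solve_right, a, b):
    """Swap a and b using only F and its one-sided inverses.

    solve_left(c, b)  must return the x with F(x, b) == c
    solve_right(c, a) must return the y with F(a, y) == c
    """
    a = F(a, b)            # state is now (F(a0, b0), b0)
    b = solve_left(a, b)   # state is now (F(a0, b0), a0)
    a = solve_right(a, b)  # state is now (b0, a0)
    return a, b

# F = addition:
print(swap_via(lambda x, y: x + y,
               lambda c, y: c - y,
               lambda c, x: c - x, 3, 8))   # (8, 3)

# F = XOR:
print(swap_via(lambda x, y: x ^ y,
               lambda c, y: c ^ y,
               lambda c, x: c ^ x, 3, 8))   # (8, 3)
```

Injectivity in each argument is exactly what guarantees that `solve_left` and `solve_right` are well-defined.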
The Sudoku thing is an example of this (all that really matters about the Sudoku board is that the columns and rows have no repetition), as is addition or (bitwise) XOR (addition modulo 2).
But the fact that the guy messed it up is an example of what crappy code it is, as you said; it was just some memorized trick for the guy, not something clear enough to be readily understood, and accordingly he was liable to mismemorize it.
Would it be fair to classify the Sudoku board as a temporary variable?
Ah, I should have said “Generate the Sudoku board ahead of time and hardcode it into your code.”
Now try it for floating point numbers. Using the Boolean operations, even that works (once you get the actual code right :rolleyes: ) – demonstrating once again what “abominations” you can do with “leaky interfaces”, in this case, the ability to do bit-wise booleans with floats. But ordinary arithmetic, which “ought” to work fine with floats, is a problem: a=a+b, b=a-b, a=a-b is likely to fail badly.
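A quick Python illustration of just how badly the arithmetic version can fail with floats: when the magnitudes differ enough, the addition rounds one operand away entirely.

```python
a, b = 1e16, 1.0

# At this magnitude the spacing between adjacent doubles is 2.0,
# so adding or subtracting 1.0 rounds back to the same value.
a = a + b   # still 1e16: the 1.0 is absorbed by rounding
b = a - b   # still 1e16, not the original 1.0
a = a - b   # 0.0

print(a, b)  # 0.0 1e+16  -- we wanted (1.0, 1e16)
```

Both values are corrupted, with no exception raised anywhere, which is exactly the quiet kind of failure that makes this trick dangerous.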
You could even do it with arbitrary objects! Well, if object variables are really just pointers to the actual objects, and if your language lets you get at the actual pointers in those variables to handle them like ints or bit-wise (like you could do in C or C++).
That was a crappy programmer they hired in any case. You can too do it, even with strings! Assuming you have the len() and substr() functions:
(and assuming here that string positions begin with 1 not 0, and + is the concatenation operator)
A = A + B
B = substr( A, 1, len(A) - len(B) )
A = substr( A, len(B) + 1, len(A) - len(B) )
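Translated into Python slicing (0-based, so the offsets shift slightly from the 1-based `substr` above), the same three steps are:

```python
A, B = "hello", "world!"

A = A + B                  # "helloworld!"
B = A[:len(A) - len(B)]    # first len-of-original-A chars: the original A
A = A[len(B):]             # the rest: the original B

print(A, B)  # world! hello
```

Of course, string concatenation allocates a fresh string behind the scenes, so this only avoids a *named* temporary, not temporary storage.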
ETA: I can’t belieeeeeeeve how active this thread is! In the time I typed the above, six (6!) other new posts got posted!
I don’t believe VBA allows unbounded integers (no computer does; there’s always a physical limit) or modular wraparound.
But you do bring up a good point about integers; floating point probably has more conditions under which it wouldn’t work.
Indeed! I first learned this trick back in the day, when we did shit like that in assembly language all the time! Did you know that full-word integer operations took 5 machine cycles, whereas full-word boolean operations took only 3 machine cycles? (This was on a machine where a cycle was some small number of nanoseconds.) I mean, we really got obsessed with stuff like that way back then!
ETA: RaftPeople: “But, aside from being something that should be avoided in the first place, this “a=a+b, b=a-b, a=a-b” fails in some cases.”
Should work fine for ints, as far as I know – for machines with normal 2’s complement notation. For one’s complement machines, there could be a few exceptions. For floats, fergeddabouddit.
XOR works, my primary objections were:
- “a=a+b, b=a-b, a=a-b” is not a general solution in that it fails under some conditions
- Swapping numbers without using a temporary variable isn’t a good practice (except for some very rare instances these days): it’s not clear, it’s more likely to introduce errors, and it really gains nothing
Would the Ackermann function fit those criteria?
Because I’m having some really evil thoughts.
But that doesn’t really alter the fact that it is additional storage aiding in the swapping of the numbers.
Most languages and environments will throw an overflow exception in that case.
Back when I was writing assembler I wrapped values during calcs for a variety of reasons. But even then, if I was swapping vars, I used temp storage.
Well, the recursive nature might end up using a few more memory cells than you would save, and ultimately might take up a few more machine cycles (give or take a few million years). But, hey, go for it!
Maybe you could come up with a “Boolean version” of Ackermann’s function, thus saving a few machine cycles per operation.
Is it really possible? Given that you munge together your original A and B into one f(A, B) function, can you then extract the original A and B values out of that, using some inverse of Ackermann’s function?
Well, then you might as well complain that ALL code is additional storage aiding in the swapping of the numbers, since space is used up in storing the code…
It’s true that there are some “tricks” for swapping two numbers which really just use the instruction pointer as additional storage, but that’s not what’s going on here. One can imagine a theoretical architecture on which the relevant Sudoku-board operations are primitive operations of the machine, requiring no extra storage to compute (indeed, XOR is just the case using the particular 2 by 2 Pseudoku board
0 1
1 0
and + is just the case using the particular omega by omega Pseudoku board
0 1 2 3 ...
1 2 3 4 ...
2 3 4 5 ...
3 4 5 6 ...
...
)
People use the term “Ackermann function” to describe various different things; the original idea [independently invented by many a schoolchild] is a 3-ary function, A(m, n, p), such that A(m, n, 0) = m + n, A(m, n, 1) = m * n, A(m, n, 2) = m ^ n, A(m, n, 3) = m^(m^(m^…)) [a tower of n m’s], and so on in continuation of that pattern. (I am lying here slightly, but only because the original Ackermann function was slightly stupid… it actually used A(m, n, 3) = a tower of n - 1 m’s, and so on in continuation of that detoured pattern.)
There are various ways of collapsing this into a two-argument function by some kind of argument duplication or specialization. These are generally injective in each argument, and thus would indeed work.
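For concreteness, a sketch of that 3-ary function in Python, using the non-“detoured” convention where A(m, n, 3) is a tower of n m’s:

```python
def hyper(m, n, p):
    """A(m, n, p): p = 0 is addition, 1 multiplication, 2 exponentiation, ..."""
    if p == 0:
        return m + n
    if p == 1:
        return m * n
    if n == 0:
        return 1          # empty product of exponents / empty tower
    # Each level is n-fold iteration of the level below it.
    return hyper(m, hyper(m, n - 1, p), p - 1)

print(hyper(2, 3, 0))  # 5
print(hyper(2, 3, 1))  # 6
print(hyper(2, 3, 2))  # 8
print(hyper(2, 3, 3))  # 16  (2^(2^2))
```

Note that injectivity in each argument can fail at the edges (e.g. m * 0 = 0 for every m), so some care is needed in choosing which two-argument specialization to use for the swap.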
Two points:
- Agreed it’s a proxy XOR (I was coming to that conclusion after I posted). The board is part of the function, which means it’s not really a temp variable.
- You may have a hard time avoiding a temp var in steps A) through E) in VBA. I’m curious to see how many of those steps you could do without a temp var (I assume “temp var” to mean any variable other than A and B, but maybe that is too strict an interpretation of the poor interview question).
As an aside, I don’t think there’s any way to do a swap on general strings without some auxiliary space, since general strings can be different sizes. At the very least, I think you’ll end up having to pad the shorter string to the same length as the longer one, at some point.
Probably not. But specialized instructions for string operations are becoming more common and advanced, and if there’s any real need for a string swap, it would be done without the use of externally exposed storage locations. Memory management functions to provide sufficient space to accommodate different string lengths would probably be handled separately.
There isn’t any real need for these swap algorithms for anything that would be coded in a high-level language. At a low level, the simple memory model isn’t really the same. Many CPUs had exchange instructions. The Intel 8080 had an XTHL instruction to exchange the value on top of the stack with the HL register pair, and it was far from the earliest exchange instruction. But unaddressable internal registers acted as temporary storage in these cases anyway.
Getting back to earlier subjects, if you want to test for people who might be good programmers, you should ask if the topic of swapping values without intermediate storage is interesting. If your answer is yes, you might be a programmer.
Fair enough. I was thinking in slightly more general terms, where B might be an expression (like a function call), and it’s important to emphasize that the expression needs to be evaluated before the assignment can take place. But in this case it makes no difference.
I hadn’t seen that article before, but it’s pretty good. Abstractions are a great thing, and enable just about every modern computing technology. But as you say–let down your guard and they bite you in the ass. They are just a facade, and some more rickety than others.
In my experience, the most universal way in which abstractions fail is in terms of performance. Things seem to be going well–until they don’t. Your computer seems to have infinite memory, except that eventually you hit swap and run 100,000 times slower. Your virtual machine seems to be exactly like a normal machine, except that when another virtual machine on the same physical machine is running, you get uneven performance. Your sort algorithm has a perfect O(n log n) time bound, until you feed it a reverse-sorted array and find that it takes O(n²). And so on.
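A toy illustration of that last leak, using a naive first-element-pivot quicksort (the classic way to hit the quadratic worst case on already-ordered input):

```python
def quicksort(xs, stats):
    """Naive quicksort with the first element as pivot; counts comparisons."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    stats["cmp"] += len(rest)   # one pivot comparison per remaining element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, stats) + [pivot] + quicksort(right, stats)

n = 100
stats = {"cmp": 0}
result = quicksort(list(range(n, 0, -1)), stats)  # reverse-sorted input

print(stats["cmp"])  # 4950 = n*(n-1)/2: every partition is maximally lopsided
```

On random input the same code averages O(n log n); the abstraction “quicksort is fast” holds right up until the input distribution leaks through.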
Abstractions make coding more efficient, since you get to hide most of the redundant stuff. But they don’t allow programmers to get stupider, at least not without sacrificing something else, because when the abstraction inevitably leaks, you need to know both the behavior of the abstraction layer and the underlying implementation.
Unless, of course, you have no access to the underlying implementation, and in that case you just throw up your hands and live with the bug. Of course, this is a major component of why software often sucks so much.
As I was mentioning before, security researchers have some of the most fun in this area. One of my favorite techniques has a goal of extracting an encryption key from a hardened, sealed chip. In principle, these chips are perfect black boxes: you feed them a string to encrypt, you receive the encrypted data as output, and there is no way to inspect the inner workings of the chip.
The method is to feed in particular bit strings, and measure the power usage of the chip under different conditions. Because some bit patterns require more power than others (say, 1 may require more juice than 0), the researchers can perform a differential analysis and eventually reverse-engineer the encryption key. The chip wasn’t a perfect black box after all.
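A heavily idealized Python sketch of the idea: assume, purely for illustration, that the chip’s power draw on each input equals the Hamming weight of the input XORed with a secret key byte. An attacker who only sees inputs and power readings can still recover the key:

```python
SECRET = 0x5A  # hidden inside the "chip"; never output directly

def power_reading(plaintext):
    """Idealized, noise-free leak: power ~ Hamming weight of key ^ input."""
    return bin(SECRET ^ plaintext).count("1")

# Attacker: feed in every byte, record the power, and keep the key
# guesses whose predicted power matches the observed trace everywhere.
inputs = list(range(256))
trace = [power_reading(p) for p in inputs]

candidates = [g for g in range(256)
              if all(bin(g ^ p).count("1") == t for p, t in zip(inputs, trace))]
print(candidates)  # [90] -- i.e. 0x5A: the key leaked through power alone
```

Real differential power analysis has to average over thousands of noisy traces and use statistical correlation rather than exact matching, but the principle is the same: the power rail is part of the interface, whether the designers meant it to be or not.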
Of course, chipmakers fight back and use various techniques to thwart the researchers, so eventually they’ll solve the problem (if they haven’t already). The researchers will have to find another leak in the abstraction.