This is possibly a strange concept, but bear with me. I don’t know whether this has any purpose, or whether there’s any reason to do it (in fact, forming a mathematical basis for these functions would be tricky), but do people perform higher maths on computer logic functions? By that I mean, are there mathematicians who compute d/dx(e[sup]x[/sup] NAND x), or prove that XORs of complex numbers exist? Are there trigonometric identities like (a random one that doesn’t exist) sin^3(theta) AND cos(theta) = pi[sup]theta[/sup]?
My gut reaction is no, because these operations make little sense for a lot of these objects. But you can represent any number in binary in some form or another, just as you can represent it in hexadecimal, octal, or decimal, and it at least somewhat follows that if you can represent it that way, you can put it through a logic gate. And it’s not like mathematicians don’t prove things about completely arbitrary, out-there functions for the hell of it anyway. I’ve searched online, but despite my best efforts I’ve yet to find a paper, or even a graph of y = x XOR 2 for that matter (not that it’s hard to draw one up).
Propositional logic has a few things to say about logical connectives, but that’s not at all related to calculus or trigonometry. If it’s out there, I’ve never seen it. (For the record, nobody works with completely out there or arbitrary functions; everything is done because it’s related to something else.)
Well, assuming that by “put it through a logic gate” you mean something like “convert the number into a stream of bits (in the usual base-2 fashion), manipulate this stream bitwise with a logic gate, then convert the result back into a number”, one problem is that many numbers have more than one representation as a stream of bits (the 1 = 0.999… problem, or, in binary, the 1 = 0.11111… problem). That makes digit-wise logic-gate manipulation not strictly coherent: for example, should 1 AND X be the last bit of X before the binary point, or should it be everything in X after the binary point?
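To make the two-representations problem concrete, here’s a quick Python sketch (my own illustration, not anything from a paper): truncating the binary expansion 0.111… after n bits always leaves a gap of 2[sup]-n[/sup] below 1, which is exactly why the infinite expansion denotes the same number as 1.000….

```python
from fractions import Fraction

# In binary, 1 has two representations: 1.000... and 0.111....
# Truncating the second after n bits gives 1 - 2**-n, which
# converges to 1 but falls short at every finite n.
n = 29
approx = sum(Fraction(1, 2**k) for k in range(1, n + 1))
print(Fraction(1) - approx)  # 1/536870912, i.e. 2**-29
```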
[Which isn’t to say that nothing like this could work or be interesting. Just that you’d need to look at some system other than the standard real numbers manipulated according to their possibly ambiguous binary representations as above.]
There are two flavors of logical functions: the classical ones that operate on and return only TRUE or FALSE, and the bitwise ones that operate on the binary representations of integers. In neither case do they operate on the real numbers, so calculus techniques don’t really apply: you can’t take the derivative of something defined only on the integers. And you can’t really integrate it either; integrating over the integers is just summation.
I assume that by “computer logic functions” you mean logical operators like AND, OR, XOR, NOT, and the like. These concepts long predate computers. They have long been called Boolean operators, and a branch of logic called the “propositional calculus” deals with them in the context of arguments and proofs. There’s even a game called “WFF ‘n’ Proof”, in which WFF stands for “well-formed formula”; the point of the game is manipulating these things.
I think these do get included in various higher mathematics. They can be interesting when added or multiplied into other functions, interesting for their discontinuities, and so on.
The Heaviside function, a step function, is the integral of the Dirac delta function, an infinitely narrow but infinitely tall spike with unit area. These could certainly be represented by, or mingled with, Booleans.
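As a minimal sketch of that last point, the Heaviside step really is just a Boolean dressed up as a number: H(x) is the truth value of “x ≥ 0” coerced to a float (using the common convention H(0) = 1).

```python
# Heaviside step as a Boolean-valued function: the comparison
# x >= 0 yields True/False, which float() turns into 1.0/0.0.
def heaviside(x):
    return float(x >= 0)

print(heaviside(-2.0), heaviside(3.0))  # 0.0 1.0
```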
To point out the obvious: the people who implement functions like cos and exp in transistor logic must think about such things. But the various representations of floating-point numbers must be a stumbling block for generalization.
Yet what I’ve wondered is this: XOR is like addition, and AND is like multiplication, at least if you do it one bit at a time. Can this observation be scaled up to have a deeper meaning?
(Assuming we stick to non-negative integers.) The graph of y = x XOR 2 would be a sort of jagged line: for x = 0, 1, 2, …, it goes 2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9.
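That sequence takes one line to generate in Python:

```python
# y = x XOR 2 for the first twelve non-negative integers
ys = [x ^ 2 for x in range(12)]
print(ys)  # [2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9]
```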
Actually, you can probably extend binary to the reals in a natural way, using what is called “fixed-point representation.” It sounds fancy, but basically it amounts to exactly how we normally write our decimals (i.e., not scientific notation). In this case, the graph of y = x XOR 2 would take on a fuller shape: two lines stitched together, with the derivative blowing up to infinity at the regularly spaced jumps.
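Here’s one way to sketch that in Python (the 16-fractional-bit scale factor is an arbitrary choice of mine): treat a non-negative real as an integer scaled by 2[sup]16[/sup], XOR the integers, and scale back. On [0, 2) the result is x + 2 and on [2, 4) it is x − 2, which is the two-lines-stitched-together shape.

```python
# Fixed-point sketch: represent a non-negative real with 16 fractional
# bits, XOR the underlying integers, then scale back down.
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def fixed_xor(x, y):
    return (int(round(x * SCALE)) ^ int(round(y * SCALE))) / SCALE

# On [0, 2): x XOR 2 = x + 2.  On [2, 4): x XOR 2 = x - 2.
print(fixed_xor(0.5, 2.0), fixed_xor(2.5, 2.0))  # 2.5 0.5
```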
Sure; the set of words of a fixed length under bitwise operations forms a Boolean algebra, and every Boolean algebra can be represented as a Boolean ring (a structure with addition, subtraction, and multiplication operations satisfying most of the usual high-school identities (commutativity, associativity, distributivity, identities, and so on), but also such that x^2 = x for each x), by taking XOR as addition and AND as multiplication, as you note. Conversely, from any Boolean ring we can define a Boolean algebra. In fact, homomorphisms between Boolean algebras correspond to homomorphisms between the associated Boolean rings (i.e., this gives a natural equivalence of categories). I don’t know what exactly you mean by “deeper meaning”, but this is presumably a start.
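For anyone who wants to kick the tires, the ring axioms can be checked exhaustively on small words; here’s a sketch over 4-bit words with XOR as + and AND as ×.

```python
from itertools import product

# 4-bit words with XOR as addition and AND as multiplication
words = range(16)

# x * x = x (idempotence, the defining Boolean-ring identity)
assert all(x & x == x for x in words)
# x + x = 0 (characteristic 2; every element is its own negative)
assert all(x ^ x == 0 for x in words)
# x * (y + z) = x*y + x*z (distributivity)
assert all(x & (y ^ z) == (x & y) ^ (x & z)
           for x, y, z in product(words, repeat=3))
print("Boolean ring axioms hold on 4-bit words")
```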
Or it may be interpreted that in a mathematical system that includes the function f(x) = x AND 1, 0.999… and 1 will be distinct. This is not an unreasonable proposition; our past treatment of infinitesimals may have been incomplete.
I’m all for studying systems of numbers that include infinitesimals. Trying to make one solely by keeping 1 and 0.9999… distinct is extraordinarily ugly, though. What would be the number halfway in between 1 and 0.9999…, for example? If numbers are made to correspond exactly with decimal representations so that there is none, then why would 1 have the property that there is a largest number below it, while most other numbers (e.g., 0.9999… itself) would not? Would 10 * 0.9999… = 9.9999… = 9 + 0.9999… still be true? Would this still prove that (10 - 1) * 0.9999… = 9, and, if so, why wouldn’t 0.9999… = 9/(10 - 1) = 1?
In decimal representations, 0.9999… and 1 are the same number, even if you introduce infinitesimals. There’s no way around it without coming up with a completely new way of representing numbers.
To include infinitesimals would mean, unavoidably and justifiably, that equality takes on a relative meaning. That is, in fact, the essence of the idea. In our “normal plane” infinitesimals all equal each other, and so do infinities. Yet in a different plane, the infinities become distinct (and so do the infinitesimals), with those differences determining what value a function like infinity/infinity will give. In this plane, numbers that we called 1 might no longer equal each other (without additional information about their infinitesimal components).
The AND function might be a bridge between the two worlds.
There are many, many well-studied systems of arithmetic containing infinitesimals; this is no longer a difficult subject to formalize. There’s Robinson-style non-standard analysis using the compactness theorem and the transfer principle, Lawvere-style smooth infinitesimal analysis using specially constructed topoi with intuitionistic logic, working with the dual numbers, simply taking the germs of some arithmetically closed class of functions at infinity, or more generally quotienting functions on some domain by the Fréchet filter, and so forth.
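Of those, the dual numbers are the easiest to play with concretely. A minimal sketch (mine, just for flavor): adjoin an element eps with eps² = 0, and multiplication automatically carries derivatives along, so evaluating f(x) = x·x at 3 + eps yields f(3) + f′(3)·eps = 9 + 6·eps.

```python
# Minimal dual numbers: a + b*eps with eps**2 = 0, a simple ring
# containing a nonzero infinitesimal.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b  # real part, infinitesimal coefficient

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

# f(x) = x*x evaluated at 3 + eps gives f(3) + f'(3)*eps:
y = Dual(3.0, 1.0) * Dual(3.0, 1.0)
print(y.a, y.b)  # 9.0 6.0
```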
But no clean such system implements infinitesimals simply by trying to maintain a unique decimal representation for each real number. That’s extraordinarily ugly for the reasons I mentioned above, and not particularly well-motivated either.