Any math uses for signed zero (like: x^0 = 1, x^-0 = something else)?

So I was reading about signed zero. Apparently it’s more of a computer thing, or a science thing.

Mathematically it seems like negative zero would give exactly the same results as unsigned or positive zero, except maybe for one case I can think of.

x^0 is defined as 1. Is there a different value for x^(-0)?
Any other uses for it?

It is mostly an artifact of how numbers are represented. In general it is a pain in the butt, because there are two ways of representing zero, and if they are not handled correctly they can cause erroneous results. I would not be surprised, however, if in some cases people make good use of knowing whether a calculation reached zero from the positive or the negative side.

Well, a^(-b) is 1/(a^b), so x^(-0) would be 1/(x^0) which is still 1/1 or plain old 1.
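For what it’s worth, IEEE 754 floating point, which is where signed zero mostly lives, agrees with that. A minimal C sketch (assuming an IEEE 754 double and the standard library pow; the values in the comments are what the standard specifies):

```
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 7.0;
    /* pow(x, +0.0) and pow(x, -0.0) are both specified to return 1.0 */
    printf("pow(x, +0.0) = %g\n", pow(x, 0.0));   /* 1 */
    printf("pow(x, -0.0) = %g\n", pow(x, -0.0));  /* 1 */
    /* and the two zeros compare equal, even though their bit patterns differ */
    printf("-0.0 == 0.0  -> %d\n", -0.0 == 0.0);  /* 1 (true) */
    return 0;
}
```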

Exactly zero can’t have a sign, because the sign tells you which side of zero a number is on.

However there are cases where a function is not analytic at zero, meaning the value of the function is not “smooth” and continuous as the argument goes to zero. (“Smooth” has a precise mathematical definition, but you probably know what I mean intuitively.)

In these cases, it is entirely possible for the function to approach different values in the limit as the argument approaches zero, depending on whether you approach zero from the positive or the negative side. In that case, in mathematical notation, it is helpful to put a sign on the zero, usually after it, e.g. 0- or 0+, to indicate from which side of zero the limit is taken.

A simple example of this is the function 1/x, which approaches +infinity as x -> 0+, and -infinity as x -> 0-.
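This is the one place where the floating-point -0 genuinely earns its keep: dividing by a signed zero remembers which side of zero you were on. A small C sketch, assuming IEEE 754 semantics (non-trapping division by zero, the default on common platforms):

```
#include <stdio.h>

int main(void) {
    double pos_zero = 0.0;
    double neg_zero = -0.0;
    /* 1/x as x "arrives at" zero from the right vs. from the left */
    printf("1.0 / +0.0 = %g\n", 1.0 / pos_zero);  /* inf  */
    printf("1.0 / -0.0 = %g\n", 1.0 / neg_zero);  /* -inf */
    return 0;
}
```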

An introductory course on abstract algebra will give an exercise to show that the additive identity is unique in any ring, so if you want to have two distinct zero elements, you’d better be prepared to give up something you might like.
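The exercise is a one-liner once it’s written down; roughly (this works in any ring, or indeed any additive monoid):

```
% Suppose 0 and 0' are both additive identities. Then
\[
  0 \;=\; 0 + 0' \;=\; 0',
\]
% using that 0' is an identity in the first step and that 0 is in the second,
% so any two additive identities are the same element.
```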

This is the only thing that I could think of. Of course, that effect is not unique to the number 0. There are also 1- and 1+ for the function 1/(x-1).

On our old UNISYS computer we had an extra bit, the sign bit, on each byte. It’s been a while; maybe it was on each word. Anyway, this meant that you could have values of -0 and +0. If I remember correctly, we could use the sign to tell us something about the sign of the original number.

+50 - 50 = +0

-50 + 50 = -0

Something like that. It’s been a long time.
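That sounds like a one’s complement (or sign-magnitude) machine. Just to illustrate the representation (this is a hedged sketch simulating one’s complement in C with 16-bit words, not anything the UNISYS actually ran):

```
#include <stdint.h>
#include <stdio.h>

/* In one's complement, negation is a plain bitwise NOT of the word. */
static uint16_t ones_complement_negate(uint16_t w) { return (uint16_t)~w; }

int main(void) {
    uint16_t pos_zero = 0x0000;                            /* +0: all bits clear */
    uint16_t neg_zero = ones_complement_negate(pos_zero);  /* -0: all bits set   */
    printf("+0 pattern: 0x%04X\n", (unsigned)pos_zero);    /* 0x0000 */
    printf("-0 pattern: 0x%04X\n", (unsigned)neg_zero);    /* 0xFFFF */
    /* Two distinct bit patterns that both mean zero -- comparison logic has to
       treat them as equal, or the "erroneous results" mentioned above follow. */
    return 0;
}
```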

For any word size there is an even number of binary representations, but any finite set of numbers that contains zero and has the property that if x is representable, so is -x, contains an odd number of numbers. So you either have to have unused representations, representable numbers with non-representable inverses, or multiple representations of some number.
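Two’s complement takes the “representable numbers with non-representable inverses” option: one zero, but the most negative value has no representable negation. A quick C sketch with 8-bit integers (the wraparound on conversion is what typical two’s-complement implementations do; strictly it’s implementation-defined):

```
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 8 bits give 256 patterns; two's complement spends them on -128..127. */
    int8_t most_negative = INT8_MIN;           /* -128 */
    int8_t negated = (int8_t)(-most_negative); /* 128 doesn't fit; wraps to -128 */
    printf("INT8_MIN    = %d\n", most_negative);
    printf("-(INT8_MIN) = %d\n", negated);
    /* One zero, 127 positives, 128 negatives: the odd one out is -128. */
    return 0;
}
```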

  1. Since -0 = 0… x^0 = x^-0. There’s really no such thing as -0 anyway; 0 is considered to always be both positive AND negative.

  2. x^0 is not always “defined as 1”. 0^0 is undefined.
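(As an aside, floating-point libraries make a pragmatic choice here that mathematics declines to: C’s pow, following IEEE 754, returns 1 for 0^0 and for any base raised to a zero of either sign. A tiny check, assuming a standard C math library:)

```
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Mathematically 0^0 is left undefined; the C library defines it as 1. */
    printf("pow(0.0,  0.0) = %g\n", pow(0.0, 0.0));   /* 1 */
    printf("pow(0.0, -0.0) = %g\n", pow(0.0, -0.0));  /* 1 */
    return 0;
}
```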

See ultrafilter’s response above. It’s the most mathematically correct. Though, also as noted above, in a CS context a negative 0 can make sense.

There’s no reason you can’t have a -0 in mathematics, as long as you are willing to give up some of the other nice properties we’re used to. You’d simply have to lose some things in ‘everyday’ arithmetic we take for granted (though ‘everyday’ arithmetic probably involves fields, rather than rings).

I have used such artifacts in programming to indicate whether a value is a default/initialization or an entered/computed/measured result.
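A hedged sketch of that trick in C, assuming IEEE 754 doubles: store -0.0 as the “never set” default and use signbit() to tell it apart from a genuinely computed 0.0, since == alone can’t see the difference. (The variable names here are just made up for illustration.)

```
#include <math.h>
#include <stdio.h>

int main(void) {
    double reading = -0.0;              /* default: nothing entered yet */
    if (reading == 0.0 && signbit(reading))
        printf("still the default, no measurement yet\n");

    reading = 0.0;                      /* an actual measured/computed zero */
    if (reading == 0.0 && !signbit(reading))
        printf("a real value was stored, and it happens to be zero\n");
    return 0;
}
```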

Pasta?

Don’t tell me I have to give up pasta.

I’m talking about modern, western, formalized mathematics. Zero is always both positive and negative by definition; it’s not something we each get to have our own opinion on. You can ascribe your own rules to things if you wish, and they may be very logical ones, but I’m just talking about the agreed mathematics of the modern era.

Isn’t it neither positive nor negative, by the definitions of those terms?

This is precisely why integers are represented in two’s complement instead of one’s complement in binary - there’s only one way to represent 0, so you can’t make strange mistakes like (50 - 50 != -50 + 50).

Positive and negative are defined in relation to zero, not the other way around. So, zero is not both positive and negative by definition. It is neither.

“Modern, western, formalized mathematics” is exactly the context in which you can define a -0 in a meaningful way. I’m not going on about some ancient Greek math or some mystic Vedic stuff. It’s abstract algebra. Proving how nice a unique 0 is for rings and fields is typically given as a homework assignment.

It just means you have to operate outside the usual context of rings, which would include fields. Fields are great. They’re what we typically think about when we consider arithmetic. So, our normal arithmetic is basically operating on one particular example of a field. The normal “properties” of addition and multiplication (commutativity, associativity, identity elements, inverse elements, distributive property) are actually the properties of a field.

In an abstract algebra sense, what happens is that you can have more than one “zero”. But then you probably lose commutativity. And you possibly lose some other things, like uniqueness of identity elements and such. And you’d have to be careful about how you define your group (group in the algebra sense, not the vernacular).

It’s more a matter of convention than anything else, but in maths these days the positive and negative numbers are defined so that they do not include zero. When we wish, for example, to speak of the set consisting of the positive numbers and zero, we speak of the non-negative numbers.

What ultrafilter and Great Antibob are talking about are generalizations of algebraic structures, which also contain generalizations of the notion of zero. Even in these generalizations (even beyond rings; in fact, even in a semiring, -0 = 0), zero is essentially signless. Perhaps if we were to drop some of the requirements we could meaningfully define 0 such that -0 ≠ 0.

The obvious “problem” with having a “signed” zero is, what do you get when you add +0 and -0?

Let A = +0, B = -0, and C = A + B
If C = +0, then, since C - A = B, you get (+0) - (+0) = (-0)
If C = -0, then, since C - B = A, you get (-0) - (-0) = (+0)
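For what it’s worth, IEEE 754 settles this by fiat rather than by algebra: in the default rounding mode the sum of two zeros of opposite sign is +0 (you only get -0 when rounding toward negative infinity, or when both operands are -0). A small check, assuming IEEE 754 doubles:

```
#include <math.h>
#include <stdio.h>

int main(void) {
    double c = 0.0 + (-0.0);            /* round-to-nearest: defined to be +0.0 */
    printf("signbit(+0.0 + -0.0) = %d\n", signbit(c) != 0);            /* 0 */
    printf("signbit(-0.0 + -0.0) = %d\n", signbit(-0.0 + -0.0) != 0);  /* 1 */
    return 0;
}
```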

Two’s complement is used because the normal unsigned adder logic works for adding together positive and negative numbers. Having only one representation for zero is a happy additional benefit.
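A quick illustration of that point: with two’s complement, the same plain unsigned addition handles signed operands, the bits just get reinterpreted. A C sketch with 8-bit values:

```
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The 8-bit two's complement pattern for -50 is the unsigned value 256 - 50. */
    uint8_t a = 50;
    uint8_t b = (uint8_t)(256 - 50);   /* bit pattern of -50: 0xCE */

    uint8_t sum1 = (uint8_t)(a + b);   /* "50 + (-50)" via ordinary unsigned add */
    uint8_t sum2 = (uint8_t)(b + a);   /* "(-50) + 50"                           */

    /* Both come out as the single zero pattern 0x00 -- no +0 vs. -0 to worry about. */
    printf("0x%02X  0x%02X\n", (unsigned)sum1, (unsigned)sum2);  /* 0x00  0x00 */
    return 0;
}
```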

Well, my old math book said it was both positive and negative; maybe it’s changed… or my memory is faulty. Whatever the case, you can’t have a negative zero and, distinct from that, a positive zero.

Even if so, the generalizations aren’t part of conventional math notation, which is what the OP is talking about. You can say 0^+ and 0^- all you want in limits; that is the orthodox way of treating it. However, -0 and +0 aren’t recognized as different in conventional mathematics, even if the shorthand is used in certain problems. It would be like allowing 0/0, which can make sense in a lot of circumstances (once I even argued it should be allowed in certain situations), but it’s not formally regarded as possible. This isn’t a logical problem, it’s one of notation… it depends on how they’re defined.