Potentially stupid question about signed vs unsigned variables

So I’m going through my entire C++ book again… 'cause I can. When I got to the variable section early on, I decided to do a size test of several of the built-in types, just using the usual sizeof operator and printing the results to the console.

So I get that the size of an int is 4 bytes, a short is 2, a char is 1, etc. I then decide: hey, why not add in unsigned versions (of ALL of them) for kicks?
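
For reference, a bare-bones version of the kind of test I ran (my own reconstruction, not the exact code):

#include <iostream>

int main() {
    std::cout << "int:    " << sizeof(int)    << '\n';  // 4 on my machine
    std::cout << "short:  " << sizeof(short)  << '\n';  // 2
    std::cout << "char:   " << sizeof(char)   << '\n';  // 1
    std::cout << "double: " << sizeof(double) << '\n';  // 8
    std::cout << "bool:   " << sizeof(bool)   << '\n';  // 1
}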

int is 4 both ways, fair enough. I go this way down through char (I can only vaguely understand a signed char, simply because I figure “1” counts as a char so -1 might as well, and an ASCII character is only 7 bits, leaving one bit open to be a sign bit). But then I get something odd:

sizeof(double) is 8 bytes, but sizeof(unsigned double) is only 4. Any reason for this? Can it not handle numbers large enough to take up 8 bytes when they’re fully positive, or what? Even weirder was this:

sizeof(bool) 1 byte.
sizeof(unsigned bool) 4 bytes.

Never mind how you can sign a boolean in the first place (okay, on second thought, feel free to mind that; does a -1 bool make something poof out of existence or something?), but 4 bytes for something whose sole purpose is telling you the state of 1 or 0? That is… odd, and I can’t muster an explanation.

If it helps I have a 64-bit OS (Vista Ultimate, to be precise), and I’m using Microsoft’s Visual Studio to compile/link.

I’m fairly certain that there’s technically no such type as “unsigned double” or “unsigned bool” in C++. (I know that g++ doesn’t compile either declaration.)

My guess, and this is just a guess, is that your compiler is seeing “unsigned double”, saying “huh?”, and instead of raising an error is somehow defaulting to “unsigned int”, which is four bytes on your machine.

That would make sense; I knew they didn’t exist, but it still struck me as odd the first go-round. I expected a compiler error or something, but it just prints “4 bytes” and I go “WHAT!?”

Orbifold is correct. If you’re using Microsoft’s C++ compiler, you should be getting a warning:

warning C4076: ‘unsigned’ : can not be used with type ‘double’

Same goes for bool. Aside from issuing the warning, it appears to interpret the type as an unsigned int.
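
In other words, a line like this (an illustrative sketch, not the OP’s exact code) compiles under MSVC with just that warning:

#include <iostream>

int main() {
    // MSVC emits warning C4076 on the next line and quietly treats the
    // type as unsigned int, so this prints 4 instead of failing to compile.
    std::cout << sizeof(unsigned double) << '\n';
}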

I completely forgot to check the warnings. :smack:

I guess that’s 30 lashes for me, even if I was just doing it for fun…

Thanks, hey at least the thread title had a disclaimer about potential stupidity!

“signed char” makes perfect sense in C, once you realize that “char” isn’t primarily a textual type, but a numeric type that just happens to be capable of representing text. After all, in C, it’s perfectly legitimate to write “char c = ‘A’ + ‘B’; if (c > ‘X’) …”, and so on, even though this makes little conceptual sense if ‘A’, ‘B’, and ‘X’ are just letters.

The “char” type is just a numeric type of a certain bit length, with some functions interpreting particular bit patterns of that length as certain glyphs (à la ASCII), and with some corresponding convenient shorthand for expressing those bit patterns. Being just a numeric type of a certain bit length, signed and unsigned interpretation is possible in the standard manner.
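
A quick sketch of that “same bits, two interpretations” point (assuming the usual 8-bit char):

#include <iostream>

int main() {
    signed char   sc = -1;  // bit pattern 11111111, read as -1
    unsigned char uc = sc;  // same bit pattern, read as 255
    std::cout << (int)sc << ' ' << (int)uc << '\n';  // prints: -1 255
}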

Given the assumption (true in ASCII) that the characters ‘A’-‘Z’ and ‘a’-‘z’ are in order and contiguous, doing mathematical operations on them makes text comparisons and conversions easier. Like


if (c >= 'a' && c <= 'z')
    c += 'A' - 'a';


to convert text to all caps.
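
Fleshed out into a complete little program (ASCII assumed; in real code you’d probably just call std::toupper):

#include <iostream>
#include <string>

int main() {
    std::string s = "Hello, World!";
    for (char& c : s)              // walk each character
        if (c >= 'a' && c <= 'z')  // lowercase letters only
            c += 'A' - 'a';        // shift into the uppercase range
    std::cout << s << '\n';        // prints: HELLO, WORLD!
}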

For what it’s worth, GCC flags this sort of declaration as an error (and won’t compile the code). That seems like the more rational behavior to me.
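
That is, given declarations like these (illustrative only):

unsigned double d = 0.0;   // g++: hard error; MSVC: warning C4076, type becomes unsigned int
unsigned bool   b = false; // likewise an error under g++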

The reason ‘unsigned double’ isn’t cromulent is that floating-point arithmetic is done in specialized hardware* on all modern desktop, server, and mainframe computers, and floating-point hardware (as per the IEEE 754 standard) always allocates one bit to hold the sign of the number (0 for positive, 1 for negative). This allows somewhat odd constructions like positive and negative infinity (useful for positive and negative overflow) and the chimerical negative zero (probably useful for something, but numerical analysis isn’t my thing).

*(Back in the 1970s and 1980s, before floating-point hardware was common on low- and mid-range computers, floating point math was done slowly by special software shifting bits around and doing integer math using the much cheaper integer hardware. Did I mention it was slow?)
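
If you want to poke at that sign bit yourself, here’s a quick sketch (assuming IEEE 754 doubles, which std::numeric_limits<double>::is_iec559 will confirm, and C++11’s std::signbit):

#include <cmath>
#include <iostream>

int main() {
    double pz = 0.0, nz = -0.0;             // same value, opposite sign bit
    std::cout << (pz == nz) << '\n';        // 1: +0.0 and -0.0 compare equal
    std::cout << std::signbit(nz) << '\n';  // 1: but -0.0 has its sign bit set
    std::cout << 1.0 / pz << ' '            // inf
              << 1.0 / nz << '\n';          // -inf
}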

Everything you say there is true, and interesting, but it’s also true that even writing ‘signed double’ is an error. I believe it violates the C standard. Basically, you can’t combine the ‘signed’ or ‘unsigned’ specifiers with a base type that isn’t integral.

That only treats ASCII characters as a torsor over some numeric space (i.e., you can take the difference of two characters to obtain an interval, and add intervals to a character to obtain new characters). It wouldn’t explain how ‘A’ + ‘B’ would make any sense (i.e., what adding two characters directly would mean). Nor, for that matter, what ‘B’*‘B’ - 4*‘A’*‘C’ would mean, and all the other fun you can have.
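
For what it’s worth, the compiler will cheerfully evaluate all of those anyway, because the chars get promoted to int first (ASCII values assumed):

#include <iostream>

int main() {
    std::cout << 'A' + 'B' << '\n';            // 131: just 65 + 66 once promoted to int
    std::cout << 'B'*'B' - 4*'A'*'C' << '\n';  // -13064: the "discriminant" of B, A, C
}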

It’s used for basically the same purpose as ±infinity. If you have an underflow, it’s sometimes useful to know whether the number approached zero from above or below.
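
A tiny sketch of that underflow case (IEEE 754 doubles assumed again):

#include <cmath>
#include <iostream>

int main() {
    double u = -1e-300 * 1e-300;           // true value -1e-600 underflows to -0.0
    std::cout << u << '\n';                // prints: -0
    std::cout << std::signbit(u) << '\n';  // 1: the direction of approach survived
}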

Hmmm. If I divided the BEL character by 10, would I get a decibel?