Why doesn't my computer have a button for cents?

All the typewriters I’ve ever had had one, but the computer doesn’t have a button for cents? It has one for dollars, and when I need to write something using the cents sign I just use a lowercase c and draw a line through it later, after it’s printed. It’s really annoying.

I too have wondered this. Also, the period used to be in both upper and lower case, didn’t it?

Is Alt+0162 close enough?

¢

The answer I’ve been given is that, primarily, computer keyboards were designed by computer programmers. The caret was something they used much more often than the cents sign, so they replaced the ¢ with a ^.

Both the comma and the period used to appear in both upper and lower case. However, the programmers needed > and < (if only as the “greater than” and “less than” signs) and decided to remove the redundancy. Since most computer Caps Locks leave you with the lower of the two characters anyway, the upper-case versions were no longer needed.

Yeah, and what about degrees, y’know, the little circle at the top right-hand side of a number?

Degrees ° is ALT+(keypad)176 or ALT+(keypad)186. 186 is a little lower than 176. (Here’s 186 vs. 176: º°) I suspect one is the degree symbol and the other is “to the zero power”, but I don’t know which is which.

Boy, wouldn’t it just be easier to have a cents and a degrees button?

Easier for whom? The very rare person who needs those symbols on a regular basis or the manufacturer of keyboards who is trying to make his product as inexpensively as possible in order to stay in business?

Haj

I sue Alt+(keypad)155 for the ¢ sign.

Just my 2¢.
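If I remember right, the reason both codes land on the same character is that they go through different code pages: Alt codes typed without a leading zero use the old OEM/DOS code page (CP437 on US systems), while codes with a leading zero use the Windows ANSI code page (CP1252). A quick Python sketch, just to show where 155 and 0162 come from (assuming US-English code pages):

```
# The same cent sign looked up in the two code pages the two Alt codes use
# (this assumes the US-English code pages, CP437 and CP1252).
print('¢'.encode('cp437'))    # b'\x9b' -> 155, i.e. Alt+155
print('¢'.encode('cp1252'))   # b'\xa2' -> 162, i.e. Alt+0162
```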

If you find yourself using these symbols often, you can assign them to specific keys on your own through the Symbol menu in Word (these symbols are in there too, by the way, as well as in many other word processors). I think I currently have the Insert key set to the Greek “mu” and Page Up set to “alpha”. I reset them as I need them and scribble the assignments down on a piece of paper (once I even stuck little stickers on my keys telling me what was what; I had a LOT of preset keys).

So just figure out what keys you don’t really use (find out what they do in your word processor - these settings are specific to the program, not the computer as a whole) and change what they do. It isn’t that hard. I’d post more detailed instructions, but I don’t have Word right now.

Because it wasn’t one of the codes in the original 128-character ASCII set, which sounds perilously like “well, just because it doesn’t”. Here’s a piece on the evolution of the ASCII character set:

http://www.wps.com/texts/codes/

I suspect that background might bore most of the readers here to tears. Suffice it to say that character set standardization was actually an early battle ground in the computer industry, as various factions fought to get their characters included in the space allowed by various encoding schemes.

As for designing the character set for programmers’ use, it was actually something of a two-way street. You will see in that history that ASCII deliberately retained characters used in COBOL to reach its present form, recognizing the significance of COBOL at that point (in the early 1960s).

But programming languages and other syntax continued to evolve long after the character sets (at least the non-extended portions of them) had been standardized. Many syntax conventions come from people making the best use they could of the characters that happened to be available on the keyboard. Ken Iverson tried to redesign the character set to fit his language (APL), but he never really got much industry support for it.

Maybe if you had a computer that spoke EBCDIC (rather than ASCII), your keyboard would have a cent sign.

I know that when I worked with IBM mainframes back in the Eighties, the 3278 terminals I used had cent signs on the keyboard.

No idea what mainframers use for keyboards these days, though (I sort of assume they all use terminal emulation from PCs, but I could certainly be wrong).

According to the character map on the win2K system at my school, alt-176 is the degree sign, while alt-186 is the “masculine ordinal indicator”, whatever that means.
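If anyone wants to double-check without hunting through Character Map, a couple of lines of Python (just a sketch) will print the official names of those code points:

```
import unicodedata

# Official Unicode names for the code points mentioned in this thread:
# 162 (cent), 176 vs. 186 (degree vs. masculine ordinal), and 170 (the
# feminine ordinal, for good measure).
for cp in (162, 176, 186, 170):
    print(cp, chr(cp), unicodedata.name(chr(cp)))
```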

Most non-block-mode terminals are ASCII, and produce characters similar to your familiar IBM PC keyboard, though there are many physical layouts and function-key arrangements. EBCDIC was used mainly on IBM block-mode terminals, AFAIK, as well as on other peripheral devices such as line printers.

Block mode is a whole 'nother subject. And the EBCDIC set had its oddities, too. For instance, the codes for the alphabetic characters of one case were not consecutive, but it DID have a “cents” character:

http://www.cticomm.com/table-ebcdic.htm
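If you’d rather poke at it than read the table, here’s a little Python sketch using the cp037 codec (IBM’s US/Canada EBCDIC flavor; I’m using it as a stand-in, and other flavors differ) that shows both oddities, the split-up lowercase alphabet and the cent sign:

```
# EBCDIC lowercase letters come in three non-consecutive runs (shown here
# via Python's cp037 codec, one particular EBCDIC flavor).
print(bytes(range(0x81, 0x8A)).decode('cp037'))   # abcdefghi
print(bytes(range(0x91, 0x9A)).decode('cp037'))   # jklmnopqr
print(bytes(range(0xA2, 0xAA)).decode('cp037'))   # stuvwxyz
print(bytes([0x4A]).decode('cp037'))              # ¢ -- yes, a cent sign
```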

My guess would be that it means that in Spanish, or perhaps other languages I don’t know, you use it in the 1º abbreviation for primero (first) and other “masculine” ordinal numbers. Compare the English 1st, which I can’t seem to superscript…
And as opposed to the feminine one, 2ª. (I think it’s segunda and not segundo, but my Spanish is a bit rusty.)

It can be segundo or segunda, depending on the gender of the word it refers to, i.e.:

segundo hombre/segunda mujer

I can’t find the ALT key on my Mac.

And ALT+176 and ALT+186 don’t do anything on my Sun workstation, either.

:wink:

Which is why J was invented later on: same as APL, but with ASCII characters.

As for EBCDIC: It’s primarily used on ancient IBM machines, and it’s a lot less convenient than ASCII. With ASCII, alphanumeric characters are sequential. Specifically, 0x30 to 0x39 (numbers), 0x41 to 0x5a (uppercase letters), and 0x61 to 0x7a (lowercase letters). This means that to convert uppercase letters to lowercase letters, for example, just add 0x20. Simple. Standardized. Universal.
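You can see that layout for yourself with a few throwaway lines (Python here, but any language will do):

```
# The three consecutive ASCII runs, plus the 0x20 case trick.
print(hex(ord('0')), hex(ord('9')))   # 0x30 0x39
print(hex(ord('A')), hex(ord('Z')))   # 0x41 0x5a
print(hex(ord('a')), hex(ord('z')))   # 0x61 0x7a
print(chr(ord('A') + 0x20))           # a -- add 0x20 to go from upper to lower
```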

EBCDIC has none of that. IBM created no fewer than 57 national encoding schemes and an IRV (International Reference Version). Documentation was officially unavailable, and an official conversion system was never defined, but those who had to work with it developed various translation schemes to convert their flavor to ASCII. One of those tables has become a de facto EBCDIC, as it had all of the characters ASCII has (which not all EBCDICs did) and none of the characters ASCII doesn’t (which some EBCDICs do).
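Those translation schemes are basically 256-entry lookup tables. Here’s a rough sketch of the idea in Python, using the cp037 codec as one representative flavor (an assumption on my part; the actual de facto table may differ in the corner cases):

```
# Build an EBCDIC -> ASCII translation table, mapping anything with no ASCII
# equivalent to '?', then run a few EBCDIC bytes through it.
table = bytearray(256)
for b in range(256):
    ch = bytes([b]).decode('cp037')              # cp037 = one EBCDIC flavor
    table[b] = ord(ch) if ord(ch) < 128 else ord('?')

ebcdic = bytes([0xC8, 0x85, 0x93, 0x93, 0x96])   # "Hello" in EBCDIC
print(ebcdic.translate(bytes(table)).decode('ascii'))   # Hello
```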

Of course, ASCII is useless for anything other than English, and can’t even completely represent that (as this thread shows). So the Powers that Be (the nonprofit consortium Unicode, Inc., to be specific) are working on an Encoding System to End All Encoding Systems called Unicode. Unicode will be multi-byte and fully ASCII-compatible, not to mention including all European, Asian, African, and Elvish (:D) glyphs. It will be great when it’s fully defined.

http://www.unicode.org/unicode/standard/WhatIsUnicode.html
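To give a taste of what that looks like in practice: in the UTF-8 encoding of Unicode, the symbols this thread keeps tripping over each take two bytes, while plain ASCII text comes out exactly as it always has (a quick sketch):

```
# The cent, degree, and ordinal signs as UTF-8 byte sequences (two bytes
# each); ordinary ASCII text encodes to the same single bytes as before.
for ch in '¢°º':
    print(ch, hex(ord(ch)), ch.encode('utf-8'))
print('just my 2 cents'.encode('utf-8'))
```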

Well, I certainly agree with Derleth that EBCDIC is definitely a lot “quirkier” than ASCII. On the whole, I’m very glad I don’t have to work with it anymore, but I do remember when I was and how conversion to uppercase was so simple: you just ORed a letter with the SPACE character (0x40 in EBCDIC). No need to use any arithmetic and slow down your assembler code by a few cycles to do an add. Just a simple logical OR.
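For anyone curious why that works: every lowercase EBCDIC letter has the 0x40 bit clear and its uppercase counterpart has it set, so ORing with the space character flips a letter to uppercase and leaves uppercase letters alone. A tiny Python sketch (the cp037 codec is just there so the characters print):

```
# OR a lowercase EBCDIC letter with 0x40 (the EBCDIC space) to get uppercase,
# e.g. 'a' (0x81) -> 'A' (0xC1). Decoded via cp037 only for display.
for b in (0x81, 0x89, 0x91, 0xA9):            # a, i, j, z in EBCDIC
    up = b | 0x40
    print(bytes([b]).decode('cp037'), hex(b), '->',
          bytes([up]).decode('cp037'), hex(up))
```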

Man, working with mainframes for those few years really warped me…

**Olentzero**, pal, I don’t know, maybe it’s my capitalist leanings, but I don’t usually sue for ¢. :wink: