Asterisk and slash

Dear Folks,

Can anyone please let me know the specific reason why the asterisk and slash are used in place of the multiplication and division signs on computer keyboards?

Looking forward to some inputs.


With Best Wishes,
Yours Sincerely,
Col PK Nair (retd)
India

For a horrible second I thought this thread was going to be about fan fiction in which Asterix and Obelix get it on. :smiley:

I know that in most modern computer languages, the * and / were chosen as standard symbols for those mathematical operations.
I think the alternative choices for multiplication, ‘x’ (lower-case X) and ‘·’ (a centered dot), were too confusing when reading code.
And since we were pretty much stuck with the IBM Selectric keyboard back in the old days, the standard business symbols available were repurposed.
After all, the slash character can be used for many purposes, but the ÷ symbol (which is called an obelus) is only used for division these days.
So with a limited number of keys available, they had to use what was available.

Agreed. Computer keyboards obviously derived from those in use for other purposes, and even today the use of a single symbol for multiplication or division is limited to a minority of users. So it made perfect sense to press existing symbols into service.

Also, those symbols are less ambiguous.

÷ looks an awful lot like a +, especially on a low-quality ’80s computer printout.

X for multiplication could be confused with x the variable in algebraic equations.

Similarly the centered dot for multiplication could be confused with a decimal.

Partly because the keyboard goes back to the days of the typewriter. With that, if you wanted × you typed x, and if you wanted ÷ you typed - (hyphen), backspaced, and typed : (colon).

And the typewriter keyboard, like the computer keyboard, had limited space, so you didn’t include all the possible special symbols.

When computers first started encoding letters and symbols, storage space was extremely expensive compared to now (when you can buy a terabyte of storage for about US$100), so the minimum number of bits was used for each letter or symbol. Eventually a standard called ASCII emerged, which used 7 bits and so was limited to 128 different codes, covering the digits, the upper- and lower-case Latin letters, and 33 control characters (codes 0–31 plus DEL). Subtract 33, 10 and 52 from 128 and you are left with only 33 symbols, including punctuation, and you’ll find those 33 symbols on the standard computer keyboard (one of them is the space character on the space bar).
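Just for illustration (this is modern C, obviously nothing the ASCII committee ever wrote), you can count those leftover symbols yourself:

    #include <ctype.h>
    #include <stdio.h>

    /* List every 7-bit ASCII character that is printable but neither a
       letter nor a digit. Counting the space, there are exactly 33 of
       them - the symbols on the punctuation keys of a keyboard. */
    int main(void)
    {
        int count = 0;
        for (int c = 0; c < 128; c++) {
            if (isprint(c) && !isalnum(c)) {
                printf("%3d  '%c'\n", c, c);
                count++;
            }
        }
        printf("total symbols: %d\n", count);  /* prints 33 */
        return 0;
    }

Run it and you get the familiar cast, from space and ! up through ~, with * at code 42 and / at code 47.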

I dunno but I seem to remember these symbols being used before computers were commonplace - early to mid 70s. Even simple calculators were novel. The slash is just a modified “divided by” sign used in fractions. I don’t know about the * but I have a vague recollection of seeing it for the first time and thinking, for some reason, that it was cool.

The original terminals were teletype-like machines using standard typewriter keyboards. So they had the standard characters that businesses used.

Division is easy. Slash has always been used for division, as in 1/2 for half. A divided-by symbol is almost never used in business, and slash was an adequate substitute. Also, the divided-by would be too easy to mistake for a plus on a crappy 5x7 dot matrix printer. Heck, some didn’t even have descenders, making a g or q or p look weird - moved up a few dots to fit the descender on the line.

The asterisk, I’m sure, is just a substitute for “.” or “x” because a dot in the middle of the line was not a normal character, a dot at the bottom of the line was already taken by the decimal point, and “x”, as mentioned, is confusing. What if your variable was called “X”, or “PXY”? A lot of old printers, and especially old terminals and IBM punch cards (those were the days), typically only used upper case, so “X” vs “x” was liable to be confusing.

For programming, the asterisk came a little late. COBOL, the main business language, for example, originally spelled the operation out in text:
MULTIPLY PRINCIPAL BY RATE GIVING INTEREST

It was only later that COBOL allowed the operator form (via the COMPUTE verb):
COMPUTE INTEREST = PRINCIPAL * RATE

This was done on punch cards, then fed into a hopper. Most keypunch machines only did capitals and had a limited set of characters. If you find old print-outs of the era, before about 1985, odds are they are all capitals. I recall that for printing special reference lists (library listings) the computer operator had to take the belt with the print characters off the big IBM printer and put in a special belt which included lower-case letters (but the printer ran at half the speed, because it took longer to print all the different characters…)

So the short answer was: they did the best with what they had.

It’s all because no one had the cojones to embrace APL.

Great minds…so did I. What a sick world we live in, eh? :slight_smile:

So who came up with ^ for exponentiation and % for modular arithmetic in programming languages? I’m guessing these came in with C++ (FORTRAN used ** and MOD as I recall…)

APL is great. A lot of problems in the world would never exist if the barrier to entry for programming involved buying a special keyboard.

Also, Perl, the evermore polymorphically perverse programming language, has an ‘x’ operator. It is awesome. And it doesn’t do multiplication.

In the case of division, the question was already answered by past usage. The fraction equivalent to 0.6666… was written 2/3 – and there was no real difference between that and 2 divided by 3. More generally, any fraction can be written as x/y – and the meaning is that of x divided by y, whatever they may stand for.

The common arithmetic operators used in many languages today originate with B or BCPL, the ancestors of C (and thence C++). C (and its children) use ^ for bitwise XOR. Perl and some other languages inherit ** from FORTRAN for exponentiation (I don’t think C has an exponent operator, but it’s been a while; there’s a quick sketch below).

% for modulus is a bit of a mystery to me, although I know that symbol has been used on calculators too since at least the late 70s, so it must have been around for quite a while.
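To make that concrete, here’s a quick modern-C sketch (just an illustration, not anything from the B/BCPL days):

    #include <math.h>   /* pow(): C has no exponentiation operator */
    #include <stdio.h>

    int main(void)
    {
        printf("%d\n", 2 ^ 10);          /* 8    - ^ is bitwise XOR, not a power     */
        printf("%g\n", pow(2.0, 10.0));  /* 1024 - exponentiation goes through pow() */
        printf("%d\n", 17 % 5);          /* 2    - % is the remainder ("modulus")    */
        return 0;
    }

(On most Unix-like systems you’ll need -lm at link time, since pow() lives in the math library.)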

That’s correct.

Before calculators, the percent sign was a very common symbol on typewriter keyboards. From there it was adopted into ASCII, which is more likely what influenced the syntax choices of C.

The percent sign visually resembles a slash, and that would be my guess as to why the authors of C picked it to mean modulus. The modulus operation is related to division, after all. But that’s just my speculation; other languages contemporary with C didn’t use the percent sign that way.
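For what it’s worth, the tie between % and division is baked right into C’s integer arithmetic; a small sketch:

    #include <stdio.h>

    int main(void)
    {
        int a = 17, b = 5;
        /* For nonzero b, C guarantees (a / b) * b + a % b == a,
           so % is exactly the part of the division that / throws away. */
        printf("%d / %d = %d remainder %d\n", a, b, a / b, a % b);  /* 3 remainder 2 */
        printf("check: %d\n", (a / b) * b + a % b);                 /* 17 */
        return 0;
    }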

Others have already answered this well, but in case you’re too young to remember the world before Unicode, this is the sort of character set that many computers were limited to up through about the early 1980s. There weren’t that many symbols to choose from.

After posting, I noticed Giles already made this point, complete with link. So never mind.

Some background on the development of the ASCII character set here (albeit in very obnoxious green on black text - perhaps fitting, as some early terminal manufacturers insisted on promulgating that choice):

ASCII owes its origins less to the choices of the typewriter keyboard than to telegraphy standards and the military FIELDATA specification. The ASCII standard was issued in 1963 and revised in 1967. Battles raged over what to include or not include in the limited 7-bit space, some of which are described there.

Have you heard about the new object-oriented version of COBOL? It’s called “ADD ONE TO COBOL”.

Have you heard about object-oriented COBOL? It’s called ADD 1 TO COBOL.

Have you heard about Java? It makes you type the same shit over and over.