You’d want to be careful about extending the digit-set to include multiple alphabets, since many of the letters look the same. Can you tell the difference between o and omicron, or between Rho and P?
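A quick Python check shows just how easy the mix-up would be - identical-looking glyphs, completely different code points (the Greek letters below are omicron and capital Rho):

    # Latin vs Greek lookalikes: same shape on screen, different characters
    print(ord("o"), ord("ο"))   # 111 vs 959  (Latin o vs Greek omicron)
    print(ord("P"), ord("Ρ"))   # 80 vs 929   (Latin P vs Greek capital Rho)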
And it should also be noted that on old Unix installations, the command you’d use to get a hex dump of a binary file was called “od”, for “octal dump”. Apparently octal was the standard at the time that command was written.
And, of course, we’ve retained octal in many modern contexts for representing ASCII and Unicode character codes, which I find mildly annoying sometimes. I don’t see the advantage of thinking of, say, the percent sign as octal 45 instead of hex 25. Particularly as one place the character codes tend to turn up - URL escaping - DOES use hex. When EBCDIC was more commonly seen, it tended to be represented in hex.
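To make the comparison concrete, here is the same character in each base in Python - and, as noted, the URL escape really is the hex spelling:

    from urllib.parse import quote

    ch = "%"
    print(ord(ch), oct(ord(ch)), hex(ord(ch)))   # 37 0o45 0x25
    print(quote(ch))                             # %25 - URL escaping uses hex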
And why would somebody want to look at a dump of “oOO” patterns instead of “XX”?
ETA:
And why did somebody then decide that character entities specified by code in XML/HTML would use decimal?
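For reference, a quick Python check of the decimal references in question - plus the hex variant with an x prefix that XML/HTML also define:

    import html

    print(html.unescape("&#37;"))    # decimal character reference -> %
    print(html.unescape("&#x25;"))   # the hex form (note the x) also decodes to %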
AFAIK, that’s the command on new Unix installations as well, although it still defaults to octal. Type ‘od -x’ to get hex. (Or better yet ‘od -t x1’ to avoid the insane byte-swapping default on Little-Endian machines.)
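To illustrate the byte-swapping complaint, here’s a small Python sketch of what the two output styles do with the same pair of bytes, assuming a little-endian host:

    import struct

    data = b"AB"   # the bytes 0x41 0x42, in file order
    # 'od -t x1' style: one byte at a time, in the order they appear
    print(" ".join(f"{b:02x}" for b in data))   # 41 42
    # 'od -x' style: 16-bit words in host byte order, so a
    # little-endian machine prints the pair swapped
    (word,) = struct.unpack("<H", data)
    print(f"{word:04x}")                        # 4241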
BTW, I’m still surprised that people think arithmetic in base-16 (or base-32 :smack: ) is “easy.” I’m not ashamed to admit that when adding 0A to 0B, I think “ten plus eleven” … and I was top of my 3rd-grade arithmetic class!
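For the record, the mental math checks out - ten plus eleven is twenty-one decimal, which is 15 in hex:

    print(hex(0x0A + 0x0B))   # 0x15  (21 decimal: one sixteen with five left over)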
UNIX was first developed on machines like the 18-bit PDP-7, where octal was the natural notation.
Octal was a pain on PDP-11s, since 16 bits don’t split evenly into 3-bit groups - all you had left over for the top octal digit was the sign bit.
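A one-liner makes the point - a 16-bit PDP-11 word takes six octal digits, but the leading digit can never be more than 1:

    print(f"{0xFFFF:06o}")   # 177777 - the top octal digit holds only the sign bit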
Now the LGP-21 I used in high school (from 1962, but based on the even older LGP-30) used hex. It was a 32-bit machine. The hex was non-standard. Instead of
0-9ABCDEF
it used
0-9FGJKQW
Why? Because none of these were used for opcodes. The opcode for unconditional branch, U, was 1010. The non-ASCII code for U was 101001.
The non-ASCII code for F was 101010. So it was trivial to convert the character code for either one to its underlying value - just drop the low two bits.
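Here’s a small Python sketch of that conversion, using only the two 6-bit codes quoted above (the rest of the character set isn’t shown): dropping the low two bits of the character code leaves the opcode or digit value.

    # The two 6-bit LGP character codes quoted above
    U_CODE = 0b101001   # letter U: the unconditional-branch opcode, 1010
    F_CODE = 0b101010   # letter F: the hex digit worth 1010 (ten)

    def digit_value(code6):
        """Drop the low two bits of a 6-bit character code."""
        return code6 >> 2

    print(f"{digit_value(U_CODE):04b}")   # 1010
    print(f"{digit_value(F_CODE):04b}")   # 1010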