Does base-10 seem natural only because we have ten fingers?

Or could we become equally fluent in other bases if we were sufficiently trained/indoctrinated early enough?

Would binary prefixes (mebi, gibi), for example, be as easy to work with as their base-10 counterparts if we grew up using them, or is there something inherently mathematically simpler about powers of 10 compared to powers of 2 or any other number?

Yes.

Not by a long shot. If you want something that makes it easy to do arithmetic, both 36 and 18 have more divisors than 10, so base 36 would give you fewer irreducible fractions to deal with. Other bases likely have similar properties; the point is, base 10 is indeed a historical accident, and had we developed four metacarpals per hand instead of five, we might have developed computers sooner.

Mathematically, all bases work the same way, and none is simpler or more complicated than any other. But there are practical considerations. For example, base 2 is inconvenient for counting because you have to add new places frequently, so a relatively small number like 256 requires nine digits, 1024 requires eleven digits, and so on. All those extra digits are a pain to write and waste space. So we probably want a system with more symbols.
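To put numbers on it, here’s a throwaway Python sketch (just for illustration; digits_needed is my own name for a toy helper, not anything standard):
[code]
# Toy sketch: how many digits does n need in base b?
def digits_needed(n, b):
    count = 0
    while n > 0:
        n //= b
        count += 1
    return count

for base in (2, 8, 10, 16, 36):
    print(base, digits_needed(1024, base))
# base  2 -> 11, base  8 -> 4, base 10 -> 4,
# base 16 -> 3,  base 36 -> 2
[/code]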

On the other hand, a system with too many symbols (say, the base-60 system used by the Babylonians, from which we get our base-60 divisions of time) can also be difficult. There are a lot of symbols to learn, and there are only so many ways to differentiate symbols on a clay tablet without many of them looking similar, leading to mistakes. And in fact, although the Babylonian system was positional, its “digits” were really composed of smaller symbols.

So base 10 is a pretty good compromise. Its catalog of symbols is small enough to be learned easily, and the symbols are sufficiently different from one another. It has enough symbols to take advantage of positional counting and arithmetic. And the fact that we have ten fingers makes learning it especially easy.

But if I had my way, we’d all count in base 16. :slight_smile:
ETA: Darleth’s point about divisors is a really good one. It’s one of the reasons why English measurement units and time units continue to persist. Feet and hours can both be divided into halves, thirds, fourths, and sixths very easily; meters and metric hours can’t be divided as many ways.

Now, there are also tradeoffs involving the size of the base. Binary has few digits to remember, but fairly long representations for a given number. Base 36 would go the other way: numbers would take significantly fewer digits than in base 10, but there are more digits to keep track of. But since we can keep track of 26 letters in language, base 36 math is something we could probably handle. We could even combine the Arabic digits and the Latin letters to come up with a digit set whose order would be fairly easy to remember, as long as we remembered that A comes after 9, or whatever.
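As it happens, Python’s built-in int() already accepts exactly that digit set for bases up to 36, so you can play with it today (a toy example):
[code]
# int() accepts bases 2 through 36, using 0-9 followed by A-Z:
int('Z', 36)     # 35 -- the largest single base-36 digit
int('10', 36)    # 36 -- "one-zero" means one thirty-six
int('DOPE', 36)  # 638546 -- every base-36 string is a number
[/code]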

(Waves to friedo.)

There are several languages that use bases other than 10, either as an additional system or for their entire system. I’m too lazy to read all the articles, but the sidebar on numeral systems on Wikipedia is a good start.

Sure.
Lots of computer programmers can do math in hexadecimal (base 16) nearly as fast as they do in decimal.

Oh, how I wish we had been born with six digits on each hand. Then we’d have base 12, which is evenly divisible by 2, 3, 4, and 6. Instead we have base 10, divisible only by 2 and 5.
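For the curious, a quick Python sketch (nothing rigorous; divisors is just my throwaway helper) comparing the candidate bases from this thread:
[code]
# More divisors in the base means more fractions come out "even":
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for base in (10, 12, 16, 36, 60):
    print(base, divisors(base))
# 10 -> [1, 2, 5, 10]
# 12 -> [1, 2, 3, 4, 6, 12]
# 16 -> [1, 2, 4, 8, 16]
# 36 -> [1, 2, 3, 4, 6, 9, 12, 18, 36]
# 60 -> [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
[/code]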

I’m reading a book about algebra and was surprised at how long it took for our current numbering system to take hold. The notion of ten digits, with each place representing a power of 10, is quite powerful and makes math much simpler. Although first created between the 1st and 5th centuries by Indian mathematicians, it wasn’t known to the Western world until Fibonacci popularized it in the 13th century. It’s remarkable that so much great math was done before that with very complicated numbering systems, and that they never discovered the way we write numbers now.

I think a moderate power of 2 is the best tradeoff for a base: it’s essentially the same as base 2, and thus has all the advantages that minimality brings (e.g., in specifying algorithms for basic arithmetic, one only has to consider the cases 0 and 1), while also overlaying a superficial notational shorthand on top, which can easily be translated into and out of, a group of however-many bits at a time.

Thus, base 2[sup]3[/sup] or 2[sup]4[/sup] seems quite good.

(Although, ultimately, the best thing is to be comfortable with whatever descriptions of numbers are most well-suited to the particular situation, regardless of what base one has settled upon for standardized communication. If a particular problem is best understood by thinking of a number as sums of powers of 3, then go ahead and think about it that way.)
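To make that bit-grouping concrete, here’s a little Python (only built-in formatting, nothing exotic):
[code]
n = 0b1101_0111_1010    # group the bits four at a time...
hex(n)                  # '0xd7a'  -- one hex digit per group of 4
oct(n)                  # '0o6572' -- one octal digit per group of 3
# Translating between base 2 and base 16 is purely local: each
# group of four bits maps to one hex digit, independently of the
# rest of the number.
[/code]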

I’m confused. If you represent a fraction as a/b, it doesn’t matter what base a & b are in, there is only one reduced form. 18/9 in decimal gives us 2, just like 10/9 in base-18 does.

Did you mean that a “decimal” representation of a rational would be less likely to require a repeating representation? If you want to represent a rational as “a.b”, then the more divisors the base has, the less likely it is that b will need to repeat.
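If you want to play with that, here’s a rough Python sketch; terminates is just my name for a hypothetical helper, not a library function (it uses the fact that a/b in lowest terms terminates in a base exactly when every prime factor of b also divides the base):
[code]
from math import gcd
from fractions import Fraction

def terminates(frac, base):
    # a/b (in lowest terms) terminates in a base iff every prime
    # factor of b also divides the base. Requires Python 3.8+.
    b = frac.denominator
    while (g := gcd(b, base)) > 1:
        b //= g
    return b == 1

terminates(Fraction(1, 3), 10)  # False -- 0.333... repeats
terminates(Fraction(1, 3), 12)  # True  -- 0.4 in base 12
terminates(Fraction(1, 8), 10)  # True  -- 0.125
[/code]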

I’m reasonably fluent in base 8 (from my PDP-11 work) and in hex, from everything else. Hex is much better, since the number of bits needed to represent a hex digit is itself a power of two.

10 is about the worst even base I can think of, except, maybe, base 6. I doubt we’d ever use base 10 if we did not have ten fingers.

I like the way the Mayans did it: They have a base 20 place-value system (I guess Mayan scholars went barefoot) that’s constructed out of only three symbols. Each digit is constructed out of those symbols in a simple, intuitive way.

Indeed. I once worked on a project where I literally found it simplest to work in base eleven. Which, of course, would be an absolutely horrid choice of base for most situations.

And the number of bits needed to represent the number of bits needed to represent a hex digit is a power of two also.

(And, heck, the number of bits needed to represent the number of bits needed to represent the number of bits needed to represent a hex digit is a power of two also!)

Interesting. Yes, you still see them today, but how late were Roman numerals in wide use?

This thread is a great example of why I love the Dope.

Roman numerals began falling out of favor in Europe during the Dark Ages (except for formal stuff like Church proclamations and government legislation). The literate merchant classes and such started using Arabic numerals fairly early on.

But the concise symbols that we use for expressing math were not invented until much later. The equals sign was not invented until the 16th century! A lot of the history of algebra consists of people solving equations like “three times a quantity squared minus twice the same quantity plus two is equal to twenty-three.” A trivial problem for a modern algebra student, but imagine doing it without the benefit of symbols for variables, arithmetic operations, and equality.

Isn’t that called Nigel Tufnel math?

Thread I started about 18 months ago.

It’s fun to think about but it’s unlikely to go anywhere.

Don’t you mean 16 months ago? :wink:

You wouldn’t be laughing if you had to write code to deal with the fact that putting two bytes together with values 377 and 377 does not yield 377377 but rather 177777.
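For anyone who never had the pleasure, a Python sketch of the mismatch (variable names mine): an octal digit carries three bits, so the digit groupings don’t line up with eight-bit byte boundaries.
[code]
hi = 0o377             # one byte: 255 decimal, 0xFF
lo = 0o377
word = (hi << 8) | lo  # glue the two bytes together
oct(word)              # '0o177777' -- not 377377!
hex(word)              # '0xffff'   -- hex digits line up with bytes
[/code]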

Dumb, dumb, dumb!

You’re right, of course; I got a bit confused about whether it’s the fractions or the positional expansions that get simplified, and it’s clear that base 36 gives us fewer repeating expansions for common fractions.

Side note: Floating-point math, i.e. how computer people have turned the field of the real numbers into something utterly bizarre, is currently done in base 2, so the equivalent of 0.1 is 1/2, and 0.01 is 1/4. In that representation, one-tenth is a repeating binary fraction, which frequently doesn’t round quite right. Therefore, in a computer, 10 * 0.1 may or may not exactly equal 1.
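You can watch both outcomes in any language with IEEE-754 doubles; a quick Python demonstration:
[code]
# One-tenth has no exact binary representation, so results
# depend on how the rounding errors happen to accumulate:
print(10 * 0.1 == 1.0)                     # True -- rounds out right
print(sum(0.1 for _ in range(10)) == 1.0)  # False!
print(sum(0.1 for _ in range(10)))         # 0.9999999999999999
[/code]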

The average application programmer approaches floating-point-intensive code with the same reverent awe and voodoo incantation you might find in a Santeria ritual.