# Quick Translation (for someone too stupid to comprehend scientific notation)

What is this number?

1.0e+799
Please do not respond that the number 1.0e+799 is, in fact, the number 1.0e+799. I know that but that doesn’t help.
I understand nomenclature like x times 10 to the 799th power, or even “e” times something to the something’th power, if it is explained in rather careful terms.

(“e” is the natural whatchamacallit and its value by itself is, like, approximately 2-and-change, right? remind me again what it’s good for and why it would be used here?)

A short but patient remedial primer that unpacks “1.0e+799” nomenclature would be useful here.

It’s just 1.0 times 10 raised to the 799th power, that is: a 1 followed by 799 zeroes. (Similarly, 1.39e+02 is a way of writing 1.39 * 100 = 139.) The e here is just for “exponent”; it does not denote the base of the natural logarithm (I think).
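If it helps, here’s the same idea in a few lines of Python (purely an illustration, nothing FileMaker-specific):

```python
# "1.39e+02" means 1.39 * 10**2, i.e. 139. The "e" just
# separates the significand from the power of ten:
print(float("1.39e+02"))   # 139.0

# 1.0e+799 is too big for a standard machine float, so we use
# an integer to show "a 1 followed by 799 zeros":
n = 10 ** 799
s = str(n)
print(s[0], len(s))        # '1' and 800: one leading 1, then 799 zeros
```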

Švejk is right - the “e” in this case is just short for “exponent.”

Another important thing about scientific notation is that it tells you how many significant digits are in the number. It is important in science to say how accurate a measurement is - this is often done by stating how many of the digits in the number really mean something. The example given by the OP, 1.0e+799, has two digits before the “e”, so that’s how many significant digits there are - it is accurate to two places. 1.0000e+799 has five significant digits, so it represents a greater level of accuracy for the same number.

This is something that is hard to express using non-scientific notation. Take, for example, the number 3.2e+6 - written in non-scientific notation it’s 3200000. The scientific notation form indicates that there are two significant digits (because that’s how many digits are shown), while in non-scientific notation it’s unclear whether the five zeros after the digit two are significant or not.
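The significant-digits point above is easy to demonstrate with Python’s exponent format, where the number of digits you print is exactly the number of significant digits you claim:

```python
# Same value, different claimed accuracy:
x = 3.2e6
print(f"{x:.1e}")   # two significant digits
print(f"{x:.4e}")   # five significant digits
```

The first prints `3.2e+06` and the second `3.2000e+06`; written out as 3200000 there is no way to tell which level of accuracy was meant.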

BTW, 1.0e+799 is a very, very big number - more than 700 orders of magnitude bigger than the number of atoms in the observable universe (roughly 10[sup]80[/sup]).

Cool! OK I dunno why the display would not format that as 1.0 * 10[sup]799[/sup], which I would have immediately understood, but I can adapt.

I’m boning up for my FileMaker certification exam tomorrow. (Omitting rant about the irrelevancy and uselessness of cert exams.) Actually trying to pin down the largest number (or, technically, the largest number of significant floating-point digits) FM is designed to accommodate in a number field. Surprisingly, it’s not well documented.
FileMaker can display the number that is one less than 1.0e+800 in conventional non-scientific notation (as a huge ungainly string of 9s, with or without commas), and it will accept 1.0e+799 as input and subtract from it. But that’s apparently the ceiling: adding 1 to the number one less than 1.0e+800 results in a perplexed “?”, and manually typing in 1.0e+800 (it will let you do that, or for that matter 1.0e+80000) and trying to subtract from it also yields “?”.
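For what it’s worth, that ceiling can’t be an ordinary hardware float limit, since doubles give up far below 1.0e+799. A quick Python sketch, with arbitrary-precision decimals standing in for whatever FileMaker actually uses internally (which I don’t know):

```python
import math
from decimal import Decimal, getcontext

# An IEEE-754 double tops out near 1.8e308, so 1.0e+799
# silently overflows to infinity as a plain float:
print(math.isinf(float("1.0e+799")))    # True

# With enough decimal precision, arithmetic right up to the
# reported ceiling is no problem in software, which suggests
# the 1.0e+800 cutoff is FileMaker's own storage format, not
# a hardware limit (that part is my guess):
getcontext().prec = 810
ceiling = Decimal(10) ** 800 - 1        # the number: 800 nines
print(str(ceiling) == "9" * 800)        # True
print(ceiling + 1 == Decimal("1e800"))  # True: one past the ceiling
```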

It comes from the days when writing reports with superscripts involved some strange calisthenics with the typewriter. Normally the E is capital, specifically to avoid confusion with the number e.

I always write it without spaces, for that matter. Thus, 3.5e6, so that it reads as scientific notation instead of something multiplied by the number e.

Hmm, I used to work doing technical QA at Filemaker. Version 6 IIRC. It was a while ago now, although engineering was well into V7 when V6 was released. Where are they at now?

Have you tried simply asking Filemaker support what the largest number is?

I had a similar role once at Quickbooks as well. Don’t get me started on internal storage - I don’t know if it’s still this way or not (this was V5, even before Filemaker). But years were stored as two-digit offsets from, like, 1928 I think. (Better be prepared for y2k+28, I guess. :)) And when we were designing the initial Japanese version, it was totally FUBARed because there was no way to account for Japanese currency amounts being two orders of magnitude greater than US amounts and having no decimal fractions.
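The offset scheme described above can be sketched in a few lines. Note the 1928 base year is only the poster’s recollection, so treat everything here as a made-up illustration of the general technique:

```python
BASE_YEAR = 1928  # hypothetical epoch, per the poster's memory

def encode_year(year: int) -> int:
    """Store a year as a 0-99 offset from BASE_YEAR."""
    offset = year - BASE_YEAR
    if not 0 <= offset <= 99:
        raise ValueError("year outside the 2-digit window")
    return offset

def decode_year(offset: int) -> int:
    """Recover the full year from its stored offset."""
    return BASE_YEAR + offset

print(decode_year(encode_year(1999)))   # 1999, round-trips fine
print(encode_year(2027))                # 99, the last encodable year
# encode_year(2028) raises ValueError: the "y2k+28" rollover
```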

Here’s the Wikipedia entry on it. It doesn’t give the origin of the notation. I suspect Fortran, but it could be earlier.

It’s confusing that different devices/programs display scientific notation in different ways.

1.0 x 10[sup]799[/sup] is the way it would appear in print.

The OP’s “1.0e+799” is how some software, including the calculator that’s been included with Windows since at least Windows 3.1, would display scientific notation.

Here and here are a couple of examples of scientific notation as shown on calculator displays.

I’d suspect sometime around the time of Newton’s Principia.

I suspect not. There would be no reason to use E-notation in manually typeset documents. The more familiar superscript notation has a long history.

10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

It bugs me no end when I’m working with a student and they’ll say that number as “one times e to the 799”. Or (worse, because it actually affects the result they get) when they have a number like 10[sup]-7[/sup] and enter it into their calculators as 10e-7.
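For anyone who wants to see exactly how much damage that keystroke does, a quick check in Python:

```python
# "10e-7" means 10 * 10**-7, i.e. 1e-6, not 10**-7. Entering
# it for 10^-7 therefore inflates the value by a factor of ten:
print(10e-7 == 1e-6)     # True
print(10e-7 == 10**-7)   # False
print(10e-7 / 10**-7)    # roughly 10: off by one order of magnitude
```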

That is persuasive. Hmm off to my old Unicode manuals to see if there is a clue…

Once in a great while I have a brain fart with a crappy calculator and do that myself :smack:

It bugs me even no endier when I say it. I understand and use this notation all the time, but when I idly read it out loud, I often misspeak. It seems very easy to miss, like “Once Upon a a Time” if the line breaks between the two a’s.

Never heard the “e” should be capitalized; I’ve seen it written both ways and all the software I can think of that accepts either form also accepts the other.

Also never heard it shouldn’t appear this way in print. Writing it with the superscript is, in my opinion, bad practice, because there are so many ways in this modern world that the text can become corrupted by losing the intrinsically unprintable escape character that signals the start of superscripting. In this way, a million becomes a hundred and six (10[sup]6[/sup] turns into 106). I have found dozens of examples of this in web resources, for example in some sales literature I read a couple weeks ago for a certain kind of sensor. In that case, the units were very obscure and their magnitude unfamiliar to me, and a Greek mu had also gone missing, so the result looked plausible but was wrong by an enormous factor.

True, I remember a thread here once where some newb insisted that there were 1066 particles in the Universe or some such, and had cites to prove it. It took about two pages to explain to him why he was wrong.

[irrelevant hijack]
Well, I passed. I am now a certified FileMaker 10 developer.

It is in a great many ways a crock of shit — the difficulty of many of the questions being an artifact of obscurity. I agreed not to reveal contents of their test as part of the testing procedure, but by analogy: if the same folks wrote a test for Certified SDMB Forum Users, a typical question might read:

**On the index pages of the Straight Dope Archive of the forums In My Humble Opinion and Mundane Pointless Stuff I Must Share (MPSIMS), how many distinct pages appear at the top as links?

a) 256 and 384, respectively
b) 370 and 490, respectively
c) 325 and 490, respectively
d) 265 and 425, respectively
**