I saw somebody using notation like O(1) or O(6) to mean “of the order of magnitude of one or a million”. That’s a capital letter “O”.
Is this common notation? Is it only common in some contexts or crowds or places? I saw it used in a discussion of computational fluid dynamics by a professor in Italy.
I’ve been a technical professional since 1973 and I wonder how I managed to miss this until yesterday.
O(f(n)) is common notation, but it means something different: the “order of magnitude of n” in the sense you are talking about would be log n, so it would come out as O(log n). Also, log(1) = 0, not 1.
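To make the log point concrete, here’s a throwaway Python check of my own (purely illustrative):
[code]
import math

# "Order of magnitude" is essentially the base-10 log of the quantity:
print(math.log10(1_000_000))  # 6.0 -> a million is 6 orders of magnitude up
print(math.log10(1))          # 0.0 -> the order of magnitude of 1 is 0, not 1
[/code]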
I thought of that, but in that case I’ve never encountered it, and I’m not sure why someone discussing computation would use big-O notation in a way the audience wouldn’t automatically understand. If I wanted to describe “approximately a million” I would write 10[sup]6[/sup] or 1e6 or something of that nature. But introducing your own notation is cool as long as you define it at the beginning; maybe the author really needed those approximate powers of 10 instead of the usual exponential function.
To me, saying that a function f is O(3) merely means that f(x)/3 is bounded, which is pretty weak. It is the same as saying it is O(1) or O(1000), for that matter, or simply that the function is bounded. Much more informative would be to say that f ~ 3, which would mean that lim f(x)/3 = 1, or simply that lim f(x) = 3, the limits being taken as x goes to infinity. But these notations (along with o()) are used mainly to compare two functions.
E.g. We say that pi(x) ~ x/ln x, where pi(x) is the number of primes less than x.
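If you want to see that “~” numerically, here’s a quick Python sketch (my own toy code, nothing standard): it counts primes with a simple sieve and compares against x/ln x. The ratio drifts toward 1 as x grows, though slowly.
[code]
import math

def prime_count(x):
    """pi(x): count primes <= x with a basic Sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return sum(sieve)

for x in (10**4, 10**5, 10**6):
    ratio = prime_count(x) / (x / math.log(x))
    print(x, round(ratio, 3))   # 1.132, 1.104, 1.084 -- creeping toward 1
[/code]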
I have been using O notation for decades, but, as others have said, my usage matches the wikipedia version and not the OP’s version. O(some constant) to me would be a function that always takes a constant time to execute, such as a lookup table.
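To put a concrete face on the lookup-table point, here’s a hedged little Python sketch (the names and numbers are mine, not any particular library):
[code]
# O(1): a dict/hash lookup takes roughly the same time no matter how big the table is.
opcode_cycles = {"ADD": 1, "MUL": 3, "DIV": 20}

def lookup_cycles(opcode):
    return opcode_cycles[opcode]        # constant-time table lookup

# O(n): a linear scan has to walk the list, so the worst case grows with its length.
def scan_cycles(pairs, opcode):
    for name, cycles in pairs:
        if name == opcode:
            return cycles
    return None

print(lookup_cycles("MUL"))                              # 3
print(scan_cycles([("ADD", 1), ("MUL", 3)], "MUL"))      # 3
[/code]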
Its usage is very common in electrical engineering and computer science type applications. When written O(n) it’s usually pronounced “order n”, so O(n[sup]2[/sup]) would be pronounced “order n squared”.
The classic example is a shift operation on an x86 processor. On a 286 processor, if you want to bit shift left by 5 bits, the processor goes through 5 loops (1 clock cycle per loop) of shifting the value 1 bit at a time. On a 386 processor, the entire shift of 5 bits is done in a single clock cycle. So a bit shift on a 286 is O(n) but a bit shift on a 386 is O(1). It doesn’t matter if you are shifting by 1 bit or 7 bits on a 386, the shift operation in the ALU always executes in one clock cycle.
In EE/CS stuff you want to avoid O(n[sup]2[/sup]) algorithms like the plague, since they slow down like a snail stuck in molasses as your data sets get bigger and bigger. This is a big deal in real-time control, because you need to make sure that your system degrades in a controlled and hopefully linear O(n) fashion as it gets overwhelmed, instead of just falling off a cliff due to O(n[sup]2[/sup]) type processing. It’s also important when designing the system so that it can meet its schedule requirements.
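To see how fast the O(n[sup]2[/sup]) cliff arrives, here’s a toy Python sketch (counting operations rather than timing real hardware; the example is mine): doubling the input doubles the linear work but quadruples the quadratic work.
[code]
def single_pass(samples):
    """O(n): touch each sample once."""
    return len(samples)

def pairwise_comparisons(samples):
    """O(n^2): compare every sample against every other one."""
    count = 0
    for a in samples:
        for b in samples:
            count += 1          # one comparison per (a, b) pair
    return count

for n in (100, 200, 400):
    data = list(range(n))
    print(n, single_pass(data), pairwise_comparisons(data))
# 100  100   10000
# 200  200   40000   <- n doubled: linear work doubled, quadratic work quadrupled
# 400  400  160000
[/code]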
The OP’s usage sounds like someone who misunderstood how big O notation is supposed to work, and confused “order of” with “order of magnitude”. In any event, no matter how it originated, O(3) to mean “order of magnitude of 1,000” would be a bit confusing to me.
Eh, there are some applications for which CS folks would be head over heels to find an O(n^2) algorithm. Or even an O(n^793) algorithm.
But I’m another one who’s never seen the notation described by the OP. I would have interpreted it as a mistake by the writer, and if I had to deduce the intent behind the mistake, I would have guessed it meant “approximately equal to 3”, as, for instance, “pi is O(3)”.
First of all, I embarrass myself. There is no way of interpreting any of this to attach the meaning “1” to “O(1)”. Oops! Ten, then.
So, here is the quote:
“Then, a general rule to guarantee a sufficient resolution of the grid is to ensure a Reynolds cell number = O(1). In a buoyancy-driven case, the Re number is substitute by the Rayleigh number. For smooth laminar flow a grid with O(10^6) nodes should be sufficient to get a good solution at moderate Re/Ra number.”
For those not particularly into CFD: the Reynolds number is a dimensionless metric that scales with the tendency toward turbulent rather than smooth laminar flow. The Rayleigh number is a dimensionless metric that scales with the tendency of heated fluid to rise, relative to its other behaviors. The cell Reynolds number is the Reynolds number calculated for a single cell or node of the mesh that divides the flow field into finite elements or finite volumes (which get treated as indivisible units), using the cell size as the length scale.
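For concreteness, here’s a back-of-the-envelope Python sketch of that grid-resolution rule, assuming the usual definition Re_cell = u·Δx/ν; the fluid properties and numbers below are made up by me, not taken from the quoted text:
[code]
def cell_reynolds(velocity, cell_size, kinematic_viscosity):
    """Re_cell = u * dx / nu, using the local cell size as the length scale."""
    return velocity * cell_size / kinematic_viscosity

u, nu = 1.0, 1.5e-5     # made-up example: air-like fluid (nu ~ 1.5e-5 m^2/s) at 1 m/s
for dx in (1e-3, 1e-4, 1.5e-5):
    print(f"dx = {dx:g} m  ->  Re_cell = {cell_reynolds(u, dx, nu):.1f}")
# The quoted rule of thumb asks for Re_cell = O(1), i.e. shrink dx until u*dx/nu is around 1.
[/code]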
I think by O(1) he may mean ten (not one, again). I think that’s a reasonable value in this context. I think by O(10^6) he likely meant 1,000,000 and probably meant to write either O(6) or 10^6, not suggesting exponentiation twice. And 1,000,000 is a reasonable number in this context.
If you, the target audience, are not able to understand it, then he or she misjudged by not defining the non-standard notation.
I don’t see why they didn’t just write out “on the order of 10[sup]6[/sup] nodes” and “Reynolds number on the order of unity” if this usage only occurs once or twice. I’ll check a couple of other books just in case.
OK, and now we have a problem. You can tell from context that O(10^6) means “approximately a million”. If he’s using his nonstandard notation consistently, then O(1) would mean “approximately 1”. But you can’t necessarily tell, from context, the difference between “approximately 1” and the “approximately 10” you’re interpreting it to mean.
The best bet here, if possible, is to contact the author and ask for clarification. If that’s not possible, then I think that you have to assume that he’s using his notation consistently.
I did. He answered:
“O()” stands for order of magnitude
I think I’m to understand O(x) means “of the same order of magnitude as x”. I think I guessed wrong earlier and O(10^6) is consistent usage meaning what would be spoken “the same order of magnitude as a million”.
But I’m not so much just wanting to understand what he meant. I want to understand whether this is a common way of expressing something, and to be sure I have its meaning correct in general, especially in case I ever use it.
I’m still not quite hearing that this writer’s usage is a standard usage. The only people here who have posted about anything like a standard usage are speaking about a distinctly different one, I think inapplicable here. Nobody’s saying “ah, yes, I recognize that”.
Did the writer misjudge for me as a specific target audience, or for posting technical stuff online generally? Who here would have been sure they recognized the usage and understood the sentence I quoted?
After the explanation, I take it to mean that you need about 1 cell in your simulation (for each output point), or 1,000,000 cells, depending on what you are trying to do.
Which are perfectly cromulent instances of O(n), but not (in English) something I’ve previously seen in Electronic/Medical/Computational textbooks.
If that’s NOT what it means, then I’m still lost, even after the explanation.
(And 10^6 computation points per output point is a f-ing HUGE ratio for communications electronics – is fluid dynamics really that different? I assume this is 3-dimensional?)
The guy’s clueless. It’s like using “log X” to mean “write down X”. Using a commonly accepted notation in an incorrect way is … incorrect.
O(1) and O(1000000) represent the exact same sets. The whole point is to compare things to within a constant factor. The size of the constant doesn’t matter.
I don’t do anything high tech, and it’s all almost linear. I only need 10-to-1, in one dimension. 100-to-1 per dimension doesn’t match my mental model of the world.
The notation O() has several distinct usages. Each is standard in its appropriate context or field.
[1] The perhaps most familiar usage is Big-O notation, where something like O(log n) or O(n!) gives the leading-order scaling of an algorithm’s or operation’s running time.
[2] The notation of interest in this thread doesn’t have a specific name that I’m aware of, but O(x) indeed translates as “a number on the order of x” or, more verbosely, “a number of the order of magnitude of x”. It is usually read “order x”, and grammatically it acts like a number. I suspect its etymology traces back to usage [1], but it has a very different meaning. (There’s a toy snippet after the example sentences below.)
[3] The next most common usage is probably from group theory, where O(n) means the orthogonal group in n dimensions (the group of n×n orthogonal matrices).
The “O” in these three usages is not always stylized the same (calligraphy vs. italics vs. Roman), but this isn’t a hard rule at all.
I encounter all three of these regularly, and I use all three myself from time to time. Usage [2] appears in casual communication more often than formal writing, but it certainly occurs in both.
Made up example sentences:
“We estimate a human could be killed by O(1000) angry ducks.”
“This problem would take O(10[sup]7[/sup]) CPU-years to solve on a commodity CPU.”
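And since usage [2] is the one in question, here’s a throwaway Python helper of my own (nothing standard) that rounds a number to its nearest power of ten, which is really all “O(x)” is doing in that sense:
[code]
import math

def order_of(x):
    """Nearest power of ten: the 'O(x)' of usage [2]."""
    return 10 ** round(math.log10(x))

print(order_of(1_500_000))   # 1000000 -> "O(10^6)"
print(order_of(800))         # 1000    -> "O(1000)", as in the angry-ducks estimate
print(order_of(0.02))        # 0.01
[/code]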