IIRC this was a joke too from my computational math class.
Basically, when working with variables (x, y, a, b, you know…) you can approximate the results.
As x approaches infinity, consider:
(x)(x+1) = x^2+x
The squared term becomes so much larger (“dominates”) that it really doesn’t matter what the other terms are. If I offered you x^2 + x pennies versus x^2 pennies, how much would you care about the difference?
x = 1 - 2 pennies versus 1
x = 10 - $1.10 vs. $1.00
x = 100 - $101.00 vs. $100.00
x = 1,000 - $10,010.00 vs. $10,000.00 - at this point, who wants to wait while they count out the last ten dollars?
x = 1 million - a trillion pennies plus a million more, or $10B plus $10,000.00 versus $10B
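If you want to see that pattern without counting pennies, here’s a quick Python sketch (the helper name relative_gap is just mine, not standard anything):

```python
def relative_gap(x):
    """Fraction of x^2 + x that the lower-order +x term accounts for."""
    return x / (x * x + x)  # algebraically this is just 1 / (x + 1)

for x in [1, 10, 100, 1_000, 1_000_000]:
    print(f"x={x:>9,}: x^2+x = {x*x + x:,}  x^2 = {x*x:,}  gap = {relative_gap(x):.4%}")
```

The gap falls from 50% at x = 1 to about a ten-thousandth of a percent at x = 1 million, which is exactly the “who cares” territory the penny example is pointing at.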
So you can see: “for sufficiently large values of x, x^2 + x is approximately equal to x^2.” This sort of approximation becomes a cliche in that field of math.
Why is it useful? The highest term (largest exponent) dominates. This sets the shape of the graph of the function f(x). When you scale really large, that’s the number you need to know about.
For example, searching a database like Google’s for a keyword: if you have X entries and look at them one at a time, your search time grows in proportion to X. If the list is in order, and you can hop back and forth by half the remaining list each time (binary search), your search time is limited by log(X).
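Here’s a quick Python sketch of that halving search (usually called binary search), with a step counter bolted on just for illustration:

```python
def binary_search_steps(sorted_list, target):
    """Binary search that also counts how many comparisons it made."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return True, steps
        elif sorted_list[mid] < target:
            lo = mid + 1   # target must be in the upper half
        else:
            hi = mid - 1   # target must be in the lower half
    return False, steps

n = 1_000_000
data = list(range(n))
found, steps = binary_search_steps(data, n - 1)
print(f"found={found} in {steps} steps; a one-at-a-time scan could take up to {n:,}")
```

A million sorted entries takes at most about 20 halvings (log2 of a million is just under 20), versus up to a million looks for the one-at-a-time scan.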
Or you are searching a face database for facial recognition, or a fingerprint database, etc. The classic exercise for orders of growth in processing is the sort algorithm: for x items, anything from x^2 comparisons down to x log(x).
For a computer that does millions of calculations a second, that does not sound like much. But when there are billions or trillions of items to search, it CAN take a while. This is where the dominant term is important.
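To put rough numbers on that, here’s a sketch assuming a machine doing a million operations a second (the “millions of calculations a second” above; real speeds vary, the constant is just an assumption):

```python
import math

OPS_PER_SEC = 1_000_000  # assumed machine speed, per the text

for n in (1_000, 1_000_000, 1_000_000_000):
    quadratic_secs = n * n / OPS_PER_SEC          # an x^2 algorithm
    nlogn_secs = n * math.log2(n) / OPS_PER_SEC   # an x*log(x) algorithm
    print(f"n={n:>13,}: n^2 ~ {quadratic_secs:,.0f} s   n*log2(n) ~ {nlogn_secs:,.1f} s")
```

At a billion items the n^2 figure works out to tens of thousands of years, while n log2(n) is on the order of eight hours - same machine, wildly different scaling, all down to the dominant term.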
So the cliche was “x=x+1 for sufficiently large values of x”. The joke that “2+2=5 for sufficiently large values of 2” is just geek humor at work. It’s not pretty.