I don’t know that you could define e that way. In order to define what “raising to a complex number” means you at least need to have already defined the exponential function, which I suppose is technically not the same as defining “e”, but it leans on it pretty heavily.
To expand a little on those definitions, the first one is found in Bernoulli’s work on compound interest. Bernoulli is usually given credit for discovering the constant.
In that context it’s the limit of what you get if you earn 100% interest for a year and it’s credited in n intervals. Say you pay out 50% every six months and the interest is reinvested: if you invest 1 at the start of the year, you’d have 2 at the end if interest was credited annually, and 2.25 if it was credited every 6 months. If it’s credited n times a year, you get (1 + 1/n)^n, and as n goes to infinity that approaches e.
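If you want to watch that limit converge, here’s a tiny sketch (my own; the particular values of n are arbitrary):

    # Compounding 100% annual interest n times a year approaches e as n grows.
    for n in (1, 2, 4, 12, 365, 1_000_000):
        print(n, (1 + 1 / n) ** n)
    # 1 -> 2.0, 2 -> 2.25, 365 -> ~2.71457, 1_000_000 -> ~2.7182805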
The function one is essential for doing integration and differentiation with exponential and logarithmic functions. The derivative of a^x is ln(a)*a^x.
You can figure out the derivative without knowing e, but the result would include a limit that would define e.
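A quick numerical sanity check of that (a sketch of mine, not anything from the post; the values of a and x are arbitrary): the finite-difference derivative of a^x, divided by a^x, comes out to the same constant at every x, and that constant matches ln(a).

    import math

    a, h = 3.0, 1e-7
    for x in (0.0, 1.0, 2.5):
        deriv = (a ** (x + h) - a ** (x - h)) / (2 * h)   # central difference
        print(deriv / a ** x)                             # ~1.0986 at every x
    print(math.log(a))                                    # 1.0986122886681098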
Could you define exponentiation as repeated multiplication, and extend the domain to real numbers for exponents, without explicitly defining “e” first?
You can go ahead and do it, sure. I mean, it’s all meaningless picayune distinctions in teasing out what’s been defined before what, and what’s been implicitly defined if not explicitly defined, etc, but:
For any positive value b, there’s a unique complex-differentiable function from complex numbers to complex numbers which matches x |-> b^x on real values x. This defines raising a positive real number to a complex number.
There’s then a unique real e > 1 such that e^(i * t) = -1 exactly when t is an odd integer multiple of pi. (There are multiple values b with the property that b^(i * pi) = -1; specifically, any b of the form e^n for odd n; that’s why the single equation isn’t enough to pin the base down.)
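A rough illustration of that last parenthetical (my own, using Python’s complex power, which takes the principal branch for a positive real base): several bases land on -1 when raised to i*pi, so something stronger is needed to single out e.

    import math

    for b in (math.e, math.e ** 3, math.e ** -1, 2.0):
        print(b, b ** (1j * math.pi))
    # e, e^3 and e^(-1) all land on (about) -1+0j;
    # base 2 lands somewhere else on the unit circle.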
Not the most natural way to go about telling someone for the first time about the web of concepts named e, probably; not the most clarifying one. But it’s available, if you want.
Here is my favorite way to define e. It’s not the only way, but it seems to me the way that is most clarifying about what is usually of primary importance in the web of concepts that gets bundled under the name “e”:
Let b be any base of exponentiation, and consider the limit as h goes to 0 of (b^h - 1)/h; i.e., the invariant ratio of (d/dx b^x) to b^x. We call this ln(b).
There is a unique base e such that ln(e) = 1. This defines e.
Does this not feel hands-on enough? Do you want something that looks more like e = some formula? Well, directly from the definition, we can solve: if the limit as h goes to 0 of (e^h - 1)/h is 1, then e = the limit as h goes to 0 of (1 + h)^(1/h). This amounts to the “classical” definition that e is the limit as n goes to infinity of (1 + 1/n)^n.
Another way to look at it is that, since ln(b^x) = x ln(b), we find that b^(1/ln(b)) = e no matter which b you start with.
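Both of those observations are easy to check numerically (a sketch of mine; the particular values of h and b are arbitrary):

    import math

    for h in (0.1, 0.001, 1e-6):
        print((1 + h) ** (1 / h))          # 2.5937..., 2.7169..., 2.71828...

    for b in (2.0, 10.0, 0.5):
        print(b ** (1 / math.log(b)))      # each one prints 2.718281828459045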
But at any rate, e is the unique base such that ln(e) = 1. (And more generally by the above reasoning, we find that the ln function is invertible and its inverse is x |-> e^x.)
You can also define e in other ways; one popular one is as the sum of reciprocal factorials, and a cool further out one is as the limiting value as n grows large of the n-th root of the least common multiple of the first n positive integers. It’s useful to understand the web of connections relating all of these and more.
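Here are those two descriptions checked numerically (my own sketch; the term count and the choice of n are arbitrary, and the lcm one needs Python 3.9+ for math.lcm):

    import math
    from functools import reduce

    # e as the sum of reciprocal factorials:
    print(sum(1 / math.factorial(k) for k in range(20)))   # 2.718281828459045

    # e as the limit of the n-th root of lcm(1, ..., n); convergence is slow:
    n = 100_000
    L = reduce(math.lcm, range(1, n + 1))
    print(math.exp(math.log(L) / n))                        # roughly 2.71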
But for whatever sense “favorite mathematical definition” has, the above is my default starting point to the theory of e (taking ln as defined above as the primary concept, and e as just another way of talking about this same thing). Sometimes circumstances are such that some other description of it is more directly relevant, but this is my default personal conceptualization.
For me, what is of primary importance is the natural logarithm: it is called natural because it has such a natural and naturally useful definition. ln(b) is the constant ratio between b^x’s derivative and b^x itself.
[In case it’s not obvious that this ratio should be constant, note that translating an exponential function just multiplies it by a constant (since b^(x + c) = b^c * b^x; exponentials turn addition of inputs into multiplication of outputs), and thus just multiplies its derivative by that same constant, and thus the ratio between them stays the same at all times]
e then arises as a corollary to the natural logarithm. It doesn’t have to be seen the other way around; you don’t have to think of e as defined first as some weird number of unclear importance and logarithms to its base considered for weird reasons, only to turn out to be useful later.
(Indeed, often things are phrased in terms of e which are really just roundabout phrasings that have nothing to do with any interest in the value of e = 2.718… as such; they’re just meant to be ways of talking about natural logarithms or exponential functions in general.)
There is a function exp(x) with the following properties: when you plot the curve y = exp(x), the tangent line to the curve at the point (x, exp(x)) has slope equal to exp(x), and the area under the curve from minus infinity to x is also equal to exp(x).
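A quick numerical sanity check of those two properties (my own sketch; the point x = 1 and the cutoff at -20 standing in for minus infinity are arbitrary):

    import math

    x, h, dt = 1.0, 1e-6, 1e-3
    slope = (math.exp(x + h) - math.exp(x - h)) / (2 * h)    # slope of the tangent at x
    area = dt * sum(math.exp(-20 + (k + 0.5) * dt)           # midpoint Riemann sum from
                    for k in range(round((x + 20) / dt)))    # "minus infinity" (here -20) up to x
    print(slope, area, math.exp(x))                          # all three come out near 2.71828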
Personally, if I were going to define ln() before exp(), I’d define ln(x) as the integral of 1/t from t = 1 to t = x. But I don’t think that’s the most intuitive way, since it requires the concept of integration (just as your method requires the concept of differentiation).
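A rough check of that integral definition (my sketch; the step size is arbitrary, and it only handles x >= 1): ln(x) as the area under 1/t from 1 to x, via a midpoint Riemann sum.

    import math

    def ln_via_integral(x, dt=1e-5):
        steps = round((x - 1) / dt)
        return dt * sum(1 / (1 + (k + 0.5) * dt) for k in range(steps))

    print(ln_via_integral(2.0), math.log(2.0))   # ~0.693147 both
    print(ln_via_integral(math.e))               # ~1.0, as it should be for x = e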
My preferred method is to start by defining exponentiation (to a natural-number power) as repeated multiplication, use the properties of exponents to then identify negative exponents with inverses and inverse exponents with roots, combine inverse exponents with integer exponents to define exponentiation to rational exponents, and then define real exponents as a limit of rational exponents.
I’d then show how to find arbitrary exponentiation in terms of the exp() function, and show that exp(x) is equal to some constant to the x power. If calculus is available, here’s where I would point out that exp(x) is equal to its own derivative. And then I’d introduce the power-series form of the function, and show that it has all of the properties we need exp() to have (while also giving a convenient and efficient way of calculating exp() to whatever precision we need).
Then I’d define ln() as the inverse of exp(), then show that the derivative of ln(x) is 1/x.
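To make those last two steps concrete, here’s a small sketch (mine, not from any textbook; the term count, bracketing interval, and tolerance are arbitrary choices): exp() computed from its power series, and ln() recovered by numerically inverting it.

    def exp_series(x, terms=60):
        """Approximate exp(x) by summing the power series 1 + x + x^2/2! + x^3/3! + ..."""
        total, term = 0.0, 1.0
        for k in range(terms):
            total += term
            term *= x / (k + 1)       # turns x^k/k! into x^(k+1)/(k+1)!
        return total

    def ln_by_inversion(y, lo=-20.0, hi=20.0, tol=1e-12):
        """Treat ln(y) as the x with exp(x) = y, and find it by bisection."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if exp_series(mid) < y:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    print(exp_series(1.0))        # ~2.718281828459045
    print(ln_by_inversion(10.0))  # ~2.302585092994 (i.e. ln 10)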
But yes, as you say, we can take any of these as the definitions, and then use those definitions to prove all of the other properties, and it ultimately doesn’t matter which order we go in.
As a matter of history, the natural logarithm came before the exponential function. Napier defined a closely related function, essentially letting ln(x) describe the solution to f’(x) = 1/x. It was not quite that simple, and the function he defined was, more or less, 10^7*ln(10^7/x). Possibly his most important contribution was a brilliant scheme for calculating this function by hand (lacking any computational aids such as logarithms). Then Briggs came along and realized that for computation (I think this was all in aid of navigational computation), base-10 logs were more useful, and he and Napier worked together to convert Napier’s tables to common logs. Napier died and Briggs completed the project. Then e could be defined by ln(e) = 1.
As an aside, my own personal introduction to e: I had a scientific calculator, but didn’t know what all of the buttons did, so I experimented. I realized that the ln() button behaved like some sort of logarithm, and figured out the base by trial-and-error and binary search (ln(2) < 1, ln(3) > 1, ln(2.5) < 1, ln(2.75) > 1 but only barely, ln(2.7) < 1 but even closer, etc.). Once I had enough digits for the number to be recognizable, I asked my math teacher “What is the number 2.718, and why does my calculator have a button for taking logs to that base?”.
I’m not sure why I never realized that [2nd] on that same key would give me the inverse function.
In my old calculus book, the “origin story” of e was as the base of the log function whose derivative is 1/x (and which is 0 at 1). Yeah, another way of stating the definite integral of 1/x, but this was early in the book. The authors were able to show that ln x was the unique function with the right properties.
In Concrete Mathematics by Graham, Knuth, and Patashnik, they show how a lovely “LR” sequence in the Stern-Brocot tree leads to e.
This leads to the fact that e is an irrational number that can be expressed with remarkably little information. Which makes π look quite “wordy” by comparison.
Surely “π/4 = 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11… [the alternating sum of reciprocal odd naturals]” is as simple or simpler than “e = 1 : ( 0 : ( 1 : ( 1 : ( 2 : ( 1 : ( 1 : ( 4 : ( 1 : ( 1 : ( 6 : ( 1 : … [the composition of 1 : ( n : ( 1 : over all even naturals], where x : y = x + 1/y”, which is what the referenced Stern-Brocot LR sequence amounts to.
(If you know about continued fractions, that thing I’ve perhaps notated awkwardly is just describing a simple continued fraction for e. But I still think the alternating sum description of π is equally simple or simpler; hardly that “wordy” in comparison.)
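For what it’s worth, both descriptions are easy to evaluate numerically (a sketch of mine; the term counts are arbitrary):

    from fractions import Fraction

    # pi/4 as the alternating sum of reciprocals of odd numbers; converges slowly:
    print(4 * sum((-1) ** k / (2 * k + 1) for k in range(1_000_000)))   # ~3.1415917

    # e from the continued fraction described above: the terms come in blocks
    # (1, n, 1) for n = 0, 2, 4, ..., with x : y meaning x + 1/y, evaluated inside out.
    terms = [t for n in range(0, 16, 2) for t in (1, n, 1)]
    value = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        value = t + 1 / value
    print(float(value))                                                 # 2.718281828459045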
The constant shows up in a lot of related scenarios, namely whenever you want to predict the amount of something over time where the rate of change is proportional to the current amount. For example, to determine the amount over time of:
Water draining from a container (drain rate eventually slows when there’s less water left)