I know you’re joking, but let me point out to everyone else: the commonly given Taylor series proof of Euler’s theorem is an unfortunate example of starting with wonderfully clear intuitions, burying them under a long chain of technical results, and then working backwards from those results only to recover the original intuitions in obfuscated form, the underlying ideas now seen as through a glass darkly.
What I mean is: how does one actually discover the Taylor series for e^x, cos(x), and sin(x)? Well, one uses the fact that their derivatives are, respectively, e^x, -sin(x), and cos(x), iterates these rules to get the higher derivatives, and then constructs the series accordingly. Then, all intuition finally having been lost, one can purely formally substitute ix for x in the first Taylor series, see that the result is the appropriate linear combination of the latter two series, and conclude the theorem [that e^(ix) = cos(x) + i*sin(x)].
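Spelled out, that purely formal substitution looks like this: split the sum over even and odd n, using i^(2k) = (-1)^k and i^(2k+1) = (-1)^k * i.

```latex
e^{ix} = \sum_{n=0}^{\infty} \frac{(ix)^{n}}{n!}
       = \sum_{k=0}^{\infty} \frac{(-1)^{k} x^{2k}}{(2k)!}
         \;+\; i \sum_{k=0}^{\infty} \frac{(-1)^{k} x^{2k+1}}{(2k+1)!}
       = \cos x + i \sin x
```

The two sums are exactly the Taylor series of cosine and sine, which is what makes the manipulation work, and also what makes it so uninformative: the answer pops out of bookkeeping, not understanding.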
But where do those starting differentiation rules come from in the first place? For the first one, we have it as the defining property of e^x: that it should be its own derivative. And as for the latter two, there are various paths to discovery, but perhaps the best is from the fact that <cos(x), sin(x)> describes rotation by x radians around the unit circle; the velocity of this rotating motion is of unit magnitude and perpendicular to the radius, and thus the derivative is <cos(x), sin(x)> rotated 90 degrees (and since the positive part of the X-axis rotates into the positive part of the Y-axis, which in turn rotates into the negative part of the X-axis, this comes out to <-sin(x), cos(x)>).
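In symbols, this geometric argument is just the componentwise derivative together with the 90-degree rotation matrix:

```latex
\frac{d}{dx}\begin{pmatrix} \cos x \\ \sin x \end{pmatrix}
= \begin{pmatrix} -\sin x \\ \cos x \end{pmatrix}
= \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
  \begin{pmatrix} \cos x \\ \sin x \end{pmatrix}
```

The matrix here sends (1, 0) to (0, 1) and (0, 1) to (-1, 0), i.e., it is rotation by 90 degrees; under the identification of (a, b) with a + bi, it is exactly multiplication by i.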
All of this is necessarily established first, before there can be any development of the Taylor series for these functions. Yet, once discovered, these sources of the differentiation rules already give us Euler’s theorem directly, without needing to grind them into opaque infinite polynomials before drawing the final conclusion. The parenthetical above about the effect of rotation on the axes is essentially the observation that multiplication by i is 90 degree rotation. Substituting ix for x into the remarks on e^x yields that e^(ix) has derivative equal to itself rotated 90 degrees. And, in deducing the derivatives of cosine and sine, we observed that this same differential equation holds of “rotate by x radians”, which corresponds to <cos(x), sin(x)>. But this is already to note that e^(ix), “rotate by x radians”, and cos(x) + i*sin(x) are all equal, the desired result.
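One small step deserves to be made explicit to make the identification airtight: two functions satisfying the same first-order differential equation with the same initial value must agree. A sketch: both f(x) = e^(ix) and g(x) = cos(x) + i*sin(x) satisfy y' = iy with y(0) = 1, and then

```latex
\frac{d}{dx}\!\left(\frac{f}{g}\right)
= \frac{f'g - fg'}{g^{2}}
= \frac{(if)\,g - f\,(ig)}{g^{2}}
= 0
```

so f/g is constant, equal to its value 1 at x = 0, hence f = g everywhere. (Dividing by g is safe since |g|^2 = cos^2(x) + sin^2(x) = 1, so g never vanishes.)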
Proving that e^(ix) = cos(x) + i*sin(x) from Taylor series expansions is like proving that cosine is its own fourth derivative from its Taylor series expansion: yes, you could do it that way, but it would be reflective of having forgotten how you’ve gotten to where you are; by the time you’ve gotten to the stage where you can write out the Taylor series, you should already have discovered the theorem, without having to try reading it off from such series.