A couple of weeks ago, when I was snowed in, I recorded my furnace's duty cycle versus the deltaT between inside and outside temperature, with outside temps varying between 2 and 19 degf. I fit a line through the two most extreme points, then plugged in my current outside temp of 41, and the prediction came out within 20%: 48 actual vs. 59 calculated. Not bad.
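If you want to play with the fit, here's a sketch of the two-point extrapolation in Python. The duty-cycle values at the two recorded points are hypothetical stand-ins (only the temps 2, 19, and 41 degF come from the post), and the 70 degF indoor setpoint is an assumption:

```python
# Two-point linear fit of duty cycle vs. deltaT (inside minus outside).
# The duty-cycle values below are hypothetical; only the outside temps
# come from the post, and the indoor setpoint is assumed.
INSIDE = 70.0                              # assumed indoor temp, degF

p1 = (INSIDE - 2.0, 0.90)                  # (deltaT, duty cycle), hypothetical
p2 = (INSIDE - 19.0, 0.68)                 # (deltaT, duty cycle), hypothetical

def predict(delta_t):
    """Linear interpolation/extrapolation through the two points."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (delta_t - x1)

print(predict(INSIDE - 41.0))              # predicted duty cycle at 41 degF outside
```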
I was trying to find the length of a triangle leg opposite a 135 degree angle. I used to know how to do this (do I use tangent and secant or logarithms; definitely not the Pythagorean theorem) but was having no luck. On a whim, I Googled it as a question and got The Triangle Calculator page. SHAZAM!
I’m less stupid by being stupider.
Law of Sines!
You mean like this: xkcd: Los Alamos?
I sure wish the preview box still worked on xkcd, but Munroe's webmaster nerfed it, probably inadvertently.
A while back I saw a video on taking e to the power of strange things like matrices, and also some weird applications of the derivative. I thought: what if I take e to the derivative power? Not the derivative of e^x, but e^\frac{d}{dx}?
Sounds strange, but it’s not so bad. I’ll use the letter D to represent the operator \frac{d}{dx}.
e^x is of course 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + ..., which I'll write a little more consistently as \frac{x^0}{0!} + \frac{x^1}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + .... So e^D is just \frac{D^0}{0!} + \frac{D^1}{1!} + \frac{D^2}{2!} + \frac{D^3}{3!} + ..., which just means it's a weighted sum of the zeroth derivative, the first derivative, the second derivative, and so on.
Suppose we apply that to x^N. Since the derivative is a linear operator, if we figure out what it does to that, we know what it does to any polynomial, and really any analytic function. If we work through it, we get:
x^N + \frac{N}{1!}x^{N-1} + \frac{N(N-1)}{2!}x^{N-2} + \frac{N(N-1)(N-2)}{3!}x^{N-3} + ...
or:
x^N + \frac{N!}{1!(N-1)!}x^{N-1} + \frac{N!}{2!(N-2)!}x^{N-2} + \frac{N!}{3!(N-3)!}x^{N-3} + ...
Hey, that looks sorta familiar… those are just the binomial coefficients! I.e.:
{N \choose k} = \frac{N!}{k!(N-k)!}
So we’re just doing:
\sum_{k=0}^{N} {N \choose k}x^k, which is of course just (x+1)^N.
That’s pretty neat! Tidying up, we have e^D x^N = (x+1)^N. Which means that e^D f(x) = f(x+1) for a wide variety of functions. Almost all of them, really. Our e^D operator just slides the function to the left. If we work a little more on it, we can discover that e^{aD} f(x) = f(x + a), for an arbitrary slide amount.
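For anyone who wants to check this symbolically, here's a quick sanity check with sympy. The series is truncated at k = N, which is exact here since higher derivatives of x^N vanish:

```python
import sympy as sp

x = sp.symbols('x')
N = 5

# e^D x^N, truncated at k = N: D^k kills x^N for k > N, so this is exact.
series = sum(sp.diff(x**N, x, k) / sp.factorial(k) for k in range(N + 1))

print(sp.expand(series) == sp.expand((x + 1)**N))   # True
```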
Anyway, I certainly wasn’t the first to discover this, and it turns out that it’s useful in quantum mechanics and is known as the translation operator. Not to mention other areas of math.
There's another fun way to derive this, starting from the "interest rate" definition of e: you take many tiny steps along the function, and by the time you've stepped a total distance of 1, you've accumulated all the little differences in y, giving the same f(x+1) in the end.
A few years ago I learned this cool way to derive the trigonometric double angle formulas from Euler’s formula (e^{ix} = \cos x + i \sin x). One might think that Euler’s formula is only useful for rather abstract calculations involving complex numbers, but it can be used to derive the very practical double angle formulas, using nothing but a couple of steps of simple algebra:
First take Euler’s formula and substitute x = 2\theta:
(1) e^{i 2\theta} = \cos {2\theta} + i \sin{2\theta}
Now take Euler’s formula again, substitute x=\theta, and square both sides:
(2) (e^{i\theta})^2 = (\cos \theta + i \sin \theta)^2 = \cos^2 \theta +2i \cos \theta \sin \theta - \sin^2 \theta
(The negative sign appears in the last term because i^2 = -1.)
Now observe that the left hand side of (1) and (2) are equal to each other. Therefore the right hand sides must also be equal. If two complex numbers are equal, the real parts must be equal and the imaginary parts must be equal. So:
Real part of the right side of (1) equals real part of the right side of (2):
\cos 2\theta =\cos^2 \theta - \sin^2 \theta
Imaginary part of the right side of (1) equals imaginary part of the right side of (2):
\sin 2\theta = 2 \cos \theta \sin \theta
QED. Those are the double angle formulas.
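A quick numerical spot-check of both identities, for the skeptical:

```python
import cmath, math

theta = 0.7   # an arbitrary test angle

# Both sides of (1) and (2) should agree...
print(abs(cmath.exp(2j * theta) - (math.cos(theta) + 1j * math.sin(theta))**2))

# ...and the real and imaginary parts should match separately.
print(math.cos(2 * theta) - (math.cos(theta)**2 - math.sin(theta)**2))
print(math.sin(2 * theta) - 2 * math.cos(theta) * math.sin(theta))
```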
That is awesome!
I was thinking about boolean functions a few months back. There are 4 boolean functions of 1 variable (always 1, always 0, A, and not-A), and there are 16 boolean functions of 2 variables, and in general 2^(2^N) boolean functions of N variables. There are 2^N ways that N boolean variables can be arranged (for N=2 the four ways are 00, 01, 10, and 11 of course). So if you want to know what the result is when you put the i-th arrangement of variables into the j-th function, the answer is the i-th binary digit of j.
There's a nice real-world application of the same principle. A number of computer architectures support a universal 3-input boolean instruction. The function selector is just a single byte: the hardware packs the three inputs into a 3-bit index from 0-7 and picks that bit of the function byte. It also supports all the 2-input functions, by duplicating a 4-bit truth table into the upper and lower halves of the byte (which works the same way as the 2-variable version you came up with).
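To make that concrete, here's a toy model in Python. No particular ISA's encoding is implied, but the selector-byte-as-truth-table idea is the same:

```python
# Toy model of a universal 3-input boolean instruction: the selector
# byte IS the truth table, and the inputs pick one bit of it.
# (Same i-th-bit-of-j rule as above.)
def lut3(selector, a, b, c):
    index = (a << 2) | (b << 1) | c      # pack inputs into a 3-bit index, 0-7
    return (selector >> index) & 1       # pick that bit of the selector byte

AND3 = 0b10000000      # only input pattern 111 (index 7) yields 1
XOR3 = 0b10010110      # bits set at the odd-parity indices 1, 2, 4, 7

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert lut3(AND3, a, b, c) == (a & b & c)
            assert lut3(XOR3, a, b, c) == (a ^ b ^ c)
```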
Nice
Just the other day, I calculated the exponential of the cross product.
Cool. I might have to noodle around with that myself now that you’ve suggested it.
The exponential is a remarkable function. One day I’m going to try to learn about Lie theory and its exponential map.
A fun little proof that I didn’t quite come up with, but I generalized from another one.
Pick some positive integer n that has at least one prime factor with an odd exponent. That is to say, it’s not a perfect square.
Also pick two positive integers a and b, whatever you like. Now consider the numbers nb^2 and a^2. These can't be the same number: nb^2 has (like n) at least one prime factor with an odd exponent, since b^2 only contributes even exponents, while every prime factor of a^2 appears with an even exponent. Since they're different integers, we can then write:
|{nb^2 - a^2}| \ge 1
Or:
|{n - \frac{a^2}{b^2}}| \ge \frac{1}{b^2}
So the square of any rational number \frac{a}{b} must differ from a non-perfect-square by at least \frac{1}{b^2}, and in particular can never equal it. The only remaining candidates for the square root are, by definition, the irrationals.
I like this one because it isn’t a proof-by-contradiction, and it works for all numbers at once.
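If you want to see the inequality in action, here's a brute-force spot-check with exact rational arithmetic (the sample ranges are arbitrary):

```python
from fractions import Fraction
from math import isqrt

# Spot-check the bound |n - (a/b)^2| >= 1/b^2 for non-square n.
for n in range(2, 50):
    if isqrt(n) ** 2 == n:
        continue                      # skip perfect squares
    for a in range(1, 40):
        for b in range(1, 40):
            gap = abs(n - Fraction(a, b) ** 2)
            assert gap >= Fraction(1, b * b), (n, a, b)

print("bound holds on all samples")
```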
Getting back to the OP, the law of cosines says that
c^2=a^2+b^2-2ab\cos\theta. In this case \cos(135^\circ) = -\sqrt{2}/2, so
c^2=a^2+b^2+\sqrt{2}ab.
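For instance, with hypothetical legs a = 3 and b = 4:

```python
import math

a, b = 3.0, 4.0   # hypothetical leg lengths
c = math.sqrt(a*a + b*b + math.sqrt(2) * a * b)
print(c)          # side opposite the 135-degree angle, ~6.48
```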
Someone mentioned above exponentiating the cross product. I am not sure that makes any sense since the cross product is not commutative and, much worse, not associative.
I plugged my numbers into that equation and came up with the actual 48 vs my calculated 59. But the units seem to be in Planck constants.
It makes sense in this limited context, I think. Treating "k \times" as a linear operator on a vector v,

e^{k \times} v = \left(I + (k \times) + \frac{(k \times)^2}{2!} + \cdots\right) v = v + k \times v + \frac{k \times (k \times v)}{2!} + \cdots

gave me a rotation of v about the axis k (with the angle of rotation being one radian, since the angle comes out to |k| and I used a unit-length k).
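Here's one way to reproduce that in numpy/scipy, writing "k \times" as a skew-symmetric matrix so expm can chew on it:

```python
import numpy as np
from scipy.linalg import expm

def skew(k):
    """Matrix [k]_x such that skew(k) @ v == np.cross(k, v)."""
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

k = np.array([0.0, 0.0, 1.0])      # unit axis -> one radian of rotation
R = expm(skew(k))                   # matrix exponential of the series above
v = np.array([1.0, 0.0, 0.0])
print(R @ v)                        # ~ [cos(1), sin(1), 0]
```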
Matrices are supposed to be strange?
Hi @Hari_Seldon ,
I think what is meant is that \mathbb{R}^3 with the cross product as its bracket forms a real Lie algebra, namely \mathfrak{so}(3). Therefore any element of this Lie algebra can be exponentiated to give an element of \mathrm{SO}(3), quite explicitly if you write it out in coordinates.
Incidentally (vaguely related, once you start computing exponentials), there is the Baker–Campbell–Hausdorff formula, which gives an explicit series for \log(e^X e^Y) when X and Y do not commute.
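The first few terms are easy to check numerically, using small random matrices so the leftover error (third order in their size) is tiny:

```python
import numpy as np
from scipy.linalg import expm, logm

# BCH: log(e^X e^Y) = X + Y + [X, Y]/2 + higher-order brackets.
rng = np.random.default_rng(0)
X = 0.01 * rng.standard_normal((3, 3))
Y = 0.01 * rng.standard_normal((3, 3))

Z = logm(expm(X) @ expm(Y))
approx = X + Y + 0.5 * (X @ Y - Y @ X)
print(np.max(np.abs(Z - approx)))   # ~1e-6, from the dropped third-order terms
```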
Hey man, even e^{i \pi} is still super cool and weird. Quaternions, matrices, operators, etc. are weirder yet as exponents. "Multiply the base by itself matrix times." It makes no sense, and yet it does.
Of course, if you restrict yourself to certain kinds of functions, \frac{d}{dx} can be represented by a matrix. E.g. in the space of polynomials of degree at most 3, represented by their coefficient vectors (c_0, c_1, c_2, c_3), the derivative operator is:
D = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}
So derivative exponentials are in a way no weirder than matrix exponentials.
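And indeed, exponentiating that matrix numerically reproduces the shift from earlier in the thread: applied to the coefficients of x^3, it returns the coefficients of (x+1)^3:

```python
import numpy as np
from scipy.linalg import expm

# The derivative matrix above, acting on coefficient vectors
# (c0, c1, c2, c3) of c0 + c1*x + c2*x^2 + c3*x^3.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

p = np.array([0.0, 0.0, 0.0, 1.0])   # coefficients of x^3
print(expm(D) @ p)                    # [1, 3, 3, 1] = coefficients of (x+1)^3
```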
While we’re at it, everyone knows about first derivatives, and second derivatives, and so on, and you can call integrals negative-order derivatives… but how about fractional-order derivatives?
All you need to do is take your function and do a Fourier analysis on it, so it's now a sum of a bunch of sines. Taking the derivative of \sin(\omega x) multiplies the amplitude by \omega and shifts the phase by \pi/2 to the left. So to take the half-derivative, you multiply the amplitude by \omega^{1/2} and shift the phase by \pi/4 instead. The same trick works for any other real-valued derivative order.
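Here's a sketch of that recipe using numpy's FFT on a periodic grid; applying the half-derivative twice should reproduce the ordinary derivative:

```python
import numpy as np

def frac_deriv(f, alpha, period=2 * np.pi):
    """Fractional derivative via FFT: each mode e^{i w x} picks up (i w)^alpha."""
    n = len(f)
    w = 2 * np.pi * np.fft.fftfreq(n, d=period / n)   # angular frequencies
    factor = (1j * w) ** alpha
    factor[w == 0] = 0.0          # the constant term differentiates to zero
    return np.fft.ifft(np.fft.fft(f) * factor).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
half = frac_deriv(np.sin(x), 0.5)            # the half-derivative of sin
twice = frac_deriv(half, 0.5)                # half-derivative applied twice
print(np.max(np.abs(twice - np.cos(x))))     # ~1e-15: matches d/dx sin = cos
```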
I like how this went from Y=mX+b to Fourier transforms in just 18 posts
In the case of d/dx, perhaps the intuitive way to think about it is geometric: d/dx is the vector field corresponding to translation in the positive x-direction (a directional derivative), i.e. it generates the flow \varphi_t(x)=x+t, so for a smooth function f, e^{t\,d/dx}f(x)=f(x+t).