Pi

Presumably from +1 to -1.

The second third of the proof:

The gist:

But, for any given x, we have that |f[sup](N)[/sup](x)| ~ |f[sup](N)[/sup](0)| = N!/(2N)! as N grows large. This decreases towards zero (without actually hitting zero) super-exponentially.

Short further explanation:
This can be seen from the easy Taylor series expansion of f[sup](N)[/sup]; the = is immediate, and the ~ follows from each term dominating the next as N grows large (if you’re worried about commuting limits here, it’s justified by dominated convergence).

Longer further explanation:
Recall that cos’’ = -cos, and cos(0) = 1 while cos’(0) = 0. This means the sequence of values of cos[sup](m)[/sup](0) is 1, 0, -1, 0, repeating. This means the Taylor series of cos(x) is 1 - x[sup]2[/sup]/2! + x[sup]4[/sup]/4! - x[sup]6[/sup]/6! + …, whose degree m term has coefficient zero if m is odd, and (-1)[sup]m/2[/sup]/m! if m is even.

And therefore, the Taylor series of f(x) = cos(sqrt(x)) is 1 - x[sup]1[/sup]/2! + x[sup]2[/sup]/4! - x[sup]3[/sup]/6! + …, whose degree m term has coefficient (-1)[sup]m[/sup]/(2m)!.
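As a quick numerical sanity check (a Python sketch of my own, not part of the proof itself), the partial sums of that series really do land on cos(sqrt(x)):

```python
import math

# Partial sums of 1 - x/2! + x^2/4! - x^3/6! + ..., the series claimed above.
def f_series(x, terms=30):
    return sum((-1) ** m * x ** m / math.factorial(2 * m) for m in range(terms))

for x in (0.5, 2.0, 9.0):
    print(x, f_series(x), math.cos(math.sqrt(x)))   # the two columns agree
```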

And from this Taylor series, we can re-extract the values of f[sup](N)[/sup](0); it must be N! * the degree N coefficient in the previous series, which is to say, it must be (-1)[sup]N[/sup] * N!/(2N)!.

This tells us what f[sup](N)[/sup](0) is, and we can also note that this has magnitude N!/(2N)!, which decreases towards zero (without actually hitting zero) “super-exponentially” (in the sense that it approaches zero faster than exponentiation with any fixed base; that is, the N-th root of its value at N eventually becomes smaller than any fixed positive value. You can see this by noting that N!/(2N)! = the reciprocal of (N + 1) * (N + 2) * … * 2N with N many factors, and thus is at least as small as 1/N^N, and thus its N-th root is at least as small as 1/N, which can be made arbitrarily small.)
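That decay is easy to watch numerically (again just an illustrative Python sketch, mine rather than the original poster’s):

```python
import math

# N!/(2N)! is the reciprocal of (N+1)(N+2)...(2N): N factors, each bigger than N,
# so the value sits below 1/N^N and its N-th root sits below 1/N.
for N in (5, 10, 20, 40):
    value = math.factorial(N) / math.factorial(2 * N)
    print(N, value ** (1 / N), 1 / N)
```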

Now we know that the magnitude of f[sup](N)[/sup](0) decreases towards zero (without actually hitting zero) super-exponentially. But we want to know the same fact about f[sup](N)[/sup](x) instead. How do we transform our knowledge about f[sup](N)[/sup](0) to knowledge about f[sup](N)[/sup](x)?

Well, we’ll show that the ratio between the two approaches 1 as N grows large; this will show that f[sup](N)[/sup](x) also must decrease at the same “asymptotic rate”.

Ok. What is the ratio between f[sup](N)[/sup](x) and f[sup](N)[/sup](0)?

…Ah, oh, god, it’s past 3:30 AM for me here. I’ll finish this post and the expanded post about the last third of the proof tomorrow. Sorry folks; good night!

An explanation of why it’s useful isn’t hard. If you imagine a number line, you can grasp that it’s only possible to move up and down it - any given number has its place on the line, and it’s either bigger or smaller than any other given number on the line.

There are some quantities, though, for which this isn’t enough. The sine wave is the classic example - as well as a magnitude value (how big it is) we also have a phase value - where it is in its cycle (at a peak, for instance, or on the down slope). So, a value at any given point on a sine wave can be given by how big it is and where it is in the cycle.

This can’t be mapped onto a number line, but can be mapped onto a number plane. You have the standard (real) value first, and then what we call the imaginary part after - that part is multiplied by the square root of -1, and that allows us to go sideways on the number plane. So, 5 + i7 means 5 along the line, and 7 ‘sideways’ to the line. If you imagine drawing this on a graph, you’ll get a diagonal line, with an angle up from your number line. This angle can be used to represent the phase angle of a sine wave, and this can be used to add sine waves together. This is very useful in electronics.
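If anyone wants to play with this, Python’s built-in complex numbers make the picture concrete (a small sketch of my own, not from the post above):

```python
import cmath

z = 5 + 7j                # 5 along the line, 7 'sideways' to it
print(abs(z))             # magnitude: the length of that diagonal line
print(cmath.phase(z))     # the angle up from the number line, in radians

# Multiplying by i swings everything 90 degrees sideways:
print(z * 1j)             # (-7+5j)
```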

Why the square root of -1 is specifically useful for this is more than I can explain on my phone keyboard without looking up details!

Here, I’ll give a quick explanation before writing the more involved post later: If you have an integer polynomial of degree k evaluated at n/d, then it looks like integer * n^k/d^k + integer * n^(k - 1)/d^(k - 1) + … + integer * n/d + integer. We can rescale all these terms to have a common denominator of d^k, so the whole thing looks like some integer/d^k. What’s the smallest nonzero size that “some integer/d^k” can have? It’s 1/d^k. It can’t get any smaller than that.
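That 1/d^k floor is easy to see experimentally; here’s a quick Python sketch with made-up coefficients (the specific polynomial and the point 22/7 are just for illustration):

```python
from fractions import Fraction

# A degree-4 integer polynomial with arbitrary coefficients, evaluated at n/d:
# every term is an integer times a power of n/d, so the total is (some integer)/d^k.
k, n, d = 4, 22, 7
coeffs = [3, -5, 0, 8, -2]
value = sum(c * Fraction(n, d) ** i for i, c in enumerate(coeffs))
print(value)
print(value * d ** k)                 # an integer: the numerator over d^k
assert value == 0 or abs(value) >= Fraction(1, d ** k)
```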

And if k(N) is growing larger linearly as a function of N, then 1/d^k(N) is getting smaller and approaching zero exponentially as a function of N. It can’t be approaching zero any faster than that. In particular, it can’t be approaching zero as fast as N!/(2N)! approaches zero.
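Numerically, the race between 1/d^N (exponential) and N!/(2N)! (super-exponential) isn’t close; a sketch with a hypothetical fixed denominator d = 10:

```python
import math

d = 10   # a hypothetical fixed denominator
for N in (5, 10, 20):
    exponential_floor = 1 / d ** N                  # smallest nonzero |integer / d^N|
    super_exp = math.factorial(N) / math.factorial(2 * N)
    print(N, super_exp / exponential_floor)         # ratio collapses toward zero
```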

Complex numbers are useful in far more, and in far more everyday scenarios, than just electrical engineering… They describe the geometry we all live in and reason about with great familiarity every day: ordinary plane rotations.

But I’ve discussed this so many times I tire of explaining it again (or at least tire of explaining it as I tire in general, approaching 4 AM); I’ll let someone else discuss it this time.

:cool:

I think I kinda see. If we were limited to the regular number line, that would be okay to represent the ‘magnitude’ of a sine wave (i.e. how big) but does not allow us to express its phase in terms of where it is in relation to 0, right? So, we use sqrt(-1) (i.e. i?) to permit us to move (in another dimension, almost) to map this. Is 5 + i7 what they call a ‘complex number’ because it comprises real numbers and imaginary ones?

Wonderful, thank you. :cool:

Anyone struggling with the idea of complex numbers, or how they are used to represent rotation, might benefit from looking at the article on them at Better Explained.

(And, in case we’ve gotten too far from the thread topic, here’s Better Explained’s look at how Archimedes approximated pi.)

Square root of -1.

It does actually link in with this thread (which is probably why it was mentioned). We are unable to write out all the digits of Pi’s decimal expansion, but it’s a useful number nonetheless. It’s hard to even imagine the square root of -1 (for me, at least) - it’s the number which, when multiplied by itself, results in -1. We can’t write that down in the standard notation, other than to say “sqrt(-1)”. When we square a number on the number line - any of them, positive or negative, rational or irrational - it never results in a negative number. But when talking about complex numbers, we talk about a number which, when squared, results in -1.
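Both halves of that statement can be checked in a couple of lines of Python (my own quick sketch):

```python
# Squaring anything on the real number line never goes negative...
assert all(x * x >= 0 for x in (-3.5, -1, 0, 2, 7.25))

# ...but i (spelled 1j in Python) squares to exactly -1:
assert 1j * 1j == -1
print("both checks pass")
```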

I’ve had a quick glance through Thudlow Boink’s link to Better Explained, and it’s very good, although it does give me flashbacks to college.

“…but does not allow us to express its phase in terms of where it is in relation to 0, right?”

I could’ve put that better. I meant it allows us to express its positive and negative phases in terms of unique positions on the number plane.

Too bad no one made these points back about Post #106. :rolleyes:

I did a graph a couple of days ago in Excel, where I took the first 1500 numbers for the numerator, found the denominator that would make the fraction closest to pi, figured the error, and then graphed 1/error for each of the numerators. That way, if it was off by 1%, the number being graphed would be 100. If it was off by 0.01%, the number would be 10,000.

The graph was pretty much just three spikes, which were 355, two times that, and three times that. There were no other ratios that registered more than a pixel or two at the bottom of the graph. It’s pretty astounding how good that ratio is.
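Here’s roughly the same experiment in Python instead of Excel (a sketch with an arbitrary cutoff, not the original spreadsheet; note that with numerators up to 1500, the fourth multiple, 1420 = 4 × 355, clears the bar as well, at the same height):

```python
from math import pi

# For each numerator up to 1500, pick the best denominator, measure the relative
# error, and collect the numerators whose 1/error tops an arbitrary cutoff.
spikes = []
for num in range(4, 1501):
    den = round(num / pi)
    rel_err = abs(num / den - pi) / pi
    if 1 / rel_err > 1_000_000:       # only the tall spikes clear this bar
        spikes.append(num)
print(spikes)   # -> [355, 710, 1065, 1420]
```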

Going from +1 to -1 is a rotation of 180 degrees. From +1 to i is 90 degrees.

And you need i to make e[sup]i*pi[/sup] equal to -1. It’s worth it just for that.

In other words: Multiplying by -1 means reversing direction. Multiplying by -1, and then multiplying by -1 again, means reverse direction and then reverse again, so you end up back where you started. So “the square root of -1” means something that, if you do it twice, results in you reversing direction. And what’s that? Why, of course, it’s just turning 90º.
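The two ideas above (a quarter turn done twice, and the e[sup]i*pi[/sup] identity mentioned upthread) check out directly in Python; a small sketch:

```python
import cmath

# A quarter turn is multiplication by i; two quarter turns reverse direction.
z = 3 + 4j
assert z * 1j * 1j == -z

# Half a turn all at once is Euler's e^(i*pi), which is -1 (up to float fuzz):
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-15
print("rotations check out")
```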

I’ll have to do that myself - cool idea.

septimus suggested the existence of such a good rational approximation to pi with a particularly small denominator is not a coincidence, but has something to do with Heegner numbers or with Ramanujan-type infinite series converging to π, but I don’t see that (the partial sums of that series are not rational numbers, for one thing).

I mean, you have that series, and it rapidly converges to 1/π, but is the fact that a large number occurs near the beginning of the continued fraction expansion of π something other than random chance?

(Not talking to Chronos, just following on from him…)

And here we have an ambiguity: There are two different ways to turn 90°, clockwise and counterclockwise, both of which will result in reversing direction if you repeat them.

Now, the idea of a quadratic equation having two distinct solutions is familiar to most of us; x[sup]2[/sup] = 4 has solutions x = 2 and x = -2, after all. So we get to the nub, which is: Which direction should be considered positive? That’s not something you can derive from the rest of mathematics; it is inherently arbitrary, and is therefore called a convention.

This convention is the sign convention, and both math and physics agree that counterclockwise is positive. Therefore, on a standard analog clock, a positive 90° rotation takes you from 3 o’clock to 12 o’clock, and, if you put your right hand on the clock, thumb pointing outwards from the clock face, the rest of your fingers will be able to curl in the positive direction. That’s one reason it’s called the right-hand rule.

I agree that the “unusual” early 292 in pi’s CF form [3;7,15,1,292,1,…] is probably just a random coincidence — if such terminology makes sense about an invariant math constant :smack: — and most probably has no relationship to anything like Heegner numbers … but there’s so much in math that seems weird to me, I had to wonder.

BTW, the largish “15” in the CF, occurring before the “292”, is also “against the odds” and explains the fairish approximation 22/7.
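Those CF terms are easy to regenerate (a Python sketch; double-precision float is plenty for the first several terms):

```python
from math import pi

# Peel off continued-fraction terms: a = floor(x), then recurse on 1/(x - a).
x, terms = pi, []
for _ in range(6):
    a = int(x)
    terms.append(a)
    x = 1 / (x - a)
print(terms)   # -> [3, 7, 15, 1, 292, 1]
```

Chopping the expansion just before the 15 gives 22/7; chopping just before the 292 gives 355/113.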

Just wanted to say thanks for the elaboration here and in your other posts. Haven’t had the time to really dig into them yet but will do so over the weekend. Even after a quick skim I have a much clearer picture of the proof.

Glad they helped! (I forgot to go back and finish off the elaboration posts, but will do so tonight or this weekend as well.)