Only for those who don't understand pi

The OP is a very good explanation.

I have won many beers on the golf course with this premise. When a putt comes up 5 inches short of the hole, I say all it needed was one more revolution. A golf ball’s diameter is 1.68", so a full revolution is ~5.28".

People tend to think a revolution is about the same length as the ball’s diameter. Back at the 19th hole we measure out a full revolution, and they can’t believe it is over 5 inches.
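Checking the arithmetic with a quick Python sketch (1.68" is the regulation diameter quoted above):

```python
import math

diameter = 1.68  # golf ball diameter in inches (from the post above)
circumference = math.pi * diameter  # distance covered by one full revolution
print(f"One revolution rolls the ball {circumference:.2f} inches")
```

So one revolution is about 5.28 inches, comfortably more than the 5-inch putt.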

Haha, that’s actually a really clever illustration.

I’ve had that thought for a long time. Radii are more fundamental than diameters.

Also, I recall seeing a lot of 2πi’s in modern physics equations. We could do without all the “2 times”es. (Two-timing?)

But a question has just come up in my mind. For all the basic trig functions, 2π may just as well be 0. I’m wondering whether this would create a problem. Another way of looking at it is that with π as it is, we can take the arccos of -1 and get π. (The inverse secant as well.) This ties in with the famous Euler equation, which is supposed to be obvious on the face of it to any true mathematician: e^(πi) = -1


  • Jack

No more than the fact that π may just as well be 0 as far as tangent and cotangent are concerned.

Sure. But what one really gets, treating the inverse trig functions as fully multivalued, is π + N * 2π, for any integer N. And, indeed, arccosine of any value will be defined up to a multiple of 2π (so, in a sense, these are taking values in the reals modulo 2π). So even here, 2π is playing a fundamental role.

Ah, but is not the fact that 2π is the period of e[sup]x i[/sup] (and thus e[sup]2πi[/sup] = 1) even nicer? In any case, of course, we’re just looking at particular special cases of the simple (indeed, “obvious” is in some sense correct) fact that e[sup]x i[/sup] is rotation by x radians. Whether you think sending π to -1 or 2π to 1 is more interesting/fundamental/whatever is just whether you find rotation halfway around a circle or all the way around the circle more interesting/fundamental/whatever.
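Both identities are easy to check numerically. A quick sketch using Python’s standard cmath module (the tiny imaginary residues left over are just floating-point error):

```python
import cmath
import math

# e^(i*pi) = -1 and e^(2*pi*i) = 1, up to floating-point error
half_turn = cmath.exp(1j * math.pi)
full_turn = cmath.exp(2j * math.pi)
print(half_turn)  # approximately -1 (imaginary part ~1e-16)
print(full_turn)  # approximately 1

# e^(ix) acts as rotation by x radians: multiplying by e^(i*pi/2),
# which is i, turns the point 1+0j a quarter turn to 0+1j
quarter_turn = cmath.exp(1j * math.pi / 2)
print((1 + 0j) * quarter_turn)  # approximately i
```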

Not that it’s necessarily the most fruitful or particularly principled thing to argue about which mathematical objects are, either intensionally or extensionally, more “fundamental” than which other ones; it’s obviously not a well-defined objective notion. But still, the above is the reaction of my own aesthetic sense.

To answer this, I’m going to ask you to draw two parallel lines. Now, draw a zigzag line between your two lines, so that each segment of the zigzag goes from one line to the other. Which of the two straight lines is the zigzag line closer to? Why, it’s equally close to both, of course.

But now, make the segments of the zigzag line curved. Let’s say, for instance, that the curves are concave downwards. Now, the curvy zigzag line is closer to the top line, overall, than it is to the bottom line. There is less area in between the top line and the zigzag than there is between the zigzag and the bottom line.

When you draw polygons inside and outside a circle, the polygons outside the circle are like the top line in my example, and the polygons inside the circle are like the bottom line in my example. So the error from using the outside polygons is smaller than the error from using the inside ones.

Bonus information time: If your polygons have enough sides, then the error in the outside polygon’s area is approximately half the size of the error in the inside polygon’s area (this approximation gets better and better, the more sides you have). So if you take a weighted average, (2*outside area + inside area)/3, then you’ll get a really good estimate for the area of the circle, significantly better than using either the inside or outside polygons by themselves.
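Here’s a quick numerical sketch of that claim (Python; the area formulas for regular n-gons inscribed in and circumscribed about a unit circle are the standard ones, so the circle’s true area is exactly π):

```python
import math

def polygon_areas(n):
    """Areas of the regular n-gons inscribed in / circumscribed about a unit circle."""
    inside = 0.5 * n * math.sin(2 * math.pi / n)  # inscribed n-gon area
    outside = n * math.tan(math.pi / n)           # circumscribed n-gon area
    return inside, outside

n = 96  # the number of sides Archimedes worked his way up to
inside, outside = polygon_areas(n)
outer_error = outside - math.pi   # the outside polygon overshoots
inner_error = math.pi - inside    # the inside polygon undershoots
weighted = (2 * outside + inside) / 3

print(outer_error, inner_error)   # outer error is about half the inner error
print(weighted - math.pi)         # the weighted average is far closer to pi
```

With 96 sides, the weighted average is good to better than a millionth, while either polygon alone is only good to about a thousandth.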

That makes sense - thanks!

Right, I was aware of that as I wrote. I was focused on the fact that all 6 of the trigonometric functions are back to their starting values at 2π.

Yes, interesting, a blast from the past.

I seem to recall, though, that there appeared in some books a capitalized Sin x and Cos x. These had restricted domains, so you could take their inverses and still have functions instead of relations. Actually, Sin may have been defined on -π/2 to π/2, with Cos carefully defined on 0 to π. This would complete the very useful single-valued inverse-trig f’s of each.

I’m somewhat confused, or maybe making a minor nitpick, about your use of “function” when there is no single f(x). I suppose that we could still use function instead of relation, though, if the y in y=f(x) is really a set of values, including the possibility of infinite sets.

((( BTW, some time I’ll have to bring up the question of irrationals as powers and chained exponentiation in connection with this question. One very argumentative fellow once tried to tell me that anything raised to an irrational power produces asymptotic (non-)results. The only thing I was able to find in a text back at my old Alma Mater’s library was that an irrational to an irrational power was “multivalued.” While I would expect that when complex numbers are involved, I couldn’t quite get my mind around it. In any event, I’m very skeptical about what that fellow said, not only for saying, very probably wrongly, “asymptotic,” but because in the course of his argument he also said that 2 to the e would not be defined. This seems clearly wrong to me, because such a function would have no continuity if you can’t use e as an argument. But that’s a whole 'nother thread, in which I shall start at the beginning of the disagreement. It’s an interesting tale, although a bit nauseating to me to recall dealing with that [del]jerk[/del] chap. )))

You’ve got my head spinning in circles. And my face is… :wink: I only hope that I’m just too tired this evening and have too much on my mind to follow. (Maybe after I’m through inserting comments inside your text in the reply button, I can focus better when I am back looking at the white box original showing the [sup]/[/sup]s.) I really hope that I’m not getting too old for this kind of thing.

HUH?

Please tell me that you are a beautiful woman. Even if I can’t have you, that would really make my day. :smiley:

(Hopelessly romantic…)

  • Jack

**tdn** said:

:smiley:

Alastair Moonsong said:

You’re thinking in 3 dimensions instead of 2. Think of a flat plane, 2-D: the real axis on X and the imaginary axis on Y. Simple.

As for what gives the other angles: a·X + b·Y, which = a (real) + b·i.

This stuff is much clearer if you look at the Taylor series expansion. [/irony]

eleanorigby said:

Well, somebody has to, might as well be you. :wink:

Jragon said:

Bolding added. This shows you are still hung up on the word “imaginary”, as if that means something. In electrical engineering, the “imaginary” components of the electrical signals are just as real as the “real number” components. They are just as much in the “concretely perceived dimensions”.

If you want a physical representation, try this. The current is traveling along the wire. The “real” numbers are representations of vibrations in one perpendicular axis to the path. The “imaginary” numbers are vibrations in the second perpendicular axis to the path that is also perpendicular to the first axis.

I know you’re joking, but let me point out to everyone else, the commonly given Taylor series proof of Euler’s theorem is an unfortunate example of starting with wonderfully clear intuitions, burying them in a lengthy path of myriad technical results, and then actually working backwards from these results only to recover the original ones in obfuscated form, the underlying ideas now seen as through a glass darkly.

What I mean is, how does one actually discover the Taylor series for e^x, cos(x) and sin(x)? Well, one uses the fact that their derivatives are, respectively, e^x, -sin(x), and cos(x), iterating from these rules to get the higher derivatives, and then constructing the series accordingly. Then, all intuition finally having been lost, one can purely formally substitute ix for x in the first Taylor series, see that the result is the appropriate linear combination of the latter two Taylor series, and conclude the result [that e^(ix) = cos(x) + i*sin(x)].

But where do those starting differentiation rules come from in the first place? For the first one, we have it as the defining property of e^x, that it should be its own derivative. And as for the latter two, there are various paths to discovery, but perhaps the best is from the fact that <cos(x), sin(x)> describes rotation by x radians round the unit circle; the tangents to this rotating motion will be of unit magnitude and perpendicular to the radius, and thus the derivative will be <cos(x), sin(x)> rotated 90 degrees (and since the positive part of the X-axis rotates into the positive part of the Y-axis, and this in turn rotates into the negative part of the X-axis, this comes out to <-sin(x), cos(x)>).

All of this is necessarily established first, before there can be any development of the Taylor series for these functions. Yet, once discovered, these sources of the differentiation rules already directly give us Euler’s theorem, without needing to grind them into opaque infinite polynomials prior to making the final conclusion. The parenthetical above about the effect of rotation on the axes essentially is the observation that multiplication by i is 90 degree rotation. The substitution of ix for x into the remarks on e^x yields that e^(ix) has derivative equal to itself rotated 90 degrees. And, in deducing the derivatives of cosine and sine, we made the observation that this same differential equation holds of “rotate by x radians”, and that this corresponds to <cos(x), sin(x)>. But this is already to note that e^(ix), “rotate by x radians”, and cos(x) + i*sin(x) are all equal, the desired result.
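One can even watch that differential equation produce Euler’s theorem numerically. Here’s a sketch (plain Python; crude first-order Euler stepping, so expect a small discretization error): the point’s velocity is always i times its position, i.e. itself rotated 90 degrees, and after advancing x through π radians it lands near -1.

```python
import math

steps = 100_000
dx = math.pi / steps          # advance x from 0 to pi in tiny increments
z = 1 + 0j                    # z(0) = e^(i*0) = 1
for _ in range(steps):
    z += (1j * z) * dx        # z' = i*z: velocity is z rotated 90 degrees
print(z)                      # close to -1, i.e. e^(i*pi)
```

The point traces the unit circle as it goes, which is exactly the “e^(ix) is rotation by x radians” picture, with no Taylor series in sight.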

Proving that e^(ix) = cos(x) + i*sin(x) from Taylor series expansions is like proving that cosine is its own fourth derivative from its Taylor series expansion: yes, you could do it that way, but it would be reflective of having forgotten how you’ve gotten to where you are; by the time you’ve gotten to the stage where you can write out the Taylor series, you should already have discovered the theorem, without having to try reading it off from such series.