Yeah. As I described above about the PNT. Riemann advanced his hypothesis as a way to prove the PNT, but the fact that there are several proofs of the PNT shines no light on the RH.
Sometimes it’s possible to use A to prove B, and then use not-A to prove B, demonstrating that B is true no matter what A is. There’s a funny quote in the Wikipedia article about the Riemann Hypothesis that mentions the idea:
The method of proof here is truly amazing. If the generalized Riemann hypothesis is true, then the theorem is true. If the generalized Riemann hypothesis is false, then the theorem is true. Thus, the theorem is true!!
I can’t describe what’s going on there, but I can give a simple case: prove that an irrational number to an irrational power can be rational.
Consider \sqrt{2}^{\sqrt{2}}. If that’s rational, then our proof is done, since \sqrt{2} is irrational. But if it’s irrational, then we can consider (\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = \sqrt{2}^{\sqrt{2} \cdot \sqrt{2}} = \sqrt{2}^2 = 2, which is obviously rational. So we don’t have to prove the irrationality of \sqrt{2}^{\sqrt{2}} one way or the other, since the conclusion follows either way.
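For the skeptical, the arithmetic of the second case can be checked numerically. A small Python sketch (floating point only, so it illustrates the identity rather than proving anything):

```python
import math

root2 = math.sqrt(2)

# Case analysis: either root2 ** root2 is rational (and we're done), or it
# is irrational, in which case raising it to root2 once more gives
# (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^(sqrt(2)*sqrt(2)) = sqrt(2)^2 = 2.
x = root2 ** root2   # irrational ^ irrational; its rationality never matters
y = x ** root2       # exactly 2 in real arithmetic

print(y)             # ~2.0, up to floating-point rounding
```

(As it happens, the Gelfond–Schneider theorem shows \sqrt{2}^{\sqrt{2}} is transcendental, hence irrational; the charm of the argument is that it never needs to know which case holds.)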
My understanding (as an interested layman who has read a lot of history books, not as a mathematician) is that the entirety of calculus was widely used and applied for about a century and a half after it was invented by Leibniz and Newton, before Cauchy managed to put it on a rigorous, logical footing (i.e., give it proper proofs) in the early 1800s.
That was a reflection of a shift towards greater emphasis on rigor, across the board, in mathematics.
And it did nothing for the physics, of course. Arguably it was a step back, since it shoved limits everywhere instead of the more intuitive infinitesimals, which were themselves eventually put on firm ground by Robinson.
Physicists often come up with unrigorous math that serves their needs before mathematicians have a chance to get their pants on.
Remember this stuff?
And, now that I think about it, the point was '68 vs '98, but nearly as much time has passed since that book came out, with a lot of rigorous mathematics (and physics) to show for it. The good physicists do understand some real mathematics, and vice versa.
Well, it might have been a step back, if physicists hadn’t ignored it and kept on using infinitesimals, anyway.
Sure, no denying that. And obviously by unrigorous I don’t mean that it was wrong. The greats had fantastic intuition about some of these things and were obviously right, so it was really just some loose ends that had to be tied up.
Also, sometimes the rigor leads to useful further developments. For example, the Dirac delta function was invented to be the derivative of the Heaviside step function. But that’s silly: the Heaviside step function has no derivative at 0, and the Dirac delta can’t possibly be a function at all.
So mathematicians came up with the idea of distributions, a sort of generalization of functions, and that developed into a whole useful theory of its own.
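To make the distribution idea concrete: a delta “function” only has meaning paired with a test function under an integral, and it can be realized as the limit of ever-narrower unit-area bumps. A small self-contained Python sketch (the Gaussian family and the test function cos are my arbitrary choices here, not anything canonical):

```python
import math

# The Dirac delta is not a function but a distribution: it pairs a test
# function f with the number f(0).  A "nascent delta" sequence makes this
# concrete: narrow unit-area Gaussians g_eps satisfy
#   integral of f(x) * g_eps(x) dx  ->  f(0)   as eps -> 0.

def gaussian(x, eps):
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def pair(f, eps, lo=-10.0, hi=10.0, n=200000):
    """Riemann-sum approximation of the integral of f(x) * g_eps(x) dx."""
    dx = (hi - lo) / n
    return sum(f(lo + i * dx) * gaussian(lo + i * dx, eps) for i in range(n)) * dx

f = math.cos  # test function, with f(0) = 1
for eps in (1.0, 0.1, 0.01):
    print(eps, pair(f, eps))  # tends to f(0) = 1 as eps shrinks
```

No single member of the sequence is the delta; the distribution is, so to speak, the limiting behavior of the pairing itself.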
One-question quiz to determine what sort of nerd you are:
Which of the following is a function?
a) Dirac()
b) Dirichlet()
c) printf()
There is a difference between foundational questions and theorems. Pretty much by definition, foundations have to be assumed. No consistent set of axioms strong enough to support ordinary arithmetic can prove its own consistency (Gödel). Until Abraham Robinson came along, calculus based on infinitesimals was just an assumed foundation. Cauchy based it on limits, which are harder to understand than infinitesimals. Robinson based it on Zermelo–Fraenkel set theory and used ultraproducts to build models containing infinitesimals. But really, most mathematicians and virtually all physicists used infinitesimals to think about problems, even if they eventually wrote their solutions in Cauchy-style terms.
Oh, and sometimes mathematicians assume certain things true without even being in the process of waiting for a proof, because it’s known that they can’t be proven. For instance, there are a lot of proofs that start out “Assuming the Axiom of Choice…”, when it’s known that set theory is equally consistent with or without the Axiom of Choice. But there are a lot more things that can be proven with it than without it.
(and I still say that the Axiom of Choice is horribly counterintuitive, that the reason you can prove so many things using it is that it’s wrong, and I can’t understand why it’s taken any more seriously than any other axiom anyone might come up with)
I guess you’re not on board with Bona’s observation:
The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn’s lemma?
In any case where the axiom of choice is obviously true, it’s not needed.
Mathematician: “And now, I will choose one element from each of these sets.”
Me: “OK, which ones did you choose?”
Mathematician: “How am I supposed to know? I just did, OK? The Axiom of Choice says I can do that.”
Me: “But you didn’t actually choose any of them. You just said you were going to.”
What about constructing sets of numbers that I can’t actually identify? For instance, am I not allowed to use the set of incomputable numbers? I can’t identify a single one.
No more so than just about anything else where we stare at a naked infinitude.
How many theorems include a line of the form \forall x \in \mathbb{R}, \exists y : \dots
It isn’t the axiom of choice directly, but we are comfortable making assertions about an uncountable set, and wrangling proxies for members.
ZFC gets us a long way. And given our cheerfulness breezing past a lot of other nasties, mostly because we are so familiar with the landscape we just don’t notice, I wouldn’t say that AC is horrible enough to single out.
It still bothers me how many pop science physicists witter on about the singularity in black holes as if they are drinking buddies.
From reading this, I’m wondering whether you would find it acceptable if, instead of being called the “Axiom of Choice,” it were called the “Axiom of Existence.”
Wikipedia describes the Axiom of Choice thusly:
Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to construct a new set by choosing one object from each bin, even if the collection is infinite. Formally, it states that for every indexed family (S_i)_{i \in I} of nonempty sets there exists an indexed set (x_i)_{i \in I} such that x_i \in S_i for every i \in I.
The “informal” statement is worded in terms of what “it is possible to construct” (i.e. it’s about performing some action, making some choice), while the “formal” statement is about something existing.
Do you also have a problem with nonconstructive existence proofs?
If you’d prefer, I could go along with taking something implying the falsehood of Banach-Tarski as an axiom.
If you don’t need the Axiom of Choice, then you shouldn’t use it. But from what I gather, people who just want to do standard algebra and geometry (not necessarily pure mathematicians; they could be in physics, etc.) not only routinely assume the Axiom of Choice but also the existence of an inaccessible cardinal.
FWIW, Chaitin’s Constant is a family of well-defined non-computable numbers. By fixing a particular encoding system you get one particular number that is non-computable (if it were computable, you could solve the Halting Problem).
So you can define your number and not compute it, too.
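The flavor of why Ω = \sum_{p \text{ halts}} 2^{-|p|} can be approximated but never computed can be sketched with a toy. The three “programs” and their bit-lengths below are pure inventions (a real Ω needs a prefix-free universal machine); the point is that dovetailed simulation yields lower bounds that creep up toward Ω, and no finite amount of running ever rules out a late halter:

```python
# Toy illustration of Omega = sum over halting programs p of 2^(-|p|).
# The "programs" here are fake: each halts after a fixed (or infinite)
# number of steps, so we can watch the dovetailing.  The running sum only
# ever gives LOWER bounds on Omega -- you can never be sure that a
# still-running program won't halt later, which is why Omega is uncomputable.

def make_program(steps_to_halt):
    """A fake program: run(n) reports whether it has halted within n steps."""
    def run(n_steps):
        return steps_to_halt is not None and n_steps >= steps_to_halt
    return run

# (length_in_bits, program); the lengths are arbitrary for this toy
programs = [(2, make_program(3)),     # halts quickly
            (3, make_program(50)),    # halts after more steps
            (3, make_program(None))]  # never halts

omega_lower_bound = 0.0
halted = set()
for t in range(1, 101):               # dovetail: give every program t steps
    for i, (length, run) in enumerate(programs):
        if i not in halted and run(t):
            halted.add(i)
            omega_lower_bound += 2.0 ** -length
    # the bound only ever increases, converging to Omega from below

print(omega_lower_bound)  # 2^-2 + 2^-3 = 0.375 for this toy "machine"
```

For the real thing the inner halting test is exactly what you can’t shortcut, so all you ever get is the monotone sequence of lower bounds.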
Nearly all of modern mathematics would disappear without the AC. For me, AC is obvious; YMMV.