I always wonder, about proofs like this and proofs in general, just how much you’re allowed to assume that you already know, and how much you need to prove de novo. In this case, for example, are you allowed to “already know” that ab is positive and non-zero?
Yes, we know that a > 0 and b > 0. But do we already know that the product of two positive numbers is positive and non-zero? Or do we have to prove that as part of this proof?
Similarly, I have an old college algebra book that shows a proof that anything times zero is zero. (That is, 0a = 0 for all a.) One step in the proof makes use of the fact that anything subtracted from itself is zero – but just assumes that without proving it. Should we bother to show the proof of that too? (Depending on how things are developed up to that point, it might just follow from the definition of “additive inverse” and the definition of subtraction as “addition of the additive inverse”, but should this have been stated explicitly before this proof?)
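For reference, the usual way such books argue this is sketched below, built from the standard field axioms; this is the generic version, not necessarily that book’s exact steps.

```latex
% Sketch of the standard argument that 0*a = 0, using only the field axioms.
\begin{align*}
0 \cdot a &= (0 + 0) \cdot a          && \text{$0$ is the additive identity}\\
          &= 0 \cdot a + 0 \cdot a    && \text{distributive law}
\end{align*}
% Now subtract 0*a from both sides (i.e., add its additive inverse):
\begin{align*}
0 \cdot a - 0 \cdot a &= (0 \cdot a + 0 \cdot a) - 0 \cdot a\\
0 &= 0 \cdot a + (0 \cdot a - 0 \cdot a)  && \text{left: ``anything minus itself is zero''; right: associativity}\\
0 &= 0 \cdot a                            && \text{same fact again, then $x + 0 = x$}
\end{align*}
```

Note that the “anything subtracted from itself is zero” step shows up twice here, which is exactly the kind of unproven ingredient being asked about.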
If you were to start out writing this proof, but didn’t stipulate that a and b are non-zero, then some smartypants would come along and say, “Whoa! What if a = 0?” You’d realize the proof doesn’t work then, and so you tack on the stipulation. You get to do that; it’s your proof.
When the teacher says, “A train leaves Denver at 2:00 a.m.,” you don’t get to show them the actual Amtrak timetables and tell them, “Nuh uh!” It’s their set of premises and you’re stuck with 'em.
You must always start with some set of axioms, and different sets of axioms will lead to different proofs. There are some standard sets of axioms that mathematicians typically use, but you don’t have to use the same ones.
Yes, assuming multiplying two positive numbers yields a positive number. Which it does, obviously, but Senegoid’s point was, how do we know that? I’m no expert (obviously), but there are fundamental axioms that are accepted as true - think the basic things you learned about arithmetic - that cannot be proven. So the question is where do you draw the line between what requires a proof and what will be accepted on faith.
Math is built up in layers from the simplest of concepts. As **Chronos** effectively said, at any given level there’s a standard set of stuff you take as given as your base layer(s), and the set of stuff you’re busy adding as the new layer to be proven.
You’ve assumed zero, non-zero, positive, negative, well-ordering, multiplication, division, and the multiplicative identity are all well-defined axiomatic concepts.
That’s a pretty solid base in ordinary HS-level algebra. If you’re going to question any of those things you pretty quickly have to question almost all of them. At which point you’re almost back to proving what a “number” is.
IANA expert on this stuff, but I don’t know that there’s an easy-to-understand roster of the commonly accepted bricks in each of the layers working up from plain counting numbers to abstract non-Riemannian mumble mumble mumble.
I was thinking the same thing. If we know that b > a, don’t we automatically know that 1/a > 1/b? Isn’t that knowledge as fundamental as the knowledge that a/a = 1?
When I taught proof writing, I tried to get students to avoid using long chains of implications if possible. I would prefer something along the lines of…
In general, you take as few things on faith as possible, and see what you can build up from there. Euclid famously started from five postulates (plus his “common notions” and definitions) and built up classical geometry across the 13 books of the Elements, proof after proof.
You can do what Euclid did, but it gets rather tedious. Starting every simple algebra proof with the axioms of ZFC set theory and inventing all of mathematics up to the point where you can prove that 0 < a < b → 1/b < 1/a would be rather a lot of work.
So we generally assume that the typical axiomatic systems are in place and that we can rely on the things that have already been built upon those systems, like the construction of the real numbers and their operations, unless the theorem to be proven specifically says we’re doing something else.
Exactly how much explicit detail is required depends on context. In a class, an instructor might want you to be as explicit as possible. But writing for a publication in a narrow subfield, a lot of “obvious” things (which are usually not actually that obvious) may traditionally be assumed.
Yeah, you could say, about any particular proof, that it depends on the context. To be valid, every step or claim you make has to be based on things that have already been proved true (or posited as axioms); and that depends on which part of the scaffolding you’re currently building upon.
This is where the proof fails to be cromulent, because it dismisses the fact that both are positive, given in your first line.
If a is negative and b is positive, this step would have to reverse the inequality (multiplying by a negative flips it). This is obvious if you let a = -2 and b = 3: you would end up with 1/3 < -1/2, which is false.
That can be resolved by re-emphasizing: since 0 < a < b, we have 0 < a and 0 < b.
Since both a and b are positive, 1/ab is positive.
Therefore (1/ab)(a) < (1/ab)(b) (multiply both sides of a < b by the positive number 1/ab).
Therefore 1/b < 1/a.
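Spelling that out, here is a sketch with the ordered-field fact each step leans on labelled; the labels are the standard properties, not anything claimed in the original post.

```latex
% The same argument, with the fact each step relies on made explicit.
\begin{align*}
0 < a,\; a < b &\;\Rightarrow\; 0 < b                  && \text{transitivity of $<$}\\
0 < a,\; 0 < b &\;\Rightarrow\; 0 < ab                 && \text{product of positives is positive}\\
0 < ab &\;\Rightarrow\; 0 < \tfrac{1}{ab}              && \text{reciprocal of a positive is positive}\\
a < b,\; 0 < \tfrac{1}{ab} &\;\Rightarrow\; \tfrac{a}{ab} < \tfrac{b}{ab} && \text{multiplying by a positive preserves $<$}\\
\tfrac{a}{ab} = \tfrac{1}{b},\; \tfrac{b}{ab} = \tfrac{1}{a} &\;\Rightarrow\; \tfrac{1}{b} < \tfrac{1}{a} && \text{cancel the common factors}
\end{align*}
% Sanity check on why positivity matters: with a = -2 and b = 3 we have a < b,
% but 1/b = 1/3 and 1/a = -1/2, and 1/3 < -1/2 is false.
```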
I had a class in college in which we had to prove something like 0*a = 0 for all a on a test. What you’re asked to do, basically, is use the axioms of arithmetic along with the few things shown in class that were proven using those axioms. That was the only class where we really couldn’t use everything we knew about mathematics in our proofs, but what it taught carried over to other classes, in the sense that it was the class where one learned how to be rigorous. You were expected to always prove any statement that hadn’t been proven in class, regardless of how obvious it was. If it was obvious, you could just write down the few lines to prove it. If it wasn’t obvious but came up a lot, it would be gone over in class.
But mathematics papers in general will usually freely declare certain things as obvious that are far from it, because the techniques needed to prove them are not particularly difficult. You might need to prove that some general element has a certain property, and it takes several pages to do so only because you need to consider every case and work through the details each time, but all of that logic is very mechanical in nature. A professor called it “following your nose”, and such results were quite often called “obvious” even if a tremendous amount of work was involved in rigorously proving them, and jokes about this process are often made. In general, just like in class with frequently used results, there are going to be plenty of non-obvious “obvious” things that people will have already agreed upon as having been settled and can be used freely. I don’t remember the details, but I recall an instructor giving a long proof through Zorn’s Lemma to show something and saying at the end of it, “Of course, most of the time we don’t go through all this; we just say it’s true via Zorn’s Lemma, and everyone knows the process to go through.”
How do you know what you’re allowed to do in general? Well, generally you are allowed to use anything that has been shown to be true in the setting in which the problem is presented. That is, if it’s for a class, use only things proven or stated as axioms for that class. If it’s for your own use, use anything that you’ve proven to be true yourself. When you get into publishing papers, you can generally use the results of any other published paper, and you cite them.
The first question to ask is: what is a number? So let us say you settle on real numbers. What is a real number? Infinite decimal is not a good answer, since it is quite hard to do arithmetic with infinite decimals, although it can be done. We usually use something called Dedekind cuts, although there are other definitions.

So let’s assume the real numbers are given, say as an ordered field. This means there is a subset of positive numbers, and the sum and product of positive numbers are positive. Define a number a to be negative if -a is positive. One axiom is that every number is exactly one of positive, negative, or 0. Define a < b to mean b - a is positive. It follows that 0 < a is the same as saying that a is positive.

Exercise: show that a < b and b < c imply that a < c. In particular, 0 < a < b implies that 0 < b, so that 0 < ab. Further exercises: show that the product of a negative number and a positive number is negative, that the product of two negative numbers is positive, and (using the former and excluded middle) that if a > 0, then 1/a > 0. Now from 0 < a < b, you can divide by ab to get 1/b < 1/a.
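As an illustration of how the first exercise drops straight out of those definitions, here is a sketch (my phrasing of the standard argument, not part of the original post):

```latex
% Sketch of the first exercise: a < b and b < c imply a < c,
% using only the ordered-field definitions above.
\begin{align*}
a < b &\;\Rightarrow\; b - a \text{ is positive}       && \text{definition of $<$}\\
b < c &\;\Rightarrow\; c - b \text{ is positive}       && \text{definition of $<$}\\
      &\;\Rightarrow\; (c - b) + (b - a) = c - a \text{ is positive} && \text{sum of positives is positive}\\
      &\;\Rightarrow\; a < c                           && \text{definition of $<$}
\end{align*}
% The reciprocal exercise goes similarly: if a > 0, then 1/a is not 0 (else
% a*(1/a) would be 0, not 1) and not negative (else a*(1/a) = 1 would be the
% product of a positive and a negative, hence negative), so by the
% positive/negative/zero axiom it must be positive.
```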