Infinity is nonsense?

And that is . . .

Many

Having not read the entire thread, have we figured out yet how much brain fuel it would take to climb the conceptual ladder of uncountable infinities (aleph-1 and beyond)?

I’m thinking “a shitload of coffee” myself, at least for the first time you wrap your mind around the concept. Maybe a coupla fairly strong drinks would help too. Once you’ve got the idea, not much fuel is needed at all.

I’m thinking LSD and a few vanishing points would do the trick.


If we insisted that 0/0 = 1 (so that x/x = 1 holds for every x), then 2 * 0/0 would be:
(2 * 0)/0 = 0/0 = 1
or, alternatively,
2 * (0/0) = 2 * 1 = 2

You get different answers depending on the order of operation and that is definitely something you don’t want numbers to do. So yeah, when you start trying to treat zero like a number you run into all kinds of problems…

Except they’re not dealbreaker problems at all. What we actually do is restrict our previously universal rule that x/x = 1, instead say 0/0 is undefined for natural reasons, and move on.

And similarly we can reconcile ourselves to the arithmetic behavior of ∞.

There are various different arithmetic systems which incorporate ∞, but perhaps the simplest in this vein would be the projectively extended numbers, which have a single unsigned ∞ such that x + ∞ = ∞ for finite x, x - ∞ = x + ∞, x * ∞ = ∞ for nonzero x, x/∞ = x * 0, x/0 = x * ∞, and ∞ + ∞ and ∞ * 0 are undefined (and thus ∞ - ∞ and ∞/∞ are also undefined).
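
For concreteness, here’s a minimal sketch of those rules in Python (the function names and the convention of modeling “undefined” by raising an error are mine, just for illustration):

```python
INF = float("inf")  # the single unsigned infinity

def padd(x, y):
    # x + inf = inf for finite x; inf + inf is undefined (indeterminate)
    if x == INF and y == INF:
        raise ArithmeticError("inf + inf is undefined")
    return INF if INF in (x, y) else x + y

def psub(x, y):
    # x - inf = x + inf (inf is unsigned), so the infinite cases reduce to padd
    return padd(x, y) if INF in (x, y) else x - y

def pmul(x, y):
    # x * inf = inf for nonzero x; inf * 0 is undefined (indeterminate)
    if INF in (x, y):
        if 0 in (x, y):
            raise ArithmeticError("inf * 0 is undefined")
        return INF
    return x * y

def pdiv(x, y):
    # x/inf = x * 0 and x/0 = x * inf, exactly per the rules above
    if y == INF:
        return pmul(x, 0)
    if y == 0:
        return pmul(x, INF)
    return INF if x == INF else x / y

print(pdiv(1, 0))    # inf: dividing nonzero by zero is defined here
print(pdiv(5, INF))  # 0
pdiv(INF, INF)       # raises: inf/inf = inf * 0 is indeterminate
```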

Yes, undefined, in exactly the same way that we ordinarily take division by zero to be undefined. [Actually, it would be better to say indeterminate, or multivalued to the point of encompassing all possibilities]. It’s no more of a problem than that.

This system comes up all the time in mathematics, whether one acknowledges it or not; it describes the behavior of “limits” in the calculus sense (in particular, the undefined operations correspond to indeterminate forms for limits).
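
(To illustrate that correspondence with limits: if f(x) → ∞, then f(x) + 3 → ∞, matching 3 + ∞ = ∞; but knowing f(x) → ∞ and g(x) → ∞ tells you nothing about f(x) - g(x), since x - x → 0 while 2x - x → ∞, matching ∞ - ∞ being indeterminate.)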

In this system, 2 * ∞/∞ would be indeterminate, because ∞/∞ = ∞ * 0 is indeterminate. Yes, this means we don’t have x/x = 1 unambiguously for all x in this system [but then, we already didn’t have that; x = 0 served just as much as a counterexample as x = ∞ does].

Well, it seems if you want it to be a number you have to put all kinds of limitations on it, unlike 0, which needs just the one about not dividing by 0. But OK, if there is a system in which it is useful to call it a number, I’m OK with that. It still seems like there are a lot of “undefined” or “indeterminate” results when you do.
What does calling it a number let you do that you cannot do otherwise?

It should be noted, Crambler, that very similar arguments caused many people to insist for a long time that zero was not a number. And then that negative numbers aren’t numbers. You’re on the wrong side of history. :wink:

As for “what you can do with it,” though, that does sometimes bother me. I don’t know of a single application of the concept of infinity (much less the whole breathtaking landscape of transfinite cardinals) that is not purely formal. (The simple concept of infinity (not, to my knowledge, the transfinites) helps with limits, which do have physical applications, but that’s about it. And you can make do without treating it as a “number” per se.)

I think infinity etc are fun, but I’m not sure what use they are. I don’t tend to limit things’ worth to their usefulness, but that’s just my preference, and others have others.

I’d argue that without understanding the cardinality of the continuum, we can’t nail down which sets you can and can’t integrate over, IOW, you don’t have calculus properly nailed down. And calculus’ usefulness speaks for itself.
P.S. Wouldn’t it be great if some state made the transfinite cardinal its state bird?

OK. That sounds reasonable and anyway, I’m far too long away from college calculus to dispute it.
Thanks for the edification.

No, I won’t call that circular. I would call that hand-waving, because you have only defined “three” as it pertains to *'s. You are assuming that this definition is now applicable to other things, like apples. However, nothing in the definition suggests that this definition is generic in any way.

Now if you were to add that we could match each star to each apple, then we get closer to a good definition. In fact, we could then simply say that we take all sets of all objects where we can match a star out of your set to exactly one object in the target set and give all those sets a defined name for their size: “three”. And this is the modern definition which took thousands of years to find.
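
In code, that matching definition might look something like this (a toy sketch; same_size is my name for it):

```python
def same_size(xs, ys):
    # Pair each element of xs with exactly one element of ys; the two
    # collections get the same "size" name iff the pairing works out.
    unmatched = list(ys)
    for _ in xs:
        if not unmatched:
            return False   # no partner left: xs is the bigger collection
        unmatched.pop()    # match this element with one element of ys
    return not unmatched   # same size iff nothing on either side is left over

stars  = ["*", "*", "*"]
apples = ["apple", "apple", "apple"]
print(same_size(stars, apples))  # True: both belong to the class named "three"
```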

The way I learned it, 1 is axiomatic, 2 is defined as 1 + 1, 3 is 2 + 1, and so on.
There is no need to refer to anything in the “real” world.
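
As a toy sketch of that construction (the names are mine), where nothing refers to anything outside the system:

```python
# Take 1 as given; define every further number as "one more than" an earlier one.
ONE = ("one",)

def succ(n):
    return ("succ", n)  # n + 1

TWO   = succ(ONE)   # 2 := 1 + 1
THREE = succ(TWO)   # 3 := 2 + 1

# Equality is purely structural; no apples or stars required.
print(THREE == succ(succ(ONE)))  # True
```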

Wait, what’s so hard about visualizing an infinite line? I expect that everyone can do that, if by “visualizing” a thing we mean “constructing a mental image congruent to the image which would be produced by our visual sensory system acting on the thing”. The visualization of an infinite line is identical to the visualization of a line which spans 180 degrees of one’s field of view.

It depends on what you would like to define. If you would simply like to define something that works on its own terms, then your way would be fine. To a point, anyway. You would quickly run into trouble trying to incorporate infinity into your world, which is what the thread’s about.

However, getting that definition to do anything useful for you is more difficult. Keep in mind that I’m not saying that the concept of “three” is difficult to grasp. We understand it intuitively, and it is pretty useful.

However, if you want to define “three” so that it pertains to more than a logical fantasy, you have to find a way to get it to work with any kind of object. Set Theory does this quite nicely, but it took us a long time to get there.

Well, I think the first order of business is defining what three means within mathematics. Once that is done, yes, you do need to correlate it with whatever it is you want to model, both the numbers and the operations you want to perform.

But three will not mean the same thing for all models. With apples it’s easy enough; here is one apple, if I put it with another one I add one, etc.

But with other things like force or acceleration, that doesn’t work so well. In fact, the numbers themselves don’t correspond to anything except your units of measurement, which can be anything you want. But the mathematics remains the same regardless.

Not if they’re real.

Win!

It is employed when studying formal systems (i.e. meta-theory). This might be purely formal, but if so it is different to the study of infinity for infinity’s sake.

On the general issue of the thread; there are lots of interesting things to learn about, and uses of, infinity. In that sense it is not non-sense. I wonder if the worry is really that we can’t have full-sense of infinity. But even with partial-sense we can still use concepts coherently.

Isn’t the cardinality of the continuum undecided? Maybe I’m thinking of something else… it’s been a while.

You can also remove limitations once you have it… for example, you can start dividing nonzero things by zero. That’s a benefit. That’s infinitely many undefined cases no longer undefined.

The piddling novelty of the limitations in operating with infinity by such arithmetic rules as given before is as nothing compared to the previous earth-shattering revolution of the limitation “You can’t divide by 0!”. Once one has already acknowledged the natural need for partial definition in such cases, there’s no strong reason to start stubbornly demanding total definition again.

Mind you, there are also other arithmetic systems containing ∞ of note. For example, if we use the affinely extended semipositives [0, ∞], we have addition, multiplication, and ordering all completely defined, absolute value of subtraction defined everywhere except at |∞ - ∞|, and division defined everywhere except at 0/0 and ∞/∞. Three exceptional cases [one of which (0/0) we’d have anyway, and one of which (∞/∞) is just the mirror image to that], vs. the infinitely many exceptional cases involved in defining division over [0, ∞). That seems a great trade-off.
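
A minimal sketch of that [0, ∞] system (my function names; I’ve assumed the usual measure-theory convention 0 * ∞ = 0 to make multiplication total, since the value at that point isn’t spelled out above):

```python
INF = float("inf")

def aadd(x, y):
    # Completely defined on [0, inf]: inf + anything = inf
    return x + y

def amul(x, y):
    # Completely defined, assuming the convention 0 * inf = 0
    return 0 if 0 in (x, y) else x * y

def adiv(x, y):
    # Defined everywhere except the two cases 0/0 and inf/inf
    if (x, y) in ((0, 0), (INF, INF)):
        raise ArithmeticError("0/0 and inf/inf are undefined")
    if y == 0:
        return INF  # x/0 = inf for x != 0
    if y == INF:
        return 0    # x/inf = 0 for x != inf
    return INF if x == INF else x / y

def absdiff(x, y):
    # |x - y|: defined everywhere except |inf - inf|
    if x == INF and y == INF:
        raise ArithmeticError("|inf - inf| is undefined")
    return abs(x - y)

print(adiv(3, 0))    # inf: a formerly exceptional case, now defined
print(amul(0, INF))  # 0
```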

Nothing. There is nothing that calling anything a number lets you do that you cannot do otherwise; the only benefit to saying 0 is a number rather than splitting into two un-unified cases every instance of an observation which could be made just as well for zero as for nonzero quantities is that it saves one mental energy. But in any application, one could avoid calling zero a number, and think of it just as shorthand for discussion of the case where instead of having something which could be counted by “proper” numbers, you don’t have something at all. [Which it is!]. There is no reasoning that can be carried out with the number zero which could not just as well be translated into terms acceptable to the fellow who thinks the only numbers are positive integers. I defy you to answer of 0 the same challenge you’ve put forth for ∞.

The only benefit to saying rationals are numbers in themselves rather than merely pairs of integers and some rules for manipulating those pairs is the soft psychological one, that the analogy is helpful in understanding those rules. But in any application, one could avoid calling rationals numbers, and think of them just as shorthand for discussion of proportional properties of integers. [Which they are!]. There is no reasoning that can be carried out with the rational numbers which could not just as well be translated into terms acceptable to the fellow who thinks the only numbers are positive integers. I defy you to answer of 3/8 the same challenge you’ve put forth for ∞.
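
To make that concrete, here’s a toy sketch of rationals as bare pairs of integers plus rules for manipulating those pairs (function names mine), with no rational “number” reified anywhere:

```python
def rat_add(a, b):
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

def rat_mul(a, b):
    return (a[0] * b[0], a[1] * b[1])

def rat_eq(a, b):
    return a[0] * b[1] == b[0] * a[1]  # "same proportion", by cross-multiplication

three_eighths = (3, 8)
print(rat_eq(rat_add(three_eighths, three_eighths), (3, 4)))  # True: 3/8 + 3/8 = 3/4
```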

And the only benefit to saying “There is a number 4 and a number 5 and 4 + 5 = 5 + 4” rather than “If you have some quantity of items which is lined up with ****, and some further quantity of items which is lined up with *****, and you join the two lines of items into a longer line in either order, the two unions can be lined up with each other” is that abbreviating the latter into the former is convenient if you’re going to be saying things like the latter all the time; it makes us faster at manipulating such statements, and has the soft psychological benefit of making it easier for us to see generalizations of and analogies to such reasoning. But there’s not strictly anything you can do with a reified abstract concept that you can’t do without reifying it; you can always talk directly in terms of the concept reified, and avoid the reification itself.
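
And a toy sketch of exactly that un-reified version (can_line_up is my name for the pairing):

```python
from itertools import zip_longest

def can_line_up(xs, ys):
    # Pair items one-to-one; True iff neither line has anything left over.
    return all(x is not None and y is not None
               for x, y in zip_longest(xs, ys))

four, five = "****", "*****"
# Join the two lines of items in either order; the two unions line up.
print(can_line_up(four + five, five + four))  # True: the content of "4 + 5 = 5 + 4"
```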

Some unduly argumentative “What…?”s:

As I said, you can make do without treating any number as a “number”. What makes infinity different from zero in this regard?

What do you mean by this? If all you mean is “We have a rule that gives us countable additivity for integration, but not unrestricted infinitary additivity, so it is important that we distinguish between countable and arbitrary infinities”, well, it’s true that we have adopted that rule in standard formalizations, but we could just as well have developed the theory of integration without ever thinking about it [as did, of course, the people who originally developed the concept of integration]. Not that I’m against thinking about things, of course; I’m just not sure this is a great example, in the sense of pointing out a genuine need to understand transfinite cardinals in the work of engineers wanting their bridges to stay up.

What makes “Three is the class of all sets which can be put in bijection with {*, &, $}” or “Three is the particular set {{}, {{}}, {{}, {{}}}}” less of a logical fantasy than the “naive” claim “Three is how many *s are in ***; it’s the number of things in a sequence which can be counted ‘one’, ‘two’, ‘three’?”. Indeed, what privileges abstract sets [and perhaps some formalized rules for manipulating them] to be considered on firmer ground than abstract counting numbers [and perhaps some formalized rules for manipulating them] to start with?

Nothing at all, I was just trying to make sure what I was saying directly addressed the question I was responding to.