I know this. I said “know”, not “believe”. This is not a poll on said question.
This comes up every week or so. I am not surprised that some people don’t know this or do not understand the reasons. However, I am surprised how many people have really strong opinions on it. People make mathematical errors all the time, but usually they do not defend them against all evidence or even declare them a matter of belief or philosophical standpoint. Keep in mind that apparently most of those who comment on this question do so as a reaction to the relevant column.

What do you think, is there a reason why people are so sure about their views on this topic?
Is there something wrong with the way this is taught in schools? I think this might be possible. I don’t recall exactly what we were taught on this topic in school, but I remember that the “true nature” of real numbers remained a bit obscure.
Do you think this question has some higher significance to people and accepting the truth could cause problems with their general world view (like people who believe that quantum mechanics is not true on some level)?
Sure, it might not seem very intuitive to many people, but neither do many other mathematical facts.

I don’t think that it is a matter of the concept not being intuitive, but rather that it is very counter-intuitive.

Largely, we are taught that 1=1. That is, that 1 is a very specific, isolated and narrowly defined instance. Nothing except 1 is 1. To then say that .9… is 1 is to remove that pillar of instruction from the foundations of the commonly taught math.

I believe that this stems from the fact that, at least in America, we are not taught mathematics but rather just arithmetic. That is, for the most part we receive little or no training in theory or even the ability to think logically/mathematically. We are taught to perform a set of operations without understanding or thought. Unless one either actively seeks out higher levels of information or goes on to higher education in a field related to mathematics, this is the bulk of what one will learn.

In trying to convince someone that .9…=1 you are not (in their view) correcting a mistake (2+2=5) or providing some new information (introducing them to non-euclidean geometry, for instance). You are, in a way, pulling away a pillar of their education. If something other than 1 can equal 1, then what else have they thought to be true and immutable isn’t?

People have a very hard time with “infinity,” going back to the old Greek “paradoxes” and continuing up to today. I see people making all sorts of naive statements based on just plain not understanding what infinity is all about. E.g., “In an infinite universe, all things are possible…” Ack, no. The number of combinations of possible things is a bigger infinity than the original set.

Even learned people have problems sometimes. I see texts in Theoretical Computer Science where the authors clearly don’t understand the nuances between finite/bounded/unbounded/infinite.

I was reading a Math article recently about some exotic numbers where the author had to quit the exposition at some point, because going further would mean dealing with what he thought were infinite values, but which in fact aren’t infinite at all, just very, very big.

At least in America, children learn Math first via Arithmetic. So they have an algorithmic notion of Math that only works for finite quantities. When seeing the .999… = 1 idea, their minds revert back to finite algorithms and they try to apply their Arithmetic rules to it and fail. They blame their failure on the problem, not on their inappropriate use of tools.

Note that to do .999… = 1 right requires knowledge of limits and infinite sums that I didn’t learn formally until Advanced Calculus in college. (Note: If you haven’t had Advanced Calculus and think ordinary Algebra or Calculus is enough, think again.)

As a Computer Scientist, I see right away that .999… = 1 is false. The value on the left is an infinite series and the value on the right is a number. That is, they have different types. To us, it’s literally apples and oranges. What makes it “correct” is the implied statement that .999… means the limit of the infinite series.
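To spell out that implied statement (my own sketch of the standard definition, not part of the column):

```latex
0.999\ldots \;:=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}}
\;=\; \lim_{N\to\infty} \sum_{n=1}^{N} \frac{9}{10^{n}}
\;=\; \lim_{N\to\infty} \left(1 - 10^{-N}\right)
\;=\; 1.
```

The notation on the left is shorthand for the limit on the right; once that convention is made explicit, the two sides really are the same number.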

Note that all over in the Sciences, such shortcut notations are used and there is frequently an “implied” something lurking around. This is merely a simple example of where not knowing about the “implied” gets people into trouble.

It would take a pretty stubborn person not to acknowledge that the limit of .999… is 1.

The question is simple to phrase and the answer seems obviously false, but that’s not the real reason. It’s so hard for people to grasp because it requires them to understand the distinction between a number and its representation. Back when I was involved in undergrad CS, the section on number representations always caused the most headaches for students.
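As a concrete illustration of number vs. representation (my own Python sketch, not from the thread): the decimal string “0.1” names an exact number, but a binary floating-point variable can only hold the nearest representable value, while an exact rational type has no such problem.

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 exactly, so the
# representation error shows up in a simple sum:
print(0.1 + 0.2 == 0.3)  # False

# Exact rational arithmetic works on the numbers themselves:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# Two different-looking representations, one and the same number:
print(Fraction(1, 2) == Fraction(2, 4))  # True
```

The point is the same as with 0.999… = 1: the strings differ, the numbers don’t.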

Personally, I think that distinction just requires a level of abstraction that most people don’t have access to. Maybe it’s because they’ve never had to deal with it, or maybe they just don’t have the aptitude.

If you understand that 1 is not the number one, this is obvious. If not…well, that’s why we have all these threads.

It has to do with concepts like infinity and limits that mathematicians struggled with for centuries before they reached rigorous definitions of the concepts which avoided creating contradictions. It also has to do with the fact that in this area of mathematics, the same number can be represented in different ways.

I don’t really know much about the content of the different US courses, but back in my CS student days I took “Analysis I + II” (for some reason the term “calculus” does not appear in a German curriculum), a year-long course and the exact same lecture that the mathematicians attended. In addition to that, the topic was touched on briefly in “Elementary Number Theory”.

Unfortunately I failed one of my other mathematics exams (“Linear Algebra and analytical Geometry”) twice and had to change to computational linguistics…
So I officially suck at math, but at least I learned many interesting things along the way.

My mother got her BA in math, so we got the distinction drilled into us pretty regularly as kids (whenever we would complain about a particular assignment, for example).

It seems a fairly important distinction to me as I often encounter people who can perform the calculations necessary to make a decision based on numbers but who lack the understanding of the underlying concepts to know which calculations to perform.

I’ll answer this with the same comment I made in the latest 0.9…=1 thread: Equality and identity are not the same thing. The layman reacts with skepticism when confronted with an equals sign sitting between two different things.

But wouldn’t you also say that 1.0 = 1 is false, since they have different types, “real” vs. “integer”? (Though a “real” variable in CS can’t hold any real number, just a finite subset of them.)
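To make that type question concrete (a sketch in Python, my own illustration): equality there compares values across numeric types, while the types themselves stay distinct.

```python
# Same value, so equality holds across types:
print(1.0 == 1)  # True

# But the types really are different:
print(type(1.0) is type(1))  # False: float vs. int
```

So “1.0 = 1” is true as a statement about values even though, at the representation level, a float and an integer are different objects.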

Just to put together a few thoughts on this subject:

0.9… = 1 is somewhat misleading; 0.9… = 1.0… is perhaps better.

Also this exposes the difference between the number and the representation of the number, and that rules we apply to number representations are not necessarily true to numbers themselves.

We know 0.9 < 1.0, and we are used to comparing numbers by looking at how they are written: take the highest-order digits and compare them, in this case 0 from 0.9 and 1 from 1.0, and from our knowledge of integers conclude 0 < 1. So we say 0.9 < 1.0 because 0 < 1. But we make a mistake in thinking this can be applied in every situation, since looking at the highest-order digit is an action on the way the number is represented, not an action on the number itself.

It is also easy to fall into the trap of generalizing from the fact that
0.9 < 0.99 < 0.999 < 0.9999 < etc. < 1.0… for any finite number of 9’s to the wrong belief that 0.9… < 1.0… for an infinite number of 9’s.
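That trap can be seen with exact arithmetic (a sketch of my own, using Python’s exact rationals): every finite truncation falls short of 1 by exactly 1/10^n, a gap that shrinks toward zero but never vanishes at any finite n, so no finite truncation ever tells you about the infinite string.

```python
from fractions import Fraction

# Partial sums 0.9, 0.99, 0.999, ... computed exactly:
for n in range(1, 6):
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    gap = Fraction(1) - partial
    assert gap == Fraction(1, 10**n)  # shortfall is exactly 1/10**n
    print(n, partial, gap)
```

Each truncation is strictly less than 1, yet the limit of the sequence is 1; the inequality simply does not survive the passage to infinity.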

In CS, real variables can hold real numbers. In implementations of programming languages on the measly computers from what you call the “real world”, they can’t hold all of them.

I see that people are given virtually no background on sequences in school, but usually people at least get a basic notion of limits. Probably their concept of real numbers is cemented long before that and they refuse to accept the connection afterwards. However, the thing that surprises me most is not the fact that people don’t know this (let alone the reasons), but that they react with such hostility.
For example, a decent proportion of the population suspects that there is a finite number of primes, but as soon as you tell them otherwise, even without proof, they accept the truth happily. Apparently this is much easier to accept, because it deals with remote concepts and people don’t identify with their views of those. Numbers, by contrast, seem to be a very personal thing, and it is hard to admit that you were wrong about something you considered so familiar.

I have to admit that this topic confuses the hell out of me (no matter how much I read about it), and this doesn’t help. Doesn’t your statement refute Cecil’s column?

I was under the impression that the limit of .999~ is one, which is what makes them equal. That .999~ continues getting bigger until it actually reaches the number 1, whenever that is, and can never go higher.
:smack:

In relation to the points above about the concept being counter-intuitive because of the different use of types, is it proper mathematical grammar to write the statement in question the way it has been presented? Should it really be 0.999~ = 1.000~ or even something else?

Also, does someone have a link to a proof for this thing?
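In lieu of a link, here is the usual algebraic argument (a sketch; it takes the digit manipulations on infinite decimals for granted, and the rigorous version replaces them with the geometric series):

```latex
\begin{aligned}
x   &= 0.999\ldots \\
10x &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x  &= 9 \quad\Longrightarrow\quad x = 1.
\end{aligned}
```

The fully rigorous proof defines 0.999… as the limit of the partial sums 0.9, 0.99, 0.999, … and shows that limit equals 1.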