
Why my school teachers were wrong to say 0.999... = 1

I remember school maths surprisingly well. One thing I remember is the day my teacher confidently asserted that the decimal expansion $0.999\ldots$ equals $1$. This felt wrong to me, and I remember daydreaming one day while staring at an asymptote on the board.

I daydreamed that the graph represented the time until the end of the period. This produced a Zeno's paradox: if the graph never touched the x-axis, then the lesson would NEVER END! And yet, somehow, it did end, leaving me with some questions. Of course, asking these in the middle of class would have disrupted its progress towards something or other. Anyhow, I asked my teacher about the whole $0.999\ldots = 1$ thing and she showed me some proofs. I don't accept these proofs today, on account of the fact that they make some assumptions which they have no right to make! Let's look at one such proof, shall we?

Proof 1:

We all know that $\frac{1}{3} = 0.333\ldots$, so let's multiply both sides by $3$. We've done nothing to the equation, yet we get that $1 = 0.999\ldots$.
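As a concrete illustration of the objects in play, here is a small sketch in exact rational arithmetic (the helper function and its name are mine, not part of the proof). Every finite truncation of $0.333\ldots$ is an ordinary fraction, and multiplying a truncation by $3$ is uncontroversial; the whole question is what happens in the limit.

```python
from fractions import Fraction

def partial_sum(digit, n):
    """The n-digit truncation 0.ddd...d as an exact fraction,
    i.e. the sum of digit * 10**-k for k = 1..n."""
    return sum(Fraction(digit, 10 ** k) for k in range(1, n + 1))

for n in (1, 5, 10):
    thirds = partial_sum(3, n)  # 0.33...3 with n digits
    nines = partial_sum(9, n)   # 0.99...9 with n digits
    # Multiplying a finite truncation by 3 is unproblematic:
    assert 3 * thirds == nines
    # and each truncation falls short of 1 by exactly 10**-n:
    assert Fraction(1) - nines == Fraction(1, 10 ** n)
```

The claim $0.999\ldots = 1$ is then the claim that this shortfall tends to $0$, which is a statement about limits, not about digit-by-digit arithmetic.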

Debunk

We agree that $0.333\ldots$ represents $\frac{1}{3}$, sure, and I agree that multiplying both sides by $3$ shouldn't change the equation. But my problem comes in when we begin to treat $0.333\ldots$ as something which we can multiply things by. It is a different object to the fraction $\frac{1}{3}$: $0.333\ldots$ is a non-terminating decimal expansion. To treat such objects correctly requires you to set up some formal rules. If we were allowed to reason in this way, then we could say something like the following:

Theorem (erm... not really)

Let $U$ be an open subset of $\mathbb{R}^2$. Let $F$ be a continuous function $F : U \to \mathbb{R}$, with continuous partial derivatives in $U$. If for some $(a, b) \in U$ we have that $F(a, b) = 0$, and $\frac{\partial F}{\partial y}(a, b)$, the partial derivative of $F$ with respect to $y$, is nonzero, then there exist numbers $h, k > 0$ and a function $f : (a - h, a + h) \to (b - k, b + k)$ which defines $y$ implicitly in terms of $x$ on an interval $(a - h, a + h)$. Furthermore, $\frac{dy}{dx} = \frac{\partial F / \partial x}{\partial F / \partial y}$.

Proof (naat)

Actually, one can prove the first part of the conclusion correctly, but I want to focus on the last part: $\frac{dy}{dx} = \frac{\partial F / \partial x}{\partial F / \partial y}$. This is because we can (apparently) divide top and bottom by $\partial F$ and juggle the fractions: $\frac{\partial F / \partial x}{\partial F / \partial y} = \frac{\partial F}{\partial x} \cdot \frac{\partial y}{\partial F} = \frac{\partial y}{\partial x}$.

What's wrong with the proof?

Well, we see these differentials and we think we can treat them like the numbers we are used to. In fact this is wrong: the conclusion of the implicit function theorem is that $\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y}$. So most often the conclusion given above is incorrect; the sign is wrong whenever the right-hand side is nonzero. This is a result of not treating the mathematical objects correctly. You can see the analogy with non-terminating decimals. Unless one is equipped with the framework which deals with these (analysis), it's futile making claims like $0.999\ldots = 1$. Sure, one can discuss an algebra in which $0.999\ldots = 1$, but it's not necessarily true that this algebra corresponds to the real-valued algebra which we're so used to.
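For contrast, the honest route to the correct formula is the multivariable chain rule (a sketch, assuming the differentiability of $f$ established in the first part of the theorem). The minus sign appears because we solve an equation, not because we cancel differentials:

```latex
% Differentiate the identity F(x, f(x)) = 0 with respect to x,
% using the multivariable chain rule:
\frac{d}{dx}\, F(x, f(x))
  = \frac{\partial F}{\partial x} + \frac{\partial F}{\partial y}\, f'(x) = 0.
% Since \partial F / \partial y \neq 0 near (a, b), we may solve for f'(x):
f'(x) = -\,\frac{\partial F / \partial x}{\partial F / \partial y}.
```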
