If $x = 0.999\dots$ (a), then $10x = 9.999\dots$, so $10x - x = 9.999\dots - 0.999\dots$,
which gives $9x = 9$, hence $x = 9/9 = 1$ (b).
So $x = 0.999\dots$ (a) and $x = 1$ (b)?
So... what is the right explanation for this? I know there must be a subtle error somewhere. I suspect the subtraction of the repeating decimals is the invalid step, because of how infinity is involved. Is that it?
Am I right?
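The algebra above can also be checked numerically. Here is a minimal Python sketch (an illustration, not a proof) showing that the partial sums $0.9, 0.99, 0.999, \dots$ close the gap to $1$, with the gap after $n$ nines being exactly $10^{-n}$:

```python
# Numerical illustration (not a proof): the partial sums of 0.999...
# approach 1, and the remaining gap 1 - s_n equals exactly 10**-n.
from fractions import Fraction

def partial_sum(n):
    """Exact value of 0.99...9 with n nines, as a fraction."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    gap = 1 - partial_sum(n)
    print(n, gap)  # gap is exactly 1/10**n
```

Using `Fraction` avoids floating-point rounding, so the shrinking gap is computed exactly; the question of whether a gap "remains in the limit" is precisely what the limit definition settles.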
Answer
I'll recommend the best article I know on this subject: Fred Richman's "Is $0.999\dots=1$?" [MAA: Mathematics Magazine, $12/1999$].
From the beginning: "Arguing whether $0.999\dots=1$ is a popular sport on the newsgroup sci.math. It seems that people are often too quick to dismiss the idea that these numbers might be different."
The article is interesting because it actually discusses both why the equality holds and in what settings it could fail, and which tools are needed for it to hold. It also has excellent references.
EDIT: I also recommend Courant's Differential and Integral Calculus
(Chapter 1, Section 2). There is a great explanation there which makes the subject seem less magical than it appears.
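For reference, the computation that limit-based treatments like Courant's make precise: $0.999\dots$ is by definition a geometric series, and summing it gives

\[
0.999\dots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^k} \;=\; \frac{9/10}{1 - 1/10} \;=\; 1.
\]

On this view there is no error in the subtraction step of the original argument; it is just shorthand for subtracting two convergent series term by term, which is legitimate.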