A double-precision floating point number carries fifteen significant digits. What does this really mean?
I multiplied 0.619207 by 10,000,000 and got 6192069.999999991 instead of 6192070. That’s only six significant digits; where’s my fifteen?
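As a quick sanity check, here is that arithmetic in code (a minimal sketch; the variable names are mine, and the "reported" value is simply the figure quoted above). The relative error works out to roughly 1.5 × 10⁻¹⁵, which is the computation the next paragraph walks through.

```cpp
#include <cfloat>
#include <cstdio>

int main()
{
    double expected = 6192070.0;           // the answer we "wanted"
    double reported = 6192069.999999991;   // the answer actually reported

    // Relative error: how far off the result is, as a fraction of the answer.
    double relative_error = (expected - reported) / expected;

    printf("relative error = %.2g\n", relative_error); // about 1.5e-15
    printf("DBL_DIG        = %d\n", DBL_DIG);           // 15 decimal digits of guaranteed precision

    return 0;
}
```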
Talking about significant digits is really just a shorthand for talking about relative precision. “Fifteen significant digits” means that the representational resolution is one part in 10¹⁵. It doesn’t mean that the first fifteen digits are correct. (If you spend more time with numerical analysis, you can even see people talking about things like “five and a half significant digits”. If the meaning of “significant digits” were literal, how could you have half a digit?)

The relative error in the above computation is 9 / 6192070000000000 = 1.5 × 10⁻¹⁵, which is consistent with about fifteen significant digits. (And that’s assuming the decimal representations are exact, which they aren’t.)

Even if you take a literalist interpretation of significant digits, the values are still good to fifteen digits. Remember that 0.99999… = 1, and therefore the values
6192069.999999991
6192069.999999999…
agree to fifteen significant digits, as promised.

Now, if you’re a hyperliteralist and refuse to accept that 0.99999… = 1, then you are forced to accept that the only possible numbers of significant digits are zero and infinity. Consider a computation whose exact result is 1, performed to N significant digits (with N > 0). Since you do not accept that 0.9 agrees with 1.0 to even one significant digit, the only values that agree with 1.0 to at least one significant digit are those of the form “one point something”. Let’s call the result 1 + ε with 0 ≤ ε < 1.

Now subtract this result from 2.0. The true answer is 2 − 1 = 1, but the computed value is 2 − (1 + ε) = 1 − ε. Again, since you do not accept that 0.9 agrees with 1.0 to even one significant digit, in order for this result to be good to N significant digits (N > 0), it too must be of the form “one point something”. Let’s call it 1 + δ with 0 ≤ δ < 1. Therefore 1 − ε = 1 + δ, and hence ε = −δ. Since both ε and δ are greater than or equal to zero, the only way to satisfy this equation is ε = δ = 0.

Consequently, the only number that is equal to 1 to any nonzero number of significant digits (if you subscribe to the hyperliteral definition of significant digits) is 1 itself. In other words, the only positive number of significant digits is infinity. And I think we all agree that if the only valid numbers of significant digits are zero and infinity, then the whole concept of significant digits becomes kind of silly.
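For the record, here is the same argument written out as a compact chain of equalities (a restatement only, using the same ε and δ as above):

```latex
\documentclass{article}
\begin{document}
% Hyperliteralist reading: agreeing with 1 in even one digit means
% lying in $[1, 2)$, i.e. having the form ``one point something''.
\[
  \mbox{result} = 1 + \varepsilon, \qquad 0 \le \varepsilon < 1
\]
% Subtracting from 2 must also land in $[1, 2)$ to keep one literal digit:
\[
  2 - (1 + \varepsilon) = 1 - \varepsilon = 1 + \delta, \qquad 0 \le \delta < 1
\]
% Hence:
\[
  \varepsilon = -\delta \quad\Longrightarrow\quad \varepsilon = \delta = 0
\]
\end{document}
```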
That’s why we don’t use the hyperliteralist definition of significant digits.