12. Be careful: Working with floats¶
In a previous section, we saw that there
were several reasons that we should not expect floating point
evaluations and representations to be exact. There were issues of
having finite degrees of decimal precision, rounding and truncation.
For example, the result of evaluating 2**0.5
can only ever be an
approximation to the true mathematical value, which has an infinite
number of decimal places; the Python result of 1.4142135623730951
is pretty detailed and generally a useful approximation, but it is not
exact.
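For instance, a quick check in the interpreter (an illustrative aside; the printed values are what standard double-precision Python produces) shows both the approximation and its consequence:
val = 2**0.5
print(val)             # 1.4142135623730951, an approximation of sqrt(2)
print(val * val)       # 2.0000000000000004, not exactly 2
print(val * val == 2)  # False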
Those issues were mainly observable when results had many decimal places to represent, so that approximations were needed. But what about evaluations of floating point numbers that only require one or two decimal places, like 0.1, 0.25 or -8.5? They look safe to treat as exact, but are they?
12.1. Binary vs decimal representation of floats¶
As humans, we typically work with and write numbers in base-10 (like we did just up above), which is literally what "decimal" means. However, the underlying code interpreters work in base-2, or what is called operating in binary. The base-10 quantities we type into the Python environment are translated into "machine speak", and the actual calculations are done in binary using bits (which only take values 1 and 0) and bytes (a basic grouping of eight bits). So let's look at floating point values in each representation.
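As a small, optional illustration (an aside, not required for the discussion below), Python can display the exact base-2 form it stores for a float via the built-in float.hex() method, written in hexadecimal shorthand:
print((0.5).hex())   # '0x1.0000000000000p-1', i.e. exactly 1 x 2**-1
print((7.0).hex())   # '0x1.c000000000000p+2', i.e. exactly 1.75 x 2**2
print((0.1).hex())   # '0x1.999999999999ap-4', a rounded, repeating pattern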
In general, to finitely represent a number in a particular base B, we have to be able to write it as a fraction where:
- the numerator is a finite sum of integer coefficients times powers of B, i.e., $c_M B^M + \cdots + c_1 B^1 + c_0 B^0$, for finite M;
- the denominator is a single integer power of B, i.e., $B^N$, for finite N.
Sidenote: The numerator's coefficients $c_m$ represent the digits of a number. So, in base-10, $c_0$ is the ones digit, $c_1$ the tens, etc. Therefore, the sums in any base are often written in order of decreasing exponent, so the coefficient order matches how they slot into the number they represent.
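For example, in base-10 the digits of 347 are just such coefficients (a quick arithmetic check):
print(3*10**2 + 4*10**1 + 7*10**0)   # 347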
Case A. Consider 7.0, which we can express as the fraction $\frac{7}{1}$. In base-10, the numerator would be just the single term $7 \times 10^0$, and the denominator is made unity by raising the base to zero, $10^0 = 1$. In base-2, the numerator has more components, $1 \times 2^2 + 1 \times 2^1 + 1 \times 2^0$, but the denominator is made unity in the same way, $2^0 = 1$. So, we have done the job of showing that a finite, "exact" representation is possible for this number in either base-10 or base-2.
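We can double-check that base-2 expansion directly, and ask Python for the exact fraction it stores via the built-in float.as_integer_ratio() (an added illustration, not part of the original text):
print(1*2**2 + 1*2**1 + 1*2**0)   # 7
print((7.0).as_integer_ratio())   # (7, 1): stored exactly as 7 / 1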
Case B. Consider 0.5. In decimal representation of a fraction, we have to find an integer that is a sum of powers of ten, which can then be divided by some power of 10 to give our value of interest. This is solvable with a finite number of terms in the numerator, basically reading from the decimal representation:
$$0.5 = \frac{5 \times 10^0}{10^1} = \frac{5}{10}.$$
Thus, this number has a finite decimal representation, as expected. In terms of a binary representation, a similar rule applies with powers of 2, which is also solvable (by looking at the fraction notation):
$$0.5 = \frac{1 \times 2^0}{2^1} = \frac{1}{2}.$$
So, both the decimal and binary representations are finite for this number.
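Again as an added check, Python confirms that 0.5 is stored as the exact fraction 1/2:
print(5 / 10 == 0.5)              # True
print((0.5).as_integer_ratio())   # (1, 2): stored exactly as 1 / 2**1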
Case C. Now consider the humble 0.1, which initially looks to be a repetition of Case B, above. First, the decimal representation is:
$$0.1 = \frac{1 \times 10^0}{10^1} = \frac{1}{10}.$$
However, in binary, we have a problem finding a denominator that will fit with any representation we try---they never seem to be an integer power of 2:
$$0.1 = \frac{1}{10} = \frac{2}{20} = \frac{4}{40} = \frac{8}{80} = \cdots$$
No matter how we rescale the fraction, the denominator keeps a factor of 5, so it can never be a pure power of 2.
It turns out that we can never find a satisfactory denominator, and
there is no exact, finite representation of 0.1 in
binary. Therefore, Python internally uses just a fractional
approximation (to be precise, 3602879701896397 / 2**55
). Thus,
computers can introduce rounding or truncation error even when
representing finite decimal numbers. We can only approximate 0.1
(and other decimals with similar properties) in Python.
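We can see that stored fraction directly (an added illustration, using the built-in float.as_integer_ratio() and the standard decimal module):
from decimal import Decimal

print((0.1).as_integer_ratio())   # (3602879701896397, 36028797018963968)
print(2**55)                      # 36028797018963968: the same denominator
print(Decimal(0.1))               # 0.1000000000000000055511151231257827021181583404541015625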
12.2. Consequences of binary approximation¶
We can see the effects of the unavoidable, internal binary approximations with a few examples.
First, note that the expression 5.1 % 1
evaluates to
0.09999999999999964
, instead of to 0.1
. And we might have
expected each of the following to evaluate to True
, but not all
do:
5.5 % 1 == 0.5 # True
5.4 % 1 == 0.4 # False
5.3 % 1 == 0.3 # False
5.2 % 1 == 0.2 # False
5.1 % 1 == 0.1 # False
0.1 * 1 == 0.1 # True
0.1 * 2 == 0.2 # True
0.1 * 3 == 0.3 # False
0.1 * 4 == 0.4 # True
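Printing the left-hand sides at full precision exposes the small discrepancies behind the False results (the displayed values are what standard double-precision arithmetic gives):
print(5.1 % 1)   # 0.09999999999999964
print(5.2 % 1)   # 0.20000000000000018
print(5.3 % 1)   # 0.2999999999999998
print(5.4 % 1)   # 0.40000000000000036
print(0.1 * 3)   # 0.30000000000000004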
As a consequence, we see that even values we might consider perfectly "precise" on paper are not exact within the computer. This does not mean we should avoid using such numbers---that is really not feasible. But it does mean that we should adjust our assumptions about the exactness of evaluations involving them, and avoid using them in expressions where their approximate nature would be unstable or otherwise problematic. In particular, as we noted before, we should typically not use floating point evaluations within expressions of exact equality or inequality, because the results will be unreliable.
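One common alternative (sketched here as a suggestion, not as part of the original text) is to test for equality within a small tolerance, for example with the standard library's math.isclose():
import math

x = 5.1
print(x % 1 == 0.1)               # False: exact equality is unreliable
print(math.isclose(x % 1, 0.1))   # True: equal to within a relative tolerance
print(abs(x % 1 - 0.1) < 1e-9)    # True: a hand-rolled absolute tolerance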
Note
In truth, calculation "precision" is properly defined in terms of bytes, not decimal places, though we will often speak of the latter. In general, we won't have to think about base-2 representations of numbers as we work: this is just another point to emphasize that we can't ask for exactness with floating point numbers.
12.3. Practice¶
Are 5.6 % 1 == 0.6, 5.7 % 1 == 0.7, 5.8 % 1 == 0.8 and 5.9 % 1 == 0.9 True?
Are 0.1 * 5 == 0.5, 0.1 * 6 == 0.6, ..., 0.1 * 13 == 1.3 True?
Does Python produce the correct answer for the following: 0.1**2? Why or why not? (Hint: "correct" mathematically might differ from "correct" computationally.)
Consider having some float type variable x. As noted above, we should avoid using either one of the following equivalent expressions for exact equality:
x % 1 == 0.1
x % 1 - 0.1 == 0
But is there an alternative way we could try to evaluate whether x % 1 matches with 0.1 using a different comparison, which would still allow us to tell the difference between cases where, say, x = 5.1 and x = 5.9? That is, even if we cannot judge exact equality, what might be the next best thing we could test?