When you perform arithmetic operations in Python using float operands, you may notice that the results aren’t always exact—even when the expected outcome is a finite decimal or an integer with a known number of digits.
For example:
3 * 0.14 ➞ 0.42000000000000004
1e9 - 1e-9 ➞ 1000000000.0
2.3 - 0.1 - 0.1 - 0.1 ➞ 1.9999999999999996
None of these results is exact.
The reason is that computers represent numbers in binary, using powers of two. Decimal fractions cannot always be represented exactly as binary fractions. This limitation usually doesn’t cause problems in most applications. However, there are scientific and mathematical tasks where numerical precision is crucial, and even tiny rounding or representation errors can accumulate and lead to unacceptable results. Such areas include orbital mechanics, climate modeling, and nuclear physics.
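A quick way to see this accumulation in action (in standard CPython, with IEEE 754 double-precision floats) is to add 0.1 to itself ten times. Each addend carries a tiny binary representation error, and the errors do not cancel:

```python
# Summing 0.1 ten times does not give exactly 1.0 in binary
# floating point, because 0.1 has no finite binary expansion.
total = sum([0.1] * 10)
print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```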
Fortunately, we don’t have to give up on high-precision calculations. Python’s standard library provides the decimal module, which allows us to perform decimal arithmetic with arbitrary precision. Numbers are represented using the Decimal type from this module.
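Note that how you construct a Decimal matters. Passing a string captures the intended decimal value exactly, while passing a float faithfully reproduces the float's already-imprecise stored value:

```python
from decimal import Decimal

# A string argument captures the intended value exactly.
print(Decimal('0.1'))  # 0.1

# A float argument carries over the float's binary error, because
# Decimal reproduces the exact value the float actually stores.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

This is why the examples that follow construct Decimal values from string literals.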
Let’s take a look at the results of the operations above when using Decimal instead of float. You’ll also see how precisely the two types represent the square root of 2 — an irrational number.
```python
from decimal import Decimal, getcontext

x = Decimal(3) * Decimal('0.14')
print(x)  # Result: 0.42

y = Decimal('1e9') - Decimal('1e-9')
print(y)  # Result: 999999999.999999999

z = Decimal('2.3') - Decimal('0.1') - Decimal('0.1') - Decimal('0.1')
print(z)  # Result: 2.0

# Square root using float and Decimal
r = 2**0.5
print(r)  # Result: 1.4142135623730951

# By default, Decimal precision (number of significant digits) is 28
u = Decimal(2).sqrt()
print(u)  # Result: 1.414213562373095048801688724

# Increase precision to 40 by adjusting the context attribute
ctx = getcontext()
ctx.prec = 40
v = Decimal(2).sqrt()
print(v)  # Result: 1.414213562373095048801688724209698078570
```
A key feature of the decimal module is the ability to define a computation context, which allows you to control various aspects of arithmetic — most importantly, precision — to suit your needs. This is shown in the last few lines of the sample code above.
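Changing `getcontext().prec` affects all subsequent Decimal operations in the current thread. When you only need higher precision for one calculation, the module also provides `decimal.localcontext()`, which restores the previous context automatically. A minimal sketch:

```python
from decimal import Decimal, localcontext

# Temporarily raise precision inside a with-block; the surrounding
# context is restored automatically on exit.
with localcontext() as ctx:
    ctx.prec = 50
    high = Decimal(2).sqrt()  # computed with 50 significant digits

low = Decimal(2).sqrt()  # back to whatever the global context
                         # specifies (28 digits by default)
print(high)
print(low)
```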
The decimal module offers a wide range of tools for performing high-precision calculations. However, understanding and using it effectively also requires some familiarity with how decimal arithmetic works internally. For that reason, learning it solely from the official Python documentation is not always straightforward.
That’s why the e-book Python Knowledge Building Step by Step dedicates an entire chapter to this module, explaining its concepts clearly and providing numerous examples, illustrations, and tables to help readers fully understand the topic.