Splitting Hairs over Precision

There is a really interesting short article in
the February 13, 2006, edition of EETimes titled, "Humans Still Thrive on Decimals,"
by Clive (Max) Maxfield, that illustrates by example how binary mathematics can
make simple numerical operations troublesome. Drawing on the extensive collection
of works by IBM Fellow Mike Cowlishaw's "General Decimal Arithmetic"
website, Max shows how, for instance, the "double" floating point real number type
cannot precisely represent 0.1. Multiplying 0.1 x 8 returns, to full precision,
0.8000000000000000444089209850062616169452667236328125. However, adding 0.1 to itself
8 times yields an entirely different result,
0.79999999999999993338661852249060757458209991455078125. Theoretically, an infinite
number of binary bits would be required to precisely represent many seemingly simple
decimal values.
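
For anyone who wants to see this firsthand, a few lines of Python reproduce both
results; the standard decimal module is used here only to print the full decimal
expansion of each binary result:

    from decimal import Decimal

    # Multiplying by 8 is exact in binary (8 is a power of two), so the only
    # error is the one introduced by storing 0.1 in the first place.
    product = 0.1 * 8
    print(Decimal(product))  # 0.8000000000000000444089209850062616169452667236328125

    # Repeated addition rounds at several intermediate steps, and the
    # accumulated error lands on the other side of 0.8.
    total = 0.0
    for _ in range(8):
        total += 0.1
    print(Decimal(total))    # 0.79999999999999993338661852249060757458209991455078125
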
Having written a fair amount of engineering software that requires carrying
calculations to many decimal places, I have run into this problem often. It can be a
real headache to resolve in a manner that does not cause users to question the
validity of the results.
For instance, consider the case of calculating frequency conversions where both
very high and very low frequencies are employed in the same calculation. The two
sets of numbers (maybe three sets, depending on the relative frequency of the local
oscillator) may be separated by six or even nine orders of magnitude. Many decimal
places are required to maintain precision at both extremities. Nobody I work with
will accept an answer that turns what should be, say, 5.000000 kHz into 4.999999
kHz. It just doesn't feel right even though we know it is "close enough."
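
The danger is easy to reproduce. Here is a contrived Python sketch (the 10 GHz
carrier and 5 kHz offset are made-up numbers for illustration) that differences a
received signal against a nearby local oscillator; single precision loses the
low-frequency result that double precision preserves:

    import struct

    def as_float32(x):
        """Round a Python float to single precision, as a 32-bit DSP would."""
        return struct.unpack("f", struct.pack("f", x))[0]

    rf = 10.000005e9   # hypothetical received signal: 10 GHz plus a 5 kHz offset
    lo = 10.0e9        # hypothetical local oscillator at exactly 10 GHz

    print(rf - lo)                          # double precision: 5000.0
    print(as_float32(rf) - as_float32(lo))  # single precision: 5120.0
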
Because of the limitations imposed by our finite systems, financial calculations are
required by law to be carried out using binary coded decimal (BCD) operations. Doing
so adds more bits to represent each number, but the phenomenon of binary approximation
is eliminated. Financial calculations can face the same extremes of magnitude as
engineering calculations when you consider the case of calculating the trillion-dollar
gross domestic product of a country like the U.S. down to the penny
($1,000,000,000,000.01). Calculation times increase significantly when using BCD
arithmetic, and can be 1,000 times or more longer than floating point operations. Of
course, special circuitry (FPGAs) could be used to implement calculators whose only
function is to do BCD math rather than binary floating point.
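
The same idea is easy to try in software. Python's decimal module implements
Cowlishaw's General Decimal Arithmetic specification, and a short sketch shows the
approximation problem simply disappearing:

    from decimal import Decimal, getcontext

    getcontext().prec = 34   # plenty of digits for trillions down to the penny

    # Decimal stores base-10 digits, so 0.1 is represented exactly.
    print(sum(Decimal("0.1") for _ in range(8)))  # 0.8, no binary approximation

    # A trillion-dollar GDP carried to the penny, with nothing lost.
    gdp = Decimal("1000000000000.01")
    print(gdp + Decimal("0.01"))                  # 1000000000000.02
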
The story reminds me of a simple test that was popular in the 1980s as a cursory
means of determining whether a particular handheld calculator was worthy of
engineering pursuits. Perform the operation 1 / 3 * 3 =. If the answer was 1.0000000,
then you could buy the calculator. If it was 0.9999999, then it was time to scoff at
it in an elitist way and move on; let the knaves use those.
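
As it happens, the decimal module can even mimic that old test. Limiting the working
precision to eight digits reproduces the telltale answer of a calculator that carries
no guard digits:

    from decimal import Decimal, getcontext

    getcontext().prec = 8                # an 8-digit calculator, no guard digits
    print(Decimal(1) / Decimal(3) * 3)   # 0.99999999

Calculators that passed the test typically carried extra guard digits internally and
rounded only the displayed result.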