An old fogy's website
Not a trendy site. Just a dos dinosaur playing around.
Fractional-Exponential Integer Math

Fractional-Exponential Integer Math (fx) is a programming technique for storing and computing fractional and exponential numbers without the inaccuracies inherent to floating point numbers (see the "Floating point inaccuracies" note below). It is precise because all numeric values are stored internally as integers. This page explores the use of the fx technique in C.

Floating point inaccuracies

2299.500 * 1837.500 = 4225331.000

See anything wrong with this equation? Hint: the product of two fractional numbers both ending in .5 must itself be fractional, ending in .25 or .75; the product can't be an integer. (In this case, the product of 2299.5 times 1837.5 is 4225331.25.) The inaccurate equation above was generated by my reliable old Dell home computer. Specifically, it is the output of this C code, compiled with gcc 3.3.5:

#include &lt;stdio.h&gt;

int main(void)
    {
    float a, b, x;
    a = 2299.5;
    b = 1837.5;
    x = a * b;
    printf("%.3f * %.3f = %.3f\n", a, b, x);
    }

The problem is not in the computer, the code, or the compiler, but in the nature of floating point numbers. True, one could correct the specific example above by using type double, or by switching to a different floating point representation. But at some point, in some other computation, the internal limitations of exponent-mantissa approximation would produce a similar inaccuracy, or worse. You want an example of worse? Try this:

0.0 ^ 0.0 = 1.0

In floating point arithmetic of virtually any implementation, at any precision, zero raised to the zeroth power comes out equal to one! How can this be? Not because of rounding error: unlike most values, floating point zero is represented exactly. Rather, the IEEE 754 standard and the C library deliberately define pow(0.0, 0.0) to return 1, even though 0^0 is mathematically indeterminate. Now, one could argue that this is legit: treating 0^0 as 1 is a convenient convention in many contexts, such as evaluating polynomials. But can you find an honest mathematician or engineer who will accept the above floating point computation as it stands? This engineer says it's not good math.

My implementation of fx uses these two C structures:

#define IntSize long

struct frac
    {
    IntSize num;
    IntSize den;
    };

struct fx
    {
    struct frac base;
    struct frac exp;
    } xx;

(IntSize is equated here with type long, giving it 32-bit or 64-bit precision, depending upon the target machine. Its definition could be altered to attain greater or more portable precision, depending upon the developer's needs.)

All fx numbers take the form struct fx, which is to say, a fractional base number (struct frac) raised to a fractional exponent. All four components are IntSize integers. Where the fx is itself an integer or whole number, every component except xx.base.num is 1 (one). So, for example, the whole number 142 would take the form

xx.base.num = 142;
xx.base.den = 1;
xx.exp.num = 1;
xx.exp.den = 1;

Which is

(142 / 1) ^ (1 / 1)

The quantity 142 divided by 1, raised to the power of the quantity of 1 divided by 1

Fractions

An example of a simple fractional value might be

(1 / 3) ^ (1 / 1)

There is no repeating decimal .3333 and no risk of rounding errors. Indeed, there is no attempt to carry out the division operation. The fraction is simply stored as a numerator / denominator fraction, all components still IntSize integers.

For base 10 decimal values, the fx number begins to resemble Cobol's implied-decimal concept: xx.base.den is always some power of ten, depending upon the number of implied decimals. In a business application with dollar-and-cents fields, the denominator would always be 100. For example, the dollar amount $547.95 would be

(54795 / 100) ^ (1 / 1)

If IntSize is a 64-bit integer, an fx number can represent a decimal value of up to 18 significant digits (as in Cobol), counting digits before and after the implied decimal point. That translates to dollar-and-cents amounts up to tens of quadrillions of dollars without ever losing a single cent to rounding errors or floating point inaccuracies. That's enough to keep track of the entire US national debt, down to the last penny, many times over. If the developer needs a greater range than that, IntSize can be redefined as a 128-bit type, where the compiler provides one (GCC's __int128, for example).

Exponents

The second half of the fx number is the exponent, or power to which the base fraction is raised. It is also a fractional number, so that fractional powers (roots) may be represented. So, for example, thirteen cubed would be

xx.base.num = 13;
xx.base.den = 1;
xx.exp.num = 3;
xx.exp.den = 1;

Which is

(13 / 1) ^ (3 / 1)

The square root of .7 would be

(7 / 10) ^ (1 / 2)

As with base fractions, there is no attempt to carry out the power operation. The exponent is simply stored as two IntSize integers.

Imaginary Numbers

A unique feature of the fx technique is the ability to handle imaginary numbers, i.e. numbers based on i, the square root of negative one.

(-1 / 1) ^ (1 / 2)

You learned this in high school: taking an even root of a negative number does not yield a real number. This fx number, for example:

(-5 / 1) ^ (7 / 4)

is impossible to resolve within the domain of real numbers. But normal math rules still apply. For example, two imaginary numbers can be multiplied together:

  (-5 / 1) ^ (7 / 4)
* (-5 / 1) ^ (5 / 4)
 --------------------
  (-5 / 1) ^ (12 / 4)

The product, in this case, no longer has an even root (since the exponent can be simplified to the integer 3), and so can be resolved as the real number -125.

My point is, the fx technique readily facilitates the above and other computations involving imaginary numbers. I wouldn't know how to do it using floating point numbers or any other conventional programming method.

NOTE: This page is a work in progress. Stay tuned.

   rev. 2019.07.11