Precision Arithmetic: A New Floating-Point Arithmetic

Chengpu Wang

TL;DR

This work introduces precision arithmetic, a deterministic floating-point framework that tracks and bounds uncertainty using a central-limit-theorem–based, truncated-Gaussian model under an uncorrelated-uncertainty assumption. It replaces worst-case interval bounds with probabilistic uncertainty propagation, preserving the scaling and recovering principles and representing numbers as $(S \pm R)\,2^E$ with a controlled rounding-up mechanism. The paper develops analytic rules for addition, subtraction, multiplication, division, and function evaluation, extends them to Taylor expansion, and validates the approach through FFT benchmarks, matrix inversion, recursive sine calculations, Taylor expansions, and numerical integration, demonstrating better uncertainty tracking and more realistic bounding than interval arithmetic. It also discusses implementation details, computational costs, and avenues for improvement, including hardware optimizations and calibration to manage dependency effects in progressive algorithms. The results indicate that precision arithmetic offers a practical, statistically grounded alternative to interval arithmetic for normal usage, with a clear framework for validating and comparing uncertainty-bearing arithmetics across common numerical tasks.
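The representation described above can be sketched in a few lines. This is a hypothetical illustration of a significand-with-bounding-range value, not the library's actual API: the names `PrecisionValue`, `s`, `r`, and `e` are assumptions made for this example.

```python
from dataclasses import dataclass

# Illustrative sketch (not the paper's implementation): a value carries a
# significand S and an uncertainty (bounding range) R, both scaled by 2**E.
@dataclass
class PrecisionValue:
    s: int  # significand
    r: int  # bounding range on the significand
    e: int  # binary exponent

    def value(self) -> float:
        # Nominal value: S * 2^E
        return self.s * 2.0 ** self.e

    def uncertainty(self) -> float:
        # Bounding range in value units: R * 2^E
        return self.r * 2.0 ** self.e

x = PrecisionValue(s=13, r=1, e=-3)
print(x.value(), x.uncertainty())  # 1.625 0.125
```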

Abstract

A new deterministic floating-point arithmetic called precision arithmetic is developed to track precision through arithmetic calculations. It uses a novel rounding scheme to avoid the excessive rounding-error propagation of conventional floating-point arithmetic. Unlike interval arithmetic, its uncertainty tracking is based on statistics and the central limit theorem, giving a much tighter bounding range. Its stable rounding-error distribution is approximated by a truncated normal distribution. Generic standards and systematic methods for validating uncertainty-bearing arithmetics are discussed. Precision arithmetic is found to be better than interval arithmetic in both uncertainty tracking and uncertainty bounding for normal usage. The implementation is publicly available at http://precisionarithm.sourceforge.net.
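The core contrast with interval arithmetic can be made concrete with a toy calculation. This is a minimal sketch under the paper's uncorrelated-uncertainty assumption, not the paper's actual propagation rules; the coverage factor `k` is a hypothetical parameter chosen for illustration.

```python
import math

# Sum n values, each with uncertainty +/- delta, and compare how the
# uncertainty bound on the sum grows under two models.

def interval_bound(n: int, delta: float) -> float:
    # Interval arithmetic: worst-case half-widths add linearly.
    return n * delta

def statistical_bound(n: int, delta: float, k: float = 2.0) -> float:
    # Uncorrelated uncertainties add in quadrature (central limit theorem),
    # so the deviation grows as sqrt(n); k is an illustrative coverage factor.
    return k * delta * math.sqrt(n)

n, delta = 100, 0.5
print(interval_bound(n, delta))     # 50.0
print(statistical_bound(n, delta))  # 10.0
```

For 100 terms the worst-case interval bound is five times wider than the quadrature bound, which is the sense in which statistical tracking yields a "much tighter bounding range".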

Paper Structure

This paper contains 59 sections, 80 equations, 52 figures, 4 tables.

Figures (52)

  • Figure 1: Effect of noise on bit values of a measured value. The triangular wave signal and the added white noise are shown at top using the thin black line and the grey area, respectively. The values are measured by a theoretical 4-bit Analog-to-Digital Converter under ideal conditions, assuming the LSB is the 0th bit. The measured 3rd and 2nd bits without the added noise are shown using thin black lines, while the mean values of the measured 3rd and 2nd bits with the added noise are shown using thin grey lines.
  • Figure 2: Allowed maximal correlation between two values vs. input precisions and independence standard (as shown in legend) for the independence uncertainty assumption of precision arithmetic to be true.
  • Figure 3: Measured probability distribution of rounding errors of the precision round-up rule for the minimal significand thresholds 0, 1, 2, 4, and 8, respectively. Mathematically the probability is usually defined either in the range (-1/2, +1/2] or in the range [-1/2, +1/2), but not in the range [-1/2, +1/2]. Because -1/2 and +1/2 in a bounding range have different meanings in precision representation, the probability range is defined as [-1/2, +1/2], which introduces artificially smaller histogram counts in the bins containing either -1/2 or +1/2.
  • Figure 4: Measured probability distribution of the rounding error after addition and subtraction. In the legend, "1" denotes the measured rounding error distribution for the minimal significand threshold 0; "1+1" denotes one addition and "1-1" one subtraction using the rounding error distribution of "1"; "1+1+1" denotes two additions, "1-1-1" two subtractions, "1+1-1" an addition followed by a subtraction, and "1-1+1" a subtraction followed by an addition.
  • Figure 5: The result rounding error distribution $R=8/2$ after the original error distribution $R=8$ is rounded up once. The $R=8/2$ distribution is compared with the $R=4$ distribution and the $R=2$ distribution, which have the same bounding range and deviation, respectively.
  • ...and 47 more figures