dxf <dxforth@gmail.com> writes:
>On 11/07/2025 1:17 pm, Paul Rubin wrote:
>>This gives a good perspective on posits:
>>https://people.eecs.berkeley.edu/~demmel/ma221_Fall20/Dinechin_etal_2019.pdf
>Yes, that looks ok.

One thing I noticed is that they suggest implementing the smaller
posit formats by intelligent table lookup.
If we have small bit widths and table lookup, I wonder if we should go
for any variant of FP (including posits) at all, or if an
exponent-only (i.e., logarithmic) representation would be better.
E.g., for 8 bits, out of the 256 values, 2 would represent infinities,
one would represent NaN, and one would represent 0, leaving 252
remaining values.  If we use 2^(1/11) (~1.065) as base B, this gives
a number range from B^-126=0.000356 to B^125=2635.  You can vary B to
either give a finer resolution at the expense of a smaller number
range, or a larger number range at the expense of a coarser
resolution.
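
As a minimal C sketch of decoding such an 8-bit code (the particular
code assignments for 0, NaN, and the infinities are my assumption;
the text above only fixes the counts):

#include <math.h>
#include <stdio.h>

/* Decode the hypothetical 8-bit exponent-only (LNS) format sketched
   above: 252 codes carry the exponents -126..125 of the base
   B = 2^(1/11).  The specific codes chosen here for 0, NaN, and the
   infinities are illustrative. */
static double lns8_decode(unsigned char c)
{
    double B = exp2(1.0 / 11.0);            /* ~1.065 */
    switch (c) {
    case 0:   return 0.0;
    case 253: return INFINITY;
    case 254: return -INFINITY;
    case 255: return NAN;
    default:  return pow(B, (int)c - 127);  /* codes 1..252 -> B^-126..B^125 */
    }
}

int main(void)
{
    printf("%g %g %g\n", lns8_decode(1),    /* B^-126 ~ 0.000356 */
                         lns8_decode(252),  /* B^125  ~ 2635     */
                         lns8_decode(128)); /* B^1    ~ 1.065: next after 1 */
    return 0;
}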
<https://developer.nvidia.com/blog/floating-point-8-an-introduction-to-efficient-lower-precision-ai-training/>
presents E4M3 with +-448 range, and E5M2 with +-57344 range.  But note
that the next number after 1 is 1.125 for E4M3 (3 mantissa bits give a
spacing of 2^-3 just above 1) and 1.25 for E5M2 (2 mantissa bits,
spacing 2^-2), both more coarse-grained than the 1.065 that an
exponent-only format with B=2^(1/11) gives you.
Addition and subtraction would be performed by table lookup (and would
almost always be approximate), while multiplication and division only
need an integer adder.
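
In C, a sketch of that arithmetic on the integer exponent codes might
look as follows (the helper names are mine; signs, zero, infinities,
and overflow are ignored):

#include <math.h>

#define LOG_B (log(2.0) / 11.0)    /* ln B, with B = 2^(1/11) */

/* x = B^a, y = B^b, with integer exponent codes a and b. */
int lns_mul(int a, int b) { return a + b; }   /* B^a * B^b = B^(a+b) */
int lns_div(int a, int b) { return a - b; }

/* B^a + B^b = B^(a + s(b-a)) with s(d) = log_B(1 + B^d).  Here s is
   computed in ordinary floating point; an 8-bit implementation would
   look it up in a small table instead, and the rounding of s to an
   integer is what makes most additions approximate.  Subtraction
   works the same way with a table for log_B|1 - B^d|. */
int lns_add(int a, int b)
{
    double s = log1p(exp((b - a) * LOG_B)) / LOG_B;
    return a + (int)lround(s);
}

E.g., lns_add(0, 0) yields 11, i.e., 1+1 = B^11 = 2, as expected.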
>>Floating point arithmetic in the 1960s (before my time) was really in a
>>terrible state.  Kahan has written about it.  Apparently IBM 360
>>floating point arithmetic had to be redesigned after the fact, because
>>the original version had such weird anomalies.
>
>But was it the case by the mid/late 70s - or did certain individuals see
>an opportunity to influence the burgeoning microprocessor market?
Yes, that's the thing with FP.  Some people just do their computations,
and who cares if the results might be an artifact of numerical
instability.  For weather forecasts, there is no telling whether a bad
prediction is due to a numerical error, due to imperfect measurement
data, or because of the butterfly effect (which is a convenient
excuse).
Other people care more about the results, and perform numerical
analysis.  There are only a few specialists for that, and they have
asked for and gotten features in IEEE 754 and the hardware that the
vast majority of programmers never consciously use, e.g., rounding
modes or the inexact "exception" (actually a flag, not a Forth
exception), which allows them to tell whether there was a rounding
error in a computation.  But when you use a library designed with the
help of numerical analysis, you may benefit from the existence of
these features.
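
For illustration (using C's standard <fenv.h> interface, not anything
Forth-specific), here is how the inexact flag and the rounding modes
can be exercised:

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON   /* we read and write the FP environment */

int main(void)
{
    volatile double one = 1.0, three = 3.0;

    /* The inexact flag is raised whenever a result had to be rounded. */
    feclearexcept(FE_INEXACT);
    volatile double third = one / three;   /* 1/3 must be rounded */
    printf("1/3 inexact: %d\n", fetestexcept(FE_INEXACT) != 0);

    /* Directed rounding gives a guaranteed enclosure of the true 1/3. */
    fesetround(FE_DOWNWARD);
    volatile double lo = one / three;
    fesetround(FE_UPWARD);
    volatile double hi = one / three;
    fesetround(FE_TONEAREST);
    printf("1/3 in [%.17g, %.17g]\n", lo, hi);
    (void)third;
    return 0;
}

Interval-arithmetic libraries are the classic consumers of the
directed rounding modes.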
They have also asked for and gotten things like denormal numbers,
infinities, and NaNs, which result in fewer numerical pitfalls for
programmers who are not numerical analysts.  These features may be
irrelevant for those who do weather prediction, but I expect that
those who found the binary64 provided by VFX's SSE2-based package
not good enough may benefit from such features.
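
A classic example (mine, not from the discussion above) of a pitfall
that denormals remove: with gradual underflow, x != y guarantees
x - y != 0, whereas in a flush-to-zero mode (available on SSE2, for
instance) the difference below becomes 0:

#include <float.h>
#include <stdio.h>

int main(void)
{
    volatile double x = 1.5 * DBL_MIN;   /* just above the smallest normal */
    volatile double y = DBL_MIN;         /* the smallest normal number */
    /* With denormals, x - y is the denormal 0.5*DBL_MIN, not 0. */
    printf("x != y: %d,  x - y = %g\n", x != y, x - y);
    return 0;
}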
In any case, FP numbers are used in very diverse ways.  Not everybody
needs all the features, and even fewer features are consciously
needed, but that's the usual case with things that are not
custom-tailored for your application.
- anton
--
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: https://forth-standard.org/
EuroForth 2023 proceedings: http://www.euroforth.org/ef23/papers/
EuroForth 2024 proceedings: http://www.euroforth.org/ef24/papers/