Subject: Re: Misc: Applications of small floating point formats.
From: cr88192 (at) *nospam* gmail.com (BGB)
Newsgroups: comp.arch
Date: 02. Aug 2024, 07:27:29
Organization: A noiseless patient Spider
Message-ID: <v8hu8l$2m5hi$1@dont-email.me>
References: 1 2 3 4
User-Agent: Mozilla Thunderbird
On 8/2/2024 12:53 AM, Thomas Koenig wrote:
> EricP <ThatWouldBeTelling@thevillage.com> wrote:
>> With FP128 will there again be a significant difference in speed to
>> FP64 or FP32 (including transcendentals)? Seems there would be, because not
>> every HW implementation is going to implement a full-width multiplier.
> The only major architecture I'm aware of that uses FP128, POWER,
> chose to use their decimal FP unit to do it on the side.
> This makes multiplication _really_ slow, unfortunately.
FWIW: Binary128 is sufficiently overkill for most purposes, and sufficiently rarely used, that there isn't a strong reason not to just do it (or something similar) in software.
Except maybe for some special case where one actually needs fast 128-bit floating point.
For most things though, Binary64 is sufficient (and many use-cases exist where Binary32 is not sufficient, so Binary64 seems mostly necessary for "general use").
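For reference, Binary128 is 1 sign bit, a 15-bit exponent (bias 16383), and a 112-bit fraction. A minimal sketch of the sort of carrier type a software implementation would have to unpack (names here are illustrative, not from any particular softfloat library):

#include <stdint.h>

/* Illustrative Binary128 carrier: two 64-bit halves.  The high half
   holds the sign bit, the 15-bit exponent, and fraction bits 111..64;
   the low half holds fraction bits 63..0. */
typedef struct {
    uint64_t lo;    /* fraction bits 63..0 */
    uint64_t hi;    /* sign(1) | exponent(15) | fraction bits 111..64 */
} fp128_t;

/* Extract the biased exponent (bias = 16383). */
static inline int fp128_exp(fp128_t x)
    { return (int)((x.hi >> 48) & 0x7FFF); }

/* Extract the sign bit. */
static inline int fp128_sign(fp128_t x)
    { return (int)(x.hi >> 63); }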
At one point, some years back, I had looked into whether to do Binary128 or to do something like the .NET Decimal format.
IIRC, .NET's format was something like:
Three 32-bit words each holding 9 decimal digits;
Another 32-bit word with an exponent.
Each 32-bit word representing a value as a linear integer between 000000000 and 999999999.
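Taking that recollection at face value (illustrative names only, not claimed to match the actual .NET encoding), such a format and a significand add might look something like:

#include <stdint.h>

/* Three base-10^9 "limbs" (least significant first), each holding
   9 decimal digits, plus a word for the exponent/sign. */
typedef struct {
    uint32_t limb[3];   /* each limb: 0..999999999 */
    uint32_t expsign;   /* decimal exponent and sign */
} dec96_t;

/* Adding two 27-digit significands needs explicit carry handling
   across the base-10^9 limbs; this per-limb correction is part of
   why a plain binary format tends to be faster. */
static void dec96_add_sig(uint32_t r[3],
                          const uint32_t a[3], const uint32_t b[3])
{
    uint32_t carry = 0;
    for (int i = 0; i < 3; i++) {
        uint32_t t = a[i] + b[i] + carry;   /* at most 1999999999 */
        carry = (t >= 1000000000u);
        r[i] = carry ? (t - 1000000000u) : t;
    }
    /* a carry out of limb[2] would force renormalization */
}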
In my evaluation, Binary128 won (both more accurate and faster for general computation).
Granted, both seem likely to be faster than a software implementation of Decimal128.
...
In my evaluations, I ended up prioritizing 128-bit integers:
Cheaper to implement in hardware;
Can be used to make 128-bit floating point faster;
More likely to be useful.
Though, the use of 128-bit integers is hindered for portable code, given that seemingly none of the major compilers support them "in general".
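(GCC and Clang do offer __int128 as an extension on 64-bit targets, but it is not standard C and not universally available, so portable code tends to fall back on a pair of 64-bit halves. A minimal sketch, with illustrative names:)

#include <stdint.h>

/* Portable unsigned 128-bit value as two 64-bit halves. */
typedef struct { uint64_t lo, hi; } u128;

/* 128-bit add: the carry out of the low half is detected via the
   unsigned wraparound (r.lo < a.lo). */
static inline u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);
    return r;
}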
...