Subject: Re: "The Best Programming Language for the End of the World"
From: mhx (at) *nospam* iae.nl (mhx)
Newsgroups: comp.lang.forth
Date: 11 Apr 2025, 16:42:04
Organization: novaBBS
Message-ID: <e5f5c904560a535430fa648d32297a2b@www.novabbs.com>
User-Agent: Rocksolid Light
This is likely to have been a factor in Intel's decision to use
80 bits internally.
Maybe they needed 80 bits to print a binary 64-bit float in BCD
for COBOL compilers.
However, one needs a few hundred bits to correctly represent a
binary 56-bit significand in decimal, so maybe 80 bits is only
enough for single-precision floats?
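(A rough sanity check, assuming a DEC-style double with a 56-bit
significand m and exponents down to about 2^-128: the exact decimal
value of m*2^-128 is m*5^128 / 10^128, and the integer m*5^128 is
about 56 + 128*log2(5) = 350 bits long, i.e. some 105 significant
decimal digits. A few hundred bits of integer arithmetic really is
what an exact conversion takes.)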
For sure, high-quality libraries do not rely on 80-bit extended
precision for decimal output - I remember even the MASM library
had special code for that.
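In Forth terms that binary-to-decimal step sits behind REPRESENT,
and the quality of F. / FS. / FE. output is usually just the quality
of the system's REPRESENT. A minimal sketch of calling it directly
(standard floating-point wordset; the buffer, the name .SIG, and the
choice of 17 digits are only for illustration):

\ REPRESENT ( c-addr u -- n flag1 flag2 ) ( F: r -- )
\ puts the u most significant decimal digits of r in the buffer and
\ returns the decimal exponent n, the sign flag1, and flag2 saying
\ whether the result is valid.
CREATE digbuf 32 CHARS ALLOT

: .SIG ( F: r -- )
  digbuf 17 REPRESENT        \ 17 digits round-trip a 64-bit double
  IF   ( n flag1 )  IF ." -" THEN
       ." 0." digbuf 17 TYPE ."  E" .
  ELSE 2DROP ." not a finite number"
  THEN ;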
The real problem in practice is what to do when the user
calculates, e.g., sin(x)/x for x near 0, or inverts a
10,000 x 10,000 matrix. My approach is that the user knows
the precision of what he wants to see (if not, we have no
problem and can do anything), and his problem is what the
result *looks like* on the screen, on paper, or to other
programs. And it is not only the digits, it is also +/-NaN,
+/-Infinity, and maybe the payload of these specials.
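For the record, the textbook dodge for the sin(x)/x computation is
to special-case the region where the naive quotient loses meaning;
just a sketch in standard Forth, with an arbitrary threshold:

\ Naive sin(x)/x gives 0/0 at x=0 (a NaN or a division-by-zero trap,
\ depending on the system), although the limit is 1.  Below
\ |x| = 1E-8 the quotient is 1 to full double precision anyway.
: SINC ( F: x -- sinx/x )
  FDUP FABS 1E-8 F<
  IF    FDROP 1E0
  ELSE  FDUP FSIN FSWAP F/
  THEN ;

Whatever still comes out as a NaN or an Infinity then lands on the
output words, and that is exactly where the display question bites.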