This is likely to have been a factor in Intel's decision to use
80 bits internally.
Maybe they needed 80 bits to print a binary 64-bit float in BCD
for COBOL compilers.
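For what it's worth, the x87 really does have a packed-BCD store,
FBSTP, which writes 18 decimal digits plus a sign byte into exactly
80 bits. A rough Python sketch of that digit packing (the helper is
my own, not anything of Intel's):

    def to_packed_bcd(n, digits=18):
        """Pack a non-negative integer as packed BCD: two digits per
        byte, least significant byte first (the digit layout FBSTP
        uses for its 18-digit field)."""
        if n < 0 or n >= 10 ** digits:
            raise ValueError("value does not fit in %d BCD digits" % digits)
        out = bytearray()
        for _ in range(digits // 2):
            out.append(((n // 10) % 10) << 4 | (n % 10))
            n //= 100
        return bytes(out)

    # 9 data bytes hold the 18 digits; FBSTP adds a tenth byte
    # carrying the sign bit, for a total of exactly 80 bits.
    print(to_packed_bcd(1234567890).hex())   # '907856341200000000'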
However, one needs a few hundred bits to correctly represent a
binary 56-bit significand in decimal, so maybe 80 bits are only
enough for single-precision floats?
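To put a rough number on that (my own illustration, not from the
thread): take a format with a 56-bit significand and an 8-bit
exponent, such as the PDP-11/VAX D float, whose smallest values
scale by about 2**-183. The exact decimal expansion of such a value
runs to about 145 significant digits, on the order of 480 bits:

    from decimal import Decimal, getcontext

    # Hypothetical 56-bit-significand, 8-bit-exponent value at the
    # bottom of the exponent range: m * 2**-183.
    m = 2 ** 56 - 1                    # an odd 56-bit significand
    print(len(str(m * 5 ** 183)))      # 145 significant decimal digits

    # m * 2**-183 == m * 5**183 / 10**183; holding those 145 digits
    # exactly takes about 145 * log2(10), i.e. roughly 480 bits.
    getcontext().prec = 200
    print(Decimal(m) / Decimal(2) ** 183)   # the full exact expansion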
For sure, high-quality libraries do not rely on 80-bit extended
precision for decimal output - I remember even the MASM library
had special code for that.
...