On 29.06.2025 05:51, Keith Thompson wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 28.06.2025 02:56, Keith Thompson wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 27.06.2025 02:10, Keith Thompson wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
But not all decimal floating point implementations used "hex
floating point".

Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.

BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
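(Indeed: a decimal digit carries log2(10) ~= 3.32 bits of information
but occupies 4 bits in BCD, so the efficiency is 3.32/4 ~= 0.83.)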
That's a problem of where your numbers stem from. "1/3" is a formula!

1/3 is also a C expression with the value 0. But what I was
referring to was the real number 1/3, the unique real number that
yields one when multiplied by three.
Yes, sure. That was also how I interpreted it; that you meant (in
"C" parlance) 1.0/3.0.

As mentioned elsethread, I was referring to the real value.
Yes, me too, when I saw your original 1/3. You *then* pointed out
that it is 0 in "C" (integer division), and I explained that I had
taken your "1/3" as the real value all along, and that - to address
your 1/3==0 - I meant the value you get in "C" [approximately] by
1.0/3.0, which of course still differs from the real mathematical
number. I guess we might have been talking at cross purposes.
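To make the difference concrete, a minimal C snippet (the printed
values assume the usual IEEE 754 double, which the language itself
doesn't mandate):

  #include <stdio.h>

  int main(void)
  {
      printf("%d\n", 1 / 3);        /* integer division: prints 0 */
      printf("%.17g\n", 1.0 / 3.0); /* nearest double to the real 1/3:
                                       0.33333333333333331 */
      return 0;
  }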
What I was trying to explain were different things on different
levels.

a) Errors on input/output conversion:
   the value 1.33 - in BCD no errors, in binary (radix-2) floating
   point with errors;
   the real value 1.333333... - generally an error (an infinite
   digit string);
   0.10 - in BCD no errors, in binary with errors.

b) Errors in calculations:
   quantities that have an exact internal representation can be
   calculated with correctly (under the previously presented
   conditions) in decimal; examples: 0.10, 1.33,
   1.33333333333333333333333, but *not*
   1.33333333333333333333333... (the infinite form, whether depicted
   as here with '...' or expressed as the formula '1/3').

A small demonstration of a) in C follows below.
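Here is that demonstration, again assuming IEEE 754 binary64 doubles;
printing more digits than the literal has exposes the
input-conversion error:

  #include <stdio.h>

  int main(void)
  {
      printf("%.20f\n", 0.10); /* 0.10000000000000000555 */
      printf("%.20f\n", 1.33); /* 1.33000000000000007105 */
      return 0;
  }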
1.0/3.0 as a C expression yields a value of type double, typically
0.333333333333333314829616256247390992939472198486328125 or [...]
There are numbers that can be expressed accurately in binary, such
as 0.5, 1.0, 2.0 (for example). Those can also be expressed
accurately with decimal encoding.

Other finite numbers/number-sequences can be expressed accurately
with decimal encoding, such as 0.1 and 1.33 (for example), but only
specific ones can be represented accurately with binary encoding.

With infinite sequences of digits you will have problems with both
internal representations (binary and decimal); as you see with
specific real values such as 'sqrt(2)', 'pi', 'e', '1/3' (for
example), which are cut off at some decimal place internally,
depending on the supported "register width".
[...]
In numerics you have various places where errors appear in principle
and accumulate. One of them arises when numbers are transferred from
(and to) the external representation. Another one arises when
performing calculations with internally imprecisely represented
numbers.
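Both error sources combine in the classic accumulation example
(IEEE 754 doubles assumed):

  #include <stdio.h>

  int main(void)
  {
      double sum = 0.0;
      for (int i = 0; i < 10; i++)
          sum += 0.1;            /* each addend is already inexact */
      printf("%.17g\n", sum);    /* 0.99999999999999989, not 1 */
      return 0;
  }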
The point with decimal encoding addresses the lossless (and fast[*])
input/output of given [finite] numbers; numbers that have been (and
are) used e.g. in financial contexts (billions of euros and cents).
And you can also perform exact arithmetic in the typical operations
(sum, multiply, subtract)[**] without errors.[***]
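Standard C has no decimal floating type (C23's _Decimal32/64/128 are
only optional), so as a sketch of the idea: financial code commonly
gets the same exactness with scaled integers, here hypothetical
amounts in cents:

  #include <stdio.h>

  int main(void)
  {
      /* Amounts as integer cents: sums and differences stay exact
         until the integer type overflows. */
      long long a = 100000000010LL; /* 1,000,000,000.10 */
      long long b = 133;            /* 1.33 */
      long long s = a + b;
      printf("%lld.%02lld\n", s / 100, s % 100); /* 1000000001.43 */
      return 0;
  }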
Which is convenient only because we happen to use decimal notation
when writing numbers.
But that exactly is the point! With decimal encoding you get an
exact internal picture of the external representation of the
numbers, simply because the external representations are finite.
(The same holds for the output.) With binary encoding you get the
first degradation already during that I/O process; decimal encoding,
OTOH, is robust here (see the round-trip sketch below). That's why
it's so advantageous specifically for the financial sector.
It would not be the best choice where a lot of internal calculations
are done, as (for example) in calculating hydrodynamic processes.
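A round-trip sketch of that robustness point; parse_cents is a
hypothetical helper (no sign or validation handling, input assumed
to be of the form "digits.dd"):

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical helper: "d...d.dd" -> integer cents, exactly. */
  static long long parse_cents(const char *s)
  {
      char *end;
      long long euros = strtoll(s, &end, 10);
      return euros * 100 + strtoll(end + 1, NULL, 10);
  }

  int main(void)
  {
      const char *in = "1.33";
      long long c = parse_cents(in);
      printf("%lld.%02lld\n", c / 100, c % 100); /* "1.33" again */
      printf("%.17g\n", strtod(in, NULL));       /* 1.3300000000000001
                                                    via the binary
                                                    detour */
      return 0;
  }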
Later, when it comes to internal calculations, yet more deficiencies
appear (with both encodings; but decimal is more robust in the basic
operations, whereas in binary the previous conversion errors
contribute to further degradation).
(I completely left out algorithmic error management (numerics) here,
because it applies in principle to all algorithms, [mostly]
independently of the encoding; covering it would go too far.)
BTW, not only mainframes and the major programming languages used
for financial software supported decimal encoding; pocket
calculators did that, too. (For example, the BASIC-programmable,
interactively usable Sharp PC-1401 supported real-number processing
using decimal encoding: 10 visible BCD digits, plus 2 "hidden" guard
digits for internal rounding, a 2-digit exponent, plus sign
information etc., all in all 8 bytes; implemented with in-memory
calculations, not done in registers.)
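Just to illustrate the storage idea (the actual Sharp layout is not
claimed here), the usual packed-BCD scheme puts two decimal digits
into each byte:

  #include <stdio.h>
  #include <string.h>

  /* Pack a digit string into BCD, two digits per byte, high nibble
     first. Illustration only; no concrete machine's format claimed. */
  static void pack_bcd(const char *digits, unsigned char *out)
  {
      size_t n = strlen(digits);
      for (size_t i = 0; i < n; i += 2) {
          unsigned hi = (unsigned)(digits[i] - '0');
          unsigned lo = (i + 1 < n) ? (unsigned)(digits[i + 1] - '0')
                                    : 0;
          out[i / 2] = (unsigned char)(hi << 4 | lo);
      }
  }

  int main(void)
  {
      unsigned char m[6] = {0};
      pack_bcd("133333333333", m); /* 10 visible + 2 guard digits */
      for (int i = 0; i < 6; i++)
          printf("%02X ", m[i]);   /* 13 33 33 33 33 33 */
      printf("\n");
      return 0;
  }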
Decimal encoding: it's fast, has good properties (WRT errors and
error propagation), but requires more space (in case that matters).
Janis
[...]