Subject : Re: evolution of arithmetic, was bytes, The joy of FORTRAN
From : antispam (at) *nospam* fricas.org (Waldek Hebisch)
Newsgroups : alt.folklore.computers comp.os.linux.misc
Date : 05. Mar 2025, 00:39:32
Organisation : To protect and to server
Message-ID : <vq82vi$1horp$1@paganini.bofh.team>
References : 1 2 3 4 5
User-Agent : tin/2.6.2-20221225 ("Pittyvaich") (Linux/6.1.0-9-amd64 (x86_64))
In alt.folklore.computers John Levine <johnl@taugh.com> wrote:
According to Waldek Hebisch <antispam@fricas.org>:
In alt.folklore.computers Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 2 Mar 2025 14:58:22 -0000 (UTC), Waldek Hebisch wrote:
IBM deemed decimal arithmetic to be necessary for the commercial market.
Interesting to see a recognition nowadays that even scientific users might
be interested in decimal arithmetic, too. Look at how the latest version
of the IEEE 754 spec includes decimal data types in addition to the
traditional binary ones.
Pushing decimal numbers into modern hardware is practical idiocy.
Basically, IBM wants to have a selling point so they pushed inclusion
in standards. ...
Yes, but. Decimal floating point is not the same as binary floating point.
Its goal is to provide predictable decimal rounding, which is important
in many financial calculations. Forty years ago I implemented the financial
functions like bond pricing for Javelin. That required simulating decimal
rounding with binary arithmetic, which was quite painful. If DFP makes it
easier to get correct rounding on zillion dollar financial calculations it
could well be worth the cost.
That's different from BCD integer arithmetic which I agree long ago stopped
making sense.
First, it seems that financial rules specify decimal _fixed_ point.
So there is still extra code needed to get that from floating point.
I did some experiments, and getting decimal rounding from binary
arithmetic was not hard. It did require extra bits in binary
compared to decimal, but "double precision" integer arithmetic
is not that hard and is a standard feature in modern compilers.
Performance of the resulting code was IMO good enough even
for rather high-volume calculations.
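Roughly the kind of thing I mean, as a minimal sketch (this is not
the code from my experiment; the 4-decimal-digit scale and the
GCC/Clang __int128 type are just assumptions for illustration):

#include <cstdint>

// Amounts held as 64-bit integers scaled by 10^4 (4 decimal digits).
using fixed4 = std::int64_t;
constexpr std::int64_t SCALE = 10000;

// Multiply two scaled values and round the result to the nearest
// representable value, ties away from zero.  The 128-bit intermediate
// is the "double precision" integer arithmetic mentioned above:
// the product is exact, so decimal rounding is one add and one divide.
inline fixed4 mul_round(fixed4 a, fixed4 b)
{
    __int128 p = static_cast<__int128>(a) * b;   // exact, scale 10^8
    __int128 half = SCALE / 2;
    return static_cast<fixed4>(p >= 0 ? (p + half) / SCALE
                                      : (p - half) / SCALE);
}

Nothing exotic; the point is only that the double-width intermediate
makes exact decimal rounding straightforward.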
I had a small e-mail exchange with the main pusher of decimal
floating point at IBM. He wrote that he knew what I had found
(and that other people had told him the same), and that the
reason for decimal floating point was that doing things as in my
experiment required hand-written code, that is, there was no
(fast) support in compilers. He somewhat ignored the fact that a
rather small C++ library would give nice high-level fixed point
for C++ code, and that it would be a rather small job to add a
similar thing to other compilers.
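To make the "rather small library" claim concrete, something like the
toy class below (purely illustrative, not the interface of any
existing library) already gives quite usable fixed point in C++, and
it is built on exactly the same rounding trick as the sketch above:

#include <cstdint>
#include <iostream>

// Toy decimal fixed-point type: value = raw / 10^4.
class Decimal4 {
    std::int64_t raw;                          // scaled by 10^4
    static constexpr std::int64_t SCALE = 10000;
public:
    Decimal4() : raw(0) {}
    static Decimal4 from_raw(std::int64_t r) { Decimal4 d; d.raw = r; return d; }
    Decimal4 operator+(Decimal4 o) const { return from_raw(raw + o.raw); }
    Decimal4 operator-(Decimal4 o) const { return from_raw(raw - o.raw); }
    Decimal4 operator*(Decimal4 o) const {     // decimal rounding, ties away from zero
        __int128 p = static_cast<__int128>(raw) * o.raw;   // GCC/Clang, as above
        __int128 half = SCALE / 2;
        return from_raw(static_cast<std::int64_t>(
            p >= 0 ? (p + half) / SCALE : (p - half) / SCALE));
    }
    friend std::ostream& operator<<(std::ostream& os, Decimal4 d) {
        std::int64_t i = d.raw / SCALE, f = d.raw % SCALE;
        if (f < 0) f = -f;                     // (sign of values in (-1,0) simplified)
        char frac[5] = { char('0' + f / 1000), char('0' + f / 100 % 10),
                         char('0' + f / 10 % 10), char('0' + f % 10), '\0' };
        return os << i << '.' << frac;
    }
};

int main()
{
    Decimal4 amount = Decimal4::from_raw(12345678900);   // 1234567.8900
    Decimal4 rate   = Decimal4::from_raw(525);            // 0.0525
    std::cout << amount * rate << "\n";                    // prints 64814.8142
}

A real library would of course add conversions, comparisons and
division, but it stays small.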
So the fact that there was no fast support in compilers indicated
low demand. GNU Cobol had the needed functionality, but the code
was slower, and IIUC there were appropriate Java classes that
supposedly were slower than what was in GNU Cobol. Ada has
decimal fixed point; I do not remember how fast it was.
Java was probably incurable without an extension to the Java spec,
but for the same reason decimal floating point would be no help
for Java from that era. Native compilers had no such limitations,
so clearly their authors did not think that fast decimal fixed
point was important enough.
Anyway, pushing a hardware feature for things easily solvable at
the software (compiler) level does not make much sense, but it
blends well with IBM's marketing strategy. And the pushed decimal
floating point has apparently only one "advantage": it is harder
to do in software than fixed point.
--
Waldek Hebisch