Subject : Re: New milestone float formatting [LoL] (Was: Request for comments, Novacore the sequel to ISO modules)
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups : comp.lang.prolog
Date : 28. Jul 2024, 16:42:47
Message-ID : <v85ld4$ff98$1@solani.org>
References : 1 2 3 4
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.18.2
GNU Prolog seems to still use the non-adaptive
algorithm with a fixed 17 decimal digits of precision.
It could profit from the adaptive algorithm that
arbitrates between 16 and 17 decimal digits:
/* GNU Prolog 1.5.0 */
?- X is 0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1.
X = 0.79999999999999993
?- 0.79999999999999993 == 0.7999999999999999.
Yes
?- X is 23/10.
X = 2.2999999999999998
?- 2.2999999999999998 == 2.3.
Yes
None of these discrepancies are incorrect displays,
since reparsing the decimal numbers shows that they
hit the same floating point values.
But 2.3 would be cuter!
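The reparse claim can also be checked outside Prolog; a quick sketch in JavaScript (whose numbers are the same IEEE 754 doubles) confirms that both decimal spellings parse to the identical float:

```javascript
// Both spellings denote the same IEEE 754 double, so GNU Prolog's
// 17-digit output is not wrong, only longer than necessary.
console.log(Number("0.79999999999999993") === Number("0.7999999999999999")); // true
console.log(Number("2.2999999999999998") === Number("2.3")); // true
```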
Mild Shock wrote:
Further test cases are:
?- X is 370370367037037036703703703670 / 123456789012345678901234567890.
X = 3.0000000000000004.
?- X is 0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1.
X = 0.7999999999999999.
The first test case doesn't work in SWI-Prolog,
since it has recently improved its implementation
of the (/)/2 arithmetic function. In most Prolog
systems, though, we should get the above results,
since neither the division equals 3.0 nor the sum
equals 0.8 when we use floating point numbers,
i.e. when the operands are first converted to
floating point before the division is done.

The adaptive algorithm is more expensive than just
calling num.toPrecision(17). It will at minimum call
num.toPrecision(16) and do the back conversion, i.e.
Number(res). So unparsing has a parsing cost. And for
critical numbers, it has the cost of a second unparsing
via num.toPrecision(17). But I guess we can accept
this little slowdown.
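The arbitration described above can be sketched in a few lines of plain JavaScript. The helper names are my own (hypothetical), and the trailing-zero trimming only handles fixed notation, not exponent notation; it is a sketch of the idea, not a production formatter:

```javascript
// Strip trailing zeros from a fixed-notation string,
// e.g. "2.300000000000000" -> "2.3". Exponent forms pass through.
function trimZeros(s) {
  return s.indexOf('.') >= 0 && s.indexOf('e') < 0
    ? s.replace(/0+$/, '').replace(/\.$/, '')
    : s;
}

// Adaptive formatting: try 16 significant digits first and check,
// via back conversion, that the result round-trips to the same
// double. Only critical numbers fall through to 17 digits.
function adaptiveFormat(num) {
  const s16 = num.toPrecision(16);   // cheap first attempt
  if (Number(s16) === num) {         // unparsing has a parsing cost
    return trimZeros(s16);
  }
  return trimZeros(num.toPrecision(17)); // second unparsing cost
}
```

With this, adaptiveFormat(23/10) yields "2.3" instead of "2.2999999999999998", while a critical number such as the quotient from the first test case still gets its necessary 17 digits.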