Subject: Re: Radians Or Degrees?
From: terje.mathisen (at) *nospam* tmsw.no (Terje Mathisen)
Newsgroups: comp.lang.c comp.arch
Date: 15 Mar 2024, 11:23:45
Organisation : A noiseless patient Spider
Message-ID : <ut17ji$27n6b$1@dont-email.me>
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.18.1
Michael, for the most part I agree with you here, i.e. calculating sin(x) with x larger than 2^53 or so is almost certainly stupid.
Actually using and depending upon the result is even more stupid.
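
To put a number on that, a throwaway C sketch (illustrative only) comparing the spacing of adjacent doubles with the 2*pi period of sin():

#include <math.h>
#include <stdio.h>

#define TWO_PI 6.283185307179586476925287

int main(void)
{
    /* Spacing between adjacent doubles ("ulp") at a few magnitudes,
       compared with the 2*pi period of sin(). */
    double mags[] = { 0x1p50, 0x1p53, 0x1p55, 0x1p60 };

    for (int i = 0; i < 4; i++) {
        double x   = mags[i];
        double ulp = nextafter(x, INFINITY) - x;
        printf("x = 2^%.0f  ulp = %g  ulp/period = %g\n",
               log2(x), ulp, ulp / TWO_PI);
    }
    /* Around 2^53 the ulp is already about a third of a period; above
       roughly 2^55 adjacent doubles are more than a full period apart,
       so the "true" reduced argument is anybody's guess. */
    return 0;
}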
OTOH, it is, and has always been, a core principle of IEEE 754 that the basic operations (FADD/FSUB/FMUL/FDIV/FSQRT) shall assume that the inputs are exact (no fractional ulp uncertainty), and that from that starting point we must deliver a correctly rounded version of the infinitely precise exact result of the operation.
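
That rule is what makes error-free transformations possible at all; a minimal sketch (round-to-nearest assumed): since FMUL returns the correctly rounded exact product, one FMA recovers exactly what was rounded away:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* FMUL returns the correctly rounded exact product of its exact
       inputs, so the part that was rounded away is itself a double and
       a single FMA recovers it exactly. */
    double a = 1.0 / 3.0, b = 3.0;
    double p = a * b;           /* RN(a*b)              */
    double e = fma(a, b, -p);   /* a*b - RN(a*b), exact */
    printf("p = %.17g\ne = %.17g\n", p, e);
    return 0;
}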
Given the latter, it is in fact very tempting to see if that basic result rule could be applied to more of the non-core operations, but I cannot foresee any situation where I would use it myself: If I find myself in a situation where the final fractional ulp is important, then I would far rather switch to doing the operation in fp128.
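
With gcc that fallback can be as simple as the sketch below (assumes libquadmath, i.e. __float128 with sinq() and quadmath_snprintf(); link with -lquadmath). No claim of a correctly rounded double result, just a lot of extra margin:

/* gcc-specific sketch: compute in binary128, round to double once at
   the end.  Link with -lquadmath. */
#include <quadmath.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0e6;                       /* some argument we care about */
    __float128 yq = sinq((__float128) x);   /* ~112-bit significand        */
    double y = (double) yq;                 /* one final rounding          */

    char buf[64];
    quadmath_snprintf(buf, sizeof buf, "%.30Qg", yq);
    printf("binary128: %s\ndouble:    %.17g\n", buf, y);
    return 0;
}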
Terje
Michael S wrote:
On Fri, 23 Feb 2024 11:10:00 +0100
Terje Mathisen <terje.mathisen@tmsw.no> wrote:
MitchAlsup1 wrote:
Steven G. Kargl wrote:
Agreed, a programmer should use what is required by the problem
that they are solving. I'll note that SW implementations have
their sets of tricks (e.g., use of double-double arithmetic to
achieve double precision).
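
(For reference, the core of that double-double trick is the error-free TwoSum step, sketched here in C under round-to-nearest; a real double-double library chains this with an FMA-based exact product.)

#include <stdio.h>

typedef struct { double hi, lo; } dd;   /* unevaluated sum hi + lo */

/* Knuth's TwoSum: hi = RN(a+b), lo = (a+b) - hi, exactly
   (round-to-nearest, no overflow). */
static dd two_sum(double a, double b)
{
    dd r;
    r.hi = a + b;
    double bb = r.hi - a;
    r.lo = (a - (r.hi - bb)) + (b - bb);
    return r;
}

int main(void)
{
    dd s = two_sum(1.0, 0x1p-60);   /* 2^-60 would vanish in a plain add */
    printf("hi = %.17g  lo = %.17g\n", s.hi, s.lo);
    return 0;
}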
>
To get near IEEE desired precision, one HAS TO use more than 754
precision.
>
There are groups who have shown that exactly rounded transcendental
functions are in fact achievable with maybe 3X reduced performance.
>
At what cost in table sizes?
There is a suggestion on the table to make that a (probably optional,
imho) feature for an upcoming IEEE 754 revision.
>
Terje
>
The critical point here is the definition of what is considered exact. If
'exact' is measured only on the y side of y=foo(x), disregarding
possible imprecision on the x side, then you are very likely to end up
with results that are slower to calculate, but not at all more useful
from the point of view of an engineer or physicist. Exactly like Payne-Hanek
or Mitch's equivalent of Payne-Hanek.
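
A tiny C illustration of that point, using whatever sin() the local libm provides:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* At x ~ 2^60 adjacent doubles are 256 radians apart, so a half-ulp
       uncertainty in x already spans dozens of periods; a perfectly
       rounded sin() of the representable x cannot undo that. */
    double x0 = 0x1p60;
    double x1 = nextafter(x0, INFINITY);
    printf("sin(x0) = %+.17f\n", sin(x0));
    printf("sin(x1) = %+.17f\n", sin(x1));
    printf("x1 - x0 = %g radians\n", x1 - x0);
    return 0;
}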
The definition of 'exact' should be:
For any finite-precision function foo(x), let's designate the same
mathematical function calculated with infinite precision as Foo(x).
Let's designate the operation of rounding an infinite-precision number to
the desired finite precision as Rnd(). Rounding is done in to-nearest mode.
Unlike in the case of basic operations, ties are allowed to be broken in
either direction.
The result of y=foo(x) for a finite-precision number x is considered
exact if *at least one* of two conditions is true:
(1=Y-clause) Rnd(Foo(x)) == y
(2=X-clause) There exists an infinite-precision number X for which
both Foo(X) == y and Rnd(X) == x.
As follows from (2), it is possible, and not uncommon, that more than
one finite-precision number y is accepted as an exact result of foo(x).
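
A quick-and-dirty acceptance check along those lines could look like the C sketch below (illustrative only: MPFR at 256 bits stands in for infinite precision, and the X-clause is tested via the endpoints of x's rounding interval, which suffices where Sin is continuous there; link with -lmpfr -lgmp):

#include <math.h>
#include <stdio.h>
#include <mpfr.h>

static int acceptable(double x, double y)
{
    const mpfr_prec_t P = 256;       /* stand-in for "infinite" precision */
    mpfr_t lo, hi, s;
    mpfr_inits2(P, lo, hi, s, (mpfr_ptr) 0);

    /* Y-clause: y equals the correctly rounded Sin of the representable x. */
    mpfr_set_d(s, x, MPFR_RNDN);
    mpfr_sin(s, s, MPFR_RNDN);
    int y_ok = (mpfr_get_d(s, MPFR_RNDN) == y);

    /* X-clause (sketch): the reals that round to x lie between the
       midpoints towards x's two neighbours; if y lies between Sin() of
       those midpoints, some X that rounds to x has Sin(X) == y. */
    mpfr_set_d(lo, x, MPFR_RNDN);
    mpfr_set_d(hi, nextafter(x, INFINITY), MPFR_RNDN);
    mpfr_add(hi, hi, lo, MPFR_RNDN);
    mpfr_div_ui(hi, hi, 2, MPFR_RNDN);           /* upper midpoint, exact */
    mpfr_set_d(s, nextafter(x, -INFINITY), MPFR_RNDN);
    mpfr_add(lo, lo, s, MPFR_RNDN);
    mpfr_div_ui(lo, lo, 2, MPFR_RNDN);           /* lower midpoint, exact */
    mpfr_sin(lo, lo, MPFR_RNDN);
    mpfr_sin(hi, hi, MPFR_RNDN);
    int x_ok = (mpfr_cmp_d(lo, y) <= 0 && mpfr_cmp_d(hi, y) >= 0) ||
               (mpfr_cmp_d(hi, y) <= 0 && mpfr_cmp_d(lo, y) >= 0);

    mpfr_clears(lo, hi, s, (mpfr_ptr) 0);
    return y_ok || x_ok;
}

int main(void)
{
    double x = 0x1p60;
    printf("libm sin(2^60) accepted: %d\n", acceptable(x, sin(x)));
    return 0;
}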
If the Committee omits the 2nd clause then the whole proposition will be
not just useless, but harmful.
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"