Subject: Re: Radians Or Degrees?
From: mitchalsup (at) *nospam* aol.com (MitchAlsup1)
Newsgroups: comp.lang.c comp.arch
Date: 20. Mar 2024, 21:33:44
Organization: Rocksolid Light
Message-ID: <874a1d36b91024222a126fbedb677615@www.novabbs.org>
References: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
User-Agent: Rocksolid Light
Michael S wrote:
On Wed, 20 Mar 2024 09:54:36 -0400
Stefan Monnier <monnier@iro.umontreal.ca> wrote:
[ Their key insight is the idea that to get correct rounding, you
shouldn't try to compute the best approximation of the exact result
and then round, but you should instead try to compute any
approximation whose rounding gives the correct result. ]
My impression was that their performance was good enough that the case
for not-correctly-rounded implementations becomes very weak.
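[ A minimal sketch of the bracketed idea, in C, with an assumed (not
proven) error bound and the slow fallback elided -- accept the fast
approximation only when its whole error interval rounds to one float: ]

#include <math.h>

/* Hypothetical sketch: compute an approximation y of sinf(x) in binary64
 * with an assumed relative error bound, and accept it only when every
 * value in [y - err, y + err] rounds to the same binary32 number, i.e.
 * when the correctly rounded result is already decided. */
float sinf_cr_sketch(float x)
{
    double y   = sin((double)x);      /* fast first step                          */
    double err = 0x1p-50 * fabs(y);   /* assumed bound; a real one must be proven */

    float lo = (float)(y - err);
    float hi = (float)(y + err);
    if (lo == hi)                     /* both interval ends round to the same float */
        return lo;

    /* In the rare remaining cases a slower, more accurate step would be
     * used (elided here). */
    return (float)y;
}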
It all depends on what you compare against.
For a scalar call, for the majority of transcendental functions on the IEEE-754
list, it's probably very easy to get correctly rounded binary32 results
in approximately the same time as results calculated with a max. error of,
say, 0.75 ULP. Especially so if the target machine has fast binary64
arithmetic.
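[ For concreteness, one way a "max. error of 0.75 ULP" figure can be
measured -- only a sketch, using binary64 sin() as a stand-in for the
exact reference; a serious harness would use something like MPFR: ]

#include <math.h>

/* Illustration only: estimate the error of sinf(x) in ULPs, using the
 * binary64 sin() as an approximation of the exact value. */
double sinf_ulp_error(float x)
{
    float  got = sinf(x);
    double ref = sin((double)x);                 /* assumed close enough to exact */
    float  r   = fabsf((float)ref);
    double ulp = nextafterf(r, INFINITY) - r;    /* spacing of binary32 at |ref|  */
    return fabs((double)got - ref) / ulp;
}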
But in practice, when we use lower (than binary64) precision, we often
care about vector performance rather than scalar.
I.e., we care little about the speed of sinf(), but want ippsTone_32f() to be as
fast as possible. In case you wonder, this function is part of the Intel
Performance Primitives and it is FAST. Writing a correctly rounded
function that approaches the speed of this *almost* correctly
rounded routine (I think, for sane input ranges, it's better than
0.55 ULP) would not be easy at all!
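[ For readers who don't know the routine: it fills a buffer with a
sampled sinusoid, roughly dst[i] = magn*sin(2*pi*rfreq*i + phase).
The loop below is only a naive plain-C picture of that workload, not
the IPP interface -- i.e. the per-sample sinf() code that a tuned,
vectorized implementation is measured against: ]

#include <math.h>

/* Not the IPP interface -- just a naive per-sample picture of the
 * workload, one sinf() call per output sample. */
void tone_naive(float *dst, int len, float magn, float rfreq, float phase)
{
    const float two_pi = 6.283185307f;
    for (int i = 0; i < len; i++)
        dst[i] = magn * sinf(two_pi * rfreq * (float)i + phase);
}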
I challenge ANY software version of SIN(), correctly rounded or not,
to compete with my <patented> HW implementations for speed (or even
power).
I don't know what the motivations are/were for the people working
on exact transcendentals, but they have applications unrelated to
the fact that they're "better": the main benefit (from this here
PL guy) is that it gives them a reliable, reproducible semantics.
Bit-for-bit reproducibility makes several things much easier.
Consider moving an application which uses libm from machine to
machine. When libm is correctly rounded, there is no issue at all;
not so otherwise.
Exactly!
[ Or should I say "Correctly rounded!"? ]
Stefan
You like this proposal because you are an implementer of the language/lib.
It makes your regression tests easier. And it's a good challenge.
I don't like it because I am primarily a user of the language/lib. My
floating-point tests have zero chance of repeatability of this sort, for
a thousand other reasons.
I don't want to pay for correct rounding of transcendental functions.
Even when the HW is 7× faster than SW algorithms ??
Neither in speed nor, especially, in table footprint. Not even a
little. Because for me there are no advantages.
My HW algorithms use no memory (cache, DRAM, or even LDs).
Now, there are things that I am ready to pay for. E.g., preservation of
the mathematical properties of the original exact function.
You know, of course, that incorrectly rounded SIN() and COS() do not
maintain the property of SIN()^2+COS()^2 == 1.0
I.e., if the original is
monotonic on a certain interval, then I do want at least weak monotonicity
of the approximation.
My HW algorithms have a numeric proof that this is maintained.
If the original is even (or odd), I want the same for the
approximation. If the original never exceeds 1, I want the same
for the approximation. Etc... But correct rounding is not on the list.
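[ These properties are cheap to check empirically. A small,
self-contained sketch of such a check for the local sinf()/cosf() is
below; the interval and sample count are arbitrary illustrative
choices: ]

#include <math.h>
#include <stdio.h>

/* Sweep [0, pi/2] and count violations of the properties listed above
 * for the local sinf()/cosf(): |sin|,|cos| <= 1, odd symmetry of sin,
 * weak monotonicity of sin, and drift of sin^2 + cos^2 from 1. */
int main(void)
{
    const int N = 100000;
    float  prev = 0.0f;
    double worst_pyth = 0.0;
    int    bad_bound = 0, bad_odd = 0, bad_mono = 0;

    for (int i = 0; i <= N; i++) {
        float x = (float)i * (1.5707963f / (float)N);
        float s = sinf(x), c = cosf(x);

        if (fabsf(s) > 1.0f || fabsf(c) > 1.0f) bad_bound++;
        if (sinf(-x) != -s)                     bad_odd++;
        if (s < prev)                           bad_mono++;   /* weak monotonicity */
        prev = s;

        double p = (double)s * s + (double)c * c;
        if (fabs(p - 1.0) > worst_pyth) worst_pyth = fabs(p - 1.0);
    }
    printf("bound: %d  odd: %d  monotonicity: %d  max |s^2+c^2-1| = %g\n",
           bad_bound, bad_odd, bad_mono, worst_pyth);
    return 0;
}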
SIN(x) can be incorrectly rounded to be greater than 1.0.
Still want incorrect rounding--or just a polynomial that does not
have SIN(x) > 1.0 ??