Subject: Re: Radians Or Degrees?
From: chris.m.thomasson.1 (at) *nospam* gmail.com (Chris M. Thomasson)
Newsgroups: comp.arch
Date: 14 Mar 2024, 23:06:55
Organization: A noiseless patient Spider
Message-ID: <usvse1$1s78k$1@dont-email.me>
User-Agent: Mozilla Thunderbird
On 3/14/2024 10:28 AM, MitchAlsup1 wrote:
Terje Mathisen wrote:
Michael S wrote:
On Fri, 23 Feb 2024 11:01:02 +0100
Terje Mathisen <terje.mathisen@tmsw.no> wrote:
>
Michael S wrote:
On Thu, 22 Feb 2024 21:04:52 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>
π radians = half a circle. Are there any other examples of the
usefulness of half-circles as an angle unit? As opposed to the
dozens or hundreds of examples of the usefulness of radians as an
angle unit?
>
In digital signal processing, circle-based units are pretty much
always more natural than radians.
For the specific case of 1/2 circle, I can't see where it can be used
directly.
From an algorithmic perspective, the full circle looks like the most
obvious choice.
From a [binary floating point] numerical-properties perspective,
1/8th of the circle (== pi/4 radians = 45 degrees) is probably the
best option for a library routine, because for sin() its derivative
at 0 is the closest to 1 among all power-of-two fractions of the
circle, which means that the loss of precision near 0 is very similar
for input and for output. But this advantage does not sound like a
particularly big deal.
>
ieee754 defines sinpi() and siblings, but imho it really doesn't
matter if you use circles, half-circles (i.e. sinpi) or some other
binary fraction of a circle: argument reduction for huge inputs is
just as easy; you might just have to multiply by the corresponding
power of two (i.e. adjust the exponent) before extracting the
fractional term.
>
For sinpi(x) I could do it like this:
>
if (abs(x) >= two_to_52nd_power) error("Zero significant bits.");
ix = int(x);
x_reduced = x - (double) (ix & ~1);
if (x_reduced < 0.0) x_reduced += 2.0;
>
but it is probably better to return a value in the [-1.0 .. 1.0>
range?
>
Terje
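
A concrete, compilable reading of that pseudocode, as one possible
sketch (the function name and the choice to fold into [0, 2) are
assumptions here; the truncating cast goes through int64_t, since |x|
can be as large as 2^52):

#include <math.h>
#include <stdint.h>

/* Fold x into [0, 2), the period of sinpi(x) = sin(pi*x).
   For |x| >= 2^52 every double is already an integer, so there is no
   fractional part left and sin(pi*x) is exactly 0 anyway. */
static double reduce_sinpi_arg(double x)
{
    if (fabs(x) >= 0x1p52)
        return 0.0;                              /* zero significant fraction bits */

    int64_t ix = (int64_t)x;                     /* truncate toward zero */
    double r = x - (double)(ix & ~(int64_t)1);   /* subtract the even part; exact */
    if (r < 0.0)
        r += 2.0;                                /* fold negatives into [0, 2) */
    return r;
}

Evaluating sin(pi * r) on the reduced value (or dispatching on its
quadrant) then gives sinpi(x).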
>
>
Both you and Mitch look at it from the wrong perspective.
When we define a library API, ease of implementation of the library
function should be pretty low on our priority scale. As long as a
reasonably precise calculation is theoretically possible, we should
give credit to the intelligence of the implementor, that's all.
The real concern of the API designer should be avoiding loss of
precision in the preparation of inputs and in the use of outputs.
In the specific case of y = sin2pi(x), it is x that is more
problematic, because near 0 it starts to lose precision about 3
octaves before y does (near 0, y ~= 2*pi*x, so x reaches the
subnormal range log2(2*pi) ~= 2.65 binades before y would). In the
subnormal range we lose ~2.5 bits of precision in the preparation of
the argument. An implementation, no matter how good, can't recover
what's already lost.
sinpi() is slightly better, but only slightly; not enough to justify
its less natural semantics.
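
As a small stand-alone illustration of that point (not from the
original post): scale a full 53-bit significand down three binades
into the subnormal range and the low bits are rounded away before any
sin2pi() implementation ever sees the argument.

#include <stdio.h>

int main(void)
{
    double t    = 0x1.123456789abcdp0;  /* an arbitrary full 53-bit significand */
    double x    = t * 0x1p-1025;        /* argument ~3 binades into the subnormals */
    double back = x * 0x1p+1025;        /* exact rescale, to inspect what survived */

    printf("t    = %a\n", t);
    printf("back = %a\n", back);        /* low-order bits differ: lost on input */
    return 0;
}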
>
My suggestion from yesterday is a step in the right direction, but
today I think it is not a sufficiently radical step. In the specific
case of sin/cos there is no good reason to match the loss of
precision on the input with the loss of precision on the output.
<Thinking out loud>
There are advantages in matched loss of precision for input and for
output when both input and output occupy the full range of real
numbers (ignoring sign). The output of sin/cos does not occupy the
full range, but for tan() it does. So maybe there is one good reason
for matching input with output for sin/cos: consistency with tan.
</Thinking out loud>
So, ignoring tan(), what is really the optimal input scaling for
sin/cos inputs? Today I think that it is a scaling in which the full
circle corresponds to 2**64. With such a scaling you never lose any
input precision to subnormals before the precision of the result is
lost completely.
Now, one could ask "Why 2**64, why not 2**56, which has the same
property?". My answer is "Because 2**64 is the scaling that is most
convenient for the preparation of trig arguments in fixed point [on a
64-bit computer]." I.e. apart from being a good scaling for avoiding
loss of precision in the tiny range, it happens to be the best
scaling for interoperability with fixed point.
That is my answer today. Will it hold tomorrow? Tomorrow will tell.
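
As a rough sketch of what that fixed-point interoperability looks
like (not from the post; TWO_PI, the helper names, and the final call
to the standard sin() are assumptions): hold the angle in a uint64_t
where one full circle is exactly 2**64 counts, so a DDS/NCO phase
accumulator wraps (i.e. reduces the argument) for free, and the
conversion to a binary fraction of the circle is a single scaling by
2^-64.

#include <stdint.h>
#include <math.h>

static const double TWO_PI = 6.283185307179586476925286766559;

/* 64-bit fixed-point angle: one full circle == 2^64 counts.
   Small phases convert to double exactly (no subnormal loss);
   large phases round to double's 53 bits, which is all a double
   result could carry anyway. */
static double phase_to_turns(uint64_t phase)
{
    return (double)phase * 0x1p-64;   /* scaling into [0, 1) */
}

/* DDS-style oscillator step: the unsigned add wraps modulo 2^64,
   which is exactly the circle-based argument reduction we want. */
static double next_sample(uint64_t *phase, uint64_t phase_step)
{
    *phase += phase_step;
    return sin(TWO_PI * phase_to_turns(*phase));
}

For a tone at frequency f with sample rate fs (f < fs/2), phase_step
would be roughly (uint64_t)(f / fs * 0x1p64).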
This is, except for today being a 64-bit world as opposed to 30 years earlier, exactly the same reasoning Garmin's programmers used when they decided that all their lat/long calculations would use 2^32 as the full circle.
With a signed 32-bit int you get a resolution of 40e6 m / 2^32 = 0.0093 m, or 9.3 mm, which they considered more than good enough back in the days of SA and its ~100 m RMS noise; and even after Clinton got rid of that (May 2, 2000), sub-cm GPS is very rarely available.
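
For concreteness, a minimal sketch of that representation (my own,
assuming the ~40e6 m circumference above and angles in the
[-180, 180) degree range):

#include <stdint.h>
#include <math.h>

/* Signed 32-bit angle: one full circle == 2^32 counts. */
static int32_t degrees_to_fixed32(double deg)   /* deg assumed in [-180, 180) */
{
    return (int32_t)llround(deg * (4294967296.0 / 360.0));
}

static double fixed32_to_degrees(int32_t a)
{
    return (double)a * (360.0 / 4294967296.0);
}

/* One LSB along a meridian, with the circumference taken as ~40e6 m:
   40e6 / 2^32 ~= 0.0093 m, i.e. the 9.3 mm figure above. */
static double fixed32_lsb_in_metres(void)
{
    return 40.0e6 / 4294967296.0;
}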
The drift rate of the oscillators in the satellites is such that it never will be.
The clock drift is reset every orbit.
Also note: hackers are now using ground-based GPS transmitters to alter
where your GPS receiver calculates you are. This is most annoying
around airports, where planes use GPS to auto-guide themselves to runways.
GPS Spoofing?
Doing the same with 64-bit means that you get a resolution of 2.17e-12 m, which is 2.17 picometers or 0.0217 Å, significantly smaller than a single H atom, which is about 1 Å in size.
And yet, driving by Edwards AFB, sometimes my car's GPS shows me 50 m off the
interstate-quality road, and sometimes it does not.
Terje