Subject : Re: Making Lemonade (Floating-point format changes)
From : tkoenig (at) *nospam* netcologne.de (Thomas Koenig)
Groups : comp.arch
Date : 12. May 2024, 21:55:03
Organisation : A noiseless patient Spider
Message-ID : <v1rab7$2vt3u$1@dont-email.me>
References : 1 2
User-Agent : slrn/1.0.3 (Linux)

John Dallman <jgd@cix.co.uk> wrote:
> In article <abe04jhkngt2uun1e7ict8vmf1fq8p7rnm@4ax.com>,
> quadibloc@servername.invalid (John Savard) wrote:
>
>> I'm not really sure such floating-point precision is useful, but I
>> do remember some people telling me that higher float precision is
>> indeed something to be desired.
>
> I would be in favour of 128-bit being available.

Me, too. Solving tricky linear systems, or obtaining derivatives
numerically (for example for Jacobians) eats up a _lot_ of precision
bits, and double precision can sometimes run into trouble.
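
To make the derivative point concrete, a quick sketch in Julia (of
which more below). A forward difference loses about half of the
working precision to cancellation, so its best achievable error
scales like sqrt(eps):

  # Forward-difference derivative of sin at x = 1 (exact: cos(1)).
  # h = sqrt(eps(T)) roughly balances truncation error against the
  # roundoff from the subtraction, so about half the bits are lost.
  fd(f, x, h) = (f(x + h) - f(x)) / h

  for T in (Float32, Float64, BigFloat)
      x = T(1)
      h = sqrt(eps(T))
      println(T, ": error = ", Float64(abs(fd(sin, x, h) - cos(x))))
  end
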
At least gcc and gfortran now support POWER's native 128-bit format
in hardware. On other systems, software emulation is used, which
is of course much slower.
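
That format can also be reached from Julia, by the way; a minimal
sketch, assuming the Quadmath.jl package (a wrapper around gcc's
libquadmath):

  using Quadmath          # assumed package; provides Float128 (IEEE binary128)

  x = Float128(1) / Float128(3)
  println(x)              # about 36 significant decimal digits
  println(eps(Float128))  # machine epsilon, 2^-112, roughly 1.9e-34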

> I'm not sure my field has need for 256- or 512-bit, but that
> doesn't mean that nobody has.

I've finally found the time to play around with Julia in the last
few weeks. One of the nice things it does is that you can just use
the same packages with different numerical types, for example for
ODE integration. Just set up the problem as you would normally
and supply a starting vector with a different precision.
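
Something like this (a sketch, assuming the OrdinaryDiffEq.jl
solver package from the DifferentialEquations.jl family):

  using OrdinaryDiffEq   # assumed package name

  # Exponential decay u' = -u, u(0) = 1, integrated to t = 1.
  # The solver computes in whatever numeric type the starting
  # vector and time span carry; nothing else changes.
  f(u, p, t) = -u

  for T in (Float32, Float64, BigFloat)
      prob = ODEProblem(f, T[1], (T(0), T(1)))
      sol  = solve(prob, Tsit5())   # tighten abstol/reltol to use the extra digits
      println(T, ": u(1) = ", sol.u[end], "   exact: ", exp(-T(1)))
  end
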
So, for experimenting with different numerical data types, Julia
is quite nice.