Subject : Re: Parsing timestamps?
From : minforth (at) *nospam* gmx.net (minforth)
Newsgroups : comp.lang.forth
Date : 10. Jul 2025, 06:37:02
Message-ID : <md91rtFtejdU1@mid.individual.net>
References : 1 2 3 4 5 6 7 8 9 10 11
User-Agent : Mozilla Thunderbird
On 10.07.2025 at 06:32, Paul Rubin wrote:
> minforth <minforth@gmx.net> writes:
>> You don't need 64-bit doubles for signal or image processing.
>> Most vector/matrix operations on streaming data don't require
>> them either. Whether SSE2 is adequate or not to handle such data
>> depends on the application.
> Sure, and for that matter, AI inference uses 8 bit and even 4 bit
> floating point.
Or fuzzy control for instance.
Kahan on the other hand was interested in engineering
and scientific applications like PDE solvers (airfoils, fluid dynamics,
FEM, etc.). That's an area where roundoff error builds up over many
iterations, hence his interest in extended precision.
That's why I use Kahan summation for dot products. It is slow, but
rounding-error accumulation remains small. A while ago I read an
article on this issue in which the author(s) ran extensive tests of
different dot-product algorithms on many serial data sets from
finance, geology, the oil industry, meteorology, etc. Their target
criterion was an acceptable balance between computational speed and
minimal error.
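A minimal C sketch of the Kahan-compensated dot product mentioned above (function name and layout are mine, not taken from the article):

```c
#include <stddef.h>

/* Kahan-compensated dot product: the compensation term `c` captures
   the low-order bits lost each time `sum` absorbs a product, and
   feeds them back into the next addition. */
double dot_kahan(const double *x, const double *y, size_t n) {
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        double prod = x[i] * y[i];
        double t = prod - c;   /* corrected term */
        double s = sum + t;    /* low bits of t are lost here... */
        c = (s - sum) - t;     /* ...and recovered here */
        sum = s;
    }
    return sum;
}
```

Note the data dependence of `c` on every iteration: this is what makes plain Kahan summation slow, since it defeats out-of-order execution and vectorization.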
The 'winner' was a chained fused-multiply-add algorithm (many
CPUs/GPUs can perform FMA in hardware), which also makes for shorter
code (good for caching). It parallelizes well, too: recursively halve
the data sets until the slices reach a manageable vector size, then
compute the slices in parallel.
I don't do parallelization, but I was still surprised by how good the
FMA results were. In other words, increasing the floating-point number
size is not always the way to go. Anyhow, the first step is to select
the best fp rounding method ....