Subject : Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
From : 186283 (at) *nospam* ud0s4.net (186282@ud0s4.net)
Newsgroups : comp.os.linux.misc
Date : 16 Oct 2024, 07:38:08
Organization : wokiesux
Message-ID : <VEGdnTMGMJLMwpL6nZ2dnZfqn_adnZ2d@earthlink.com>
References : 1 2 3 4
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0
On 10/15/24 7:06 AM, The Natural Philosopher wrote:
> On 15/10/2024 07:43, 186282@ud0s4.net wrote:
>> The question is how EXACT the precision HAS to be for
>> most "AI" uses. Might be safe to throw away a few
>> decimal points at the bottom.
> My thesis is that *in some applications*, more low-quality calculations beat fewer high-quality ones anyway.
> I wasn't thinking of AI so much as modelling complex turbulent flow in aero- and hydrodynamics, or weather forecasting.
Well, for weather, any decimal points are BS anyway :-)
However, for AI, fuzzy logic, and neural networks it
has just been standard practice to use floats for
all values. I've got books going back to the mid-80s
on all of those, and you JUST USED floats.
BUT ... as said, even a 32-bit int can hold fairly
large values. Multiply small values by 100 or 1000 and
you can throw away the need for decimal points - and the
POWER required to do float calcs. Accuracy should be
more than adequate.
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......