Subject : Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
From : 186283 (at) *nospam* ud0s4.net (186282@ud0s4.net)
Newsgroups : comp.os.linux.misc
Date : 15 Oct 2024, 07:31:58
Organisation : wokiesux
Message-ID : <Gv-dnUVOoczDkZP6nZ2dnZfqn_UAAAAA@earthlink.com>
References : 1 2 3 4 5
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0
On 10/14/24 6:16 AM, The Natural Philosopher wrote:
On 13/10/2024 14:23, Pancho wrote:
On 10/13/24 13:25, The Natural Philosopher wrote:
On 13/10/2024 10:15, Richard Kettlewell wrote:
"186282@ud0s4.net" <186283@ud0s4.net> writes:
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
>
But, floating-point operations use a huge amount of
CPU/NPU power.
>
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
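
What "use large integers instead" usually amounts to in the NN world is quantization: pick a scale, round the weights and activations to small signed integers, do the multiply-accumulates in integer, and bring the float scales back in exactly once at the end. A toy Python sketch of the idea (my own, not any particular framework's scheme):

import random

random.seed(1)
w = [random.gauss(0.0, 1.0) for _ in range(256)]   # stand-in "weights"
x = [random.gauss(0.0, 1.0) for _ in range(256)]   # stand-in "activations"

def quantize(vals, bits=8):
    # Symmetric quantization: map the float range onto signed bits-wide ints.
    scale = max(abs(v) for v in vals) / (2 ** (bits - 1) - 1)
    return [round(v / scale) for v in vals], scale

qw, sw = quantize(w)
qx, sx = quantize(x)

# The dot product itself is now pure integer multiply-accumulate;
# the float scale factors come back exactly once, at the end.
int_acc = sum(a * b for a, b in zip(qw, qx))
print("float   :", sum(a * b for a, b in zip(w, x)))
print("int8-ish:", int_acc * sw * sx)
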
>
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
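
If I'm reading that right, it's essentially Mitchell's old binary-logarithm trick from the early 60s: split each float into an exponent and a fractional mantissa, add the exponents, add the fractions, and drop the fraction-times-fraction cross term. A rough Python sketch of the idea (mine, not the paper's actual algorithm, which presumably works on the raw bit fields and adds a small correction term):

import math

def approx_mul(x, y):
    # Approximate x*y by adding, never multiplying, the mantissa fractions.
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = -1.0 if (x < 0) != (y < 0) else 1.0
    mx, ex = math.frexp(abs(x))               # |x| = mx * 2**ex, mx in [0.5, 1)
    my, ey = math.frexp(abs(y))
    fx, fy = 2.0 * mx - 1.0, 2.0 * my - 1.0   # |x| = (1 + fx) * 2**(ex - 1)
    e = (ex - 1) + (ey - 1)
    # Exact mantissa product is (1 + fx)*(1 + fy) = 1 + fx + fy + fx*fy;
    # the approximation simply drops the fx*fy cross term.
    return sign * (1.0 + fx + fy) * 2.0 ** e

for a, b in [(3.0, 5.0), (1.9, 1.9), (1.0, 7.0), (-2.5, 1.3)]:
    exact, approx = a * b, approx_mul(a, b)
    print(a, "*", b, "=", exact, "~", approx,
          "rel err %.1f%%" % (100 * abs(approx - exact) / abs(exact)))

Worst case (both fractions near 1) it's off by roughly 25%; on typical values it's a few percent, which is apparently tolerable inside a net full of already-approximate weights.
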
>
Last I heard, they were going to use D-to-As feeding analog multipliers, then convert back to digital afterwards, as a speed/precision tradeoff.
>
That sounds like the 1960s. I guess this idea does sound like a slide rule.
No, apparently it's a new (sic!) idea.
I think that even if it does not work out, it is great that people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating complex dynamic systems.
Yea, but not much PRECISION beyond a stage or two
of calc :-)
No "perfect" fixes.