Subject : Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
From : tnp (at) *nospam* invalid.invalid (The Natural Philosopher)
Newsgroups : comp.os.linux.misc
Date : 14 Oct 2024, 11:16:48
Organisation : A little, after lunch
Message-ID : <veir2g$156pd$5@dont-email.me>
References : 1 2 3 4
User-Agent : Mozilla Thunderbird
On 13/10/2024 14:23, Pancho wrote:
On 10/13/24 13:25, The Natural Philosopher wrote:
On 13/10/2024 10:15, Richard Kettlewell wrote:
"186282@ud0s4.net" <186283@ud0s4.net> writes:
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
>
But, floating-point operations use a huge amount of
CPU/NPU power.
>
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
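[The "large integers instead of floats" idea being alluded to is integer quantization, as used in int8 inference. A minimal sketch of that idea, with illustrative names and hand-picked values (not from the paper under discussion): scale each float vector onto int8, do the dot product entirely in integer arithmetic, and rescale once at the end.]

```python
import numpy as np

def quantize(x, scale):
    # Map floats onto signed 8-bit integers; scale is chosen so the
    # largest magnitude in x lands at +/-127.
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Two small example vectors standing in for weights and activations.
a = np.array([0.12, -0.5, 0.33])
b = np.array([0.7, 0.1, -0.2])

scale_a = np.abs(a).max() / 127
scale_b = np.abs(b).max() / 127

qa, qb = quantize(a, scale_a), quantize(b, scale_b)

# The hot loop is a pure integer dot product (widened to int32 so the
# accumulator cannot overflow); one float rescale recovers the result.
approx = int(np.dot(qa.astype(np.int32), qb.astype(np.int32))) * scale_a * scale_b
exact = float(np.dot(a, b))
print(approx, exact)  # close, but not bit-identical
```

The point is that all the per-element work is integer multiply-accumulate, which is cheap in silicon; the float operations are reduced to one scale per tensor.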
>
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
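[A rough sketch of that trick, assuming the scheme works as Kettlewell describes it (the paper also adds a small correction offset, omitted here): write each operand as (1 + f) * 2^e with f in [0, 1), then approximate (1 + fx)(1 + fy) by 1 + fx + fy, dropping the fx*fy cross term. The multiply collapses into two additions. Function and variable names are illustrative.]

```python
import math

def approx_mul(x, y):
    # Decompose each float: math.frexp gives x = mx * 2**ex with |mx| in [0.5, 1).
    mx, ex = math.frexp(x)
    my, ey = math.frexp(y)
    sign = math.copysign(1.0, mx) * math.copysign(1.0, my)
    # Shift mantissas into the usual IEEE-style range [1, 2).
    mx, my = abs(mx) * 2, abs(my) * 2
    ex, ey = ex - 1, ey - 1
    fx, fy = mx - 1, my - 1            # fractional parts, each in [0, 1)
    # Key approximation: (1+fx)*(1+fy) ~ 1 + fx + fy. No mantissa multiply:
    # the whole operation is exponent addition plus fraction addition.
    m = 1 + fx + fy
    e = ex + ey
    if m >= 2:                         # renormalise if the sum overflows [1, 2)
        m /= 2
        e += 1
    return sign * m * 2.0 ** e

print(approx_mul(3.0, 5.0))  # 14.0 -- the dropped cross term costs 0.5*0.25*8 = 1
```

Because the dropped fx*fy term is always non-negative, this approximation never overshoots the true product; the claimed win is that adders are far cheaper in energy than multipliers.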
>
Last I heard they were going to use D-to-A converters feeding analogue multipliers, then convert back to digital afterwards, as a speed/precision tradeoff.
>
That sounds like the 1960s. I guess this idea does sound like a slide rule.
No, apparently it's a new (sic!) idea.
I think that even if it does not work successfully it is great that people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating complex dynamic systems.
--
There’s a mighty big difference between good, sound reasons and reasons that sound good.
   Burton Hillis (William Vaughn, American columnist)