Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints

Subject: Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
From: Pancho.Jones (at) *nospam* proton.me (Pancho)
Newsgroups: comp.os.linux.misc
Date: 13 Oct 2024, 14:23:25
Organization: A noiseless patient Spider
Message-ID: <veghkd$mhii$1@dont-email.me>
References: 1 2 3
User-Agent: Mozilla Thunderbird
On 10/13/24 13:25, The Natural Philosopher wrote:
> On 13/10/2024 10:15, Richard Kettlewell wrote:
>> "186282@ud0s4.net" <186283@ud0s4.net> writes:
>>> https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
>>> [...]
>>>
>>> The default use of floating-point really took off when 'neural
>>> networks' became popular in the 80s. It seemed the ideal way to keep
>>> track of all the various weightings and values.
>>>
>>> But floating-point operations use a huge amount of CPU/NPU power.
>>>
>>> Seems somebody finally realized that the 'extra resolution' of
>>> floating-point was rarely necessary and you can just use large
>>> integers instead. Integer math is FAST and uses LITTLE power.
>>
>> That’s situational. In this case, the paper isn’t about using large
>> integers, it’s about very low-precision floating-point representations.
>> They’ve just found a way to approximate floating-point multiplication
>> without multiplying the fractional parts of the mantissas.
>
> Last I heard, they were going to use D-to-As feeding analogue
> multipliers, and convert back to digital afterwards, for a
> speed/precision trade-off.
 
That sounds like the 1960s. I guess this idea is rather like a slide rule.
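
The slide-rule comparison isn't far off. As a rough illustration (my own
sketch, not the method from the paper, which as I understand it also adds
a small correction term and targets very low-precision formats): for
positive IEEE-754 floats, simply adding the raw bit patterns as integers
adds the exponents exactly and adds the mantissa fractions instead of
multiplying them, so (1+a)*(1+b) is approximated by (1+a+b). That is the
old Mitchell-style logarithmic multiply, with a worst-case error of
roughly 11%:

import struct

# Rough sketch, assuming positive, normal float32 inputs.  One integer
# addition replaces the mantissa multiply.
BIAS = 127 << 23                      # float32 exponent bias, in place

def bits(x):
    return struct.unpack("<I", struct.pack("<f", x))[0]

def unbits(b):
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_mul(x, y):
    """Approximate x*y for positive floats with one integer add."""
    return unbits(bits(x) + bits(y) - BIAS)

for a, b in [(3.0, 5.0), (1.5, 1.5), (0.1, 40.0)]:
    print(a, "*", b, "=", a * b, "~", approx_mul(a, b))

It's exact whenever a mantissa fraction is zero (powers of two) and worst
when both fractions sit near the middle of the range, which is why a small
fixed offset term can pull the average error down considerably.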
