Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints

Subject: Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
From: invalid (at) *nospam* invalid.invalid (Richard Kettlewell)
Newsgroups: comp.os.linux.misc
Date: 13 Oct 2024, 10:15:54
Organization: terraraq NNTP server
Message-ID: <wwv5xpw8it1.fsf@LkoBDZeT.terraraq.uk>
References: 1
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
"186282@ud0s4.net" <186283@ud0s4.net> writes:
> https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
> [...]
> The default use of floating-point really took off when
> 'neural networks' became popular in the 80s. Seemed the
> ideal way to keep track of all the various weightings
> and values.
>
> But, floating-point operations use a huge amount of
> CPU/NPU power.
>
> Seems somebody finally realized that the 'extra resolution'
> of floating-point was rarely necessary and you can just
> use large integers instead. Integer math is FAST and uses
> LITTLE power .....

That’s situational. In this case, the paper isn’t about using large
integers; it’s about very low-precision floating-point representations.
They’ve just found a way to approximate floating-point multiplication
without multiplying the fractional parts of the mantissas.
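The trick rests on how IEEE-754 encodes a value as (1+m)*2^e: since
(1+a)*(1+b) is roughly 1+a+b when a,b are fractions in [0,1), you can
add exponents and add mantissa fractions instead of multiplying. Below
is a minimal Python sketch of that general idea (Mitchell-style
approximation via integer addition of the raw bit patterns); it is an
illustration under my own assumptions, not the paper's exact algorithm,
and the helper names are mine:

  import struct

  def float_bits(x):
      # Reinterpret an IEEE-754 single-precision float as a 32-bit integer.
      return struct.unpack("<I", struct.pack("<f", x))[0]

  def bits_float(b):
      # Reinterpret a 32-bit integer as an IEEE-754 single-precision float.
      return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

  def approx_mul(x, y):
      # Approximate x*y for positive normal floats by *adding* the raw
      # bit patterns and subtracting one exponent bias (127 << 23).
      # Exponents add exactly; mantissa fractions add instead of
      # multiplying, per (1+a)*(1+b) ~ 1+a+b. A carry out of the
      # mantissa field conveniently bumps the exponent by one.
      BIAS = 127 << 23
      return bits_float(float_bits(x) + float_bits(y) - BIAS)

  for x, y in [(3.0, 5.0), (1.5, 1.5), (0.1, 250.0)]:
      exact, approx = x * y, approx_mul(x, y)
      print("%g * %g: exact %g, approx %g, error %+.1f%%"
            % (x, y, exact, approx, 100 * (approx - exact) / exact))

The worst-case error of this crude version is about -11% (when both
mantissa fractions are 0.5). As I understand it, the published method
adds a small correction offset and operates at much lower precision
than float32, but the add-instead-of-multiply idea is the same.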

--
https://www.greenend.org.uk/rjk/
