"186282@ud0s4.net" <186283@ud0s4.net> writes:
> They need to take it further - integers instead
>
> https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
>
> [...]
>
> The default use of floating-point really took off when
> 'neural networks' became popular in the 80s. Seemed the
> ideal way to keep track of all the various weightings
> and values.
>
> But, floating-point operations use a huge amount of
> CPU/NPU power.
>
> Seems somebody finally realized that the 'extra resolution'
> of floating-point was rarely necessary and you can just
> use large integers instead. Integer math is FAST and uses
> LITTLE power .....

That's situational. In this case, the paper isn't about using large
integers, it's about very low precision floating point representations.
They've just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
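To make that concrete, here is a minimal sketch of the underlying idea (a Mitchell-style approximation; the paper's actual L-Mul algorithm adds a small correction offset, which is omitted here). For a = (1+fa)*2^ea and b = (1+fb)*2^eb, the exact product is (1 + fa + fb + fa*fb)*2^(ea+eb); dropping the fa*fb cross term leaves only additions. The function name `approx_mul` is my own, not from the paper.

```python
import math

def approx_mul(a: float, b: float) -> float:
    """Approximate a*b by ADDING exponents and fractional parts.

    No mantissa multiplication is performed; the only 'multiplies'
    left are sign handling and a power-of-two scale (ldexp).
    Assumes normal (non-NaN, non-infinite) inputs.
    """
    if a == 0.0 or b == 0.0:
        return 0.0
    ma, ea = math.frexp(a)        # a = ma * 2^ea with 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)
    sign = math.copysign(1.0, ma) * math.copysign(1.0, mb)
    fa = abs(ma) * 2.0 - 1.0      # fractional part after implicit leading 1
    fb = abs(mb) * 2.0 - 1.0
    f = fa + fb                   # additive stand-in for (1+fa)(1+fb)
    e = ea + eb - 2
    if f >= 1.0:                  # renormalise if the fraction sum overflows
        f -= 1.0
        e += 1
    return sign * math.ldexp(1.0 + f, e)
```

For example, approx_mul(3.0, 5.0) gives 14.0 against a true product of 15.0; the dropped fa*fb term bounds the relative error at roughly 11%, and the result is exact whenever either operand is a power of two. That error is tolerable precisely because, as noted above, low-precision weights don't need the extra resolution anyway.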