On 10/13/24 13:25, The Natural Philosopher wrote:
> No, apparently its a new (sic!) idea.
>
> On 13/10/2024 10:15, Richard Kettlewell wrote:
>> That sounds like the 1960s.
>
> I guess this idea does sound like a slide rule.

"186282@ud0s4.net" <186283@ud0s4.net> writes:
> Last I heard they were going to use D-to-As feeding analog
> multipliers, and convert back to D afterwards, for a
> speed/precision tradeoff.
>
> https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
> [...]
> The default use of floating-point really took off when
> 'neural networks' became popular in the 80s. Seemed the
> ideal way to keep track of all the various weightings
> and values.
>
> But, floating-point operations use a huge amount of
> CPU/NPU power.
>
> Seems somebody finally realized that the 'extra resolution'
> of floating-point was rarely necessary and you can just
> use large integers instead. Integer math is FAST and uses
> LITTLE power.
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
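To illustrate the idea (this is my own sketch of the general technique, not the paper's exact algorithm): if you reinterpret the bit patterns of two positive IEEE 754 floats as integers and simply add them, the exponent fields add exactly and the mantissa fractions add linearly, which approximates multiplication because (1+x)(1+y) ≈ 1+x+y for small x, y. Subtracting the bias of 1.0 (0x3F800000) re-centres the exponent. Worst-case relative error is about 11%, with exact results for powers of two.

```python
import struct

def f2i(x: float) -> int:
    """Reinterpret a float32's bit pattern as an unsigned integer."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def i2f(n: int) -> float:
    """Reinterpret an unsigned integer's bit pattern as a float32."""
    return struct.unpack("<f", struct.pack("<I", n & 0xFFFFFFFF))[0]

BIAS = 127 << 23  # bit pattern of 1.0, i.e. 0x3F800000

def approx_mul(a: float, b: float) -> float:
    """Approximate a*b for positive floats using one integer addition.

    Adding the bit patterns adds the exponents exactly and the
    mantissa fractions linearly; a carry out of the mantissa field
    spills into the exponent, which is exactly what we want when
    the fractions sum past 1.0. No mantissa multiply is performed.
    """
    return i2f(f2i(a) + f2i(b) - BIAS)
```

For example, `approx_mul(2.0, 4.0)` is exactly 8.0, while `approx_mul(3.0, 5.0)` gives 14.0 instead of 15.0, within the ~11% error bound. Low-precision neural-network inference tolerates this kind of error well, which is why addition-based multiplication is attractive there.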