Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints

Subject : Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
From : 186283 (at) *nospam* ud0s4.net (186282@ud0s4.net)
Newsgroups : comp.os.linux.misc
Date : 15. Oct 2024, 07:43:08
Organisation : wokiesux
Message-ID : <LpScnb7e54pgk5P6nZ2dnZfqn_qdnZ2d@earthlink.com>
References : 1 2
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0
On 10/13/24 6:45 AM, Pancho wrote:
> On 10/13/24 03:54, 186282@ud0s4.net wrote:
 
>> The new technique is basic—instead of using complex
>> floating-point multiplication (FPM), the method uses integer
>> addition. Apps use FPM to handle extremely large or small
>> numbers, allowing applications to carry out calculations
>> using them with extreme precision. It is also the most
>> energy-intensive part of AI number crunching.
>
> That isn't really true. Floats can handle big and small, but the reason people use them is for simplicity.
   "Simple", usually. Energy/time-efficient ... not so much.

> The problem is that typical integer calculations are not closed: the result is not an integer. Addition is fine, but the result of division is typically not an integer. So if you use integers to model a problem, every time you do a division (or exp, log, sin, etc.) you need to make a decision about how to force the result into an integer.
   The question is how EXACT the precision HAS to be for
   most "AI" uses. Might be safe to throw away a few
   decimal points at the bottom.
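
   And just to spell out that "decision": here are three common
   ways to force a division back into an integer (function names
   are my own, plain C):

#include <stdio.h>

/* C's built-in integer division: rounds toward zero */
static int div_trunc(int a, int b) { return a / b; }

/* Round toward minus infinity (floor) */
static int div_floor(int a, int b)
{
    int q = a / b, r = a % b;
    return (r != 0 && ((r < 0) != (b < 0))) ? q - 1 : q;
}

/* Round to nearest, halves away from zero (ignores overflow at the extremes) */
static int div_round(int a, int b)
{
    int q = a / b, r = a % b;
    if (2 * (r < 0 ? -r : r) >= (b < 0 ? -b : b))
        q += ((a < 0) == (b < 0)) ? 1 : -1;
    return q;
}

int main(void)
{
    printf("-7/2: trunc %d  floor %d  round %d\n",
           div_trunc(-7, 2), div_floor(-7, 2), div_round(-7, 2));   /* -3 -4 -4 */
    return 0;
}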

> Floats actually use integral values for exponent and mantissa, but they automatically make ballpark reasonable decisions about how to force the results into integral values for mantissa and exponent, meaning operations are effectively closed (ignoring exceptions). So the programmer doesn't have to worry, so much.
> Floating point ops are actually quite efficient, much less of a concern than something like a branch misprediction. A 20x speed up (energy saving) sounds close to a theoretical maximum. I would be surprised if it can be achieved in anything but a few cases.
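
   True enough: a "float" really is just two integers, a mantissa
   and an exponent, packed into 32 bits. Quick C sketch (IEEE-754
   single precision assumed) to show the fields:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float x = 6.5f;                          /* 1.625 * 2^2 */
    uint32_t bits;

    memcpy(&bits, &x, sizeof bits);          /* grab the raw bit pattern */

    unsigned sign     =  bits >> 31;
    unsigned exponent = (bits >> 23) & 0xFF; /* stored with a bias of 127 */
    unsigned mantissa =  bits & 0x7FFFFF;    /* fraction bits, implicit leading 1 */

    printf("sign %u  exponent %u (unbiased %d)  mantissa 0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    /* prints: sign 0  exponent 129 (unbiased 2)  mantissa 0x500000 */
    return 0;
}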
   Well ... the article insists they are NOT energy-efficient,
   especially when performed en masse. I think their preliminary
   tests suggested an almost 95% savings (sometimes).
   Anyway, at least the IDEA is back out there again. We
   old guys, oft dealing with microcontrollers, knew the
   advantages of wider integers over even 'small' FP.
   Math processors disguised the amount of processing
   required for FP ... but it was STILL there.
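
   For the record, the microcontroller-style trade was usually
   fixed point: keep everything in a wide integer with an implied
   binary point. A 16.16 sketch (format and names are my own pick):

#include <stdint.h>
#include <stdio.h>

typedef int32_t fix16;                      /* 16.16 fixed point */
#define FIX_ONE (1 << 16)                   /* 1.0 in 16.16      */

static fix16 fix_from_double(double d) { return (fix16)(d * FIX_ONE); }
static double fix_to_double(fix16 f)   { return (double)f / FIX_ONE;  }

/* Widen to 64 bits, multiply, shift the binary point back: all integer ops */
static fix16 fix_mul(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * b) >> 16);
}

int main(void)
{
    fix16 a = fix_from_double(3.25);
    fix16 b = fix_from_double(0.5);

    printf("3.25 * 0.5 = %f\n", fix_to_double(fix_mul(a, b)));   /* 1.625000 */
    return 0;
}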
