Subject : Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
From : Pancho.Jones (at) *nospam* proton.me (Pancho)
Newsgroups : comp.os.linux.misc
Date : 13. Oct 2024, 11:45:46
Organisation : A noiseless patient Spider
Message-ID : <veg8cq$k36i$1@dont-email.me>
References : 1
User-Agent : Mozilla Thunderbird
On 10/13/24 03:54, 186282@ud0s4.net wrote:
> The new technique is basic—instead of using complex
> floating-point multiplication (FPM), the method uses integer
> addition. Apps use FPM to handle extremely large or small
> numbers, allowing applications to carry out calculations
> using them with extreme precision. It is also the most
> energy-intensive part of AI number crunching.
That isn't really true. Floats can handle very big and very small numbers, but the main reason people use them is simplicity.
The problem is that typical integer calculations are not closed: the result is not always an integer. Addition is fine, but the result of a division typically isn't. So if you model a problem with integers, every time you do a division (or exp, log, sin, etc.) you have to decide how to force the result back into an integer.
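To make that concrete, here is a minimal fixed-point sketch in Python (the scale factor and helper names are just my own illustration, nothing from the article): every division forces the programmer to pick a rounding rule explicitly.

# Fixed-point arithmetic: values are stored as integers scaled by SCALE,
# and the rounding rule for division is an explicit programmer decision.
SCALE = 1000  # three decimal digits of fraction, an arbitrary choice

def fx(x):
    """Encode a real value as a scaled integer."""
    return round(x * SCALE)

def fx_div_floor(a, b):
    """Fixed-point division, rounding toward minus infinity."""
    return (a * SCALE) // b

def fx_div_nearest(a, b):
    """Fixed-point division, rounding to nearest."""
    return (a * SCALE + b // 2) // b

print(fx_div_floor(fx(2.0), fx(3.0)))    # 666, i.e. 0.666
print(fx_div_nearest(fx(2.0), fx(3.0)))  # 667, i.e. 0.667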
Floats actually use integral values for the exponent and mantissa, but they automatically make ballpark-reasonable decisions about how to round results back into those integral fields, so the operations are effectively closed (ignoring exceptions) and the programmer doesn't have to worry so much.
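In Python terms (math.frexp just exposes that split; nothing here is specific to the paper):

import math

# Every float is mantissa * 2**exponent; frexp exposes that split,
# with 0.5 <= mantissa < 1 and an integer exponent.
ma, ea = math.frexp(6.5)    # (0.8125, 3)
mb, eb = math.frexp(0.375)  # (0.75, -1)

# Multiplying floats means: multiply the mantissas, add the integer
# exponents, then the hardware renormalises and rounds the mantissa
# back into its fixed-width field automatically.
print(math.ldexp(ma * mb, ea + eb))  # 2.4375
print(6.5 * 0.375)                   # 2.4375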
Floating-point ops are actually quite efficient, and much less of a concern than something like a branch misprediction. A 20x speed-up (energy saving) sounds close to a theoretical maximum; I would be surprised if it could be achieved in anything but a few cases.
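For what it's worth, the flavour of the trick is easy to demonstrate in a few lines of Python. Because an IEEE 754 float already carries its exponent as a biased integer, adding the raw bit patterns of two positive floats approximates their product (Mitchell's old logarithmic approximation). This is only a sketch of the general idea, not a claim about what the paper actually does:

import struct

def f2i(x):
    """Reinterpret a float32 bit pattern as an unsigned integer."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def i2f(n):
    """Reinterpret an unsigned integer as a float32 bit pattern."""
    return struct.unpack("<f", struct.pack("<I", n & 0xFFFFFFFF))[0]

BIAS = 127 << 23  # the single-precision exponent bias, shifted into place

def approx_mul(a, b):
    """Approximate a*b for positive normal floats with one integer add.
    Underestimates the true product by up to roughly 11%."""
    return i2f(f2i(a) + f2i(b) - BIAS)

print(approx_mul(3.0, 7.0))  # 20.0, versus the exact 21.0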