Re: Misc: Applications of small floating point formats.

Subject : Re: Misc: Applications of small floating point formats.
From : cr88192 (at) *nospam* gmail.com (BGB)
Newsgroups : comp.arch
Date : 06. Aug 2024, 19:37:05
Organization : A noiseless patient Spider
Message-ID : <v8tn04$1o43a$1@dont-email.me>
User-Agent : Mozilla Thunderbird
On 8/6/2024 10:51 AM, George Neuner wrote:
On Mon, 5 Aug 2024 17:35:22 -0500, BGB-Alt
<bohannonindustriesllc@gmail.com> wrote:
 
On 8/5/2024 11:24 AM, George Neuner wrote:
On Sat, 3 Aug 2024 21:09:43 -0000 (UTC), Lawrence D'Oliveiro
<ldo@nz.invalid> wrote:
On Sat, 3 Aug 2024 11:40:23 +0200, Terje Mathisen wrote:

MitchAlsup1 wrote:

So, you have identified the problem:: 8-bits contains insufficient
exponent and fraction widths to be considered standard format. Thus, in
order to utilize 8-bit FP one needs several incarnations.
This just points back at the problem:: FP needs at least 10 bits.

I agree that fp10 is probably the shortest sane/useful version, but
1:3:4 does in fact contain enough exponent and mantissa bits to be
considered an ieee754 format.

The AI folks are quite happy with 8-bit floats for many applications. In
fact, they prefer more exponent bits and fewer in the mantissa.

Insufficient precision is one of the many reasons that ANNs are prone
to hallucinate.

Also likely depends on the type of NN as well.
As noted, for some of the stuff I had tried doing, there was a
noticeable detrimental effect with fewer than around 8 to 10 bits in the
mantissa for the accumulator. Weights and biases could use fewer bits
(as could the inputs/outputs between layers), but not so much the
accumulator.

Whereas, large exponent ranges tended to be much less of a factor
(though with training via genetic algos, it was necessary to detect and
handle some cases where values went outside of a "reasonable" exponent
range, such as E+14 or so).
You can use more precision in the mantissa, or more range in the
exponent ... generally you don't need both ;-) ... but in either case
you do need *enough* bits.

The problem with 8-bit reals is they have neither enough precision nor
enough range - they can too easily be saturated during training, and
even if the values are (re)normalized afterward, the damage has already
been done.

16-bit values seem to be enough for many uses. It does not matter much
how the bits are split mantissa vs exponent ... what matters is having
enough relevant (to the algorithm) bits to avoid values being
saturated during training.
 
Yeah.
Typically, 16 bits is the working format in my case for these sorts of
things.
For general use, S.E5.F10 works well.
In some cases, S.E6.F9 might have been better, but it is non-standard.
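As a rough way to compare these splits (a throwaway sketch; it assumes
generic layouts with an implicit leading 1, IEEE-style bias, and the top
exponent code reserved, rather than any particular standard's exact
encoding):

  #include <stdio.h>
  #include <math.h>

  /* Largest normal value and relative mantissa step for a generic
     S.Ee.Ff format; ignores subnormals and NaN details. */
  static void show_format(int ebits, int fbits)
  {
      int bias = (1 << (ebits - 1)) - 1;
      int emax = ((1 << ebits) - 2) - bias;
      double max = ldexp(2.0 - ldexp(1.0, -fbits), emax);
      printf("S.E%d.F%d: max ~= %g, rel. step ~= %g\n",
             ebits, fbits, max, ldexp(1.0, -fbits));
  }

  int main(void)
  {
      show_format(4, 3);   /* 8-bit, range-heavy split     */
      show_format(3, 4);   /* 8-bit, precision-heavy split */
      show_format(5, 10);  /* S.E5.F10, binary16-style     */
      show_format(6, 9);   /* S.E6.F9, non-standard        */
      return 0;
  }

Under these conventions the 8-bit splits top out around 240 (S.E4.F3) or
15.5 (S.E3.F4), versus 65504 for S.E5.F10, which lines up with the
"neither enough precision nor enough range" complaint above.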
In other news, got the RGBA32 and HDR-FP8U modes more or less implemented in TKRA-GL. The RGBA32 mode is currently kinda slow as it only implements a few "generic paths" for the SW rasterizer.
It also has a lot of hacks...
The HDR mode is a further hack on top of the RGBA32 mode, which mostly handles pixels as FP8 values.
To simplify implementation, there are a few further hacks relative to my original intent:
   Alpha values are still LDR (seen as 0.0 .. 1.0);
   Color modulator values ended up as E3.F5 unit-range;
   ...
As it just so happens, a fixed-point multiply keeping the high bits,
between an E4.F4 value and an E3.F5 value, gives "more or less" the
desired result.
Say, with the latter, E0=0.5, where, say: 7F*E0 => 6F, ...
Not really confirmed, but spot-checking other values looks "more or
less about right".
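A quick way to sanity-check the "more or less" part (a throwaway
harness; the decodes here are assumptions: unsigned formats with an
implicit leading 1, biases picked so that E4.F4 0x70 is 0.5 and E3.F5
0xE0 is 0.5, matching the example above):

  #include <stdio.h>
  #include <math.h>

  /* Assumed decode for unsigned E4.F4 (bias 8, so 0x70 => 0.5). */
  static double dec_e4f4(unsigned v)
  { return ldexp(1.0 + (v & 15) / 16.0, (int)(v >> 4) - 8); }

  /* Assumed decode for unsigned E3.F5 (bias 8, so 0xE0 => 0.5). */
  static double dec_e3f5(unsigned v)
  { return ldexp(1.0 + (v & 31) / 32.0, (int)(v >> 5) - 8); }

  int main(void)
  {
      unsigned samples[] = {0x7F, 0x70, 0x55, 0x40, 0x23};
      unsigned mods[]    = {0xFF, 0xE0, 0xC8, 0xA0};
      for (int i = 0; i < 5; i++)
          for (int j = 0; j < 4; j++)
          {
              unsigned a = samples[i], m = mods[j];
              unsigned q = (a * m) >> 8;  /* high byte of 8x8->16 multiply */
              printf("%02X*%02X => %02X (%.4f vs exact %.4f)\n",
                     a, m, q, dec_e4f4(q), dec_e4f4(a) * dec_e3f5(m));
          }
      return 0;
  }

With these assumptions it reproduces 7F*E0 => 6F exactly, while some
other combinations drift (e.g. 70*E0 lands on 62 rather than 60), so
"more or less" does seem to be the operative phrase.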
This allows keeping much of the original LDR path intact in the HDR mode
(simplifying implementation). This mostly leaves some of the blending
modes (mostly those like GL_SRC_COLOR, GL_ONE_MINUS_DST_COLOR, etc) that
need special handling (though I still need to work out the specifics on
these).
Say, for example, simply shifting the values left 1 bit and using these
as LDR values for interpolation is unlikely to give passable results.
In the "ONE_MINUS" cases, "1.0-0.5" implemented as ~(E3.F5) will not
give a value anywhere near 0.5 (with the decode assumed above, ~0xE0 =
0x1F, which comes out near 0.008). But, if one makes it "more correct"
on one side and uses a fixed-point add, then SRC_COLOR +
ONE_MINUS_SRC_COLOR do not add up to 1.0, which is a problem.
Options:
   Alternate blend logic that implements it as floating-point ops;
   Quick-and-dirty converter to map the FP8U values to fixed-point values.
     Likely the cheaper option here for the rasterizer module (a rough
     sketch follows below).
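A minimal sketch of that converter (assumptions: unsigned E4.F4 with an
implicit leading 1 and bias 8, target is plain unsigned fixed point with
1.0 at 4096; all purely illustrative):

  #include <stdint.h>

  #define FIX_ONE (16u << 8)   /* 1.0, i.e. FP8U 0x80 expanded */

  /* Quick-and-dirty FP8U (E4.F4) to fixed point: restore the
     implicit leading 1, then shift by the biased exponent. */
  static inline uint32_t fp8u_to_fix(uint8_t v)
  {
      uint32_t m = 16u + (v & 15u);
      return m << (v >> 4);
  }

  /* With a fixed-point "one", ONE_MINUS_* becomes an exact integer
     subtract, so SRC + ONE_MINUS_SRC sums back to exactly 1.0. */
  static inline uint32_t one_minus_fix(uint32_t x)
  {
      return (x > FIX_ONE) ? 0 : (FIX_ONE - x);
  }

Converting back to FP8U after the blend would need a renormalize step
(count-leading-zeros or a small priority encoder), which is presumably
the main cost of this option.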
Though, ironically, despite seeming like a lot of cruft and wonk, a lot
of this does seem to match up with the behavior I had seen from actual
GPUs in HDR mode in the past, so I suspect they may have gone down a
similar path (where color modulation, alpha, and fixed-function blending
all seemed to be clamped to unit range).
The latter option could possibly allow going over to an FP8U alpha
channel (which was the original idea), but would need tweaks to things
like the handling of ALPHA_TEST, ... (in HDR mode, one would need to
check whether A>=0x70 rather than merely checking whether the MSB is
set, ...).
...
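The ALPHA_TEST tweak would then just swap a bit test for a magnitude
compare, something like (thresholds per the above, reusing <stdint.h>
from the sketch earlier):

  /* LDR: alpha >= 0.5 means the MSB is set.  HDR FP8U (bias 8
     assumed, as above): 0x70 decodes to 0.5, so compare magnitudes. */
  static inline int alpha_test_ldr(uint8_t a) { return (a & 0x80) != 0; }
  static inline int alpha_test_hdr(uint8_t a) { return a >= 0x70; }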

 
One other thing I had found was that it was possible to DC-bias the
inputs (before multiplying against the weight), but the gains were small.

So, say, for each input:
   (In+InBias)*Weight
Then, output:
   OutFunc(Accum*OutGain+OutBias)

Though OutGain is also debatable (as is InBias), both seem to help
slightly. Theoretically, they are unnecessary as far as the math goes
(and what gains they offer are more likely a product of numerical
precision and the training process).
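As a sketch of where those terms sit in the evaluation (all names here
are illustrative, not from any particular codebase):

  /* One neuron: per-input DC bias, weighted sum into a (preferably
     wider-precision) accumulator, then gain/bias and the transfer
     function on the way out. */
  static float eval_neuron(const float *in, const float *in_bias,
                           const float *weight, int n,
                           float out_gain, float out_bias,
                           float (*out_func)(float))
  {
      float accum = 0.0f;
      for (int i = 0; i < n; i++)
          accum += (in[i] + in_bias[i]) * weight[i];
      return out_func(accum * out_gain + out_bias);
  }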
Will note that for transfer functions, I have tended to use one of:
   SQRT: (x>0)?sqrt(x):0
   ReLU: (x>0)?x:0
   SSQRT: (x>0)?sqrt(x):-sqrt(-x)
   Heaviside: (x>0)?1:0

While tanh is traditionally popular, it had little obvious advantage
over SSQRT and lacks a cheap approximation (and numerical accuracy
doesn't really matter here).
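Spelled out as plain C (sqrtf used directly here; a cheap approximation
could stand in, since as noted the numerical accuracy doesn't matter
much):

  #include <math.h>

  static float tf_sqrt (float x) { return (x > 0) ? sqrtf(x) : 0.0f; }
  static float tf_relu (float x) { return (x > 0) ? x : 0.0f; }
  static float tf_ssqrt(float x) { return (x > 0) ? sqrtf(x) : -sqrtf(-x); }
  static float tf_heavi(float x) { return (x > 0) ? 1.0f : 0.0f; }

Any of these can be passed as the out_func in the earlier sketch.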
...
