Re: Misc: Applications of small floating point formats.

Subject : Re: Misc: Applications of small floating point formats.
From : cr88192 (at) *nospam* gmail.com (BGB)
Newsgroups : comp.arch
Date : 01. Aug 2024, 10:45:52
Organization : A noiseless patient Spider
Message-ID : <v8fi05$2381g$1@dont-email.me>
References : 1 2
User-Agent : Mozilla Thunderbird
On 7/31/2024 7:31 PM, MitchAlsup1 wrote:
On Wed, 31 Jul 2024 23:31:35 +0000, BGB wrote:
 
So, say, we have common formats:
   Binary64, S.E11.F52, Common Use
   Binary32, S.E8.F23, Common Use
   Binary16, S.E5.F10, Less Common Use
>
But, things get funky below this:
   A-Law: S.E3.F4 (Bias=8)
   FP8: S.E4.F3 (Bias=7) (E4M3 in NVIDIA terms)
   FP8U: E4.F4 (Bias=7)
   FP8S: E4.F3.S (Bias=7)
>
>
Semi-absent in my case:
   BFloat16: S.E8.F7
     Can be faked in software in my case using Shuffle ops.
   NVIDIA E5M2 (S.E5.F2)
     Could be faked using RGBA32 pack/unpack ops.
 So, you have identified the problem:: 8-bits contains insufficient
exponent and fraction widths to be considered standard format.
Thus, in order to utilize 8-bit FP one needs several incarnations.
This just points back at the problem:: FP needs at least 10 bits.
 
Though, 10 bits only gives 3 components per 32-bit word, or would need 5 bytes for 4 components; neither is ideal...

>
No immediate plans to add these later cases as (usually) I have a need
for more precision than more exponent range. The main seeming merit of
these formats being that they are truncated forms of the wider formats.
>
>
No need to elaborate on the use-cases for Binary32 and Binary64, wide
and varied.
 There is a growing clamor for 128-bit FP, too.
Supported in my case, but currently software only (as "long double").
There were past plans for truncated Binary128 in hardware, but going much bigger than Binary64, the cost quickly gets out-of-hand.

>
>
Binary16 is useful for graphics
probably,
                                and audio processing.
 Insufficient data width as high quality Audio has gone to 24-bits
(120 dBA S/N).
 You can call MP3 and other "phone" formats Audio, but please restrict
yourself from using the term High Quality when doing so.
 
Usually "gold standard" audio format IME are 44100Hz and 48000Hz 16-bit stereo. Personally, I don't notice much difference between 44kHz and 48kHz.
Have noted that 8-bit PCM sounds poor at nearly every "reasonable" sample rate.
Seemingly, somehow, 8-bit PCM adds a very obvious "hiss" to the audio which is distracting.
I personally consider 16kHz to be near the lower end of acceptable (at 8kHz or 11kHz there is a notable distortion, things like speech become highly muffled and nearly unintelligible).
This combination of hiss and muffling seems to be the normal situation on phones, making it very difficult to understand what people are saying.
Or, basically:
   8kHz: very bad
   11kHz: poor
   16kHz: OK
   22kHz: OK
   32kHz: Good
   44kHz: Ideal
   48kHz: Ideal
   Past this: Overkill.
And, bit-depth:
   8-bit PCM: Poor
   8-bit A-Law: OK
   8-bit u-Law: OK
   16-bit PCM: Ideal
   16-bit FP: Ideal
   32-bit FP: Probably overkill
For my projects, I am mostly using 16kHz A-Law, because it sounds "pretty OK".
Major factor is how much memory one needs for the "loop buffer".
Typically, the ideal size for the loop buffer is around 250ms.
   For 16kHz 8-bit stereo, this is 8K; and 8K is reasonable.
For 44kHz 16-bit stereo, one would likely need a 64K buffer (rounding up to the next power-of-2). A 64K buffer for the PCM audio loop is a bit steep for an FPGA...
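Roughly, the buffer math works out as:
   16000 Hz * 0.25 s * 2 ch * 1 B (A-Law)  =  8000 bytes, fits in an 8K buffer;
   44100 Hz * 0.25 s * 2 ch * 2 B (PCM16)  = 44100 bytes, next power-of-2 being 64K.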
MP3 sounds good at around 128kbps.
Have noted that at 40kbps or 64kbps it sounds rather poor. It tends to develop artifacts that sound like a bunch of high-frequency distortions (whistling and other effects) and rattling broken glass in a can, which is very displeasing.
Even arguably worse strategies, like say driving the audio at 32kHz 1 bit/sample with a delta-sigma modulator, can sound better IMO. Not great, but doesn't sound quite so much like one is rapidly shaking a steel can full of broken glass either.
But, I can use a sharp 2kHz to 8kHz bandpass, and to me it sounds mostly the same, but my cats will respond as if some great evil has come forth from the speakers.
For me though, seems like this is a fairly important range:
   With just this range, the audio is intact;
   Without this range, it is basically just a muffle.
Then again, I have noted that my sense of hearing may be anomalous, so my experiences may not exactly match other people's.
Then again, I have noted that a lot of people also sit around playing MIDI music on floppy drives and stepper motors as a novelty, so maybe not that far off either. But, results here are rather variable.

                                                      Seemingly IEEE
specifies it mostly for storage and not for computation, but for these
cases it is good enough for computation as well.
>
Binary16 is mostly sufficient for 3D model geometry, and for small 3D
scenes, but not really for 3D computations or larger scenes (using it
for transform or projection matrices or matrix multiply does not give
acceptable results).
>
Does work well for fast sin/cos lookup tables (if supported natively),
say, because the error of storing an angle as 1/256 of a circle is
larger than the error introduced by the 10 bit mantissa.
>
I had also used it as the computational format in a lot of my neural-net
experiments.
>
I have seen NN used compressed FP formats where 0 uses 1-bit and
1.0 uses but 2-bits. ...
I had my experimental BITNN thing, where:
   Inputs are 1 or 2 bits (1b = +/-1; 2b = 0/1/0/-1);
   Weights are 3 bits (+/- 0/1/3/7).
It can be made fast and was fairly effective at things like OCR, but is naturally limited to things where inputs and outputs are 1-bit signals (like monochrome images).
Basically, it was able to evaluate a 16-input neuron in 1 clock-cycle, and could run a small OCR test fast enough to still be acceptably fast in the Verilator simulation.
For "softer" inputs and outputs, like image and audio processing, seemingly floating-point is needed (and with roughly enough precision to represent the dynamic range of the inputs).
Potentially, these two types of nets could be interfaced, using different types of nets for the different layers, but this might still require some more specialized converter ops (say, to convert between sign-bits and 1bpp bit-masks).
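As a very rough software sketch of evaluating one such 16-input neuron (the exact input/weight encodings here are assumptions; the hardware version does this in a single cycle):

  #include <stdint.h>

  /* Inputs: 16-bit mask of +/-1 signals (bit=1 -> +1, bit=0 -> -1).
     Weights: 3-bit codes, assumed here as sign + 2-bit magnitude
     selecting {0,1,3,7}.  Output: 1 bit (sum >= 0). */
  static int bitnn_eval16(uint16_t in, const uint8_t w[16])
  {
      static const int mag[4] = { 0, 1, 3, 7 };
      int i, sum = 0;
      for (i = 0; i < 16; i++) {
          int x  = ((in >> i) & 1) ? 1 : -1;   /* input as +/-1 */
          int wt = mag[w[i] & 3];              /* magnitude     */
          if (w[i] & 4) wt = -wt;              /* sign bit      */
          sum += wt * x;
      }
      return sum >= 0;                         /* 1-bit output  */
  }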

>
The 8-bit formats get a bit more niche; main use-cases mostly to save
memory.
>
Sometimes power also.
Probably.
Less memory means less memory bandwidth used, and thus faster.
In this case, limiting factors are more memory bandwidth and data wrangling than FPU throughput.
As I can note, in some past experiments in these areas, the BJX2 core at 50MHz manages to be competitive with a 1.4GHz laptop despite the 28x clock-speed difference.
Though, likely this is more just because of how much of a penalty trying to run NN stuff through x87 via generic C code imposes (vs generating the nets as BJX2 assembly code; generally structured to evaluate 4 to 16 neurons in parallel, where 8 or 16 in parallel can roughly maximize throughput by trying to optimize the pipelining).
But, with instruction wrangling and memory bandwidth, still difficult to get near the theoretical 200 MFLOP/sec limit of the SIMD unit.
Using 8-bit formats can limit the amount of cache misses, which could help over just using Binary16 SIMD.
But, as-is, the laptop is seemingly also hard-pressed to get anywhere near 200 MFLOP/sec with x87, so...
Though, there is a pretty big contrast in the machine code (between the x87 and SIMD sequences).
Though, if the test were running Doom or Quake or Dhrystone, the laptop wins by a huge margin (no contest).
Also, despite my Ryzen only having a theoretical 2.65x clock-speed advantage over the laptop, it runs circles around it in terms of overall performance.
Seemingly, in relation to clock-speed, the laptop also has fairly slow RAM access.
Neither of them is doing particularly great at trying to run basic computer vision tasks though...

>
FP8s originally exists because it was cheaper to encode/decode alongside
FP8U, vs traditional FP8. Originally, FP8S replaced FP8, but now FP8 has
been re-added. I couldn't simply entirely replace FP8S back with FP8,
partly as it seems my existing binaries depend on FP8S in a few places,
and so simply replacing it would have broken my existing binaries.
>
So, options were to either add some separate ops for FP8, or just live
with using my wonky/non-standard FP8S format (or break my existing
binaries). Ended up deciding to re-add FP8.
>
Or don't do it that way.
Better option?...
I could have just lived with FP8S, but didn't really want to extend the scope of a format that basically nobody else uses.
I did replace the PMOV.M8 (FP8S) instructions with PMOV.F8 (FP8), but pretty much nothing was using these as of yet.
But, then I realized that PMOV.F8 also largely overlaps with the use-case imagined for PMUL.F8H, just one would do the conversion on the memory load, and the other on the multiply.
I would also be stuck with PMOV.F8 if I wanted to use combined FMUL+Shuffle ops (the PMUL.F8H instruction doesn't support inline shuffle at present).
But... Enabling inline shuffle is not great for LUT cost or timing, so this may be moot.

>
FP8 is used apparently by NVIDIA GPUs, also apparently by PyTorch and a
few other things. The variant used in my case is seemingly fairly
similar to that used by NVIDIA and PyTorch.
 If you are going to do an F8 make it compatible with OpenGL.
 
OpenGL doesn't have FP8.
If they did, would probably be a similar format to the one NVIDIA is using.
Both this and the Wikipedia minifloat are S.E4.F3, but differ in the interpretation.
Wikipedia format:
   E=15: Inf and NaN;
   E=1..14, Normal Range;
   E=0, Subnormal Range.
The NV/PyTorch format differs here:
   E=15: 256, 288, 320, 352, 384, 416, 448, Inf/NaN
Mine differs slightly also on the low end:
   E=1: 0.016, 0.018, 0.020, 0.022, ...
   E=0: 0.000, 0.009, 0.010, 0.011, 0.012, 0.013, 0.014, 0.015
Vs, say:
   E=1: 0.016, 0.018, 0.020, 0.022, ...
   E=0: 0.000, 0.002, 0.004, 0.006, 0.008, 0.010, 0.012, 0.014
But, in this case, shouldn't matter much (not likely worth the added logic cost).
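For illustration, decoding the NV/PyTorch-style interpretation in C looks roughly like the following (the NaN and low-end handling is where my variant differs, as noted above, using the largest code as a combined Inf/NaN and keeping the implicit 1 bit at E=0):

  #include <stdint.h>
  #include <math.h>

  /* S.E4.F3, Bias=7; E=15,M=7 reserved as NaN, E=0 as IEEE-style subnormals. */
  static float fp8_e4m3_to_float(uint8_t v)
  {
      int   s = (v >> 7) & 1;
      int   e = (v >> 3) & 15;
      int   m =  v       &  7;
      float f;
      if (e == 0)
          f = (float)m * (1.0f / 512.0f);                   /* m * 2^-9     */
      else if (e == 15 && m == 7)
          f = NAN;                                          /* top code     */
      else
          f = (1.0f + (float)m / 8.0f) * ldexpf(1.0f, e - 7);
      return s ? -f : f;
  }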

Unlike the minifloat format described on Wikipedia (which had defined it
as following IEEE 754 rules), it differs from IEEE rules in the handling
of large and small values. No separate Inf/NaN range, rather the largest
value serves as an implicit combined Inf/NaN, with the smallest value
understood as 0.
>
The main difference here between FP8 and FP8S being the location of the
sign bit (putting it in the LSB initially allowed avoiding some MUX'ing
when paired with FP8U).
>
>
The re-added FP8 was instead overlapped with the unpack logic used for
A-Law (even with the obvious difference...).
>
The encoder-side logic for FP8 can be implemented by taking the FP8S
output and reordering the bits (in an "assign"). Though, doing this on
the decoder input side would not likely have saved anything (attempts to
MUX on the input side seemingly tend to result in duplicating any LUTs
that follows afterwards).
>
Though, one could almost argue for combining all 4 cases into shared
encoder/decoder modules (well, since at least 3/4 of the formats have
the mantissa and exponent bits in the same place, FP8 being the odd one
out; and A-Law being off-by-1 in terms of Bias).
>
That combination is well served with a single 10-bit FP format.
Much easier to have multiple 8-bit FP formats than to try to somehow shove 2 more bits into a byte...
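E.g., given the E4.F3.S vs S.E4.F3 layouts, converting between FP8S and FP8 is just a 1-bit rotate of the byte (C sketch):

  #include <stdint.h>

  /* FP8S keeps the sign in the LSB, FP8 in the MSB; same E4/F3 fields. */
  static uint8_t fp8s_to_fp8(uint8_t v) { return (uint8_t)((v >> 1) | (v << 7)); }
  static uint8_t fp8_to_fp8s(uint8_t v) { return (uint8_t)((v << 1) | (v >> 7)); }

Which is basically all the encoder-output "assign" mentioned above is doing.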

>
This appears to be similar to what NV and PyTorch used, and also
overlaps with my handling of A-Law (though, the largest possible value
of A-Law is understood as ~ 0.969).
>
Where, A-Law has slightly higher precision, but is normally limited to
unit range. Main use-case is in representing audio, but was sometimes
also used when a small unit-range format was needed and precision wasn't
a priority.
>
For example, with slight fudging, it can be used to store
unit-quaternions, among other things. It is basically accurate enough to
store things like object orientations and 3D camera rotations. Though,
generally, it is needed to normalize the quaternion after unpacking it.
>
>
Ironically, for A-Law, my implementation and typical use differs from
how it is usually stored in WAV files, in that in WAV files it is
generally XOR'ed with 0x55, but this is an easy enough fix when loading
audio data or similar.
>
There is also u-Law, but u-Law isn't really a minifloat format.
>
>
>
These formats can also be used for pixel data; though FP8U often made
more sense for RGBA values (generally, negative RGBA isn't really a
thing).
>
However, pixel values may go outside unit range, so A-Law doesn't work
for HDR pixel data. The use of FP8 or FP8S works, but gives lower
quality than FP8U. Here, FP8U gives slightly better quality than RGB555
over LDR range, whereas FP8 or FP8S is slightly worse for bright values
(1 bit less accuracy between 0.5 and 1.0).
>
>
For normal bitmap graphics, I am mostly using RGB555 at present though.
>
There isn't yet a fast conversion path between RGB555 and floating-point
formats, but, say:
   RGB5UPCK64  //Unpack RGB555 to 4x WORD
   PCVTUW2H    //Packed Word to Half (1.0 .. 2.0)
   PADD.H      //To adjust DC bias to 0.0 .. 1.0.
   ? PSTCM8UH  //to FP8U (typical option for HDR RGBA pixel data)
   ? PSTCF8H   //to FP8 (newly added)
>
>
But, the crufty Word<->Half SIMD conversions exist mostly because it
would have been more expensive to support "better" SIMD converters (the
DC bias offset allowed doing the conversions via repacking the bits;
whereas unit-range conversions would have required the more expensive
path of adding the format conversion logic to the SIMD FADD units).
>
Note that most of the SIMD format converters exist as applied use of
bit-twiddling (and generally no rounding or similar, as rounding would
add considerable amounts of cost here...).
>
>
Though, cases needing fast conversion of pixel data between RGB555 and
floating-point forms have been uncommon (most pixel math starting from
RGB555 tends to remain on the integer side of things).
>
>
If TKRA-GL were using HDR, most likely option here is:
   If HDR is used;
   The program binds an LDR texture.
>
The GL backend can internally quietly generate an HDR version of the
texture and use this instead; as opposed to trying to dynamically
transform RGB555 or UTX2 into HDR during texel load.
>
Though, another option would be to base it on the GL context:
   If the OpenGL framebuffer is HDR;
   All uploaded textures get converted to HDR formats as well.
     So, RGB555/RGBA8888/... -> FP8U, and DXT1/DXT5/BC6H/BC7 -> UTX3.
>
....
>
>
>
For things like streaming PCM audio to the audio hardware, say:
   2x PSHUF.W+MOVxxD     //Shuffle from Stereo to 1x/2x Mono
   PCVTSW2H    //Packed Word to Half (2.0 .. 4.0)
   PADD.H      //To adjust DC bias to -1.0 .. 1.0.
   PCVTH2AL    //Convert Half to A-Law
>
Where, the programs and API use 16-bit stereo PCM, and my audio hardware
generally uses separate Left/Right A-Law for the loop buffers.
>
A-Law was used mostly because:
   8-bit linear PCM tends to sound like garbage;
 It sounds more like computer automated speaking to me--Oh Wait--that
does sound like Garbage:: Sorry !!
 

   16-bit PCM needs twice the Block-RAM (relative to sample rate);
 16-bit Audio is so 1990.....
 
Still pretty much standard though...
Though, checking:
   Sound Blaster, 1989, 22kHz 8-bit Mono, 11kHz Stereo
   Sound Blaster Pro, 1991, 44kHz 8-bit Mono, 22kHz Stereo
   Sound Blaster 16, 1992, 44kHz 16-bit Stereo
My modern onboard audio does... 48kHz 16-bit Stereo.

   A-Law quality is closer to 16-bit PCM, at the same size as 8-bit PCM.
So, I ended up designing the audio hardware to use A-Law.
But, on a 50MHz CPU, the CPU is slow enough that one has reason to care
about how many clock-cycles are used by the audio encoding (doing it in
software was slow; so ended up doing it via SIMD).
>
Generally, most audio mixing code has tended to use 16-bit PCM, as using
Binary16 or Binary32 for front-end audio mixing is a bit of a novelty.
Wouldn't be that hard to support in theory, would just need to be
expressed via the WAVEFORMATEX structure (and, assuming the backend code
was added to support raw floating-point PCM).
>
The API does also support 8-bit PCM, but this is the worst case quality
wise (combining both the initial poorness of 8-bit PCM with some
additional information loss in the conversion to A-Law).
Though, 8-bit PCM is still acceptable for use in sound-effects and
similar. When mixed into a PCM buffer, typically amplitude and DC bias
is all over the place.
>
>
Had (early on) experimented with possible "small block" audio
compression (and an ADPCM variant) for the audio hardware, but couldn't
really get acceptable results. A-Law seemed to be the most reasonable
compromise (in terms of encoding cost and "didn't sound like crap").
 Audio is supposed to sound like you were there listening to it live in
a building designed for its acoustics.....but alas...
I was more going for "works, doesn't sound like crap".
   16kHz A-Law passes this threshold.
   8kHz or 11kHz 8-bit PCM, does not.
Wolfenstein 3D had used 8kHz, and it was like nearly 30 years before I realized the characters in the game were... saying stuff...
And, this was mostly because I was messing around with the data files for the iOS port of Wolf3D, which had comparatively better versions of the sound effects, where one could actually hear what the characters were saying (as opposed to ambiguous voice-like sounds).
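For reference, packing 16-bit PCM into the S.E3.F4 (Bias=8) A-Law-style format works out to roughly the following (scalar C sketch; the exact sign/rounding conventions are assumptions, and no G.711-style XOR with 0x55):

  #include <stdint.h>

  static uint8_t pcm16_to_alaw8(int16_t s)
  {
      uint8_t  sign = (s < 0) ? 0x80 : 0x00;
      uint32_t mag  = (s < 0) ? (uint32_t)(-(int32_t)s) : (uint32_t)s;
      int e;
      if (mag > 32767) mag = 32767;            /* clamp -32768              */
      for (e = 7; e >= 1; e--)                 /* segment e: [2^(e+7),2^(e+8)) */
          if (mag >= (1u << (e + 7)))
              break;
      if (e < 1)
          return sign;                         /* below ~1/128: encode as 0 */
      return sign | (uint8_t)(e << 4) | (uint8_t)((mag >> (e + 3)) & 15);
  }

Truncating rather than rounding, in line with the converters generally not bothering with rounding.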

>
While ADPCM can give OK quality relative to size, it was a rather poor
fit for the use-case (it is much better as an "offline" audio storage
format).
>
>
>
These 8-bit floating-point formats are generally too poor in terms of
quality to be used for direct computation in SIMD operations.
 So why support them ?
You don't use them for direct computation, but rather as storage formats (and for constant data).
Any actual computations will generally be performed internally using Binary16.
So, in this case, one can support the 8-bit formats as:
   Converter ops ("PLDCF8H"/"PSTCF8H");
   Convert on Load/Store ("PMOV.F8S");
   ...
But, for actual math:
   PMUL.H  //MUL, Binary16
   PADD.H  //ADD, Binary16
The closest I now have to an operation computing on FP8 is "PMUL.F8H", but the output from this instruction is Binary16.
But, then I am left to realize this may not gain much, since I already had instructions to do format conversion on memory and immediate load.
It prompted me to add the FP8 converter ops, mostly because I didn't want to add such an instruction only to have it based around a weird format that no one else uses (or a format that can't be easily encoded from elsewhere in the ISA, ...).
Worst case, I drop it again, and it goes into the "random cruft pile...".
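For reference, the sort of bit-twiddling such a converter does, widening S.E4.F3 (Bias=7) to a Binary16 bit pattern (rough C sketch; edge cases like the top Inf/NaN code are glossed over):

  #include <stdint.h>

  static uint16_t fp8_to_half_bits(uint8_t v)
  {
      uint16_t s = (uint16_t)(v & 0x80) << 8;   /* sign: bit 7 -> bit 15   */
      uint16_t e = (v >> 3) & 15;               /* 4-bit exponent, bias 7  */
      uint16_t m =  v       &  7;               /* 3-bit mantissa          */
      if ((v & 0x7F) == 0)
          return s;                             /* +/- 0                   */
      /* E=0 keeps the implicit 1 here (my low-end handling, not IEEE
         subnormals); E=15,M=7 would need a special case for Inf/NaN. */
      return (uint16_t)(s | ((e + 8) << 10) | (m << 7));   /* rebias to 15 */
  }

The store direction is basically the reverse, with the extra mantissa bits truncated.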

>
Some stuff online implies that FP8 could be used as an intermediate
computation format in neural nets, but my own past experiments in these
areas implied that FP8 was insufficient (for good results, one seemingly
needs around 7 or 8 bits for the mantissa).
 What several NN architectures do is to use a 256-bit word and then
decode it into multiple F8 or F10 or F12 components using a Huffman
coding scheme. 0 takes 1-bit, 1.0 takes 2, leaving lots of bits for
other mantissas. These were done to save memory BW, not particularly
size, but raw aggregated BW.
Not really gonna happen in the BJX2 core...

>
Granted, this was mostly with trying to use NN's for image processing
tasks (which likely have higher accuracy requirements than, say, LLMs).
>
However, FP8 can work OK for weights. Some experiments had used A-Law,
but I can note that A-Law requires to add an extra scaling step before
adding the bias and invoking an activation function (this could be
avoided with FP8).
>
For image-filtering NNs, seems to be better to work primarily using
Binary16 and ReLU activation or similar.
>
Though, the "approximate ssqrt" can work OK (where approximate ssqrt is
roughly comparable to "tanh", but cheaper to calculate). The
"approximate" part being that, by usual definition, one can leave off
the Newton-Raphson iteration stages.
>
Well, in a similar way to how, in graphics processing, it can sometimes
be useful to redefine Binary16 divide "A/B" as roughly "A*(0x7800-B)"
(if the speed of the divide matters more than the accuracy of the
result).
>
Though, generally makes sense to train the net with the same range and
precision intended to run it (so, if it is going to be run as Binary16
with approximate operators, it also needs to be trained using Binary16
and approximate operators).
>
>
Though, moderately annoying for "normal C on a desktop PC", as both
Binary16 and FP8 are absent and will need to be faked in software.
>
Run it as a GPGPU
Possibly.
In most of my NN experiments, I was running everything on the CPU.
But, running NN training code by faking the Binary16 ops in software (so that the math matches up), is kinda slow.
Granted, I haven't exactly been training LLMs or similar.
More often it was things like trying to recognize features or estimate how far away something is.
But, for camera input from a low-end camera module, the distance to an object is mostly a property of how blurry it is; though setting up the camera such that only objects close to the camera are in-focus is kinda weak.
One doesn't actually need a NN for this; a DCT and some math trickery (such as normalization and a glorified FIR filter) will work here, as sketched below.
Though, this does have weaknesses, like any mostly flat-colored area being assumed to be much further away.
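A minimal sketch of that sort of measure, using a simple Laplacian-style FIR over an 8x8 block as a stand-in for the DCT high-frequency energy (thresholds and normalization would need tuning):

  #include <stdlib.h>

  /* Larger result = more high-frequency energy = sharper; with a
     close-focus lens setup, sharper roughly implies nearer. */
  static int sharpness8x8(const unsigned char *p, int stride)
  {
      int x, y, e = 0;
      for (y = 1; y < 7; y++) {
          for (x = 1; x < 7; x++) {
              const unsigned char *c = p + y * stride + x;
              int lap = 4 * c[0] - c[-1] - c[1] - c[-stride] - c[stride];
              e += abs(lap);
          }
      }
      return e;
  }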

>
>
Ended up going with FP8 for a "packed multiply expanding" instruction:
   PMUL.F8H Rs, Rt, Rn
Where, each FP8 in Rs and Rt is multiplied, and the result expands to a
Binary16 element in Rn.
 Stuff like this falls out "for free" under VVM.
>
Ended up not going with FMAC, as it is likely the cost and latency would
have been a bit higher than I would like (and possibly higher than the
"inline shuffle" experiment).
>
The "PMUL.F8H" instruction was added with a 2-cycle latency, and seems
to have a moderately low cost (no obvious impact on overall LUT costs).
However, its logic is still complicated enough that I wouldn't want to
try adding it as a 1-cycle operation.
>
As one merit of using FP8, the 3-bit mantissa is small enough that the
pair of mantissas can directly use LUT6 lookups (and most of the cost is
likely along the exponent path).
>
But, don't know if this would have much use out of potentially being
useful for neural-net code.
>
>
>
But, any thoughts?...
 Architecture is as much about what gets left out as what gets put in.
Dunno...
Maybe a "general cruft cleanup" may be needed at some point...
Say, pruning old/experimental code or features, or stuff which isn't being used, ...
