On 5/21/2024 12:19 PM, MitchAlsup1 wrote:
> BGB wrote:
>> Errm, I was promoting the idea of cost-cut floating point, not blatantly
>> broken floating point...
>
> Would you promote the idea where the customer could specify whether his
> car had air bags and crash safety cell or not ??
> Same point here.
I guess it comes down more to a question of whether the car company can charge extra, or a subscription fee, for the potential use of the airbag in a crash...
Then again, I guess while they can get away with automatic fees for higher air-conditioner or heater settings, that's probably less true of safety features.
Well, unless the airbag still triggers either way but then automatically bills the customer $1k or so if it does; though I guess allowing this would create an incentive to make the airbag trigger in cases where it otherwise would not.
Probably depends some on what one is using floating-point for.
If it is almost exclusively things like image and audio processing (which work well with small integer and Binary16), or 3D modeling (often Binary32), speed is likely the priority.
This mostly leaves Binary64 for things like:
  The "math.h" functions;
  Things like "atof()"/"printf()"/... (*1)
  Cases where C typesystem rules would mandate implicit promotion.
    Though, GCC seems to ignore this case on some targets.
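For the promotion case, a minimal sketch (assuming plain standard C, nothing target-specific):

  #include <stdio.h>

  int main(void)
  {
      float f = 0.1f;

      /* Variadic call: the default argument promotions pass f as double,
         so "%f" really consumes a double here. */
      printf("%f\n", f);

      /* Mixed expression: the usual arithmetic conversions widen f to
         double before the add, so the math happens in Binary64. */
      double d = f + 0.25;
      printf("%f\n", d);

      return 0;
  }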
*1: Even if the input and output values are float, "atof()" and "printf()" will not work acceptably if implemented using "float".
Granted, to fully accurately parse or print a Binary64 value, one would need to go beyond Binary64 precision, mostly because repeatedly multiplying a value by "0.1" or similar creates an error that progressively increases the further one gets from 1.0. (Though this can mostly be ignored with Binary64 and a typical/default precision of 6 digits past the decimal point; past 10 or so digits, those last digits are going to be wrong.)
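As a toy illustration of where that error comes from (a hypothetical naive parser, not how any real "atof()"/"strtod()" is actually implemented):

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  /* Naive decimal-to-double: accumulate the digits as an integer value,
     then scale back down by repeated multiplication with 0.1.
     Every one of those multiplies rounds, so the error piles up. */
  static double naive_parse(const char *s)
  {
      double v = 0.0;
      int frac = 0;
      for (; *s; s++) {
          if (*s == '.') { frac = 1; continue; }
          v = v * 10.0 + (*s - '0');
          if (frac) frac++;
      }
      while (frac-- > 1)
          v = v * 0.1;              /* rounds on every step */
      return v;
  }

  int main(void)
  {
      const char *s = "3.141592653589793";
      double naive = naive_parse(s);
      double good  = strtod(s, NULL);   /* correctly-rounded conversion */
      printf("naive : %.17g\n", naive);
      printf("strtod: %.17g\n", good);
      printf("diff  : %g\n", fabs(naive - good));
      return 0;
  }

The last few digits of the naive result can be off by an ULP or two versus the correctly-rounded value; with more digits, or the same trick done in Binary32, the drift gets worse.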
Then again, I saw a video recently where someone was talking about how Doom used 3.141592657 (vs, say, 3.14159265358979...), but like no one noticed, and apparently some number of other projects ended up copying Doom's slightly-off value of PI.
...
But, in other news, I now have a RISC-V / RV64G version of Doom running on top of TestKern, after getting the ELF loader fully working.
ELF does sort of annoy me though:
  The way it approaches some things is annoying and convoluted;
    One needs to deal with relocs and symbol tables to get PIE binaries loaded, etc.
  Its metadata is structured in ways that are much bulkier than in PE/COFF (so, like, roughly half the space in a PIE binary is just its metadata...), which *does* need to be loaded into RAM.
    Like, this metadata being the symbol, string, and relocation tables:
      The string table holds strings for *every* top-level declaration,
        since ELF does not limit it to explicit imports/exports;
      Symbols need 24 bytes each;
      Relocs also need 24 bytes each;
      ...
    This effectively eats around 400K of memory for Doom...
  And, they are using 64-bit values for lots of stuff that does not need 64-bit values ("well, it's 64-bit ELF, gotta make damn near everything 64 bits...").
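For reference, the entry layouts behind those 24-byte figures (as in the standard ELF-64 definitions, same as what elf.h gives):

  #include <stdint.h>

  /* ELF64 symbol table entry: 24 bytes per symbol. */
  typedef struct {
      uint32_t st_name;   /* offset into the string table */
      uint8_t  st_info;   /* binding and type */
      uint8_t  st_other;  /* visibility */
      uint16_t st_shndx;  /* section index */
      uint64_t st_value;  /* address/value */
      uint64_t st_size;   /* size in bytes */
  } Elf64_Sym;            /* sizeof == 24 */

  /* ELF64 relocation with explicit addend: also 24 bytes per entry. */
  typedef struct {
      uint64_t r_offset;  /* where in the image to patch */
      uint64_t r_info;    /* symbol index (high 32 bits) + reloc type */
      int64_t  r_addend;  /* constant addend */
  } Elf64_Rela;           /* sizeof == 24 */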
Well, in the BJX2 build (modified PE/COFF), the base relocation table is ~3K, and most of the binary is taken up by program code and data.
It doesn't need a symbol table, and a typical base reloc is closer to 2 bytes.
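For comparison, the stock PE/COFF ".reloc" layout (which I am assuming the modified format follows at least in spirit) gets away with a small per-page header plus 16-bit entries:

  #include <stdint.h>

  /* Standard PE/COFF base relocation block: an 8-byte header per 4K page,
     followed by 16-bit entries (4-bit type, 12-bit offset within the page),
     so each relocation costs roughly 2 bytes. */
  typedef struct {
      uint32_t VirtualAddress;   /* RVA of the page this block covers */
      uint32_t SizeOfBlock;      /* header plus all the 16-bit entries */
      /* uint16_t entries[(SizeOfBlock - 8) / 2]; follows in the file */
  } IMAGE_BASE_RELOCATION;

Versus 24 bytes per Elf64_Rela, that is roughly an order of magnitude denser for simple base relocations.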
Note, this is not with "-g", which would cause the binary to increase to around 10MB, since apparently DWARF is also needlessly bulky, and the convention is to store debug data inline rather than in an external "debug database" file or similar.
...