BGB wrote:
> On 5/19/2024 4:16 PM, MitchAlsup1 wrote:
>> BGB wrote:
>>> On 5/19/2024 11:37 AM, Terje Mathisen wrote:
>>>> Thomas Koenig wrote:
>>>>> So, I did some more measurements on the POWER9 machine, and it
>>>>> came to around 18 cycles per FMA. Compared to the 13 cycles for
>>>>> the FMA instruction, this actually sounds reasonable.
>>>>>
>>>>> The big problem appears to be that, in this particular
>>>>> implementation, multiplication is not pipelined, but done
>>>>> piecewise by addition. This can be explained by the fact that
>>>>> this is mostly a decimal unit, with the 128-bit QP just added
>>>>> as an afterthought, and decimal multiplication does not happen
>>>>> all that often.
>>>>>
>>>>> A fully pipelined FMA unit capable of 128-bit arithmetic would
>>>>> be an entirely different beast; I would expect a throughput of
>>>>> 1 per cycle and a latency of (maybe) one cycle more than 64-bit
>>>>> FMA.
>>>>
>>>> The FMA normalizer has to handle a maximally bad cancellation,
>>>> so it needs to be around 350 bits wide. Mitch knows, of course,
>>>> but I'm guessing that this could at least be close to needing an
>>>> extra cycle on its own and/or heroic hardware?
>>>
>>> This sort of thing is part of what makes proper FMA hopelessly
>>> expensive.
>>
>> Getting the LoB correctly rounded showed up the generation prior
>> to FMAC showing up.
>
> Well, in this case, I have neither in a proper sense.
>
> FMAC operators were sorta faked, but mostly exist because they were
> needed for RV64G; they are double-rounded (and not able to expose
> anything that exists below the ULP, unlike proper FMA).

But FMAC can expose the bits below LoB.
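For concreteness, a small C program (assuming a correctly rounded
fma() from <math.h>, compiled with contraction disabled, e.g.
gcc -ffp-contract=off, so that a*b+c is not silently fused) shows
what the double rounding loses:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* (1 + 2^-27) * (1 - 2^-27) = 1 - 2^-54 exactly; the 2^-54
           term lives below the ULP of the rounded product. */
        double a = 1.0 + 0x1.0p-27;
        double b = 1.0 - 0x1.0p-27;
        double c = -1.0;

        /* Double-rounded: a*b rounds (ties-to-even) to 1.0 first,
           so the add sees nothing below the ULP; result is 0.0. */
        double split = a * b + c;

        /* Fused: the full 106-bit product feeds the add, so the
           -2^-54 term survives. */
        double fused = fma(a, b, c);

        printf("split = %a\nfused = %a\n", split, fused);
        return 0;
    }

A double-rounded FMAC produces the "split" answer here; a proper FMA
produces -0x1p-54.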
>>> Granted, full FMA also allows faking higher precision using
>>> SIMD vector operations, with math that does not work with
>>> double-rounded FMA instructions.
>>
>> It also enabled error-free floating-point calculations, but no
>> existing FP implementation allows exact FP calculations that do
>> not ALSO SET the inexact flag !?!? {Whereas My 66000 gets this
>> right}
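(The error-free calculation referred to here is presumably the
standard "TwoProd" transformation; a minimal sketch, assuming a true
fused fma() and no overflow or underflow:

    #include <math.h>
    #include <stdio.h>

    /* Error-free product: returns the rounded product and stores
       the exact rounding error in *e, so that p + *e == a*b holds
       exactly.  Requires a genuine fused multiply-add; a
       double-rounded FMAC breaks the identity. */
    static double two_prod(double a, double b, double *e) {
        double p = a * b;
        *e = fma(a, b, -p);   /* exact: a*b - p is representable */
        return p;
    }

    int main(void) {
        double e;
        double p = two_prod(1.0 + 0x1.0p-27, 1.0 - 0x1.0p-27, &e);
        printf("p = %a, e = %a\n", p, e);  /* p = 1.0, e = -2^-54 */
        return 0;
    }

Pairs like (p, e) are the building blocks of double-double
arithmetic, which is the usual way of faking higher precision with
SIMD FMA operations.)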
> Dunno.
>
> It seems like the existence of anything below the ULP justifies
> setting the inexact flag...

You misunderstand !!
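The complaint is easy to reproduce on any stock IEEE 754
implementation (C99 <fenv.h>; note that not every compiler honors
the FENV_ACCESS pragma):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON   /* not honored by all compilers */

    int main(void) {
        double a = 1.0 + 0x1.0p-27, b = 1.0 - 0x1.0p-27;

        feclearexcept(FE_INEXACT);
        double p = a * b;            /* rounded product      */
        double e = fma(a, b, -p);    /* exact rounding error */

        /* p + e reconstructs a*b exactly -- no information was
           lost -- yet the rounded multiply already raised
           FE_INEXACT. */
        printf("inexact raised: %d (p=%a, e=%a)\n",
               fetestexcept(FE_INEXACT) != 0, p, e);
        return 0;
    }

The pair (p, e) carries the product exactly, yet the flag says
otherwise; the objection seems to be that the flag reflects
per-operation rounding rather than whether the computation as a
whole lost information.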
>>> Well, and also an issue if one can "just barely" afford to have
>>> a single double-precision unit.
>>
>> This is NOT an architectural issue, but an implementation choice
>> issue.
>
> Absent things like microcode or traps, architectural and
> implementation choices are closely tied together. One can't have
> instructions for things whose hardware cost one can't afford to
> implement.

I understand your limitations--the problem I have is that you
express [...]

> Well, and the usefulness of an FPU is dependent on performance. An
> inaccurate FPU can still be useful, but a slow FPU is not.

Kahan has several lectures about this....
> Though, the trick of possibly having four 27-bit multipliers which
> combine into a virtual 54-bit multiplier seems like an interesting
> possibility, though not great, as DSPs don't natively handle this
> size (and it would be too expensive to stretch it out with LUTs).
> Likely, one would need to build it from 34*34->68-bit multipliers
> (each costing 4 DSPs).

This is your implementation choice coloring what you take as
architectural decisions.
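The splitting arithmetic itself is straightforward; a quick C check
of the four-partial-product decomposition (unsigned __int128 is a
GCC/Clang extension, used here only to verify the recombination):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Two arbitrary 54-bit operands (e.g. double significands
           with the hidden bit attached). */
        uint64_t a = 0x2ABCDEF0123456ull;
        uint64_t b = 0x1F0E1D2C3B4A59ull;

        uint64_t aH = a >> 27, aL = a & ((1ull << 27) - 1);
        uint64_t bH = b >> 27, bL = b & ((1ull << 27) - 1);

        /* Four 27x27->54-bit partial products... */
        uint64_t hh = aH * bH, hl = aH * bL;
        uint64_t lh = aL * bH, ll = aL * bL;

        /* ...recombined: a*b = hh*2^54 + (hl + lh)*2^27 + ll.
           (hl + lh fits in 64 bits, as each term is below 2^54.) */
        unsigned __int128 prod = ((unsigned __int128)hh << 54)
                               + ((unsigned __int128)(hl + lh) << 27)
                               + (unsigned __int128)ll;

        printf("match: %d\n", prod == (unsigned __int128)a * b);
        return 0;
    }

In hardware the recombination is just shifted adds; the cost
question above is about how many DSP blocks the partial products
burn.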
> In terms of DSP cost, it would be higher than the current
> solution: 16 vs 6+4 (10).
>
> But, possibly lower LUT cost (in both the Binary32 and Binary64
> multipliers, the shortfall is made up using smaller LUT-based
> multipliers).

We can now fit (5nm) hundreds of GBOoO cores on a single die. The
difference between a 53×53 tree and a 64×64 tree (which makes all
the problems vanish) is not visible at this level (100+ cores on a
die).

This is your implementation choice coloring your thoughts.
> I can afford FPGAs...
>
> I can't afford to get an ASIC made.

I am not asking you to spend big money--I am merely asking you to
quit [...]

> So, implementation choices here are:
>   FPGA;
>   Nothing.

I have been wondering for a while--are the DSP things you build
your [...]

>> What kind of car do you drive ??
>
> I don't drive a car...
>
> I tend to fairly rapidly get tired out if trying to drive.

I was going to ask if your car had hand-rolled windows, a manual
[...]