On 6/18/2024 4:09 PM, MitchAlsup1 wrote:
> BGB wrote:
>> On 6/13/2024 3:40 PM, MitchAlsup1 wrote:
>>> [...]
>> In this case, scheduling as if it were an in-order core was leading to
>> better performance than a more naive ordering (such as directly using
>> the results of previous instructions or memory loads, versus shuffling
>> other instructions in between them).
>>
>> Either way, it seemed to be different behavior than seen on either the
>> Ryzen or on Intel Core based CPUs (where, seemingly, the CPU does not
>> care about the relative order).
>
> Because it had no requirement of code scheduling, unlike 1st generation
> RISCs, so the cores were designed to put up good performance scores
> without any code scheduling.
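A rough sketch of the kind of reordering being described (a hand-written illustration, not code from either poster): on such a core it can help to issue several loads up front and keep independent work between each load and the instruction that consumes it, rather than using every result immediately.

long sum_naive(const long *a, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];              /* each load's result is consumed immediately */
    return s;
}

long sum_scheduled(const long *a, long n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    long i, n4 = n & ~3L;
    for (i = 0; i < n4; i += 4)
    {
        long t0 = a[i + 0], t1 = a[i + 1];   /* issue the loads first...   */
        long t2 = a[i + 2], t3 = a[i + 3];
        s0 += t0; s1 += t1;                  /* ...then consume them later */
        s2 += t2; s3 += t3;
    }
    for (; i < n; i++)                       /* leftover tail iterations   */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}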
>> Yeah, but why was Bulldozer/Piledriver seemingly much more sensitive to
>> instruction scheduling issues than either its predecessors (such as the
>> Phenom II) or its successors (Ryzen)?...
>
> They "blew" the microarchitecture.
>> Though, apparently "low IPC" was a noted issue with this processor
>> family (apparently trying to gain higher clock speeds at the expense of
>> IPC; using a 20-stage pipeline, ...).
>>
>> Though, it is less obvious how having a longer pipeline than either its
>> predecessors or successors would affect instruction scheduling.
> One of the things we found in Mc 88120 was that the compiler should
> NEVER be allowed to put unnecessary instructions into decode-execute
> slots that were otherwise unused, and that the best code for the GBOoO
> machine was almost invariably the one with the fewest instructions; if
> several sequences had equally few instructions, it basically did not
> matter which one was chosen.
>
> For example::
>
>     for( i = 0; i < max; i++ )
>         a[i] = b[i];
>
> was invariably faster than::
>
>     for( ap = &a[0], bp = &b[0], i = 0; i < max; i++ )
>         *ap++ = *bp++;
>
> because the latter has 3 ADDs in the loop while the former has but 1.
> Because of this, I altered my programming style and almost never end up
> using ++ or -- anymore.
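A rough sketch of where those ADD counts come from (assuming a scaled-index addressing mode and a straightforward translation; an illustration, not any particular compiler's output):

void copy_indexed(long *a, const long *b, long max)
{
    long i;
    for (i = 0; i < max; i++)   /* i++ is the only ADD in the loop        */
        a[i] = b[i];            /* load/store use base + i*scale directly */
}

void copy_pointer(long *a, const long *b, long max)
{
    long *ap = &a[0], i;
    const long *bp = &b[0];
    for (i = 0; i < max; i++)   /* i++  : ADD #1 */
        *ap++ = *bp++;          /* ap++ : ADD #2,  bp++ : ADD #3 */
}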
>> In this case, it would often be something more like:
>>
>>     maxn4 = max & (~3);
>>     for (i = 0; i < maxn4; i += 4)
>>     {
>>         ap = a + i;   bp = b + i;
>>         t0 = bp[0];   t1 = bp[1];
>>         t2 = bp[2];   t3 = bp[3];
>>         ap[0] = t0;   ap[1] = t1;
>>         ap[2] = t2;   ap[3] = t3;
>>     }
>>     if (max != maxn4)
>>     {
>>         for (; i < max; i++)
>>             a[i] = b[i];
>>     }
>
> That is what VVM does, without you having to lift a finger.
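As a sketch of what the VVM remark means (an illustration of the idea, not actual My 66000 code): under the Virtual Vector Method the source stays the plain scalar loop and the hardware vectorizes it itself, so the manual unrolling and tail handling above are unnecessary:

    for (i = 0; i < max; i++)
        a[i] = b[i];    /* VVM-class hardware widens this loop directly,
                           including any leftover tail iterations */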
>> If things are partially or fully unrolled, they often go faster.
>
> And ALWAYS eat more code space.
>
>> Using a large number of local variables seems to be effective (even in
>> cases where the number of local variables exceeds the number of CPU
>> registers).
>>
>> Generally, it also helps to use as few branches as possible.
>>
>> Etc...
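One small illustration of the "fewer branches" point (a hand-written sketch, not code from the thread): an if/else ladder in a hot path can often be rewritten so that the compiler can emit conditional moves instead of branches, e.g. when clamping a value to 0..255:

int clamp_branchy(int x)
{
    if (x < 0)   return 0;      /* two conditional branches per call */
    if (x > 255) return 255;
    return x;
}

int clamp_branchless(int x)
{
    /* ternaries like these typically compile to conditional-move or
       select instructions rather than branches */
    x = (x < 0)   ? 0   : x;
    x = (x > 255) ? 255 : x;
    return x;
}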