Re: Efficiency of in-order vs. OoO

Subject: Re: Efficiency of in-order vs. OoO
From: mitchalsup (at) *nospam* aol.com (MitchAlsup1)
Newsgroups: comp.arch
Date: 24 Mar 2024, 20:00:22
Organization: Rocksolid Light
Message-ID: <d28278800443aa5f710d20d03a54ff78@www.novabbs.org>
User-Agent: Rocksolid Light
Paul A. Clayton wrote:

On 2/25/24 5:22 PM, MitchAlsup1 wrote:
Paul A. Clayton wrote:
[snip]
When I looked at the pipeline design presented in the Arm Cortex-
A55 Software Optimization Guide, I was surprised by the design.
Figure 1 (page 10 in Revision R2p0) shows nine execution pipelines
(ALU0, ALU1, MAC, DIV, branch, store, load, FP/Neon MAC &
DIV/SQRT, FP/Neon ALU) and ALU0 and ALU1 have a shift pipeline
stage before an ALU stage (clearly for AArch32).
 Almost like the Mc 88100, which had 5 pipelines.

I think I have an incorrect conception of data communication
(forwarding and register-to-functional-unit). I also seem to be
conflating somewhat issue port and functional unit. Forwarding
from nine locations to nine locations and the remaining eight
locations to eight locations (counting functional unit as a single
target location even though a functional unit may have three
functionally different input operands).
Much newer µArchitectural literature does not properly draw a firm
box around real function units.
For example, the Mc 88120 had 6 function units buffered by 6
reservation stations. Each function unit, including things like the
branch resolution unit, FADD, and FMUL, had an Integer Adder. When I
drew those boxes, I would show post-forwarding operands arriving at
the FU and then, after arriving, being diverted either to the INT
unit or to the "other" function unit. This way you could count
operand and result busses and end points for fan-in::fan-out reasons.
This style seems to have fallen out of favor, possibly because we
made the transition from value-containing reservation stations to
value-free reservation stations--alleviating register file porting
problems.
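To make the fan-in::fan-out counting concrete with made-up numbers:
nine result buses, each able to reach two source-operand inputs on
nine pipelines, is 9 × 18 = 162 potential bus-to-input end points.
That product is what drawing real boxes around the function units
lets you count; whether the A55 actually forwards from every producer
to every consumer is a separate question.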

I am used to functionality being merged; e.g., the multiplier also
having a general ALU. Merged functional units would still need to
route the operands to the appropriate functionality, but selecting
the operation path for two operands *seems* simpler than selecting
distinct operands and a separate functional unit independently. This
might also be a nomenclature issue.
The above remains my style in µArchitecture literature, but when
describing the block-diagram and circuit-design levels, only the
interior of the function unit is illustrated.

If one can only begin two operations in a cycle, the generality of
having nine potential paths seems wasteful to me. Having separate
paths for FP/Neon and GPR-using operations makes sense because of
the different register sets (as well as latency/efficiency-
optimized functional units vs. SIMD-optimized functional units;
sharing execution hardware is tempting but there are tradeoffs).
In general, operand timing is tight and you had better not screw it
up, while result delivery timing only has to deal with fan-out and
data-arrival issues.
My style was conceived back in the days when wires were fast and
metal was precious (3 layers). Now that we have 12-15 layers it
matters less, I suppose.

With nine potential issue ports, it seems strange to me that width
is strictly capped at two.
Likely to be a register porting or a register port analysis limitation.
Value-free reservation stations exacerbate this.

                           Even though AArch64 does not have My
66000's Virtual Vector Method to exploit normally underutilized
resources, there would be cases where an extra instruction or two
could execute in parallel without increasing resources significantly.
As an outsider, I can only assume that any benefit did not justify
the costs in hardware and design effort. (With in-order execution,
even a nearly free [hardware] increase of width may not result
in improved performance or efficiency.)
VVM works best with value-containing reservation stations.

The separation of MAC and DIV is mildly questionable — from my
very amateur perspective — not supporting dual issue of a MAC-DIV
pair seems very unlikely to hurt performance but the cost may be
trivial.
 Many (MANY) MUL-DIV pairs are data dependent. y = i*m/n;
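A hypothetical pair of C fragments (illustrative only, not from any
particular workload) spells that out: in the first, the divide cannot
start until the multiply completes, so separate MAC and DIV pipes buy
nothing; only the second, rarer shape could occupy both in the same
cycle.

  /* Illustrative only: the common data-dependent shape vs. the rare
     independent shape that could actually use a MAC pipe and a DIV
     pipe in the same cycle. */
  int scaled(int i, int m, int n)
  {
      return i * m / n;         /* DIV must wait for the MUL result */
  }

  int unrelated(int a, int b, int c, int d)
  {
      return (a * b) + (c / d); /* MUL and DIV are independent */
  }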

I also assume the other operations are usually available for
parallel execution (though this depends somewhat on compiler
optimization for the microarchitecture), so execution of a
multiply and a divide in parallel is probably uncommon.
In general, any 2 calculations that are not data-dependent can
be launched into execution without temporal binds.

The FP/Neon section has these operations merged into a functional
unit; I guess (I am not motivated to look this up) that this is
because FP divide/sqrt use the multiplier while integer divide
does not.

The Chips and Cheese article also indicated that branches are only
resolved at writeback, two cycles later than if branch direction
were resolved in the first execution stage. The difference between
a six-stage misprediction penalty and an eight-stage one is not
huge, but it seems to indicate a difference in focus. With [...]
 In an 8-stage pipeline, the 2 cycles of added delay should hurt by
~5%-7%.
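That squares with a back-of-envelope estimate (rates assumed purely
for illustration): at a CPI near 1 and one misprediction every 30-40
instructions, 2 extra cycles per misprediction add roughly 0.05-0.07
cycles per instruction, i.e. about 5%-7%.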

5% performance loss sounds expensive for something that *seems*
not terribly expensive to fix.

[snip]
I would have *guessed* that an AGLU (a functional unit providing
address generation and "simple" ALU functions, like AMD's Bobcat?)
would be more area and power efficient than having separate
pipelines, at least for store address generation.
 Be careful with assumptions like that. Silicon area with no moving signals is remarkably power efficient.

There is also the extra forwarding for separate functional units
(and perhaps some extra costs from increased distance), but I
admit that such factors really expose my complete lack of hardware
experience. (I am aware of clock gating as a power saving
technique and that "doing nothing" is cheap, but I have no
intuition of the weights of the tradeoffs.)
Mc 88120 had forwarding into the reservation stations and forwarding
between reservation station output and function unit input. That is
a lot of forwarding.

(I was also very surprised by how much extra state the A55 has:
over 100 extra "registers". Even though these are not all 64-bit
data storage units, this was still a surprising amount of extra
state for a core targeting area efficiency. The storage itself may
not be particularly expensive, but it gives some insight into how
complex even a "simple" implementation can be.)
Imagine having to stick all this stuff on a die at 2µ instead of 5nm !!

[snip interesting stuff]
Perhaps mildly out-of-order designs (say a little more than the
PowerPC 750) are not actually useful (other than as a starting
point for understanding out-of-order design). I do not understand
why such an intermediate design (between in-order and 30+
scheduling window out-of-order) is not useful. It may be that
 It is useful, just not all that much.
 
going from say 10 to 30 scheduler entries gives so much benefit
for relatively little extra cost (and no design is so precisely
area constrained — even doubling core size would not mean pushing
L1 off-chip, e.g.). I have a lumper taxonomic bias, so I have some
emotional investment in intermediate and mixed designs.
 10 does not accommodate much ILP beyond that of a 10-deep pipeline.
30 accommodates L1 cache misses and typical FP latencies.
90 accommodates "almost everything else".
250 accommodates multiple L1 misses with L2 hits and "everything else".
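A rough way to see where those tiers come from is Little's law:
in-flight work ≈ issue width × latency being hidden. The sketch below
uses an assumed sustained width of 3 and assumed round-number
latencies, purely for illustration:

  #include <stdio.h>

  /* Little's-law sketch: window entries needed ~ issue width times
     the latency being covered.  Width and latencies are assumptions
     chosen for illustration, not measurements of any core. */
  int main(void)
  {
      int width = 3;                   /* assumed sustained issue width  */
      int lat[] = { 4, 12, 30, 80 };   /* FP op, L2 hit, L3 hit, ~memory */
      const char *event[] = { "FP latency", "L1 miss, L2 hit",
                              "L2 miss, L3 hit", "near-memory access" };
      for (int i = 0; i < 4; i++)
          printf("%-18s -> ~%3d ops in flight\n", event[i], width * lat[i]);
      return 0;
  }

The products (12, 36, 90, 240) land close to the 10/30/90/250 tiers
above.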

Presumably the benefit depends on issue width and load-to-use
latency (pipeline depth, cache capacity, etc.). [For a cheap
"general purpose" processor, not covering FP latencies well may
not be very important.] Better hiding L1 _hit_ latency would seem
to provide a significant fraction of the frequency and ILP benefit
of out-of-order for a smallish core. (Some branch resolution
latency can also be hidden; an in-order core can delay resolution
until writeback of control-dependent instructions, but OoO's extra
buffering facilitates deeper speculation.)

If one has a scheduling window of 90 operations, having only three
issue ports seems imbalanced to me.
I agree:: for the Mc 88120 we had 96 instructions (max) in flight for
a 6-wide {issue, launch, execute, result, and retire}; we also
had a 16-cycle execution window, so to stream DGEMM (from Matrix300)
we had to execute a LD {which would miss ½ the time} and then have
4 cycles for FMUL and 3 cycles for FADD, allowing ST to capture the
FADD result and ship it off to cache. Going backwards: 16-(1+3+4)
meant the LD->L1$->miss->memory->LDalign path had only 8 cycles.
The modern version with FMAC would allow 11 cycles of LD-Miss-Align.
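For readers who do not have that kernel memorized, the dependence
chain being budgeted looks roughly like the AXPY-style loop below (a
sketch, not the actual Matrix300 source): each element is
LD -> FMUL -> FADD -> ST, so with 1+3+4 cycles spoken for, a 16-cycle
window leaves 8 for the load to miss and still arrive in time.

  /* Sketch of the dependence chain budgeted above; not the actual
     Matrix300 source.  One streaming load feeds an FMUL, the FADD
     accumulates, and the ST captures the FADD result. */
  void axpy(double *y, const double *x, double a, int n)
  {
      for (int i = 0; i < n; i++) {
          double v = x[i];   /* LD: the access that misses L1 ~half the time */
          double p = v * a;  /* FMUL: ~4 cycles in the budget above           */
          y[i] = y[i] + p;   /* FADD ~3 cycles; the ST captures its result    */
      }
  }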

Out-of-order execution would also seem to facilitate opportunistic
use of existing functionality. Even just buffering decoded
instructions would seem to allow a 16-byte (aligned) instruction
fetch with two instruction decoders to issue more than two
instructions on some cycles without increasing register port
count, forwarding paths, etc. OoO would further increase the
frequency of being able to do more work with given hardware
resources.
My 66150 does 16B fetch and parses 2 instructions per cycle,
even though it is only 1-wide. By fetching wide, and scanning
ahead, one can identify branches and fetch their targets prior
to executing the branch, eliminating the need for the delay-slot
and reducing branch taken overhead down to about 0.13 cycles
even without branch prediction !!
But anything wider than 1 instruction will need a branch predictor
of some sort.

Perhaps there may even be a case for a 1+ wide OoO core, i.e., an
OoO core which sometimes issues more than one instruction in a
cycle.

For something like a smart phone, one or two small cores might be
useful for background activity, tasks whose latency (within a
broad range) is not related to system responsiveness for the user.
 
For a server expected to run embarrassingly parallel workloads, if
 Servers are not expected to run embarrassingly parallel applications;
they are expected to run an embarrassingly large number of essentially
serial applications.

Shared caching of instructions still seems beneficial in "server
worklaods" compared to fully general multiprogram workloads. A
database server might even have more sharing, potentially having a
single process (so page table sharing would be more beneficial),
but that seems a less common use.

a wimpy core provides sufficient responsiveness, I would expect
most of the cores (possibly even all of the cores) to be wimpy.
There might not be many workloads with such characteristics;
 Talk to Google about that....

Urs Hölzle of Google put out a paper, "Brawny cores still beat
wimpy cores, most of the time" (2010). While some of the points
made in the paper, such as tail latency effects and software
development costs, are in my opinion quite significant, I thought
the argument significantly flawed. (I even wrote a blog post about
this paper: https://dandelion-watcher.blogspot.com/2012/01/weak-case-against-wimpy-cores.html)

The microservice programming model (motivated, from what I
understand, by problem-size and performance scaling and service
reliability with moderately reliable hardware without requiring
much programming effort to support scaling) may also have
significant implications on microarchitecture.

The design space is also very large. One can have heterogeneity of
wimpy and brawny cores at the rack level, wimpy-only chips within
a heterogeneous package, heterogeneity within a chip, temporal
heterogeneity (SMT and dynamic partitioning of core resources),
etc. Core strength can vary widely and performance balance can be
diverse (e.g., a core with a quarter of the performance of a
brawny core on general tasks might have, with coprocessors,
tightly coupled accelerators, or general microarchitecture,
approximately equal performance for some tasks).
With a "proper interface" one should be able to off-load any
crypto processing too a place that is both time-constant and
where sensitive data never passes into the cache hierarchy of an untrusted core.
The performance of weaker cores can also be increased by
increasing communication performance within local groups of such
cores. Exploiting this would likely require significant
programming effort, but some of the effort might be automated
(even before AI replaces programmers). This assumes that there is
significant communication that is less temporally local than
within a core (out-of-order execution changes the temporal
proximity of value communication; a result consumer might be
nearby in program order but substantially more distant in
execution order) and that intermediate resource allocation to
intermediate latency/bandwidth communication can be beneficial.

(I also think that there is an opportunity for optimization in the
on-chip network. Optimizing the on-chip network for any-to-any
communication seems less appropriate for many workloads not only
because of the often limited scale of communication but also
because the communication is, I suspect, often specialized.
And often necessarily serialized.

Getting a network design that is very good for some uses and
adequate for others seems challenging even with software cooperation.
See:: https://www.tachyum.com/media/pdf/tachyum_20isc20.pdf

Rings seem really nice for pipeline-style parallelism and some
other uses, crossbars seem nice for small node groups with heavy
communication, grids seem to fit large node counts with nearest
neighbor communication (physical modeling?), etc. Channel width,
flit size, and channel count also involve tradeoffs. Some
communication does not require sending an entire cache block of
data, but a smaller flit will have more overhead.)
We are arriving at the scale where we want to ship a cache line of data in a single clock in order to have sufficient coherent BW for 128+ cores.
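For scale, with assumed numbers: a 64-byte line per clock means a
512-bit-wide datapath, and at, say, 2 GHz that is 128 GB/s per link,
before multiplying by the number of links a 128+ core fabric needs.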
