Re: Efficiency of in-order vs. OoO

Subject : Re: Efficiency of in-order vs. OoO
From : paaronclayton (at) *nospam* gmail.com (Paul A. Clayton)
Newsgroups : comp.arch
Date : 24. Mar 2024, 18:38:44
Organisation : A noiseless patient Spider
Message-ID : <utpkun$fdrm$1@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.0
On 2/25/24 5:22 PM, MitchAlsup1 wrote:
Paul A. Clayton wrote:
[snip]
When I looked at the pipeline design presented in the Arm Cortex-
A55 Software Optimization Guide, I was surprised by the design.
Figure 1 (page 10 in Revision R2p0) shows nine execution pipelines
(ALU0, ALU1, MAC, DIV, branch, store, load, FP/Neon MAC &
DIV/SQRT, FP/Neon ALU) and ALU0 and ALU1 have a shift pipeline
stage before an ALU stage (clearly for AArch32).
 Almost like an Mc88100 which had 5 pipelines.
I think I have an incorrect conception of data communication
(forwarding and register-to-functional-unit). I also seem to be
somewhat conflating issue port and functional unit. Forwarding
from nine locations to nine locations, and from the remaining
eight locations to eight locations, is a lot of connectivity
(counting a functional unit as a single target location even
though a functional unit may have three functionally different
input operands).
I am used to functionality being merged; e.g., the multiplier also
having a general ALU. Merged functional units would still need to
route the operands to the appropriate functionality, but selecting
the operation path for two operands *seems* simpler than selecting
distinct operands and separate functional unit independently. This
might also be a nomenclature issue.
If one can only begin two operations in a cycle, the generality of
having nine potential paths seems wasteful to me. Having separate
paths for FP/Neon and GPR-using operations makes sense because of
the different register sets (as well as latency/efficiency-
optimized functional units vs. SIMD-optimized functional units;
sharing execution hardware is tempting but there are tradeoffs).
With nine potential issue ports, it seems strange to me that width
is strictly capped at two. Even though AArch64 does not have My
66000's Virtual Vector Method to exploit normally underutilized
functional units, there would be cases where an extra instruction
or two could execute in parallel without increasing resources
significantly. As an outsider, I can only assume that any benefit
did not justify the costs in hardware and design effort. (With
in-order execution, even a nearly free [hardware] increase of
width may not result in improved performance or efficiency.)
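To make that concrete, here is a hedged sketch (my own toy loop,
not anything from the A55 documentation): the per-iteration work
maps onto four different issue ports, yet a strict two-wide limit
still needs at least two cycles per iteration.

  /* Hypothetical illustration (my assumptions about instruction
     selection, not compiler output): each iteration is roughly one
     load, one multiply-accumulate, one index increment, and one
     compare-and-branch -- four instructions targeting four
     different A55 pipelines (load, MAC, ALU, branch), yet a strict
     2-wide issue limit needs at least two cycles for them. */
  long scale_sum(const int *a, int n, int x)
  {
      long acc = 0;
      for (int i = 0; i < n; i++)
          acc += (long)a[i] * x;   /* load feeds the MAC */
      return acc;
  }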

The separation of MAC and DIV is mildly questionable — from my
very amateur perspective — not supporting dual issue of a MAC-DIV
pair seems very unlikely to hurt performance but the cost may be
trivial.
 Many (MANY) MUL-DIV pairs are data dependent. y = i*m/n;
I also assume the other operations are usually available for
parallel execution (though this depends somewhat on compiler
optimization for the microarchitecture), so execution of a
multiply and a divide in parallel is probably uncommon.
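A hedged illustration of the data-dependence point (my own toy
code, not from the guide): the common case chains the multiply
into the divide, so dual issue of a MAC-DIV pair would only help
when the two operations are independent, which seems rare.

  int scaled(int i, int m, int n)
  {
      return i * m / n;        /* divide consumes the multiply's
                                  result: nothing to dual-issue */
  }

  void independent(int a, int b, int c, int d, int *prod, int *quot)
  {
      *prod = a * b;           /* independent multiply ... */
      *quot = c / d;           /* ... and divide could issue
                                  together, but such pairs seem
                                  comparatively uncommon */
  }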
The FP/Neon section has these operations merged into a functional
unit; I guess — I am not motivated to look this up — that this is
because FP divide/sqrt use the multiplier while integer divide
does not.

The Chips and Cheese article also indicated that branches are only
resolved at writeback, two cycles later than if branch direction
was resolved in the first execution stage. The difference between
a six stage misprediction penalty and an eight stage one is not
huge, but it seems to indicate a difference in focus. With
 In an 8 stage pipeline, the 2 cycles of added delay should hurt by ~5%-7%
5% performance loss sounds expensive for something that *seems*
not terribly expensive to fix.
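As a back-of-envelope check on that estimate (my assumed numbers,
not measurements): with a base CPI near 1 and a mispredicted
branch roughly every 30 instructions, two extra cycles of
resolution latency add about 0.07 CPI, which lands in the 5%-7%
range Mitch mentions.

  #include <stdio.h>

  int main(void)
  {
      double base_cpi = 1.0;       /* assumed dual-issue in-order CPI */
      double mispred_per_insn = 1.0 / 30.0; /* assumed mispredict rate */
      double extra_cycles = 2.0;   /* added when resolving at writeback */
      double added_cpi = extra_cycles * mispred_per_insn;
      printf("added CPI %.3f -> ~%.0f%% slowdown\n",
             added_cpi, 100.0 * added_cpi / base_cpi);
      return 0;
  }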
[snip]
I would have *guessed* that an AGLU (a functional unit providing
address generation and "simple" ALU functions, like AMD's Bobcat?)
would be more area and power efficient than having separate
pipelines, at least for store address generation.
 Be careful with assumptions like that. Silicon area with no moving signals is remarkably power efficient.
There is also the extra forwarding for separate functional units
(and perhaps some extra costs from increased distance), but I
admit that such factors really expose my complete lack of hardware
experience. (I am aware of clock gating as a power saving
technique and that "doing nothing" is cheap, but I have no
intuition of the weights of the tradeoffs.)
(I was also very surprised by how much extra state the A55 has:
over 100 extra "registers". Even though these are not all 64-bit
data storage units, this was still a surprising amount of extra
state for a core targeting area efficiency. The storage itself may
not be particularly expensive, but it gives some insight into how
complex even a "simple" implementation can be.)
[snip interesting stuff]
Perhaps mildly out-of-order designs (say a little more than the
PowerPC 750) are not actually useful (other than as a starting
point for understanding out-of-order design). I do not understand
why such an intermediate design (between in-order and 30+
scheduling window out-of-order) is not useful. It may be that
 It is useful, just not all that much.
 
going from say 10 to 30 scheduler entries gives so much benefit
for relatively little extra cost (and no design is so precisely
area constrained — even doubling core size would not mean pushing
L1 off-chip, e.g.). I have a lumper taxonomic bias, so I have some
emotional investment in intermediate and mixed designs.
 10 does not accommodate much ILP beyond that of a 10 deep pipeline.
30 accommodates L1 cache misses and typical FP latencies.
90 accommodates "almost everything else"
250 accommodates multiple L1 misses with L2 hits and "everything else".
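Those sizes look roughly like Little's law: in-flight instructions
are about the sustained issue rate times the latency being hidden.
A hedged sketch with assumed numbers (mine, not Mitch's): at 3 IPC,
hiding a ~10-cycle L2 hit wants ~30 entries, ~30 cycles of latency
wants ~90, and ~80 cycles wants ~240.

  #include <stdio.h>

  int main(void)
  {
      double ipc = 3.0;    /* assumed sustained issue rate */
      /* assumed latencies to hide (cycles): FP dependency chain,
         L2 hit, far cache hit, several overlapped misses */
      double latency[] = { 4.0, 10.0, 30.0, 80.0 };
      for (int i = 0; i < 4; i++)
          printf("%5.1f cycles -> ~%3.0f in-flight instructions\n",
                 latency[i], ipc * latency[i]);
      return 0;
  }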
Presumably the benefit depends on issue width and load-to-use
latency (pipeline depth, cache capacity, etc.). [For a cheap
"general purpose" processor, not covering FP latencies well may
not be very important.] Better hiding L1 _hit_ latency would seem
to provide a significant fraction of the frequency and ILP benefit
of out-of-order for a smallish core. (Some branch resolution
latency can also be hidden; an in-order core can delay resolution
until writeback of control-dependent instructions, but OoO's extra
buffering facilitates deeper speculation.)
If one has a scheduling window of 90 operations, having only three
issue ports seems imbalanced to me.
Out-of-order execution would also seem to facilitate opportunistic
use of existing functionality. Even just buffering decoded
instructions would seem to allow a 16-byte (aligned) instruction
fetch with two instruction decoders to issue more than two
instructions on some cycles without increasing register port
count, forwarding paths, etc. OoO would further increase the
frequency of being able to do more work with given hardware
resources.
Perhaps there may even be a case for a 1+ wide OoO core, i.e., an
OoO core which sometimes issues more than one instruction in a
cycle.

For something like a smart phone, one or two small cores might be
useful for background activity, tasks whose latency (within a
broad range) is not related to system responsiveness for the user.
 
For a server expected to run embarrassingly parallel workloads, if
 Servers are not expected to run embarrassingly parallel applications;
they are expected to run an embarrassingly large number of essentially
serial applications.
Shared caching of instructions still seems beneficial in "server
workloads" compared to fully general multiprogram workloads. A
database server might even have more sharing, potentially having a
single process (so page table sharing would be more beneficial),
but that seems a less common use.

a wimpy core provides sufficient responsiveness, I would expect
most of the cores (possibly even all of the cores) to be wimpy.
There might not be many workloads with such characteristics;
 Talk to Google about that....
Urs Hölzle of Google put out a paper "Brawny cores still beat
wimpy cores, most of the time"(2010). While some of the points —
such as tail latency effects and software development costs —
made in the paper are (in my opinion) quite significant, I thought
the argument significantly flawed. (I even wrote a blog post about
this paper: https://dandelion-watcher.blogspot.com/2012/01/weak-case-against-wimpy-cores.html)
The microservice programming model (motivated, from what I
understand, by problem-size and performance scaling and service
reliability with moderately reliable hardware without requiring
much programming effort to support scaling) may also have
significant implications on microarchitecture.
The design space is also very large. One can have heterogeneity of
wimpy and brawny cores at the rack level, wimpy-only chips within
a heterogeneous package, heterogeneity within a chip, temporal
heterogeneity (SMT and dynamic partitioning of core resources),
etc. Core strength can vary widely and performance balance can be
diverse (e.g., a core with a quarter of the performance of a
brawny core on general tasks might have — with coprocessors,
tightly coupled accelerators, or general microarchitecture —
approximately equal performance for some tasks).
The performance of weaker cores can also be increased by
increasing communication performance within local groups of such
cores. Exploiting this would likely require significant
programming effort, but some of the effort might be automated
(even before AI replaces programmers). This assumes that there is
significant communication that is less temporally local than
within a core (out-of-order execution changes the temporal
proximity of value communication; a result consumer might be
nearby in program order but substantially more distant in
execution order) and that intermediate resource allocation to
intermediate latency/bandwidth communication can be beneficial.
(I also think that there is an opportunity for optimization in the
on-chip network. Optimizing the on-chip network for any-to-any
communication seems less appropriate for many workloads not only
because of the often limited scale of communication but also
because the communication is, I suspect, often specialized.
Getting a network design that is very good for some uses and
adequate for others seems challenging even with software cooperation.
Rings seem really nice for pipeline-style parallelism and some
other uses, crossbars seem nice for small node groups with heavy
communication, grids seem to fit large node counts with nearest
neighbor communication (physical modeling?), etc. Channel width,
flit size, and channel count also involve tradeoffs. Some
communication does not require sending an entire cache block of
data, but a smaller flit will have more overhead.)
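For a rough sense of why topology choice depends on scale, here is
my own uniform-traffic approximation (not anything from the thread):
a crossbar is effectively one hop, a bidirectional ring of N nodes
averages about N/4 hops, and a k x k mesh about 2k/3, so rings and
crossbars look attractive for small, chatty groups while meshes
hold up better at larger node counts or with mostly nearest-
neighbor traffic.

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      /* Approximate average hop counts under uniform random
         traffic: crossbar ~1, bidirectional ring ~N/4,
         k x k mesh ~2k/3. */
      for (int n = 4; n <= 64; n *= 4) {
          int k = (int)sqrt((double)n);
          printf("%2d nodes: crossbar ~1, ring ~%4.1f, mesh ~%4.1f hops\n",
                 n, n / 4.0, 2.0 * k / 3.0);
      }
      return 0;
  }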
