On 2/25/24 5:22 PM, MitchAlsup1 wrote:
> Paul A. Clayton wrote:
>> [snip]
>> When I looked at the pipeline design presented in the Arm Cortex-A55
>> Software Optimization Guide, I was surprised by the design. Figure 1
>> (page 10 in Revision R2p0) shows nine execution pipelines (ALU0,
>> ALU1, MAC, DIV, branch, store, load, FP/Neon MAC & DIV/SQRT, FP/Neon
>> ALU), and ALU0 and ALU1 have a shift pipeline stage before an ALU
>> stage (clearly for AArch32).
> Almost like an Mc88100, which had 5 pipelines.
>> I think I have an incorrect conception of data communication
>> (forwarding and register-to-functional-unit). I also seem to be
>> somewhat conflating issue port and functional unit. Forwarding
>> from nine locations to nine locations and the remaining eight
>> locations to eight locations (counting a functional unit as a
>> single target location even though a functional unit may have
>> three functionally different input operands).
> Much newer µArchitectural literature does not draw a firm box [...]
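To put a very rough number on the forwarding that figure implies (the
9x9 shape and the three-operand worst case are my assumptions, not
anything from the guide):

#include <stdio.h>

/* Back-of-the-envelope: each consumer operand that can be bypassed must
   compare its source tag against every result bus it can catch.  All of
   the counts below are illustrative assumptions. */
int main(void)
{
    int result_buses = 9;   /* one per execution pipeline            */
    int consumers    = 9;
    int max_operands = 3;   /* e.g., multiply-accumulate, store data */

    printf("bypass tag comparisons: %d\n",
           result_buses * consumers * max_operands);   /* 243 */
    return 0;
}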
>> I am used to functionality being merged; e.g., the multiplier also
>> having a general ALU. Merged functional units would still need to
>> route the operands to the appropriate functionality, but selecting
>> the operation path for two operands *seems* simpler than selecting
>> distinct operands and a separate functional unit independently.
>> This might also be a nomenclature issue.
> The above remains my style in µArchitecture literature, but when [...]
>> If one can only begin two operations in a cycle, the generality of
>> having nine potential paths seems wasteful to me. Having separate
>> paths for FP/Neon and GPR-using operations makes sense because of
>> the different register sets (as well as latency/efficiency-
>> optimized functional units vs. SIMD-optimized functional units;
>> sharing execution hardware is tempting but there are tradeoffs).
> In general, operand timing is tight and you better not screw it up; [...]
>> With nine potential issue ports, it seems strange to me that width
>> is strictly capped at two.
> Likely to be a register porting or a register port analysis limitation.
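The register-port arithmetic at two-wide is already unforgiving, even
before forwarding is counted (the worst-case operand counts are my
assumption, not ARM's numbers):

#include <stdio.h>

/* Hypothetical GPR port budget for 2-wide issue with a 3-source
   worst case such as MADD. */
int main(void)
{
    int width = 2, reads_per_inst = 3, writes_per_inst = 1;

    printf("read ports : %d\n", width * reads_per_inst);   /* 6 */
    printf("write ports: %d\n", width * writes_per_inst);  /* 2 */
    return 0;
}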
>> Even though AArch64 does not have My 66000's Virtual Vector Method
>> to exploit normally underutilized execution resources, there would
>> be cases where an extra instruction or two could execute in
>> parallel without increasing resources significantly. As an
>> outsider, I can only assume that any benefit did not justify the
>> costs in hardware and design effort. (With in-order execution,
>> even a nearly free [hardware] increase of width may not result in
>> improved performance or efficiency.)
> VVM works best with value-containing reservation stations.
>> The separation of MAC and DIV is mildly questionable — from my
>> very amateur perspective — not supporting dual issue of a MAC-DIV
>> pair seems very unlikely to hurt performance but the cost may be
>> trivial.
> Many (MANY) MUL-DIV pairs are data dependent. y = i*m/n;
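Spelling that dependence point out with a contrived pair of functions
(my example, not from the thread):

/* The common "scale by a ratio" idiom makes the divide wait on the
   multiply, so dual-issuing a MAC/DIV pair would not help it anyway. */
int scale(int i, int m, int n)
{
    int t = i * m;      /* MUL                           */
    return t / n;       /* DIV depends on the MUL result */
}

/* Independent operations, which could in principle issue together. */
int independent(int a, int b, int c, int d)
{
    int p = a * b;      /* MUL                     */
    int q = c / d;      /* DIV, no data dependence */
    return p + q;
}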
>> I also assume the other operations are usually available for
>> parallel execution (though this depends somewhat on compiler
>> optimization for the microarchitecture), so execution of a
>> multiply and a divide in parallel is probably uncommon.
> In general, any 2 calculations that are not data-dependent, can [...]
>> The FP/Neon section has these operations merged into a functional
>> unit; I guess — I am not motivated to look this up — that this is
>> because FP divide/sqrt use the multiplier while integer divide
>> does not.
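For context on why FP divide can lean on the multiplier: one textbook
approach is Newton-Raphson refinement of a reciprocal estimate. A
minimal sketch (not the A55's actual algorithm; real dividers use a
table-lookup seed and far fewer, wider steps):

#include <math.h>
#include <stdio.h>

/* Newton-Raphson reciprocal: x' = x * (2 - a*x).  The error squares
   each iteration, and each iteration is just two multiplies and a
   subtract.  Assumes a > 0 and finite. */
static double recip_nr(double a)
{
    double x = ldexp(0.75, -ilogb(a));   /* seed: initial error < 0.5 */
    for (int i = 0; i < 6; i++)
        x = x * (2.0 - a * x);
    return x;
}

int main(void)
{
    printf("1/3  ~= %.17g\n", recip_nr(3.0));
    printf("10/7 ~= %.17g\n", 10.0 * recip_nr(7.0));
    return 0;
}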
>> The Chips and Cheese article also indicated that branches are only
>> resolved at writeback, two cycles later than if branch direction
>> was resolved in the first execution stage. The difference between
>> a six-stage misprediction penalty and an eight-stage one is not
>> huge, but it seems to indicate a difference in focus.
> In an 8 stage pipeline, the 2 cycles of added delay should hurt by
> ~5%-7%.

With ~5% performance loss, that sounds expensive for something that
*seems* not terribly expensive to fix.
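A rough way to land in that 5%-7% range (the mispredict rate and
baseline CPI below are guesses for branchy code, not measurements):

#include <stdio.h>

/* Added cost ~= (mispredicts per instruction) * (extra penalty cycles),
   taken relative to the baseline cycles per instruction. */
int main(void)
{
    double mpki     = 20.0;  /* assumed mispredicts per 1000 instructions */
    double base_cpi = 0.8;   /* assumed CPI for a 2-wide in-order core    */
    double extra    = 2.0;   /* resolve at writeback vs. first EX stage   */

    printf("added slowdown ~= %.1f%%\n",
           100.0 * (mpki / 1000.0) * extra / base_cpi);   /* 5.0% */
    return 0;
}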
[snip]

>> I would have *guessed* that an AGLU (a functional unit providing
>> address generation and "simple" ALU functions, like AMD's Bobcat?)
>> would be more area and power efficient than having separate
>> pipelines, at least for store address generation.
> Be careful with assumptions like that. Silicon area with no moving
> signals is remarkably power efficient.
>> There is also the extra forwarding for separate functional units
>> (and perhaps some extra costs from increased distance), but I
>> admit that such factors really expose my complete lack of hardware
>> experience. (I am aware of clock gating as a power saving
>> technique and that "doing nothing" is cheap, but I have no
>> intuition of the weights of the tradeoffs.)
> Mc 88120 had forwarding into the reservation stations and forwarding [...]
>> (I was also very surprised by how much extra state the A55 has:
>> over 100 extra "registers". Even though these are not all 64-bit
>> data storage units, this was still a surprising amount of extra
>> state for a core targeting area efficiency. The storage itself may
>> not be particularly expensive, but it gives some insight into how
>> complex even a "simple" implementation can be.)
> Imagine having to stick all this stuff on a die at 2µ instead of 5nm !!
[snip interesting stuff]

>> Perhaps mildly out-of-order designs (say a little more than the
>> PowerPC 750) are not actually useful (other than as a starting
>> point for understanding out-of-order design). I do not understand
>> why such an intermediate design (between in-order and 30+
>> scheduling-window out-of-order) is not useful. It may be that
>> going from, say, 10 to 30 scheduler entries gives so much benefit
>> for relatively little extra cost that the intermediate point is
>> simply skipped (and no design is so precisely area constrained —
>> even doubling core size would not mean pushing L1 off-chip, e.g.).
>> I have a lumper taxonomic bias, so I have some emotional
>> investment in intermediate and mixed designs.
> It is useful, just not all that much.
> 10 does not accommodate much ILP beyond that of a 10 deep pipeline.
> 30 accommodates L1 cache misses and typical FP latencies.
> 90 accommodates "almost everything else".
> 250 accommodates multiple L1 misses with L2 hits and "everything else".
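Those window sizes track a Little's-law style estimate, entries needed
~= issue rate x latency to be hidden (the widths and latencies below
are my round numbers):

#include <stdio.h>

/* in-flight entries ~= issue width * latency to cover (Little's law) */
static int entries(int width, int latency_cycles)
{
    return width * latency_cycles;
}

int main(void)
{
    printf("2-wide hiding ~15-cycle L2/FP latency: %d\n", entries(2, 15));  /*  30 */
    printf("3-wide hiding ~30 cycles             : %d\n", entries(3, 30));  /*  90 */
    printf("4-wide hiding ~60 cycles             : %d\n", entries(4, 60));  /* 240 */
    return 0;
}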
Presumably the benefit depends on issue width and load-to-use
latency (pipeline depth, cache capacity, etc.). [For a cheap
"general purpose" processor, not covering FP latencies well may
not be very important.] Better hiding L1 _hit_ latency would seem
to provide a significant fraction of the frequency and ILP benefit
of out-of-order for a smallish core. (Some branch resolution
latency can also be hidden; an in-order core can delay resolution
until writeback of control-dependent instructions, but OoO's extra
buffering facilitates deeper speculation.)

>> If one has a scheduling window of 90 operations, having only three
>> issue ports seems imbalanced to me.
> I agree:: for Mc 88120 we had 96 instructions (max) in flight for [...]
>> Out-of-order execution would also seem to facilitate opportunistic
>> use of existing functionality. Even just buffering decoded
>> instructions would seem to allow a 16-byte (aligned) instruction
>> fetch with two instruction decoders to issue more than two
>> instructions on some cycles without increasing register port
>> count, forwarding paths, etc. OoO would further increase the
>> frequency of being able to do more work with given hardware
>> resources.
> My 66150 does 16B fetch and parses 2 instructions per cycle, [...]
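A toy model of the buffering idea above (queue depth, widths, and the
stall pattern are all invented for illustration):

#include <stdio.h>

/* Two decoders feed a small queue; the issue stage drains a variable
   amount: 0 on a stall cycle, up to 3 when enough buffered ops happen
   to need disjoint ports/units. */
int main(void)
{
    int queue = 0, total = 0;
    int issue_cap[8] = {2, 0, 3, 2, 0, 0, 3, 3};  /* per-cycle issue limit */

    for (int cyc = 0; cyc < 8; cyc++) {
        int room = 6 - queue;                 /* 6-entry buffer (assumed) */
        queue += room < 2 ? room : 2;         /* decode stalls when full  */
        int issued = issue_cap[cyc] < queue ? issue_cap[cyc] : queue;
        queue -= issued;
        total += issued;
        printf("cycle %d: issued %d (buffered %d)\n", cyc, issued, queue);
    }
    printf("average issue: %.2f/cycle\n", total / 8.0);
    return 0;
}

Sustained issue still averages under two per cycle; the buffer only
lets the machine catch up after stall cycles.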

>> Perhaps there may even be a case for a 1+ wide OoO core, i.e., an
>> OoO core which sometimes issues more than one instruction in a
>> cycle.
>> For something like a smart phone, one or two small cores might be
>> useful for background activity, tasks whose latency (within a
>> broad range) is not related to system responsiveness for the user.
>> For a server expected to run embarrassingly parallel workloads, if
> Servers are not expected to run embarrassingly parallel applications,
> they are expected to run an embarrassingly large number of
> essentially serial applications.

Shared caching of instructions still seems beneficial in "server
workloads" compared to fully general multiprogram workloads. A
database server might even have more sharing, potentially having a
single process (so page table sharing would be more beneficial),
but that seems a less common use.

>> a wimpy core provides sufficient responsiveness, I would expect
> Talk to Google about that....
>> most of the cores (possibly even all of the cores) to be wimpy.
>> There might not be many workloads with such characteristics;
>> Urs Hölzle of Google put out a paper "Brawny cores still beat
>> wimpy cores, most of the time" (2010). While some of the points —
>> such as tail latency effects and software development costs —
>> made in the paper are (in my opinion) quite significant, I thought
>> the argument significantly flawed. (I even wrote a blog post about
>> this paper: https://dandelion-watcher.blogspot.com/2012/01/weak-case-against-wimpy-cores.html)
>> The microservice programming model (motivated, from what I
>> understand, by problem-size and performance scaling and service
>> reliability with moderately reliable hardware without requiring
>> much programming effort to support scaling) may also have
>> significant implications on microarchitecture.
>> The design space is also very large. One can have heterogeneity of
>> wimpy and brawny cores at the rack level, wimpy-only chips within
>> a heterogeneous package, heterogeneity within a chip, temporal
>> heterogeneity (SMT and dynamic partitioning of core resources),
>> etc. Core strength can vary widely and performance balance can be
>> diverse (e.g., a core with a quarter of the performance of a
>> brawny core on general tasks might have — with coprocessors,
>> tightly coupled accelerators, or general microarchitecture —
>> approximately equal performance for some tasks).
> With a "proper interface" one should be able to off-load any [...]
>> The performance of weaker cores can also be increased by
>> increasing communication performance within local groups of such
>> cores. Exploiting this would likely require significant
>> programming effort, but some of the effort might be automated
>> (even before AI replaces programmers). This assumes that there is
>> significant communication that is less temporally local than
>> within a core (out-of-order execution changes the temporal
>> proximity of value communication; a result consumer might be
>> nearby in program order but substantially more distant in
>> execution order) and that intermediate resource allocation to
>> intermediate-latency/bandwidth communication can be beneficial.
>> (I also think that there is an opportunity for optimization in the
>> on-chip network. Optimizing the on-chip network for any-to-any
>> communication seems less appropriate for many workloads, not only
>> because of the often limited scale of communication but also
>> because the communication is, I suspect, often specialized.
> And often necessarily serialized.
>> Getting a network design that is very good for some uses and
>> adequate for others seems challenging even with software
>> cooperation.
> See:: https://www.tachyum.com/media/pdf/tachyum_20isc20.pdf
>> Rings seem really nice for pipeline-style parallelism and some
>> other uses, crossbars seem nice for small node groups with heavy
>> communication, grids seem to fit large node counts with nearest-
>> neighbor communication (physical modeling?), etc. Channel width,
>> flit size, and channel count also involve tradeoffs. Some
>> communication does not require sending an entire cache block of
>> data, but a smaller flit will have more overhead.)
> We are arriving at the scale where we want to ship a cache line of
> data in a single clock in order to have sufficient coherent BW for
> 128+ cores.
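For scale, shipping a cache line per clock implies fairly wide links
(64-byte lines and a 2 GHz fabric clock are assumptions on my part):

#include <stdio.h>

/* What "a cache line per clock" implies for link width and bandwidth. */
int main(void)
{
    int    line_bytes = 64;
    double clock_ghz  = 2.0;

    printf("link width        : %d bits\n", line_bytes * 8);            /* 512  */
    printf("per-link bandwidth: %.0f GB/s\n", line_bytes * clock_ghz);  /* 128  */
    printf("128 such links    : %.1f TB/s\n",
           128 * line_bytes * clock_ghz / 1000.0);                      /* 16.4 */
    return 0;
}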