Subject: Dealing with mispredictions (was: Microarchitectural support ...)
From: anton (at) *nospam* mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.arch
Date: 26. Dec 2024, 10:46:21
Organisation : Institut fuer Computersprachen, Technische Universitaet Wien
Message-ID : <2024Dec26.104621@mips.complang.tuwien.ac.at>
References : 1 2 3 4 5 6 7
User-Agent : xrn 10.11
mitchalsup@aol.com (MitchAlsup1) writes:
> Sooner or later, the pipeline designer needs to recognize the
> oft-occurring code sequence pictured as::
>
>      INST
>      INST
>      BC-------\
>      INST     |
>      INST     |
>      INST     |
> /----BR       |
> |    INST<----/
> |    INST
> |    INST
> \--->INST
>      INST
>
> So that the branch predictor predicts as usual, but DECODER recognizes
> the join point of this prediction, so if the prediction is wrong, one
> only nullifies the mispredicted instructions and then inserts the
> alternate instructions while holding the join point instructions until
> the alternate instructions complete.
Would this really save much? The main penalty here would still be
fetching and decoding the alternate instructions. Sure, the
instructions after the join point would not have to be fetched and
decoded, but they would still have to go through the renamer, which
typically is as narrow or narrower than instruction fetch and decode,
so avoiding fetch and decode only helps for power (ok, that's
something), but probably not performance.
And the kind of insertion you imagine makes things more complicated,
and only helps in the rare case of a misprediction.
What alternatives do we have? There still are some branches that are
hard to predict, and it would be helpful to optimize for those.
Classically the programmer or compiler was supposed to turn
hard-to-predict branches into conditional execution (e.g., someone
(IIRC ARM) has an ITE instruction for that, and My 66000 has something
similar IIRC). These kinds of instructions tend to turn the condition
from a control-flow dependency (free when predicted, costly when
mispredicted) into a data-flow dependency (usually some cost, but
usually much lower than a misprediction).
But programmers are not that great at predicting mispredictions (and
programming languages usually don't have ways to express them),
compilers are worse (even with feedback-directed optimization as it
exists, i.e., without prediction accuracy feedback), and
predictability might change between phases or callers.
So it seems to me that this is something for the hardware: it might use
history data to predict whether a branch is hard to predict (and maybe
also take into account how the dependencies affect the cost), and
switch between a branch-predicting implementation and a data-flow
implementation of the condition.
I have not followed ISCA and Micro proceedings in recent years, but I
would not be surprised if somebody has already done a paper on such an
idea.
- anton
-- 'Anyone trying for "industrial quality" ISA should avoid undefined behavior.' Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>