mitchalsup@aol.com (MitchAlsup1) writes:
>Sooner or later, the pipeline designer needs to recognize the oft
>occurring code sequence pictured as::
>
>     INST
>     INST
>     BC-------\
>     INST     |
>     INST     |
>     INST     |
>/----BR       |
>|    INST<----/
>|    INST
>|    INST
>\--->INST
>     INST
>
>So that the branch predictor predicts as usual, but DECODER recognizes
>the join point of this prediction, so if the prediction is wrong, one
>only nullifies the mispredicted instructions and then inserts the
>alternate instructions while holding the join point instructions until
>the alternate instructions complete.
Would this really save much? The main penalty here would still be
fetching and decoding the alternate instructions. Sure, the
instructions after the join point would not have to be fetched and
decoded, but they would still have to go through the renamer, which
typically is as narrow as or narrower than instruction fetch and decode,
so avoiding fetch and decode only helps for power (ok, that's
something), but probably not performance.
And the kind of insertion you imagine makes things more complicated,
and only helps in the rare case of a misprediction.
What alternatives do we have? There still are some branches that are
hard to predict and that it would be helpful to optimize.

Classically the programmer or compiler was supposed to turn
hard-to-predict branches into conditional execution (e.g., someone
(IIRC ARM) has an ITE instruction for that, and My 66000 has something
similar IIRC). These kinds of instructions tend to turn the condition
from a control-flow dependency (free when predicted, costly when
mispredicted) into a data-flow dependency (usually some cost, but
usually much lower than a misprediction).
But programmers are not that great at predicting mispredictions (and
programming languages usually don't have ways to express them),
compilers are worse (even with feedback-directed optimization as it
exists, i.e., without prediction accuracy feedback), and
predictability might change between phases or callers.
So it seems to me that this is something where the hardware might use
history data to predict whether a branch is hard to predict (and maybe
also take into account how the dependencies affect the cost), and
switch between a branch-predicting implementation and a data-flow
implementation of the condition.
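A rough software model of that kind of selection logic (entirely
hypothetical; the table size, counter width, and thresholds below are
made up purely for illustration): keep a small per-branch confidence
counter, and once a branch's recent predictions keep failing, fall back
to the data-flow (predicated) form instead of predicting it.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TABLE_SIZE 1024        /* hypothetical: number of tracked branches   */
#define CONF_MAX   15          /* 4-bit saturating confidence counter        */
#define PREDICT_THRESHOLD 4    /* below this, use the data-flow form instead */

static uint8_t confidence[TABLE_SIZE];

/* At fetch/decode: predict this branch as usual, or convert it to its
   predicated/data-flow implementation? */
static bool use_branch_prediction(uint64_t branch_pc)
{
    return confidence[branch_pc % TABLE_SIZE] >= PREDICT_THRESHOLD;
}

/* At retirement: update the confidence with the actual outcome.
   Mispredictions are punished harder than hits are rewarded, so a
   burst of mispredictions quickly pushes the branch into the
   predicated regime. */
static void update_confidence(uint64_t branch_pc, bool prediction_was_correct)
{
    uint8_t *c = &confidence[branch_pc % TABLE_SIZE];
    if (prediction_was_correct) {
        if (*c < CONF_MAX)
            (*c)++;
    } else {
        *c = (*c >= 2) ? (uint8_t)(*c - 2) : 0;
    }
}

int main(void)
{
    uint64_t pc = 0x400123;                 /* hypothetical branch address */
    for (int i = 0; i < 16; i++)
        update_confidence(pc, i % 2 == 0);  /* alternating outcomes: hard to predict */
    printf("predict it? %s\n",
           use_branch_prediction(pc) ? "yes" : "no, use the data-flow form");
    return 0;
}

In real hardware this would presumably share state with the existing
branch predictor rather than use a separate table; the point is only
that the predict-vs-predicate decision can be driven by the same kind
of history the predictor already keeps.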
I have not followed ISCA and Micro proceedings in recent years, but I
would not be surprised if somebody has already done a paper on such an
idea.
>
- anton