Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>>> What kind of front end µArchitecture are you assuming that makes
>> In the case of the branch predictor itself, it means delaying feedback
>> by some number of clocks, which looks like a minor cost.
> You can still make your next predictions based on "architectural state
> + pending predictions" if the pending predictions themselves only
> depend ultimately on the architectural state.
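To make that suggestion concrete, here is a toy model (all structures and parameters invented, nothing from a real front end): a table of 2-bit counters whose updates are delayed by a few branches, but where predictions consult "committed table + pending updates". Since each pending update depends only on the branch outcome, not on predictor state, the result matches an eagerly updated predictor:

```python
# Toy sketch: delayed-feedback 2-bit branch predictor that predicts from
# "committed state + pending (not yet retired) updates".
from collections import defaultdict, deque

class DelayedUpdatePredictor:
    def __init__(self, delay=4):
        self.table = defaultdict(lambda: 2)  # committed counters, weakly taken
        self.pending = deque()               # (pc, taken) feedback in flight
        self.delay = delay

    def _effective_counter(self, pc):
        # Replay pending updates on top of the committed counter; this
        # reconstructs exactly what an eager predictor would hold.
        ctr = self.table[pc]
        for p, taken in self.pending:
            if p == pc:
                ctr = min(3, ctr + 1) if taken else max(0, ctr - 1)
        return ctr

    def predict(self, pc):
        return self._effective_counter(pc) >= 2

    def update(self, pc, taken):
        # Feedback is queued; it reaches the committed table 'delay'
        # branches later, as in the delayed-feedback scheme above.
        self.pending.append((pc, taken))
        while len(self.pending) > self.delay:
            p, t = self.pending.popleft()
            ctr = self.table[p]
            self.table[p] = min(3, ctr + 1) if t else max(0, ctr - 1)

class EagerPredictor:
    def __init__(self):
        self.table = defaultdict(lambda: 2)
    def predict(self, pc):
        return self.table[pc] >= 2
    def update(self, pc, taken):
        ctr = self.table[pc]
        self.table[pc] = min(3, ctr + 1) if taken else max(0, ctr - 1)
```

Running both on the same branch trace gives identical predictions, which is the point: the delay is invisible as long as pending updates are a pure function of architectural outcomes.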
>> OTOH, delaying fetches from speculatively fetched addresses will
>> increase latency on the critical path, possibly leading to a
>> significant slowdown.
> I think you can similarly perform the fetches from speculatively
> fetched addresses eagerly, but only if you can ensure that these will
> leave no trace if the speculation happens to fail.

It looks extremely hard, if not impossible.
>> Alone, that is clearly insufficient.
> Agreed: insufficient all by itself, but when combined... So whether and
> how you can do it depends on the definition of "leave no trace".
> E.g. Mitch argues you can do it if you refrain from putting that info
> into the normal cache (where it would have to displace something else,
> thus leaving a trace) and instead keep it in what we could call a
> "speculative cache", which would likely be just some sort of load
> buffer.

>> Here, you use the word fetch as if it were a LD instruction. Is
> It does not. If "leave no trace" includes not slowing down other
> concurrent memory accesses (e.g. from other CPUs), it might require
> some kind of priority scheme.
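For readers following along, a minimal sketch of the speculative-cache idea as I understand it (structure and names are my invention, not Mitch's actual design): speculative fills land in a side buffer rather than the cache, get promoted to the cache only on commit, and are discarded wholesale on squash, so the attacker-visible cache contents are exactly as if the misspeculated fetch never happened:

```python
# Toy model: speculative fills bypass the real cache until commit.
class SpeculativeLoadBuffer:
    def __init__(self):
        self.cache = set()       # architecturally visible cached lines
        self.spec_buffer = {}    # line -> data, speculative fills only

    def load(self, line, speculative):
        hit = line in self.cache or line in self.spec_buffer
        if not hit:
            data = f"mem[{line}]"             # model a fill from memory
            if speculative:
                self.spec_buffer[line] = data  # no displacement in the cache
            else:
                self.cache.add(line)
        return hit

    def commit(self, line):
        # Speculation resolved correctly: promote the line to the cache.
        if line in self.spec_buffer:
            del self.spec_buffer[line]
            self.cache.add(line)

    def squash(self):
        # Misspeculation: discard everything speculative; the cache is
        # left exactly as it was before the speculative fetches.
        self.spec_buffer.clear()
```

Of course this only hides the displacement trace in the cache itself; as discussed below, contention for fetch slots and bandwidth is a separate channel.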
First, one needs to ensure that the CPU performing the speculative
fetch will not slow down due to, say, resource contention. If you
put some arbitrary limit like one or two speculative fetches in
flight, that is likely to be detectable by the attacker and may
leak information. If you want several ("arbitrarily many") speculative
fetches without slowing down normal execution, that would mean a highly
overprovisioned machine.
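To see why a small fixed number of speculative-fetch slots is observable, here is a toy queueing model (slot count and latency numbers are made up): all requests arrive at once, each fetch occupies a slot for the memory latency, and a probe issued behind a secret-dependent burst of speculative fetches completes later when the burst exceeds the slot count:

```python
# Toy contention model: 2 speculative-fetch slots, 100-cycle memory.
import heapq

def probe_latency(n_spec_fetches, slots=2, mem_latency=100):
    free_at = [0] * slots            # time at which each slot frees up
    heapq.heapify(free_at)
    for _ in range(n_spec_fetches):  # secret-dependent speculative burst
        start = heapq.heappop(free_at)
        heapq.heappush(free_at, start + mem_latency)
    # The attacker's probe takes the earliest slot to become free.
    return heapq.heappop(free_at) + mem_latency

# One in-flight fetch leaves a slot free; three do not, so the probe
# latency differs and the slot limit leaks information:
low  = probe_latency(1)   # -> 100
high = probe_latency(3)   # -> 200
```

The gap between `low` and `high` is exactly the kind of attacker-detectable signal described above, which is why hiding it for "arbitrarily many" speculative fetches implies heavy overprovisioning.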