EricP <ThatWouldBeTelling@thevillage.com> writes:
>Anton Ertl wrote:
>[...]
>But if I recall correctly the fix for JavaScript was something like
>a judiciously placed FENCE instruction to block speculation.
"A"? IIRC Speculative load hardening inserts LFENCE instructions in
lots of places, IIRC between every branch and a subsequent load. And
it relies on an Intel-specific non-architectural (and thus not
documented in the architecture manual) side effect of LFENCE, and
AFAIK on AMD CPUs LFENCE does not have that side effect. And the
slowdowns I have seen in papers about speculative load hardening have
been in the region 2.3-2.5.
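To make the placement concrete, here is a minimal sketch (my own
illustration, not taken from the SLH papers or from any hardened code
base; the names are made up) of fencing between a bounds check and the
dependent load, so the load cannot execute under a mispredicted
branch:

  #include <stdint.h>

  uint8_t table[256];

  uint8_t load_checked(uint64_t idx, uint64_t size)
  {
      if (idx < size) {
          /* x86 GCC/Clang inline asm; relies on the LFENCE behavior
             described above to keep the load from issuing until the
             branch is resolved */
          __asm__ __volatile__("lfence" ::: "memory");
          return table[idx];
      }
      return 0;
  }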
>And for the kernel this attack surface should be quite small as all of
>these values are already validated.
Which values are already validated and how does that happen?

What I read about the Linux kernel is that for Spectre V1 the kernel
developers try to put mitigations in those places where potential
attacker-controlled data is expected; one such mitigation is to turn
(predicted) control flow into (non-predicted) data flow. The problem
with that approach is that they can miss such a place, and even if it
works, it's extremely expensive in developer time.
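To illustrate the data-flow mitigation, a simplified sketch (along the
lines of the Linux kernel's array_index_nospec(), but not the actual
kernel code; the names are mine): the index is combined with a
branchlessly computed mask, so even under a mispredicted bounds check
the load cannot use an attacker-chosen out-of-bounds index:

  /* all-ones if idx < size, all-zeroes otherwise, computed without a
     branch; assumes arithmetic right shift of negative values, as GCC
     and Clang implement it */
  static inline unsigned long nospec_mask(unsigned long idx,
                                          unsigned long size)
  {
      return ~(long)(idx | (size - 1UL - idx)) >> (8 * sizeof(long) - 1);
  }

  unsigned char load_nospec(unsigned long idx, unsigned long size,
                            const unsigned char *table)
  {
      if (idx < size)
          /* on misspeculation the mask is 0 and the access is forced
             to table[0] instead of an out-of-bounds address */
          return table[idx & nospec_mask(idx, size)];
      return 0;
  }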
As for missing such places, that actually does happen: I read one paper
or web page where a security researcher needed some hole in the
Spectre defense of the kernel for his work (I don't remember what that
was) and thanked somebody else for providing information about such a
hole. I am sure this hole is fixed in newer versions of the kernel,
but who knows how many yet-undiscovered (by white hats) holes exist?
This shows that this approach to dealing with Spectre is not a good
long-term solution.
>So wouldn't it just be a matter of replacing certain kernel value
>validation IF statements with IF_NO_SPECULATE?
It's a little bit different, but the major issue here is which
"certain kernel value validation IF statements" should be hardened.
You can, e.g., apply ultimate speculative load hardening across the
whole kernel, and the kernel will slow down by a factor of about 2.5;
and that would fix just Spectre v1 and maybe a few others, but not all
Spectre-type vulnerabilities.
>I have difficulty believing that the branch predictor values from some
>thread in one process would be anything but a *negative* impact on a
>random different thread in a different process.
This sounds very similar to the problem of aliasing of two different
branches in the branch predictor. The branch predictor researchers
have looked into that, and found that it does not pay off to tag
predictions with the branches they are for. The aliased branch is at
least as likely to benefit from the prediction as it is to suffer from
interference; as a further measure, agree predictors [sprangle+97]
were proposed; I don't know if they ever made it into practical
application.
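For illustration, a toy agree predictor (my own sketch based on the
description above, not the paper's implementation; the table size and
names are made up): the counters predict whether the branch agrees
with a per-branch biasing bit, so two branches that alias in the table
disturb each other only if one of them usually disagrees with its own
bias:

  #define ENTRIES 4096

  /* 2-bit saturating counters; >= 2 means "agrees with the bias bit";
     a real design would initialize them towards "agree" */
  static unsigned char agree_ctr[ENTRIES];

  /* bias is the per-branch biasing bit, e.g. a static prediction */
  int predict_agree(unsigned pc, int bias)
  {
      int agree = agree_ctr[pc % ENTRIES] >= 2;
      return agree ? bias : !bias;      /* 1 = predict taken */
  }

  void update_agree(unsigned pc, int bias, int taken)
  {
      unsigned char *c = &agree_ctr[pc % ENTRIES];
      if (taken == bias) { if (*c < 3) (*c)++; }
      else               { if (*c > 0) (*c)--; }
  }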
As for the idea of erasing the branch predictor on process switch:

Consider the case where your CPU-bound process has to make way for a
short time slice of an I/O-bound process, and once that has submitted
its next synchronous I/O request, your CPU-bound process gets control
again. The I/O-bound process tramples over only a small part of the
branch predictor state, but if you erase on process switch, all the
branch predictor state will be gone when the CPU-bound process gets
the CPU core again. That's the reason why we do not erase
microarchitectural state on context switch; we do it neither for
caches nor for branch predictors.

Moreover, another process will likely use some of the same libraries
the earlier process used, and will benefit from having the branches in
the library predicted (unless ASLR prevents them from using the same
entries in the branch predictor).

@InProceedings{sprangle+97,
  author =    {Eric Sprangle and Robert S. Chappell and Mitch Alsup
               and Yale N. Patt},
  title =     {The Agree Predictor: A Mechanism for Reducing
               Negative Branch History Interference},
  crossref =  {isca97},
  pages =     {284--291},
  annote =    {Reduces the number of conflict mispredictions by
               having the predictor entries predict whether or not
               some other predictor (say, a static predictor) is
               correct. This increases the chance that the
               predicted direction is correct in case of a
               conflict.}
}

@Proceedings{isca97,
  title =     {$24^\textit{th}$ Annual International Symposium on
               Computer Architecture},
  booktitle = {$24^\textit{th}$ Annual International Symposium on
               Computer Architecture},
  year =      {1997},
  key =       {ISCA 24},
}
>Because if you retain
>the predictor values then the new thread has to unlearn what it learned,
>before it starts to learn values for the new thread. Whereas if the
>predictor is flushed it can immediately learn its own values.
Unlearn? The only thing I can think of in that direction is that a
two-bit counter (for some history and maybe branch address) happens to
be in a state where two mispredictions instead of one are necessary
before the prediction changes. Anyway, branch prediction research
looked into the issue a long time ago and found that erasing on
context switch is a net loss.
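To make the "unlearn" cost concrete, a two-bit saturating counter as a
toy sketch (not any particular CPU's predictor):

  static unsigned ctr = 3;  /* 0,1 predict not taken; 2,3 predict taken */

  int predict_taken(void) { return ctr >= 2; }

  void update_counter(int taken)
  {
      if (taken) { if (ctr < 3) ctr++; }
      else       { if (ctr > 0) ctr--; }
  }

  /* A counter left at 3 ("strongly taken") by the previous thread needs
     two mispredictions (3 -> 2 -> 1) before it predicts not-taken for
     the new thread's branch; from a weak state one misprediction would
     do. That is the whole "unlearning" cost, and on average it is
     outweighed by the predictions the retained state gets right. */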
- anton