EricP <ThatWouldBeTelling@thevillage.com> writes:
>Anton Ertl wrote:
>>Compilers certainly can perform if-conversion, and there have been
>>papers about that for at least 30 years, but compilers do not know
>>when if-conversion is profitable.  E.g., "branchless" (if-converted)
>>binary search can be faster than a branching one when the searched
>>array is cached at a close-enough level, but slower when most of the
>>accesses miss the close caches
>><https://stackoverflow.com/questions/11360831/about-the-branchless-binary-search>.
>
>I'm missing something because the first answer by BeeOnRope looks wrong.
>Unfortunately they don't show both pieces of code and the asm, but a
>branching binary search to me looks just as load data dependent, as it
>recalculates middle each iteration and the array index is array[middle].
>
>Here's the branchless version:
>
>  while (n > 1) {
>      int middle = n/2;
>      base += (needle < base[middle]) ? 0 : middle;
>      n -= middle;
>  }
The branching version would then be:

  while (n > 1) {
      int middle = n/2;
      if (needle >= base[middle])
          base += middle;
      n -= middle;
  }
In the branching version we have the following loop recurrences:

1) middle = n/2; n -= middle
2) base += middle;

Recurrence 1 costs a few cycles per iteration, ideally 2.  Recurrence
2 costs even less.  There is no load in the loop recurrences.  The
load is used only for verifying the branch prediction.  And in
particular, the load does not data-depend on any previous load.
>If branch prediction is always correct, the branching version builds
>an instruction queue that is a long chain of scaled indexed loads
>where the load index is data dependent on the prior iteration load value.

That's certainly not the case here.

Even if we assume that the branch predictor misses half of the time
(which is realistic), that means that the next load and the load of
all following correctly predicted ifs just have to wait for the load of
the mispredicted branch plus the misprediction penalty. Which means
that on average two loads will happen in parallel, while the
branchless version only performs dependent loads. If the loads cost
more than a branch misprediction (because of cache misses), the
branching version is faster.
One can now improve the performance by using branchless for the first
few levels (which are cached), and adding prefetching and a mixture of
branchless and branchy for the rest, but for understanding the
principle the simple variants are best.
>I want to see the asm because Intel's CMOV always executes the
>operand operation, then tosses the result if the predicate is false.

That's the usual thing for conditional execution/predication.  But
here 0 has no dependencies and middle has only cheap dependencies; the
only expensive part is the load for the condition, which turns into a
data dependency in the branchless version.
- anton