mitchalsup@aol.com (MitchAlsup1) writes:
>OK...
>On Fri, 26 Jul 2024 17:00:07 +0000, Anton Ertl wrote:
>>Similarly, I expect that hardware that is designed for good TSO or
>>sequential consistency performance will run faster on code written for
>>this model than code written for weakly consistent hardware will run
>>on that hardware.
>
>According to Lamport; only the ATOMIC stuff needs sequential
>consistency.
>
>So, it is completely possible to have a causally consistent processor
>that switches to sequential consistency when doing ATOMIC stuff and gain
>performance when not doing ATOMIC stuff, and gain programmability when
>doing atomic stuff.
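As an aside, a minimal C11 illustration of that Lamport point (my own
sketch, not from either post; the names are made up): only the flag
handshake that implements mutual exclusion needs memory_order_seq_cst,
while the plain data it guards carries no ordering of its own.

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool want0 = false, want1 = false;
int shared_data;                      /* ordinary, non-atomic data */

void thread0(void)                    /* thread1 is the mirror image */
{
    atomic_store_explicit(&want0, true, memory_order_seq_cst);
    /* The store->load pair is what needs sequential consistency:
       under SC at most one thread can see the other's flag as false,
       so at most one thread enters the critical section. */
    if (!atomic_load_explicit(&want1, memory_order_seq_cst))
        shared_data++;                /* critical section */
    atomic_store_explicit(&want0, false, memory_order_seq_cst);
}

With anything weaker (memory_order_relaxed, or plain accesses on a
weakly ordered machine), both threads can read false and both enter.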
That's not what I have in mind. What I have in mind is hardware that,
e.g., speculatively performs loads, predicting that no other core will
store there with an earlier time stamp. But if another core actually
performs such a store, the usual misprediction handling happens and
the code starting from that mispredicted load is reexecuted. So as
long as two cores do not access the same memory, they can run at full
speed, and there is only slowdown if there is actual (not potential)
communication between the cores.
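A rough software model of that mechanism (my sketch under assumed data
structures, not Anton's design or any shipping core): each speculative
load is remembered together with its timestamp, and a remote store to
the same address carrying an earlier timestamp squashes that load and
everything younger.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define WINDOW 256                     /* assumed execution-window size */

struct spec_load { uint64_t addr, ts; };

struct core {
    struct spec_load window[WINDOW];   /* speculative loads in flight */
    size_t n;
};

/* Record a load performed speculatively at timestamp ts. */
static void speculative_load(struct core *c, uint64_t addr, uint64_t ts)
{
    if (c->n < WINDOW)
        c->window[c->n++] = (struct spec_load){ addr, ts };
}

/* A store from another core arrives.  If it hits an address we loaded
   with a later timestamp, the prediction "nobody stores here first"
   failed; squash from that load onward.  Returns true on squash. */
static bool remote_store(struct core *c, uint64_t addr, uint64_t store_ts)
{
    for (size_t i = 0; i < c->n; i++) {
        if (c->window[i].addr == addr && c->window[i].ts > store_ts) {
            c->n = i;                  /* drop the load and younger ops */
            return true;               /* caller re-executes from here */
        }
    }
    return false;                      /* no conflict: full speed */
}

As long as no other core stores to an address this core has loaded,
remote_store() never fires and nothing slows down, which is the
property being claimed.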
>The depth of the execution window may be smaller than the time it takes
A problem with that approach is that this requires enough reorder
buffering (or something equivalent, there may be something cheaper for
this particular problem) to cover at least the shared-cache latency
(usually L3, more with multiple sockets).
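Back-of-the-envelope on that, with numbers that are purely my
assumptions (the post gives none): covering a shared-cache hit at full
issue rate already implies a window of roughly latency times width.

#include <stdio.h>

int main(void)
{
    int l3_latency_cycles = 40;   /* assumed shared-L3 load-to-use latency */
    int issue_width       = 4;    /* assumed sustained instructions/cycle  */
    /* instructions that must stay in flight to hide that latency */
    printf("~%d in-flight instructions\n", l3_latency_cycles * issue_width);
    return 0;
}

Cross-socket latencies of a few hundred cycles push the same
arithmetic well past typical reorder-buffer sizes.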
>>That's because software written for weakly
>>consistent hardware often has to insert barriers or atomic operations
>>just in case, and these operations are slow on hardware optimized for
>>weak consistency.
>
>The operations themselves are not slow.
>A MEMBAR dropped into the pipeline, when nothing is speculative, takes

Citation needed.
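For concreteness, the kind of "just in case" code being argued over
(my example, not from the thread) is the ordinary producer/consumer
handoff below: on a weakly ordered machine the release/acquire pair
compiles to barrier or specially ordered instructions, while on a TSO
machine the same source needs no extra instructions because plain
stores and loads are already ordered strongly enough.

#include <stdatomic.h>

int payload;                     /* ordinary data */
atomic_int ready = 0;

void producer(void)
{
    payload = 42;
    /* release: payload must be visible before ready */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void)
{
    /* acquire: once ready reads as 1, payload is guaranteed visible */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                        /* spin */
    return payload;
}

Whether those extra instructions are actually slow on a given weakly
ordered implementation is exactly what the "citation needed" above is
asking about.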
>>By contrast, one can design hardware for strong ordering such that the
>>slowness occurs only in those cases when actual (not potential)
>>communication between the cores happens, i.e., much less frequently.
>
>How would you do this for a 256-way banked memory system of the
>NEC SX ?? I.E., the processor is not in charge of memory order--
>the memory system is.
>NEC SX is a multi-processor vector machine with the property that
Memory consistency is defined wrt what several processors do. Some
processor performs some reads and writes and another performs some
reads and writes, and memory consistency defines what a processor sees
about what the other does, and what ends up in main memory. But as
long as the processors, their caches, and their interconnect get the
memory ordering right, the main memory is just the backing store that
eventually gets a consistent result of what the other components did.
So it does not matter whether the main memory has one bank or 256.
One interesting aspect is that for supercomputers I generally think
that they have not yet been struck by the software crisis:
Supercomputer hardware is more expensive than supercomputer software.
So I expect that supercomputer hardware designers tend to throw
complexity over the wall to the software people, and in many cases
they do (the Cell broadband engine offers many examples of that).
However, "some ... Fujitsu [ARM] CPUs run with TSO at all times"
<https://lwn.net/Articles/970907/>; that sounds like the A64FX, a
processor designed for supercomputing. So apparently in this case the
hardware designers actually accepted the hardware and design
complexity cost of TSO and gave a better model to software, even in
hardware designed for a supercomputer.
- anton