Subject : Re: Memory ordering
From : cr88192 (at) *nospam* gmail.com (BGB)
Newsgroups : comp.arch
Date : 16. Nov 2024, 00:35:22
Organisation : A noiseless patient Spider
Message-ID : <vh8ls8$3l4bh$1@dont-email.me>
References : 1 2 3 4 5 6 7 8 9 10 11 12 13
User-Agent : Mozilla Thunderbird
On 11/15/2024 4:05 PM, Chris M. Thomasson wrote:
On 11/15/2024 12:53 PM, BGB wrote:
On 11/15/2024 11:27 AM, Anton Ertl wrote:
jseigh <jseigh_es00@xemaps.com> writes:
Anybody doing that sort of programming, i.e. lock-free or distributed
algorithms, who can't handle weakly consistent memory models, shouldn't
be doing that sort of programming in the first place.
>
Do you have any argument that supports this claim.
>
Strongly consistent memory won't help incompetence.
>
Strong words to hide lack of arguments?
>
>
In my case, as I see it:
The tradeoff is more about implementation cost, performance, etc.
>
Weak model:
Cheaper (and simpler) to implement;
Performs better when there is no need to synchronize memory;
Performs worse when there is need to synchronize memory;
...
[...]
TSO built on top of a weak memory model is what it is. It should not necessarily perform "worse" than other systems that have TSO as the default. The weaker models give us flexibility. Any weak memory model should be able to give sequential consistency by using the right membars in the right places.
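FWIW, the "right membars in the right places" part, written out in C11 atomics, looks something like this (just a sketch, using the usual store-buffering litmus test rather than anything specific to any particular hardware):

  #include <stdatomic.h>

  atomic_int x, y;        /* both initially 0 */
  int r1, r2;

  void thread_a(void) {
      atomic_store_explicit(&x, 1, memory_order_relaxed);
      atomic_thread_fence(memory_order_seq_cst);  /* the "membar" */
      r1 = atomic_load_explicit(&y, memory_order_relaxed);
  }

  void thread_b(void) {
      atomic_store_explicit(&y, 1, memory_order_relaxed);
      atomic_thread_fence(memory_order_seq_cst);
      r2 = atomic_load_explicit(&x, memory_order_relaxed);
  }

Run thread_a and thread_b concurrently: without the fences, a weak model (or a store buffer) allows r1==0 && r2==0; with the fences in place, that outcome is forbidden, which is the sequentially consistent behavior.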
The speed difference is mostly that, in a weak model, the L1 cache merely needs to fetch lines from the L2 or similar, may write to them whenever it likes, and need not proactively write the results back.
As I understand it, a typical TSO-like model will require, say:
Any L1 cache that wants to write to a cache line needs to explicitly request write ownership of that cache line;
Any attempt by another core to access that line may require the L2 cache to send a message to the core currently holding the line for writing, telling it to write back its contents, and the request cannot be serviced until that core has written back the dirty line.
This creates the potential for significantly more latency in cases where multiple cores touch the same part of memory; albeit the cores will see each other's stores.
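As a very rough sketch of that write-ownership handshake (the names and structure here are invented for illustration; a real MESI/MOESI controller is a fair bit more involved than this):

  enum line_state { INVALID, SHARED, MODIFIED };

  struct cache_line {
      enum line_state state;
      int owner_core;   /* core holding the line MODIFIED, -1 if none */
  };

  /* Hypothetical hook: tell 'core' to write back and invalidate 'ln'.
     In real hardware this is a coherence message, and its round trip
     is the extra latency being described above. */
  static void request_writeback_invalidate(int core, struct cache_line *ln)
  {
      (void)core; (void)ln;   /* stub */
  }

  /* L2-side handling of a request for write ownership from 'core'. */
  static void l2_grant_write_ownership(struct cache_line *ln, int core)
  {
      if (ln->state == MODIFIED && ln->owner_core != core)
          request_writeback_invalidate(ln->owner_core, ln);

      ln->state = MODIFIED;    /* only one writer at a time */
      ln->owner_core = core;
  }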
So, initially, a weak model can be faster because it needs no additional handling.
But... any synchronization point, such as a barrier or locking/releasing a mutex, will require manually flushing the cache with a weak model. And locking/releasing the mutex itself will require a mechanism that is consistent between cores (such as volatile atomic swaps or similar, which may still be weak, since a volatile atomic swap is still not atomic from the POV of the L2 cache; an MMIO interface could be stronger here).
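For the lock/unlock mechanism itself, something like the following (a minimal sketch in C11 atomics, assuming a weakly ordered target; on hardware with software-managed coherence, an explicit cache-flush instruction would sit next to the fences, which is not shown here):

  #include <stdatomic.h>

  typedef struct { atomic_flag locked; } spinlock_t;
  /* e.g.:  spinlock_t lock = { ATOMIC_FLAG_INIT }; */

  static inline void spin_lock(spinlock_t *l)
  {
      /* acquire: later loads/stores may not be moved above the lock */
      while (atomic_flag_test_and_set_explicit(&l->locked,
                                               memory_order_acquire))
          ;  /* spin */
  }

  static inline void spin_unlock(spinlock_t *l)
  {
      /* release: earlier loads/stores may not be moved below the unlock */
      atomic_flag_clear_explicit(&l->locked, memory_order_release);
  }

The acquire/release orderings here are what stand in for the "manual flush" on a weak model; whether they turn into cheap fences or an actual cache flush depends on how the coherence is implemented.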
It seems like there could be some way to skip some of the cache flushing if one could verify that a mutex is only being locked and unlocked on a single core.
The issue then is how to deal with another core trying to lock a mutex that has thus far been exclusive to a single core: one would need some way for the core that last held the mutex to know that it needs to perform an L1 cache flush.
Though, one possibility could be to leave this part to the OS scheduler/syscall/... mechanism: the core that wants to lock the mutex signals its intention via the OS, and the next time the core that last held the mutex makes a syscall (or tries to lock the mutex again), the handler sees this, performs the L1 flush, and flags the mutex as multi-core safe. At that point, all parties flush their L1s at each mutex lock, though possibly with a timeout count so that, if the mutex has been single-core for N locks, it reverts to single-core behavior.
This could reduce the overhead of "frivolous mutex locking" in programs that are otherwise single-threaded or single-processor (leaving the cache flushes for the ones that are in fact being used for synchronization purposes).
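Something along these lines, maybe (a very rough sketch of the idea; the field names, the current_core_id() and flush_l1() hooks, and the revert threshold are all made up for illustration, and the OS-mediated handoff described above is left out entirely):

  #include <stdatomic.h>

  #define SINGLE_CORE_REVERT_THRESHOLD 1024

  typedef struct {
      atomic_flag locked;
      int         owner_core;         /* core that last held the mutex */
      int         multi_core;         /* set once a second core contends */
      unsigned    single_core_locks;  /* consecutive same-core locks */
  } cheap_mutex_t;

  static int  current_core_id(void) { return 0; }  /* hypothetical hook (stub) */
  static void flush_l1(void)        { }            /* hypothetical hook (stub) */

  static void cheap_mutex_lock(cheap_mutex_t *m)
  {
      while (atomic_flag_test_and_set_explicit(&m->locked,
                                               memory_order_acquire))
          ;  /* spin; a real version would trap to the OS as described above */

      int core = current_core_id();
      if (core != m->owner_core) {
          m->multi_core = 1;              /* escalate: a second core showed up */
          m->single_core_locks = 0;
      } else if (m->multi_core &&
                 ++m->single_core_locks >= SINGLE_CORE_REVERT_THRESHOLD) {
          m->multi_core = 0;              /* revert after N single-core locks */
      }
      m->owner_core = core;

      if (m->multi_core)
          flush_l1();                     /* only pay the flush when shared */
  }

  static void cheap_mutex_unlock(cheap_mutex_t *m)
  {
      atomic_flag_clear_explicit(&m->locked, memory_order_release);
  }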
...