On 9/15/2024 11:54 AM, Paavo Helde wrote:
> Took a look at that. It appears that to use Relacy I have to somehow translate my algorithm into the Relacy language, but this seems non-trivial. There is about zero documentation and zero code comments, and the examples do not compile with VS2022. Looks like using it would require me to grow some 20 extra IQ points and spend a significant amount of time.
>
>> I need to look at this when I get some time. Been very busy lately.

Humm... Perhaps, when you get some free time to burn, try to model it in Relacy and see what happens:
I am thinking of developing some lock-free data structures for better scaling on multi-core hardware and avoiding potential deadlocks. In particular, I have got a lot of classes which are mostly immutable after construction, except for some cached data members which are calculated on demand only, then stored in the object for later use.
>
Caching single numeric values is easy. However, some cached data is large and is accessed via std::shared_ptr-style refcounted smart pointers. Updating such a smart pointer in a thread-shared object is a bit more tricky. There is std::atomic<std::shared_ptr> in C++20, but I wonder if I can do a bit better by providing my own implementation which uses CAS on a single pointer (instead of DCAS with additional data fields or other trickery).
>
This is assuming that
>
a) the cached value will not change any more after being assigned, and will stay intact until the containing object is destroyed;
>
b) it's ok if multiple threads calculate the value at the same time; the first one stored will be the one which gets used.
>
My current prototype code is as follows (Ptr<T> is similar to std::shared_ptr<T>, but uses an internal atomic refcounter; an internal counter allows me to generate additional smart pointers from a raw pointer).
>
template<typename T>
class CachedAtomicPtr {
public:
    CachedAtomicPtr() : ptr_(nullptr) {}

    /// Store p in *this if *this is not yet assigned.
    /// Return the pointer stored in *this, which can be \a p or not.
    Ptr<T> AssignIfNull(Ptr<T> p) {
        const T* other = nullptr;
        // compare_exchange_strong, not _weak: without a retry loop the
        // weak form may fail spuriously even while ptr_ is still null,
        // which would return an empty Ptr here.
        if (ptr_.compare_exchange_strong(other, p.get(),
                std::memory_order_release, std::memory_order_acquire)) {
            p->IncrementRefcount();  // ptr_ now owns its own reference
            return p;
        } else {
            // Lost the race: wrap the winning pointer in an extra
            // smartptr (increments refcount).
            return Ptr<T>(other);
        }
    }

    /// Return the pointer stored in *this (may be null).
    Ptr<T> Load() const {
        return Ptr<T>(ptr_.load(std::memory_order_acquire));
    }

    ~CachedAtomicPtr() {
        if (const T* ptr = ptr_.load(std::memory_order_acquire)) {
            ptr->DecrementRefcount();
        }
    }

private:
    std::atomic<const T*> ptr_;
};
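(For context: Ptr<T> itself is not shown in the post. A minimal sketch of such an intrusive refcounted smart pointer might look like the following; RefCounted and its member names are hypothetical, modeled only on the IncrementRefcount()/DecrementRefcount() calls above.)

```cpp
#include <atomic>

// Hypothetical base class: the refcount lives inside the object itself,
// which is what allows constructing a new Ptr<T> from a bare raw pointer.
class RefCounted {
public:
    void IncrementRefcount() const { refs_.fetch_add(1, std::memory_order_relaxed); }
    void DecrementRefcount() const {
        // acq_rel so that the deleting thread sees all writes made
        // by threads that released their references earlier.
        if (refs_.fetch_sub(1, std::memory_order_acq_rel) == 1) delete this;
    }
protected:
    RefCounted() = default;
    virtual ~RefCounted() = default;
private:
    mutable std::atomic<int> refs_{0};
};

// Minimal intrusive smart pointer sketch (move support, comparisons
// etc. omitted for brevity).
template<typename T>
class Ptr {
public:
    Ptr() : p_(nullptr) {}
    explicit Ptr(const T* p) : p_(p) { if (p_) p_->IncrementRefcount(); }
    Ptr(const Ptr& o) : p_(o.p_) { if (p_) p_->IncrementRefcount(); }
    Ptr& operator=(const Ptr& o) { Ptr(o).swap(*this); return *this; }
    ~Ptr() { if (p_) p_->DecrementRefcount(); }
    const T* get() const { return p_; }
    const T* operator->() const { return p_; }
    explicit operator bool() const { return p_ != nullptr; }
    void swap(Ptr& o) { const T* t = p_; p_ = o.p_; o.p_ = t; }
private:
    const T* p_;
};
```

With a counter like this, AssignIfNull() can legitimately manufacture the extra reference for ptr_ after the CAS, and Load() can wrap the raw pointer without any external control block.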
>
Example usage:
>
/// Objects of this class are in shared use by multiple threads.
class A {
public:
    // Returns B corresponding to the value of *this.
    // If not yet in cache, B is calculated and cached in *this.
    // Calculating can happen in multiple threads in parallel;
    // the first cached result will be used in all threads.
    Ptr<B> GetOrCalcB() const {
        Ptr<B> b = cached_.Load();
        if (!b) {
            b = cached_.AssignIfNull(CalcB());
        }
        return b;
    }
    // ...
private:
    // Calculates the cached B object according to the value of *this.
    Ptr<B> CalcB() const;

    mutable CachedAtomicPtr<B> cached_;
    // ... own data ...
};
>
So, what do you think? Should I just use std::atomic<std::shared_ptr> instead? Any other suggestions? Did I get the memory order parameters right in the compare-exchange?
>
https://www.1024cores.net/home/relacy-race-detector/rrd-introduction
https://groups.google.com/g/relacy