On 9/17/2024 2:22 AM, Paavo Helde wrote:
Yes, that's the idea. The first thread which manages to install a non-null pointer will increase the refcount; the others will fail, and their objects will be released when their refcounts drop to zero.

On 17.09.2024 09:04, Chris M. Thomasson wrote:
Only one thread should ever get here, right? It just installed the pointer p.get() into ptr_, right?

On 9/16/2024 10:59 PM, Chris M. Thomasson wrote:
On 9/16/2024 10:54 PM, Paavo Helde wrote:
[...]
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(), std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
^^^^^^^^^^^^^^^^^^
Is Ptr<T> an intrusive reference count? I assume it is.
Yes. Otherwise I could not generate new smartpointers from bare T*.
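As a quick illustration of why that matters, here is a small sketch using the RefCountedBase and Ptr<T> classes from the listing below (Widget and IntrusiveDemo are made-up names, not part of the real code):

struct Widget : RefCountedBase {
    int value = 42;
};

void IntrusiveDemo() {
    Ptr<Widget> a(new Widget);    // embedded refcount becomes 1
    const Widget* raw = a.get();  // bare pointer, carries no ownership
    Ptr<Widget> b(raw);           // fine: bumps the embedded refcount to 2
    // a and b share the count stored inside the Widget itself. Doing the
    // same with a non-intrusive std::shared_ptr (constructing a second
    // owner from a raw pointer) would create a separate control block
    // and end in a double delete.
}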
FYI, here is my current full compilable code together with a test harness (no relacy, I could not get it working, so this just creates a number of threads which make use of the CachedAtomicPtr objects in parallel).
#include <cstddef>
#include <atomic>
#include <iostream>
#include <stdexcept>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>
#include <cstdlib>   // EXIT_SUCCESS / EXIT_FAILURE
/// debug instrumentation
std::atomic<int> gAcount = 0, gBcount = 0, gCASFailureCount = 0;
/// program exit code
std::atomic<int> exitCode = EXIT_SUCCESS;
void Assert(bool x) {
if (!x) {
throw std::logic_error("Assert failed");
}
}
class RefCountedBase {
public:
RefCountedBase(): refcount_(0) {}
RefCountedBase(const RefCountedBase&): refcount_(0) {}
RefCountedBase(RefCountedBase&&) = delete;
RefCountedBase& operator=(const RefCountedBase&) = delete;
RefCountedBase& operator=(RefCountedBase&&) = delete;
void Capture() const noexcept {
++refcount_;
}
void Release() const noexcept {
if (--refcount_ == 0) {
delete const_cast<RefCountedBase*>(this);
}
}
virtual ~RefCountedBase() {}
private:
mutable std::atomic<std::size_t> refcount_;
};
template<class T>
class Ptr {
public:
Ptr(): ptr_(nullptr) {}
explicit Ptr(const T* ptr): ptr_(ptr) { if (ptr_) { ptr_->Capture(); } }
Ptr(const Ptr& b): ptr_(b.ptr_) { if (ptr_) { ptr_->Capture(); } }
Ptr(Ptr&& b) noexcept: ptr_(b.ptr_) { b.ptr_ = nullptr; }
~Ptr() { if (ptr_) { ptr_->Release(); } }
Ptr& operator=(const Ptr& b) {
if (b.ptr_) { b.ptr_->Capture(); }
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
return *this;
}
Ptr& operator=(Ptr&& b) noexcept {
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
b.ptr_ = nullptr;
return *this;
}
const T* operator->() const { return ptr_; }
const T& operator*() const { return *ptr_; }
explicit operator bool() const { return ptr_!=nullptr; }
const T* get() const { return ptr_; }
private:
mutable const T* ptr_;
};
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_strong(other, p.get(), std::memory_order_release, std::memory_order_acquire)) {
p->Capture();
return p;
} else {
++gCASFailureCount;
return Ptr<T>(other);
}
}
Ptr<T> Load() const {
return Ptr<T>(ptr_);
}
private:
std::atomic<const T*> ptr_;
};
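The harness mentioned above boils down to a number of threads using the same CachedAtomicPtr in parallel; a minimal sketch of that idea (TestObject, the thread count and the final checks are illustrative assumptions, not the original harness) could be:

struct TestObject : RefCountedBase {
    explicit TestObject(int id): id_(id) { ++gAcount; }
    ~TestObject() override { --gAcount; }
    int id_;
};

int main() {
    CachedAtomicPtr<TestObject> cache;
    std::vector<std::thread> threads;
    for (int t = 0; t < 8; ++t) {
        threads.emplace_back([&cache, t]() {
            try {
                // Each thread races to install its own object; only one wins.
                Ptr<TestObject> mine(new TestObject(t));
                Ptr<TestObject> winner = cache.AssignIfNull(mine);
                // Every thread must see the same installed object afterwards.
                Assert(winner.get() == cache.Load().get());
            } catch (...) {
                exitCode = EXIT_FAILURE;
            }
        });
    }
    for (auto& th : threads) { th.join(); }
    // Losers' objects have been released; only the winner is still alive,
    // kept so by the reference the cache captured in AssignIfNull.
    Assert(gAcount.load() == 1);
    std::cout << "CAS failures: " << gCASFailureCount.load() << "\n";
    return exitCode;
}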
Now this is the crux of a potential issue. Strong thread safety allows a thread to take a reference even if it does not already own one; that is not allowed under basic thread safety.

So, for example, this scenario needs strong thread safety:
static atomic_ptr<foo> g_foo(nullptr);
thread_a()
{
g_foo = new foo();
}
thread_b()
{
local_ptr<foo> l_foo = g_foo;
if (l_foo) l_foo->bar();
}
thread_c()
{
g_foo = nullptr;
}
This example does not work with shared_ptr, but should work with atomic<shared_ptr>; it should even be lock-free on archs that support it. thread_b is taking a reference to g_foo when it does not already own a reference.
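In C++20 terms, that scenario could be written against std::atomic<std::shared_ptr> roughly like this (a sketch only; foo and bar() are just the placeholder names from the example above):

#include <atomic>
#include <memory>

struct foo {
    void bar() {}
};

std::atomic<std::shared_ptr<foo>> g_foo;

void thread_a() {
    g_foo.store(std::make_shared<foo>());
}

void thread_b() {
    // Takes a reference without already owning one: the "strong" operation
    // that plain shared_ptr copies cannot do safely.
    std::shared_ptr<foo> l_foo = g_foo.load();
    if (l_foo) l_foo->bar();
}

void thread_c() {
    g_foo.store(nullptr);
}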
This is achieved automatically because all threads access the A object via their own smartpointers, so A (and its contained CachedAtomicPtr) stays alive while there is any thread which can access it. In particular, it stays alive during the Load() calls, as the calling thread holds a smartpointer to A.

In my usage case I do not have thread_c(), because nobody is changing the pointer any more after it is set; it is kept alive while the CachedAtomicPtr object is alive. Load() will increment the refcounter, so the pointed-to B object will stay alive even if the CachedAtomicPtr is destroyed. My test harness is checking this scenario as well.
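Concretely, that pattern might look like the following, reusing the classes from the listing above (A, B, Worker and DemoUsage are illustrative stand-ins, not the real classes):

struct B : RefCountedBase {
    int payload = 0;
};

struct A : RefCountedBase {
    CachedAtomicPtr<B> cached;   // in the real code presumably filled in lazily
};

void Worker(Ptr<A> a) {
    // This thread's own Ptr<A> keeps A, and hence its CachedAtomicPtr,
    // alive for the duration of the Load() call.
    Ptr<B> b = a->cached.Load();
    if (b) { Assert(b->payload == 7); }
}

void DemoUsage() {
    A* rawA = new A;
    Ptr<A> a(rawA);                            // threads share A via copies of this
    B* rawB = new B;
    rawB->payload = 7;
    rawA->cached.AssignIfNull(Ptr<B>(rawB));   // install B via the non-const pointer
    std::thread t1(Worker, a);
    std::thread t2(Worker, a);
    t1.join();
    t2.join();
}   // A is deleted once the last Ptr<A> goes away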
So, basically, you would need your CachedAtomicPtr to stay alive. Its dtor should only be called after all threads that could potentially use it are joined, and the program is about to end. Or else, I think you are going to need strong thread safety for the CachedAtomicPtr::Load function to work in a general sense.

Thanks for the reply!
Just skimmed over it.