Subject: Re: smrproxy v2
From: jseigh_es00 (at) *nospam* xemaps.com (jseigh)
Newsgroups: comp.lang.c++
Date: 09 Dec 2024, 19:34:11
Organization: A noiseless patient Spider
Message-ID: <vj7d74$h1na$1@dont-email.me>
References: 1 2 3 4 5
User-Agent: Mozilla Thunderbird
On 11/27/24 10:29, jseigh wrote:
Some timings with 128 reader threads
            cpu time                     elapsed time
unsafe      52.983 nsecs (    0.000)      860.576 nsecs (     0.000)
smr         54.714 nsecs (    1.732)      882.356 nsecs (    21.780)
smrlite     53.149 nsecs (    0.166)      870.066 nsecs (     9.490)
arc        739.833 nsecs (  686.850)   11,988.289 nsecs (11,127.713)
rwlock   1,078.306 nsecs (1,025.323)   17,309.882 nsecs (16,449.306)
mutex    3,203.034 nsecs (3,150.052)   51,479.407 nsecs (50,618.831)
The first column is cpu time, the third column is elapsed time.
unsafe is reader access without any synchronization. The value in
parentheses is the time with the unsafe access time subtracted out,
to separate out the synchronization overhead. smrlite is the smr
proxy without the thread_local overhead, so smrproxy lock/unlock
by itself is about 0.1 - 0.2 nanoseconds.
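For reference, the reader-side pattern being timed is roughly the
following sketch; the Proxy type and its lock()/unlock() members are
placeholders for illustration, not the actual smrproxy API:

  #include <atomic>

  struct Node { int value; };
  std::atomic<Node*> shared_node{nullptr};

  // "unsafe" row: plain read, no reader-side synchronization
  int read_unsafe() {
      Node* p = shared_node.load(std::memory_order_acquire);
      return p ? p->value : 0;
  }

  // "smr"/"smrlite" rows: the same read bracketed by the proxy's reader
  // lock/unlock; the parenthesized deltas above are the cost of this
  // bracketing alone.
  template<class Proxy>
  int read_with_proxy(Proxy& proxy) {
      proxy.lock();
      Node* p = shared_node.load(std::memory_order_acquire);
      int v = p ? p->value : 0;
      proxy.unlock();
      return v;
  }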
I'm going to drop working on the whole proxy interface thing. The
application can decide whether it wants to hardcode a dependency on a
particular 3rd-party library implementation or abstract it out
into a more portable API.
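For what it's worth, one way an application could keep that choice in
a single place is a small guard alias of its own. This is only a
sketch; the header and type names used under USE_SMRPROXY are
assumptions, not smrproxy's real interface:

  #if defined(USE_SMRPROXY)
    #include <smrproxy.hpp>                    // hypothetical header name
    using reader_guard = smrproxy::reader_lock;   // hypothetical RAII type
  #else
    #include <shared_mutex>
    // Portable fallback: a process-wide shared_mutex taken in shared mode.
    struct reader_guard {
        static inline std::shared_mutex m;
        reader_guard()  { m.lock_shared(); }
        ~reader_guard() { m.unlock_shared(); }
    };
  #endif

  void reader_path() {
      reader_guard g;   // the rest of the code never names the library
      // ... read shared data ...
  }

Hardcoding the dependency just means using the library's own guard
type directly instead of the alias.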
I figured out where the smr vs smrlite overhead is likely coming from:
1) the thread_local load, about 0.3 nsecs, done twice for lock/unlock,
so about 0.6 nsecs.
2) overhead from lazy initialization, about 0.6 nsecs.
smrlite most of the time doesn't show any measurable overhead,
0 nsecs.
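A rough sketch of where those two costs sit on the lock/unlock path;
the names are made up for illustration and aren't smrproxy's internals:

  struct ReaderRec { /* per-thread reader state */ };

  ReaderRec* register_reader() {     // stand-in for real thread registration
      return new ReaderRec{};
  }

  thread_local ReaderRec* tl_rec = nullptr;

  inline ReaderRec* reader_rec() {
      ReaderRec* r = tl_rec;         // ~0.3 nsec thread_local load, done once
                                     // for lock and once for unlock
      if (r == nullptr) {            // lazy-initialization branch, item 2 above
          r = register_reader();     // taken only on a thread's first access
          tl_rec = r;
      }
      return r;
  }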
Theoretically, you could do lazy initialization with zero runtime
overhead, but for most C++ apps 1 millisecond is considered fast,
so I don't think there would be much interest in it.
Joe Seigh