Subject: Named pipes vs. Unix sockets (was: Re: Are We Back to the "Wars" Now ?)
From: vallor (at) *nospam* cultnix.org (vallor)
Newsgroups: comp.os.linux.misc
Date: 22 Nov 2024, 08:29:16
Message-ID: <lqaq6cF8btnU3@mid.individual.net>
User-Agent: Pan/0.161 (Hmm2; be402cc9; Linux-6.12.0)
On 22 Nov 2024 07:02:47 GMT, vallor <vallor@cultnix.org> wrote in
<lqaoknF8btnU2@mid.individual.net>:
On Fri, 22 Nov 2024 06:37:06 -0000 (UTC), Lawrence D'Oliveiro
<ldo@nz.invalid> wrote in <vhp8qi$12m83$2@dont-email.me>:
On 22 Nov 2024 06:09:05 GMT, vallor wrote:
Doesn't the named pipe connection work through the filesystem code? That
could add overhead.
No. The only thing that exists in the filesystem is the “special file”
entry in the directory. Opening that triggers special-case processing in
the kernel that creates the usual pipe buffering/synchronization
structures (or links up with existing structures created by some prior
opening of the same special file, perhaps by a different process), not
dependent on any filesystem.
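
(To make that concrete: a minimal sketch -- the path and message are just
illustrative -- of two processes meeting through a FIFO. The name lives in
whatever filesystem holds it; the data goes through ordinary in-kernel pipe
buffers.)

/* Sketch: the FIFO "special file" only anchors a name in a directory;
 * once both ends are open, reads and writes use the kernel's pipe
 * machinery, independent of the filesystem holding the name. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";   /* illustrative path */
    mkfifo(path, 0600);                    /* creates only the directory entry */

    if (fork() == 0) {                     /* child: reader */
        char buf[64] = {0};
        int fd = open(path, O_RDONLY);     /* blocks until a writer opens */
        read(fd, buf, sizeof buf - 1);
        printf("reader got: %s", buf);
        close(fd);
        _exit(0);
    }

    int fd = open(path, O_WRONLY);         /* parent: writer */
    const char *msg = "hello through the pipe\n";
    write(fd, msg, strlen(msg));
    close(fd);
    wait(NULL);
    unlink(path);                          /* removes the name; nothing else to clean up */
    return 0;
}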
I just tried creating a C program to do speed tests on data transfers
through pipes and socket pairs between processes. I am currently setting
the counter to 10 gigabytes, and transferring that amount of data (using
whichever mechanism) only takes a couple of seconds on my system.

So the idea that pipes are somehow not suited to large data transfers is
patently nonsense.
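
(The program itself isn't shown anywhere in the thread; a rough sketch of
that kind of pipe-vs-socketpair timing test -- the 64 KiB buffer, the
fork() layout and the exact byte count below are my guesses, not the
actual code -- would be along these lines.)

/* Rough sketch of a pipe vs. socketpair throughput test; not the actual
 * program described above.  TOTAL and BUFSZ are arbitrary choices. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define TOTAL (10L * 1024 * 1024 * 1024)   /* ~10 GB, matching the post */
#define BUFSZ (64 * 1024)

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void bench(const char *name, int rfd, int wfd)
{
    static char buf[BUFSZ];

    if (fork() == 0) {                     /* child: drain the read end */
        close(wfd);
        while (read(rfd, buf, sizeof buf) > 0)
            ;
        _exit(0);
    }
    close(rfd);                            /* parent: writer */

    double t0 = now();
    for (long sent = 0; sent < TOTAL; )
        sent += write(wfd, buf, sizeof buf);
    close(wfd);                            /* reader sees EOF and exits */
    wait(NULL);
    double dt = now() - t0;
    printf("%-10s %6.2f s  %5.2f GB/s\n", name, dt, TOTAL / dt / 1e9);
}

int main(void)
{
    int p[2], s[2];

    pipe(p);
    bench("pipe", p[0], p[1]);

    socketpair(AF_UNIX, SOCK_STREAM, 0, s);
    bench("socketpair", s[0], s[1]);
    return 0;
}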
Can't use named pipes on just any filesystem -- won't work on NFS for
example, unless I'm mistaken.
Hard to believe NFS could stuff that up, but there you go ...
Just tested NFS, and named pipes work there ("test" below is a FIFO made
with mkfifo on the NFS mount):

$ time -p ( dd if=/dev/zero of=test count=$[1024*1024] ) & cat test > /dev/null
[1] 38859
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 0.918945 s, 584 MB/s
real 0.92
user 0.16
sys 0.76
NFS vers 4.1.
$ nc -l -U -N /tmp/socket > /dev/null & time -p ( dd if=/dev/zero count=$[1024*1024*2] | nc -U -N /tmp/socket )
[1] 40284
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.03617 s, 527 MB/s
real 2.03
user 0.47
sys 3.60
[1]+ Done nc -l -U -N /tmp/socket > /dev/null
However, the speed in my examples appears to be limited by dd's small
default block size -- setting a block size large enough to fill the pipe
or socket buffer seems to increase throughput:
$ nc -l -U -N /tmp/socket > /dev/null & time -p ( dd if=/dev/zero count=$[1024*1024*4] bs=1024 | nc -U -N /tmp/socket > /dev/null )
[1] 41764
4194304+0 records in
4194304+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 4.02026 s, 1.1 GB/s
real 4.02
user 0.89
sys 7.11
$ time -p ( dd if=/dev/zero of=test count=$[1024*1024*4] bs=$[8*512]) & cat test > /dev/null
[1] 41282
4194304+0 records in
4194304+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 4.43357 s, 3.9 GB/s
real 4.43
user 0.54
sys 3.88
$ ulimit -p
8
(pipe size, in 512-byte blocks -- hence the bs=$[8*512] above)
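
(For what it's worth, those 8 blocks -- 4096 bytes -- match PIPE_BUF, the
atomic-write limit, rather than the pipe's capacity; on Linux the capacity
defaults to 64 KiB and can be queried, and resized, per pipe with fcntl().
A quick sketch:)

/* Sketch: compare the PIPE_BUF figure ulimit reports against the actual
 * pipe capacity, using the Linux-specific F_GETPIPE_SZ/F_SETPIPE_SZ. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int p[2];
    pipe(p);
    printf("PIPE_BUF:      %d\n", PIPE_BUF);                   /* 4096 on Linux */
    printf("pipe capacity: %d\n", fcntl(p[0], F_GETPIPE_SZ));  /* 65536 by default */
    fcntl(p[1], F_SETPIPE_SZ, 1 << 20);                        /* ask for 1 MiB */
    printf("resized to:    %d\n", fcntl(p[0], F_GETPIPE_SZ));
    return 0;
}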
(Now I'm off to find out the MTU for Unix sockets...)
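
(As far as I can tell, AF_UNIX stream sockets don't have an MTU as such;
about the closest number to look at is the socket buffer size, e.g.:)

/* Sketch: read the default send-buffer size of a Unix-domain socket.
 * There is no MTU proper for AF_UNIX stream sockets; SO_SNDBUF is the
 * knob that bounds how much data a writer can queue. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, s);

    int sndbuf = 0;
    socklen_t len = sizeof sndbuf;
    getsockopt(s[0], SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    printf("SO_SNDBUF: %d bytes\n", sndbuf);   /* kernel default, often 212992 */

    close(s[0]);
    close(s[1]);
    return 0;
}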
-- 
-v
System76 Thelio Mega v1.1 x86_64
NVIDIA RTX 3090 Ti
OS: Linux 6.12.0 Release: Mint 21.3 Mem: 258G
"This is a job for.. AACK! WAAUGHHH!! ...someone else." - Calvin