On 5/30/2024 3:31 PM, Michael S wrote:
> On Thu, 30 May 2024 14:04:39 -0400
> Paul <nospam@needed.invalid> wrote:
>
>> WSL Ubuntu20.04 version 2
>
> Are you sure that you tested WSL, not WSL-2?
> Your results look very much like WSL-2.
> Your explanations sound very much as if you are talking about WSL-2.
> My WSL testing results are opposite from yours - read speed identical,
> write speed consistently faster when writing to /mnt/d/... than when
> writing to WSL's native FS.
> Part of the reason could be that SSD D: is physically faster than SSD
> C: that hosts WSL. I should have tested with /mnt/c as well, but
> forgot to do it.
I can't test WSL, because it won't start. It throws an error.
I used what I had.
I am specifically trying to test on the
box with the NVMe in it (to eliminate slower devices from the
picture). I only own one NVMe and one slot to load it.
*******
As for your general problem, you can easily malloc a buffer
for the entire file, and process the table as stored in RAM.
That should help eliminate your variable file system overhead
when benching.
That's not scalable for general usage, but during the benchmarking
and fast-prototyping stage, you might test with it. That way, when
moving the executable around, the filesystem component is removed.
Or, the filesystem component can be timestamped if you want.
I just send timestamps to stderr so they won't interfere with stdout.
*******
I just had a thought. If I use "df" in WSL2, the root filesystem (/)
almost looks like it is on a tmpfs (RAM). That could be why I got 2GB/sec.
Check in your WSL environment: using "df", look for evidence of
how the file systems were set up there.
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
none 32904160 960 32903200 1% /run
none 32904160 0 32904160 0% /run/lock
none 32904160 0 32904160 0% /run/shm
tmpfs 32904160 0 32904160 0% /sys/fs/cgroup
...
C:\ 124493820 60595608 63898212 49% /mnt/c
$ top
top - 15:15:45 up 5 min, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 45 total, 1 running, 44 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 64265.9 total, 63170.6 free, 631.5 used, 463.8 buff/cache
MiB Swap: 16384.0 total, 16384.0 free, 0.0 used. 63035.9 avail Mem
Like a LiveDVD, the tmpfs can use up to half of the available RAM.
It behaves the same way when you boot a LiveDVD.
Your WSL instance could show quite a different set of mounts in "df".
Paul