Subject : Re: Cost of handling misaligned access
From : cr88192 (at) *nospam* gmail.com (BGB)
Newsgroups : comp.arch
Date : 07 Feb 2025, 00:34:27
Organization : A noiseless patient Spider
Message-ID : <vo3gu6$3617l$1@dont-email.me>
User-Agent : Mozilla Thunderbird
On 2/6/2025 2:36 PM, Terje Mathisen wrote:
Michael S wrote:
On Thu, 6 Feb 2025 17:47:30 +0100
Terje Mathisen <terje.mathisen@tmsw.no> wrote:
>
Terje Mathisen wrote:
Michael S wrote:
The point of my proposal is not reduction of loop overhead and not
reduction of the # of x86 instructions (in fact, with my proposal
the # of x86 instructions is increased), but a reduction in the # of
uOps due to reuse of loaded values.
The theory behind it is that, in code with very high IPC like the
one above, the main bottleneck is most typically the # of uOps that
flow through the rename stage.
>
Aha! I see what you mean: Yes, this would be better if the
>
 VPAND reg,reg,[mem]
>
instructions actually took more than one cycle each, but as the
arrays are just 1000 bytes each (250 keys + 250 locks), everything
fits easily in $L1. (BTW, I did try to add 6 dummy keys and locks
just to avoid any loop-end overhead, but that actually ran slower.)
>
I've just tested it by running either 2 or 4 locks in parallel in the
inner loop: The fastest time I saw actually did drop a smidgen, from
5800 ns to 5700 ns (for both 2 and 4 wide), with 100 ns being the
timing resolution I get from the Rust run_benchmark() function.
>
So yes, it is slightly better to run a stripe instead of just a
single row in each outer loop.
>
Terje
>
>
Assuming that your CPU is new and runs at a decent frequency (4-4.5
GHz), the results are 2-3 times slower than expected. I would guess
that this happens because there are too few iterations in the inner
loop. Turning the unrolling upside down, as I suggested in the
previous post, should fix it.
Very easy to do in C with intrinsics. Probably not easy in Rust.
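(For concreteness, a minimal sketch of that inverted unrolling in C
with AVX2 intrinsics and gcc/clang builtins. It assumes the locks
and keys are stored as 32-bit masks, matching the ~1000-byte arrays
for 250 entries above, padded to a multiple of 8 (eg, 250+6=256)
with dummies that can never fit; count_fits and the exact layout are
illustrative, not the code actually benchmarked:)

  #include <immintrin.h>
  #include <stdint.h>

  /* A pair fits when (lock & key) == 0.  Each 8-key vector is
     loaded once and reused for a stripe of 4 locks, so the load
     uOp is amortized over four AND+compare pairs. */
  static int count_fits(const uint32_t *locks, const uint32_t *keys,
                        int n)
  {
      const __m256i zero = _mm256_setzero_si256();
      int total = 0;
      for (int i = 0; i < n; i += 4) {      /* stripe of 4 locks */
          __m256i l0 = _mm256_set1_epi32(locks[i + 0]);
          __m256i l1 = _mm256_set1_epi32(locks[i + 1]);
          __m256i l2 = _mm256_set1_epi32(locks[i + 2]);
          __m256i l3 = _mm256_set1_epi32(locks[i + 3]);
          for (int j = 0; j < n; j += 8) {  /* 8 keys per vector */
              __m256i k =
                  _mm256_loadu_si256((const __m256i *)(keys + j));
              /* one load now feeds four ANDs */
              __m256i f0 = _mm256_cmpeq_epi32(_mm256_and_si256(l0, k), zero);
              __m256i f1 = _mm256_cmpeq_epi32(_mm256_and_si256(l1, k), zero);
              __m256i f2 = _mm256_cmpeq_epi32(_mm256_and_si256(l2, k), zero);
              __m256i f3 = _mm256_cmpeq_epi32(_mm256_and_si256(l3, k), zero);
              /* count the lanes where the AND came out zero */
              total += __builtin_popcount(
                  _mm256_movemask_ps(_mm256_castsi256_ps(f0)));
              total += __builtin_popcount(
                  _mm256_movemask_ps(_mm256_castsi256_ps(f1)));
              total += __builtin_popcount(
                  _mm256_movemask_ps(_mm256_castsi256_ps(f2)));
              total += __builtin_popcount(
                  _mm256_movemask_ps(_mm256_castsi256_ps(f3)));
          }
      }
      return total;
  }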
I did mention that this is a (cheap) laptop? It is about 15 months old, with a base frequency of 2.676 GHz. I guess that would explain most of the difference between what I see and what you expected?
BTW, when I timed 1000 calls to that 5-6 us program, to get around the 100 ns timer resolution, each iteration ran in 5.23 us.
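(As an aside, that batching trick is roughly the following in C;
solve() is a hypothetical stand-in for the ~5 us benchmark body, not
the actual Rust run_benchmark() harness:)

  #include <stdio.h>
  #include <time.h>

  extern int solve(void);  /* hypothetical benchmark body, ~5 us */

  int main(void)
  {
      enum { N = 1000 };
      struct timespec t0, t1;
      volatile int sink = 0;  /* keep calls from being optimized out */
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (int i = 0; i < N; i++)
          sink += solve();
      clock_gettime(CLOCK_MONOTONIC, &t1);
      double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                + (t1.tv_nsec - t0.tv_nsec);
      /* the ~100 ns timer granularity is spread across N calls */
      printf("%.2f us per call (sink=%d)\n", ns / N / 1000.0, sink);
      return 0;
  }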
FWIW: The idea of running a CPU at 4+ GHz seems a bit much (IME, CPUs tend to run excessively hot at these kinds of clock speeds; 3.2 to 3.6 GHz seeming more reasonable, so that the chip "doesn't melt" or run into thermal throttling or stability issues).
But, then again, I guess "modern" is relative, and most of the PC hardware I end up buying tends to be roughly "2 generations" behind, mostly because, as in this case, it is significantly cheaper (actually-new hardware tending to be a lot more expensive).
Can note though that on my PC, enabling AVX in the compiler (where it actually gets used in the program) tends to put a significant hurt on performance, so it is better left disabled (the CPU is new enough to support AVX, but apparently does not do the 256-bit operations natively, still using 128-bit SIMD internally).
Well, and the slight wonk that it can accept 112GB of RAM, but as soon as I try to put in a full 128GB it boot-loops a few times and then concludes that there is only 4GB (not entirely sure of the MOBO chipset; don't have the box around anymore, and it isn't clearly listed anywhere; can note that the BIOS date is from 2018, seemingly the newest version supported).
A lot of the "less modern" PC hardware around here is XP and Vista era (eg, 2002-2009), this being the era of hardware that most readily appears (sometimes there being a slight value-add for stuff old enough to still have a parallel port and a 3.5" FDD; PATA support sometimes also useful, ...).
Still not crossed over into the world of newfangled M.2 SSDs...
My PC has a SATA SSD for the OS, but mostly uses 5400 RPM HDDs for the other drives: 1TB 7200 RPM drives (WD Black, *1) mostly hold the pagefiles and similar, and two larger 4TB and 6TB WD Red drives are used for copying large files (generally around 75-100 MB/sec).
Say, 112GB RAM + 400GB swap.
*1: WD seemingly uses a color scheme:
  Black: 7200 RPM speed-oriented drives.
    Usually lower capacity (eg, 1TB).
    2x 1TB drives: WD1003FZEX, WD1002FAEX.
    Drives get ~150 MB/sec or so, both CMR.
    As noted, the pagefile is on these drives.
  Red: 5400 RPM NAS-oriented drives.
    4TB drive: CMR (WD40EFRX), mostly for file storage.
    6TB drive: SMR (WD60EFAX), mostly for bulk files.
  Blue: 5400/7200 RPM end-user oriented drives.
    May be bigger and/or cheaper, but typically use SMR.
    No Blue drives ATM.
A smaller pagefile still exists on the SSD, but mostly because Windows is unhappy if there is no pagefile on 'C'. Don't generally want a pagefile on an SSD though, as it is worse for the drive's lifespan (this one is 8GB, which Windows accepts; with around 192GB on each of the other drives, for ~400GB of swap space total).
Not sure how well Windows load-balances swap; apparently not very well (when it starts paging, most of the load seems to land on one drive; it would be better if it spread the load more evenly).
The SSD seems to get ~ 300 MB/sec.
...
Terje