BGB-Alt wrote: There was a reboot, it became BJX2.
On 4/10/2024 12:12 PM, MitchAlsup1 wrote:
> BGB wrote:
>> On 4/9/2024 7:28 PM, MitchAlsup1 wrote:
>>> BGB-Alt wrote:
> Also the blob of constants needed to be within 512 bytes of the load instruction, which was also kind of an evil mess for branch handling (and extra bad if one needed to spill the constants in the middle of a basic block and then branch over it).
In My 66000 case, the constant is the word following the instruction.
Easy to find, easy to access, no register pollution, no DCache pollution.
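As a rough illustration of the scheme being described (constant fetched from the instruction stream itself, so it rides through the ICache with the opcode and needs no register or DCache access), here is a minimal decoder sketch in C. The field layout, the "has-immediate" bit, and the names are invented for illustration, not the actual My 66000 encoding:

```c
#include <stdint.h>

typedef struct {
    uint32_t opcode;  /* the instruction word itself */
    uint64_t imm;     /* constant taken from the following word(s) */
    unsigned len;     /* total instruction length in 32-bit words */
} decoded_t;

/* Decode one instruction whose 64-bit immediate, when present, is simply
 * the two words following it in the instruction stream. */
static decoded_t decode_with_trailing_const(const uint32_t *istream)
{
    decoded_t d = { istream[0], 0, 1 };
    if (d.opcode & 0x80000000u) {   /* hypothetical "has 64-bit imm" bit */
        d.imm = (uint64_t)istream[1] | ((uint64_t)istream[2] << 32);
        d.len = 3;                   /* opcode word + two constant words */
    }
    return d;
}
```

The fetch unit already has the following words in hand, which is why this costs no extra memory traffic beyond the (longer) instruction itself.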
> Yeah. This was why some of the first things I did when I started extending SH-4 were:
>   Adding mechanisms to build constants inline;
>   Adding Load/Store ops with a displacement (albeit with encodings borrowed from SH-2A);
>   Adding 3R and 3RI encodings (originally Imm8 for 3RI).

My suggestion is that:: "Now that you have screwed around for a while, why not take that experience and do a new ISA without any of those mistakes in it" ??
> For the most part, BJX2 is using 20-bit branches for 32-bit ops. Did have a mess when I later extended the ISA to 32 GPRs, as (like with BJX2 Baseline+XGPR) only part of the ISA had access to R16..R31. Usually the constants were spilled between basic blocks, with the basic block needing to branch to the following basic block in these cases. Also, 8-bit branch displacements are kinda lame, ...
Why do that to yourself ??
> I didn't design SuperH, Hitachi did...

But you did not fix them en masse, and you complain about them at least once a week. There comes a time when it takes less time and less courage to do that big switch and clean up all that mess.
> The above was for SuperH; this sort of thing is N/A for BJX2. But, with BJX1, I had added Disp16 branches. With BJX2, these were replaced with 20-bit branches, which have the merit of being able to branch anywhere within a Doom- or Quake-sized binary. And, if one wanted a 16-bit branch:
>   MOV.W (PC, 4), R0     // load a 16-bit branch displacement
>   BRA/F R0
> .L0:
>   NOP                   // delay slot
>   .WORD $(Label - .L0)
>
> Also kinda bad...
Can you say Yech !!
> Yeah.

Maybe consider now as the appropriate time to start.
> This sort of stuff created strong incentive for ISA redesign...
> At this point, I suspect the main issue for me not (entirely) beating RV64G is mostly compiler issues... Granted, it is probable that had I started with RISC-V instead of SuperH, BJX2 wouldn't exist. Though, at the time, the original thinking was that SuperH having smaller instructions meant it would have better code density than RV32I or similar. Turns out not really, as the penalty of the 16-bit ops was needing almost twice as many on average.

My 66000 only requires 70% the instruction count of RISC-V,
Yours could too ................
> Not so much in my case. Things like memcpy/memmove/memset/etc. are function calls in cases when not directly transformed into register load/store sequences.
My 66000 does not convert them into LD-ST sequences; MM is a single instruction.
> I have no high-level memory move/copy/set instructions.
> Only loads/stores...
You have the power to fix it.........
> But, at what cost...

You would not have to spend hours a week defending the indefensible !!
> I had generally avoided anything that would have required microcode or shoving state machines into the pipeline or similar.

Things as simple as IDIV and FDIV require sequencers.
But LDM, STM, MM require sequencers simpler than IDIV and FDIV !!
> Possibly. Things like Load/Store-Multiple or...

If you like polluted ICaches..............
> For small copies, one can encode them inline, but past a certain size this becomes too bulky. A copy loop makes more sense for bigger copies, but has a high overhead for small-to-medium copies.
> So, there is a size range where doing it inline would be too bulky, but a loop carries an undesirable level of overhead.
All the more reason to put it (a highly useful unit of work) into an
instruction.
> This is an area where "slides" work well; the main cost is mostly the bulk that the slide adds to the binary (albeit, it is one-off).

Consider that the predictor getting into the slide the first time always mispredicts !!
> Two strategies: ...
> Which is why it is a 512B memcpy slide vs., say, a 4kB memcpy slide...

What if you only wanted to copy 63 bytes ?? Your DW slide fails miserably,
yet a HW sequencer only has to avoid asserting a single byte write enable
once.
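For readers unfamiliar with the "slide" idea being argued over: it is an unrolled run of fixed-width copies with multiple entry points, entered at the point matching the remaining length, and (per the description later in the thread) working from the high address downward. A minimal software analogue in C, using switch fall-through in the Duff's-device style to stand in for branching into the slide (the function name and the 8-word size are illustrative, not BGB's actual 512B slide):

```c
#include <stddef.h>
#include <stdint.h>

/* Copy nwords 64-bit words (nwords <= 8 for this toy slide).
 * Entering at case N runs the tail of the unrolled sequence,
 * copying words N-1 down to 0 -- i.e., high address first. */
static void memcpy_slide_u64(uint64_t *dst, const uint64_t *src,
                             size_t nwords)
{
    switch (nwords) {             /* "branch into the slide" */
    case 8: dst[7] = src[7];      /* fall through */
    case 7: dst[6] = src[6];      /* fall through */
    case 6: dst[5] = src[5];      /* fall through */
    case 5: dst[4] = src[4];      /* fall through */
    case 4: dst[3] = src[3];      /* fall through */
    case 3: dst[2] = src[2];      /* fall through */
    case 2: dst[1] = src[1];      /* fall through */
    case 1: dst[0] = src[0];      /* fall through */
    case 0: break;
    }
}
```

This also makes Mitch's 63-byte objection concrete: the slide's granularity is the word width, so a length that is not a multiple of it needs separate tail handling before entering the slide, whereas a hardware sequencer can just mask one byte enable.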
> Possible. For looping memcpy, it makes sense to copy 64 or 128 bytes per loop iteration or so, to try to limit looping overhead.

On low-end machines, you want to operate at cache port width;
on high-end machines, you want to operate at cache-line widths per port.
This is essentially impossible using slides... here, the same code is
not optimal across a line of implementations.
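The 64-bytes-per-iteration shape mentioned above looks roughly like the following sketch (names illustrative; a tuned version would match the width to the target's cache port, which is exactly the portability problem being pointed out):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Looping memcpy: move 64 bytes per iteration to amortize the loop
 * overhead, then finish the 0..63-byte tail byte-wise. */
static void memcpy_loop64(uint8_t *dst, const uint8_t *src, size_t len)
{
    while (len >= 64) {
        for (int i = 0; i < 8; i++) {   /* 8 x 8-byte moves per iteration */
            uint64_t w;
            memcpy(&w, src + 8 * i, 8); /* alias-safe 8-byte load */
            memcpy(dst + 8 * i, &w, 8); /* 8-byte store */
        }
        dst += 64; src += 64; len -= 64;
    }
    while (len--)                       /* tail */
        *dst++ = *src++;
}
```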
> ?... Though, leveraging the memcpy slide for the interior part of the copy could be possible in theory as well.

What do you do when the SATA drive wants to write a whole page ??
> I will assume it is probably a bit more than this, given there is not currently any sort of mechanism that does anything similar.
>
> For LZ memcpy, it is typically smaller, as LZ copies tend to be a lot shorter (a big part of LZ decoder performance mostly being in fine-tuning the logic for the match copies). Though, this is part of why my runtime library had added "_memlzcpy(dst, src, len)" and "_memlzcpyf(dst, src, len)" functions, which can consolidate this rather than needing to do it one-off for each LZ decoder (as I see it, it is a similar issue to not wanting code to endlessly re-roll stuff for functions like memcpy or malloc/free, *).
>
> *: Though, never mind that the standard C interface for malloc is annoyingly minimal, and ends up requiring most non-trivial programs to roll their own memory management.
>
> Ended up doing these with "slides", which end up eating roughly several kB of code space, but was more compact than using larger inline copies.
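For context on why LZ match copies need a dedicated helper (the role the "_memlzcpy" functions above play, whatever their actual internals): the match source may overlap the destination, and a strict low-to-high byte copy then replicates the pattern, something plain memcpy() is not required to do. A minimal sketch:

```c
#include <stddef.h>

/* Copy an LZ match: 'len' bytes starting 'dist' bytes behind dst.
 * When dist < len the regions overlap, and the strict forward byte
 * order deliberately re-reads bytes just written, replicating the
 * pattern (e.g. dist == 1 becomes a run fill). */
static void lz_match_copy(unsigned char *dst, size_t dist, size_t len)
{
    const unsigned char *src = dst - dist;
    while (len--)
        *dst++ = *src++;
}
```

A tuned version would switch to wider chunked copies once the gap between src and dst is large enough, which is presumably where the fine-tuning mentioned above goes.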
> Say (IIRC):
>   128 bytes or less: inline Ld/St sequence
>   129 bytes to 512 bytes: slide
>   over 512 bytes: call "memcpy()" or similar
Versus::
1-infinity: use MM instruction.
> Yeah, but it makes the CPU logic more expensive.

By what, 37 gates ??
> Currently there is no DMA, only polling IO. The slide generally has entry points at multiples of 32 bytes, and operates in reverse order. So, if the length is not a multiple of 32 bytes, the last bytes need to be handled externally prior to branching into the slide.
Does this remain sequentially consistent ??
> Within a thread, it is fine.

What if a SATA drive is reading while you are writing !!
That is, DMA is no different than multi-threaded applications--except
DMA cannot perform locks.
> AFAIK, there is no particular requirement for which direction "memcpy()" goes. Main wonk is that it does start copying from the high address first.

The only things wanting high-to-low access patterns are dumping stuff to the stack. The fact you CAN get away with it most of the time is no excuse.
> Presumably interrupts or similar won't be messing with application memory mid-memcpy.
Granted.

> The looping memcpys generally work from low to high addresses, though.

As does all string processing.