I haven't really understood how it could be implemented.

The register file in Q+ is huge, one of the drawbacks of supporting vectors. There were 1024 physical registers for support; I reduced it to 512, and that may still be too many. There was a 4kb-wide mapping RAM, resulting in a warning message. I may have to split components up into multiple copies to get the desired size to work.
But, granted, my pipeline design is relatively simplistic, and my priority had usually been trying to make a "fast but cheap and simple" pipeline, rather than a "clever" pipeline.
Still not as cheap or simple as I would want.
Qupls has RISC-V style vector / SIMD registers. For Q+ every instruction can be a vector instruction, as there are bits in the instruction indicating which registers are vector registers. All the scalar instructions become vector ones, which cuts down on some of the bloat in the ISA. There is only a handful of vector-specific instructions (about eight, I think). The drawback is that the ISA is 48 bits wide. However, the code bloat is less than 50% as some instructions have dual operations. Branches can increment or decrement and loop. Bigfoot uses a postfix word to indicate the vector form of the instruction. Bigfoot's code density is a lot better since it is variable length, but I suspect it will not run as fast. Bigfoot and Q+ share a lot of the same code; I am trying to make the guts of the cores generic.

In my case, the core ended up generic enough that it can support both BJX2 and RISC-V. It could almost make sense to lean more heavily into this (trying to consolidate more things and better optimize costs).
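As a rough illustration of the per-register vector-select idea (the field positions and widths below are invented for the sketch, not the actual Q+ 48-bit layout), decode might look something like this in C:

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical 48-bit instruction held in the low bits of a uint64_t.
     All field positions here are made up for illustration only. */
  typedef struct {
      uint8_t opcode;                    /* 7-bit opcode */
      uint8_t rd, rs1, rs2;
      bool    rd_vec, rs1_vec, rs2_vec;  /* per-register vector bits */
  } decoded;

  static decoded decode48(uint64_t ir)
  {
      decoded d;
      d.opcode  = (uint8_t)(ir & 0x7f);
      d.rd      = (uint8_t)((ir >>  7) & 0x3f);
      d.rs1     = (uint8_t)((ir >> 13) & 0x3f);
      d.rs2     = (uint8_t)((ir >> 19) & 0x3f);
      d.rd_vec  = (ir >> 25) & 1;   /* same scalar opcode becomes a vector op */
      d.rs1_vec = (ir >> 26) & 1;
      d.rs2_vec = (ir >> 27) & 1;
      return d;
  }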
Did also recently get around to more-or-less implementing support for the 'C' extension, even if it is kinda dog-chewed and does not efficiently utilize the encoding space.
It burns a lot of encoding space on 6- and 8-bit immediate fields (with 11-bit branch displacements), more 5-bit register fields than ideal, ... so, it has relatively few unique instructions, but:
Many of the instructions it does have are left with 3-bit register fields;
Has way too many immediate-field layouts, as it just sort of shoehorns immediate fields into whatever bits are left.
Though, turns out I could skip a few things due to them being N/E in RV64 (RV32, RV64, and RV128 get a slightly different selection of ops in the C extension).
Like, many things in RV land make "annoying and kinda poor" design choices.
Then again, if one assumes that the role of 'C' is mostly:
Does SP-relative loads/stores and MOV-RR.
Well, it does do this at least...
Never mind if you want to use any of the ALU ops (besides ADD), or non-stack-relative Load/Store; well then, enjoy the 3-bit register fields.
And, still way too many immediate-field encodings for what is effectively load/store and a few ALU ops.
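To make the 3-bit register field point concrete: those compressed fields can only name x8..x15, so operands outside that range fall back to the full 32-bit forms. A trivial check, just for illustration:

  #include <stdbool.h>

  /* RVC 3-bit register fields can only encode x8..x15. */
  static bool rvc_reg3_ok(unsigned r)
  {
      return r >= 8 && r <= 15;
  }

  /* Sketch: does an op like AND/OR/XOR/SUB fit a compressed CA-format
     encoding (destination must equal the first source)? */
  static bool fits_ca_format(unsigned rd, unsigned rs1, unsigned rs2)
  {
      return rd == rs1 && rvc_reg3_ok(rd) && rvc_reg3_ok(rs2);
  }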
I am not as much a fan of RISC-V's 'V' extension mostly in that it would require essentially doubling the size of the register file.
And, if I were to do something like 'V' I would likely do some things differently:

Q+ is set up almost that way. It uses 48-bit instructions. There is a 2-bit precision field in instructions that determines the lane / sub-element size: 8/16/32/64. The precision field also applies to scalar registers. The category is wrapped up in the opcode, which is seven bits. One can do a float add on a vector register, then a bitwise operation on the same register. The vector registers work the same way as the scalar ones. There is no type state associated with them, unlike RISC-V. To control the length (which lanes are active) there is a global mask register instead of a vector length register.
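Roughly, that amounts to something like the following software model (register width, lane indexing, and mask semantics are my assumptions here, not the actual Q+ definitions; the partial-width copies also assume a little-endian host):

  #include <stdint.h>
  #include <string.h>

  #define VREG_BYTES 64                       /* assumed register width */

  typedef struct { uint8_t b[VREG_BYTES]; } vreg;

  /* prec: 0=8-bit, 1=16-bit, 2=32-bit, 3=64-bit lanes.
     gmask: global mask register; bit i enables lane i. */
  static void vadd(vreg *vd, const vreg *va, const vreg *vb,
                   unsigned prec, uint64_t gmask)
  {
      unsigned esize = 1u << prec;             /* element size in bytes */
      unsigned lanes = VREG_BYTES / esize;
      for (unsigned i = 0; i < lanes; i++) {
          if (!((gmask >> i) & 1))
              continue;                        /* inactive lane: left unchanged */
          uint64_t a = 0, b = 0, r;
          memcpy(&a, va->b + i * esize, esize);
          memcpy(&b, vb->b + i * esize, esize);
          r = a + b;                           /* wraps modulo 2^(8*esize) */
          memcpy(vd->b + i * esize, &r, esize);
      }
  }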
Rather than having an instruction to load vector control state into CSRs, it would make more sense IMO to use bigger 64-bit instructions and encode the vector state directly into these instructions.
While this would be worse for code density, it would avoid needing to burn instructions setting up vector state, and would have less penalty (in terms of clock-cycles) if working with heterogeneous vectors.
Say, one possibility could be a combo-SIMD op with a control field (a rough packing sketch follows the list):
  2b vector size:
    64 / 128 / resv / resv
  2b element size:
    8 / 16 / 32 / 64
  2b category:
    wrap / modulo
    float
    signed saturate
    unsigned saturate
  6b operator:
    add, sub, mul, mac, mulhi, ...
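A quick sketch of how that 2+2+2+6 bit control field might pack into 12 bits (field order and enum values are arbitrary here, not from any defined encoding):

  #include <stdint.h>

  enum vec_size  { VSZ_64 = 0, VSZ_128 = 1 };              /* 2b, 2 values resv */
  enum elem_size { ESZ_8, ESZ_16, ESZ_32, ESZ_64 };        /* 2b */
  enum category  { CAT_WRAP, CAT_FLOAT, CAT_SSAT, CAT_USAT };  /* 2b */
  enum simd_op   { OP_ADD, OP_SUB, OP_MUL, OP_MAC, OP_MULHI /* ... */ };

  /* Pack the 12-bit control field; the layout here is invented. */
  static uint16_t pack_ctl(enum vec_size v, enum elem_size e,
                           enum category c, enum simd_op op)
  {
      return (uint16_t)((v & 3) | ((e & 3) << 2) |
                        ((c & 3) << 4) | ((op & 63) << 6));
  }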
Though, with not every combination necessarily being allowed.

Q+ has two ALUs, which may, at some point, be expanded with two more ALUs with reduced functionality.
Say, for example, if the implementation limits FP-SIMD to 4 or 8 vector elements.
Though, it may make sense to be asymmetric as well (a small sketch follows the list):
2-wide vectors can support Binary64
4-wide can support Binary32
8-wide can support Binary16 (+ 4x FP16 units)
16-wide can support FP8 (+ 8x FP8 units)
Whereas, say, 16x Binary32 capable units would be infeasible.
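Said asymmetry is really just a cap on the widest supported vector per element format; something like the following (the numbers simply mirror the list above and are purely illustrative):

  enum fp_fmt { FP8, FP16, FP32, FP64 };

  /* Widest supported FP-SIMD vector per element format
     (mirrors the asymmetric scheme sketched above). */
  static unsigned max_fp_lanes(enum fp_fmt f)
  {
      switch (f) {
      case FP64: return 2;
      case FP32: return 4;
      case FP16: return 8;
      case FP8:  return 16;
      }
      return 0;
  }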
Well, as opposed to defining encodings one-at-a-time in the 32-bit encoding space.
It could be tempting to possibly consider using pipelining and multi-stage decoding to allow some of these ops as well. Say, possibly handling 8-wide vectors internally as 2x 4-wide operations, or maybe allowing 256-bit vector ops in the absence of 256-bit vectors in hardware.
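A rough software model of that cracking idea, say an 8-wide 32-bit add split into two 4-wide internal operations (types and names invented for the sketch):

  #include <stdint.h>
  #include <string.h>

  typedef struct { uint32_t e[4]; } v4u32;   /* what the hardware has */
  typedef struct { uint32_t e[8]; } v8u32;   /* what the ISA op asks for */

  static v4u32 add4(v4u32 a, v4u32 b)
  {
      v4u32 r;
      for (int i = 0; i < 4; i++)
          r.e[i] = a.e[i] + b.e[i];
      return r;
  }

  /* One 8-wide op issued internally as two 4-wide uops (lo half, hi half). */
  static v8u32 add8_cracked(v8u32 a, v8u32 b)
  {
      v8u32 r;
      v4u32 alo, ahi, blo, bhi, rlo, rhi;
      memcpy(&alo, &a.e[0], sizeof alo);  memcpy(&ahi, &a.e[4], sizeof ahi);
      memcpy(&blo, &b.e[0], sizeof blo);  memcpy(&bhi, &b.e[4], sizeof bhi);
      rlo = add4(alo, blo);
      rhi = add4(ahi, bhi);
      memcpy(&r.e[0], &rlo, sizeof rlo);  memcpy(&r.e[4], &rhi, sizeof rhi);
      return r;
  }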
...