On 4/2/2025 1:02 AM, Lawrence D'Oliveiro wrote:
> On Wed, 02 Apr 2025 16:59:59 +1100, Alexis wrote:
>
>> Thought people here might be interested in this image on Jens Gustedt's
>> blog, which translates section 6.2.5, "Types", of the C23 standard into
>> a graph of inclusions:
>>
>> https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
>
> Wow, we have bit-precise integers now?
>
> PL/I, come back, all is forgiven!
FWIW:
I added them to my compiler, and (because it overlapped with something else I was working on) I added some Verilog-style features as well, as an extension.
I also added "_UBitInt(n)", since both "_UnsignedBitInt(n)" and "unsigned _BitInt(n)" were too verbose for my liking.
So, extended features:
_UBitInt(5) cr, cg, cb;
_UBitInt(16) clr;
clr = (_UBitInt(16)) { 0b0u1, cr, cg, cb };
Composes an RGB555 value.
cg = clr[9:5]; //extract bits
clr[9:5] = cg; //assign bits
clr[15:10] = clr[9:5]; //copy bits from one place to another.
And:
(_UBitInt(16)) { 0b0u1, cr, cg, cb } = clr;
Decomposing it into components, any fixed-width constants being treated as placeholders.
Note that within hex and binary literals, in some contexts 'z' is allowed to represent "high impedance" bits. These may be used in some niche cases, but (if assigned to a runtime value) will decay into a normal binary value (with any 'z' bits becoming 0).
Eg:
__switchz(clr)
{
    case 0b01zzzz1zzzz1zzzzu16: ...  //light
    case 0b00zzzz0zzzz0zzzzu16: ...  //dim
    case 0b1zzzzzzzzzzzzzzzu16: ...  //transparent
}
...
Though, some of this was partly because I had started trying to add Verilog support (the intention being to use my compiler in a way vaguely similar to Verilator, but with hopefully better debugger support than the pretty much non-existent debugging in Verilator), and in this case, exposing some of it on the C side makes it easier to test features.
Though, possibly, something like:
__vlmodule FooDev(
    _UBitInt(1) clock,
    _UBitInt(1) reset,
    _OutBitInt(1) out)
{
    _UBitInt(8) cnt;
    __vlassign out = cnt[7];
    __vlalways(__vlposedge clock)
    {
        cnt = cnt + 1;
    }
}
Would be going a bit too far...
I don't expect people would actually use any of this, though (ironically) on my ISA some of it does generate more efficient code than the more traditional use of shifts and masks.
Still, it is not enough to compete with more specialized helper instructions for things like working with RGB555 values (which are fairly extensively used in my case).
Well, or more specifically, I commonly use a non-standard RGB555 variant:
  0rrrrrgggggbbbbb  //standard opaque colors
  1rrrraggggabbbba  //translucent or transparent (3-bit alpha)
Where the 3-bit alpha field scales linearly (in effect, alpha = a * 32):
  a = 111 = 224
  ...
  a = 001 = 32
  a = 000 = 0
So, in effect, there are 9 possible alpha levels (fully opaque, plus the 8 encoded levels), with use of alpha coming at the cost of color fidelity (though, on average, fidelity is still better than RGBA4444, since fully opaque values are the most common).
To some extent I also use indexed color, but mostly for cases where RGB555 is too much cost.
Had mostly ended up using a scheme like:
  srgbyyyy
    s: saturation (0=high, 1=low)
    r/g/b: red/green/blue
    yyyy: luma
With, as special cases (srgb):
0000=grays, 0111=orange, 1000=olive, 1111=azure
And the colors near black are used for the standard 16-color palette and some ranges of off-white colors (yyyy=0..2). Within the RGBI colors, 0111 was used as the transparent color; 0000 and 1000 are absent, and 1111 represents a full-intensity white that is slightly lighter than the top of the grays axis (which is formally #F0F0F0 rather than #FFFFFF).
Mostly because this generally gives better image quality than the more standard 216- and 252-color palettes (which tend to just look kind of terrible) and, ironically, is also generally friendlier to hardware decoding.
...