On 4/5/2024 4:43 PM, MitchAlsup1 wrote:
BGB-Alt wrote:
I have yet to decide on the default bit-depth for UI widgets, mostly torn between 16-color and 256-color. Probably don't need high bit-depths for "text and widgets" windows, but may need more than 16-color. Also ...
When I write documents I have a favorite set of pastel colors I use to
shade boxes, arrows, and text. I draw the figures first and then fill in
the text later. To give the eye an easy means to go from reading the text to looking at a figure and finding the discussed item, I place a box around
the text and shade the box with the same R-G-B color as the pre-JPEG box in the figure. All figures are *.jpg. So, even a word processor needs 24-bit color.
{{I have also found that Word R-G-B color values are not exactly the same
as the R-G-B values in the *.jpg figure, too; but they are close enough
to avoid "figuring it out and trying to fix it to perfection".}}
TBD which color palette to use in the case that 256 color is used (256 colors is OK enough for DOOM games, but not for professional documentation).
If the palette is set up well, one can approximate most RGB colors semi-passably via the palette. Granted, still not great for photos or video (or even games); and a tradeoff needs to be made between color fidelity and levels of brightness (with the system palette I was using favoring brightness accuracy over color fidelity).
Where games only really look good if the color palette used by the system roughly matches that used by the game. The palette I had used, as can be noted, was 14 levels of 18 colors:
6 high-saturation (3-bit RGB);
6 low saturation;
3 off-white;
grayscale (16 levels);
orange;
azure.
Though, not really a fast/accurate procedural transform, as the palette is complicated enough to require a lookup table (unlike 216 color or RGB332).
Had at one point used a palette encoding with a 4-bit Y, with the remaining 4 bits effectively encoding a normalized vector in YUV space, but the current scheme is more efficient. In this older scheme, palette entries that would represent an out-of-gamut color were effectively wasted. This strategy worked semi-OK for photo-like graphics, but worse for other things; encoding logic was of intermediate complexity (effectively built around a highly modified YCbCr transform).
Though, one possible (semi-OK) way to algorithmically encode colors to the current palette would be (rough sketch below):
Calculate the Luma;
Normalize the RGB color into a color vector;
Feed this vector through a fixed RGB222 lookup table;
Encode the color based on which axis was selected.
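Something like the following rough C sketch, where the RGB222 table contents, the luma weights, and the (axis, level) index packing are placeholders rather than the actual system palette layout:

  #include <stdint.h>

  /* Placeholder table: maps a quantized RGB222 color direction to a palette
   * "axis" (column); real contents depend on the actual system palette. */
  static const uint8_t rgb222_to_axis[64] = { 0 };

  static uint8_t EncodePalettePixel(uint8_t r, uint8_t g, uint8_t b)
  {
      /* 1: calculate the luma (integer Rec.601-style weights, sum to 256) */
      int y = (77 * r + 150 * g + 29 * b) >> 8;

      /* 2: normalize the RGB color into a direction vector
       *    (scale so the largest component becomes 255) */
      int m = r; if (g > m) m = g; if (b > m) m = b;
      int rn = m ? (r * 255) / m : 255;
      int gn = m ? (g * 255) / m : 255;
      int bn = m ? (b * 255) / m : 255;

      /* 3: quantize the direction to RGB222 and feed it through the LUT */
      int idx222 = ((rn >> 6) << 4) | ((gn >> 6) << 2) | (bn >> 6);
      int axis = rgb222_to_axis[idx222];

      /* 4: encode the final index from the selected axis and the luma level
       *    (14 levels per axis; this packing is a guess, not the real layout) */
      int level = (y * 14) >> 8;
      return (uint8_t)(axis * 14 + level);
  }

The LUT keeps step 3 cheap; a correct table would just hold, for each of the 64 quantized directions, whichever palette column is nearest.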
Though, for most UI widgets, one arguably only needs 4 colors, say:
Black, White, Dark-Gray, Light-Gray.
Add maybe a few other colors for things like title-bars and similar, ...
But, imposing a 16-color RGBI limit may be too restricting (even in the name of saving memory by representing windows at a lower bit-depth when a higher bit depth is unnecessary).
If one allows for customizable UI color schemes (like in older versions of Windows), then RGBI would be insufficient even for simple pointy clicky apps (leaving 256-color as the minimum option).
Here, 256 colors could make sense. Though, RGB555 would likely be overkill for most simple pointy clicky GUI apps.
One possibility here could be for the widget toolkit window to automatically upgrade to a higher bit-depth if it detects the use of bitmap-object widgets or similar (possibly making the choice of bit-depth mostly invisible to the program in this case).
Well, or leave it per program, where a program that may use more complex graphics requests the use of a higher bit-depth, and then say, one that only does a text-editor or calculator or similar, sticks with a more limited colorspace to save memory.
Say, looking at Windows Calculator:
Looks like it uses:
~ 5 or 6 shades of gray;
A shade of blue.
Programs of this sort don't really need a higher color depth.
Would a Notepad style program need lots of colors?
...
Granted, something like an image editor would need full-color, so in this case supporting higher color depths will make sense as well. Just, one doesn't want to use higher bit-depths when they are not needed, as this wastes more RAM on the backing framebuffers for the various windows, etc...
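Say, for a 640x480 window: the backing framebuffer is around 300K at 8 bpp, but around 600K at 16 bpp and 1.2MB at 32 bpp, so the difference adds up quickly once there are multiple windows.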
In other news, had been messing with doing SDF versions of Unifont.
Curiously, newer versions (15.1.05) of the Unifont font (post processing) seem to actually be a little bit smaller than older versions (5.1). Not entirely sure what is going on with this (but, can note that there are a number of large gaps in the codepoint space).
Well, and seemingly multiple versions of the font:
The normal Unicode BMP;
Another 'JP' version which follows JIS rules;
Apparently maps in various characters from Planes 1 and 2, ...
Appears mostly similar apart from the contents of the CJK ranges.
...
Say:
After conversion to a 1bpp glyph set:
5.1 : 1.99 MB
15.1.05: 1.78 MB
After conversion to SDF and storing the bitmaps in a WAD container:
5.1 : ~7.0 MB
15.1.05: ~6.1 MB
Where, in this case, WAD based packaging seemed "mostly sufficient".
May or may not end up putting additional metadata in the WAD files.
For now, it is mostly just a collection of bitmap images...
With each image having its number encoded in the lump name, and understood to be a grid of 16x16 glyphs.
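For reference, a minimal sketch of a classic DOOM-style WAD layout, assuming that is roughly the container being used here (the field names follow the traditional format, and the lump-name convention for the pages is illustrative):

  #include <stdint.h>

  typedef struct {
      char     magic[4];      /* "IWAD" or "PWAD" */
      int32_t  numlumps;      /* number of directory entries */
      int32_t  infotableofs;  /* file offset of the directory */
  } wad_header_t;

  typedef struct {
      int32_t  filepos;       /* offset of the lump's data in the file */
      int32_t  size;          /* size of the lump in bytes */
      char     name[8];       /* e.g. "SDF0021" (assumed naming): page number
                                 in the name, each page a 16x16 glyph grid */
  } wad_lump_t;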
Though, glyph shape reconstruction is less exact than could be hoped.
Arguably, still better than if one tries to scale the bitmaps directly, but results may come out a little lumpy/weird here (though, trying to store them at 4-bits per sub-channel probably doesn't exactly help this).
At the moment, storage is basically:
Represent the SDF as a 256-color image, with each pixel split into 2 sub-channels. The palette color is interpreted as the input vector for the SDF drawing (interpolate between the pixels, take the median, and compare this value against a threshold).
Initially, was not using the palette, but the palette allows more flexibility in this case.
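A rough C sketch of this sampling step, assuming a table mapping each palette index back to its two decoded sub-channel values, a 0.5 threshold, and the implicit third channel being the average of the two (as described further below):

  #include <stdint.h>

  /* Middle value of three. */
  static float med3(float a, float b, float c)
  {
      if (a > b) { float t = a; a = b; b = t; }
      if (b > c) { float t = b; b = c; c = t; }
      return (a > b) ? a : b;
  }

  /* img: one 8-bit SDF page (w x h pixels); chtab: assumed table mapping a
   * palette index to its two decoded sub-channel values (0..1); (x, y) is
   * the sample position in pixel coordinates. Returns 1 if "inside". */
  static int SdfSampleInside(const uint8_t *img, int w, int h,
                             const float chtab[256][2], float x, float y)
  {
      int   x0 = (int)x, y0 = (int)y;
      int   x1 = (x0 + 1 < w) ? (x0 + 1) : x0;
      int   y1 = (y0 + 1 < h) ? (y0 + 1) : y0;
      float fx = x - x0, fy = y - y0;
      float ch[2];

      /* bilinearly interpolate each stored sub-channel */
      for (int c = 0; c < 2; c++)
      {
          float v00 = chtab[img[y0*w + x0]][c], v10 = chtab[img[y0*w + x1]][c];
          float v01 = chtab[img[y1*w + x0]][c], v11 = chtab[img[y1*w + x1]][c];
          float v0  = v00 + (v10 - v00) * fx;
          float v1  = v01 + (v11 - v01) * fx;
          ch[c] = v0 + (v1 - v0) * fy;
      }

      /* implicit third channel: the average of the two stored channels (with
       * this choice, the median of the three is just that same average) */
      float ch2 = 0.5f * (ch[0] + ch[1]);
      return med3(ch[0], ch[1], ch2) >= 0.5f;
  }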
The current algorithm generates the SDF images by finding the horizontal and vertical distances. Some papers imply that it may make sense to allow more distance-calculation functions, possibly trying to regenerate each glyph using each combination of functions and picking whichever combination of functions yields the lowest reconstruction error.
Granted, ended up using a fairly naive up-sampling algorithm (from the 16x16 pixel input glyphs), which possibly doesn't really help (I fiddled with it until I got something semi-OK looking, but it has the limitation of not knowing what the upsampled glyphs are "supposed" to look like, i.e. which corners should be rounded and which kept sharp and pointy; and patterns that work for the ASCII/8859-1 range don't extend to the rest of the BMP).
Though, at least for 1252, this range of characters can be handled manually. And in this case, the upsampled glyphs were based on 8x8 pixel variants, where following the shapes of the low-res glyphs makes the font much more tolerant of scaling (comparatively, the Unifont glyphs are "less robust" in these areas).
However, as-is, processing the Unicode BMP still takes an annoyingly long time (and a more involved axis-search for each glyph wouldn't exactly help matters).
Likely options, if I did so:
Simple 2D euclidean distance (naive algorithm);
Horizontal distance (current);
Vertical distance (current);
Down-Left (diagonal);
Up-Left (diagonal).
Where, the generation would pick the best 2, with an implicit "3rd axis" which would merely be the average of the first 2.
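A skeleton of what that per-glyph search might look like (the EncodeGlyphSdf / ReconstructGlyph / GlyphError helpers are placeholders, not functions from the actual code):

  #include <stdint.h>
  #include <limits.h>

  typedef enum {
      AXIS_EUCLID,    /* simple 2D euclidean distance (naive) */
      AXIS_HORIZ,     /* horizontal distance (current) */
      AXIS_VERT,      /* vertical distance (current) */
      AXIS_DIAG_DL,   /* down-left diagonal */
      AXIS_DIAG_UL,   /* up-left diagonal */
      AXIS_COUNT
  } sdf_axis_t;

  /* Placeholder prototypes for the per-glyph steps: encode an SDF from a
   * 1bpp glyph using two distance axes, reconstruct a 1bpp glyph from that
   * SDF, and measure the reconstruction error against the original. */
  void EncodeGlyphSdf(const uint8_t *glyph1bpp, sdf_axis_t a0, sdf_axis_t a1,
                      uint8_t *sdf_out);
  void ReconstructGlyph(const uint8_t *sdf, uint8_t *glyph_out);
  int  GlyphError(const uint8_t *ref, const uint8_t *test);

  /* Try each axis pair and keep whichever reconstructs the glyph best.
   * Buffer sizes assume 16x16 glyphs: 32 bytes at 1bpp, 256 bytes at 8bpp. */
  void PickBestAxes(const uint8_t *glyph1bpp, sdf_axis_t *best0, sdf_axis_t *best1)
  {
      uint8_t sdf[256], recon[32];
      int best_err = INT_MAX;

      for (int i = 0; i < AXIS_COUNT; i++)
      {
          for (int j = i + 1; j < AXIS_COUNT; j++)
          {
              EncodeGlyphSdf(glyph1bpp, (sdf_axis_t)i, (sdf_axis_t)j, sdf);
              ReconstructGlyph(sdf, recon);

              int err = GlyphError(glyph1bpp, recon);
              if (err < best_err)
                  { best_err = err; *best0 = (sdf_axis_t)i; *best1 = (sdf_axis_t)j; }
          }
      }
  }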
In theory, could use hi-color images (with 3x 5-bit channels), but this would effectively double the memory requirements of each SDF image (though, the current implementation demand loads each "page", rather than bulk-loading the entire Unicode BMP).
Well, or I use hi-color for CP-1252, but then the 8-bit scheme for Unifont.
Still not currently wanting to face the complexity of dealing with TrueType fonts.
Can also note that TTF fonts that cover the entire Unicode BMP are "not exactly small" either...