On 8/4/2024 1:26 PM, Chris M. Thomasson wrote:
On 8/3/2024 9:14 PM, Lawrence D'Oliveiro wrote:
On Sat, 3 Aug 2024 14:38:11 -0700, Chris M. Thomasson wrote:
>
On 8/3/2024 2:20 PM, Lawrence D'Oliveiro wrote:
>
On Sat, 3 Aug 2024 03:01:16 -0500, BGB wrote:
>
On 8/3/2024 12:32 AM, Lawrence D'Oliveiro wrote:
>
But, what I am saying is, a lot of stuff doesn't need raytracing.
>
Like I said, there are non-raytraced renderers inspired by video-game
engines. They don’t use OpenGL.
>
They are becoming few and far between wrt the big titles, so to speak.
>
No they aren’t. A well-known example is the “Eevee” renderer built into
Blender (alongside the ray-traced “Cycles” renderer).
I was referring to new state of the art games; they don't use OpenGL.
Yeah.
For mainstream games, there has been a move away from both OpenGL and Direct3D towards Vulkan.
Arguably, in some ways Vulkan is "better" for high-end games on modern GPUs, but is a lot harder to use (the amount of work needed to get even basic rendering working is very high).
Similarly, some targets, such as Android and Raspbian, were using GLES2 rather than OpenGL proper.
I have less immediate focus on Vulkan as it makes little sense in the absence of a high-end GPU. It would not make sense for a CPU based rasterizer or for a GPU with a fixed-function pipeline.
In contrast, OpenGL 1.x has a lower barrier to entry, and makes some sense as a more general-purpose graphics API (it can also be used for GUI rendering and other things).
Though, yes, this includes keeping a lot of the stuff that was deprecated in the later "Core Profile", which is seemingly much more focused on high-end gaming (and not so much on things relevant to "basic 3D rendering" or UI).
Early on, there was also Glide, on top of which an OpenGL subset could be supported via a wrapper (the "MiniGL" drivers). A lot of 90s-era games (including Quake 1/2/3) were mostly written against this subset.
General priority I think is to have something like the "OpenGL Compatibility Profile".
Though, my implementation still left out some things that "pretty much no one uses" (such as Display Lists), and there are still some stubs for unused features.
Some other stuff hasn't really been well tested, such as GL_LIGHTING or GL_FOG (but, theoretically, it exists).
One can argue though that OpenGL is a fairly heavyweight option for general GUI rendering. An intermediate option could be an API more focused on 2D UI drawing tasks (with an aim to allow for a more efficient software implementation, allowing parts of the OpenGL pipeline to be sidestepped).
Though one design goal could be that (unlike the Windows GDI/User32 stuff), such an API and OpenGL could freely interoperate, probably sharing the same context and buffers. Ideally, the API should be able to be implemented as a thin wrapper on top of OpenGL as well.
Beyond functionality covered by things like "glDrawPixels", could mostly have stuff related to drawing from textures into 2D rectangles.
Might make sense to provide something to draw a block of pixels into a 2D rect using a BITMAPINFOHEADER or similar.
Say, hypothetically:
tkxglEmitTexturedRect(
float x0, float y0, float x1, float y1,
float s0, float t0, float s1, float t1,
float cr, float cg, float cb, float ca);
Which may sidestep GL if in the correct mode; otherwise, it draws the rect as a rectangular polygon, though it may delay drawing until later (building an internal list of QUADs if needed). Drawing will depend on the currently bound texture. A rough sketch of such a wrapper is given below.
tkxglFinishRects();
Draws any queued rects, after which one is free to return to using OpenGL as normal.
Likely:
tkxglBind(int target, int tex);
Wraps glBindTexture, but may also need to flush any queued QUADs first when operating as a GL wrapper (since the queued rects depend on the bound texture).
May also provide special calls for setting up a 2D transform, so that the API knows it is in the correct mode for 2D drawing (projection set as glOrtho, MODELVIEW is either identity or a 2D translation, ...).
...
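As a rough illustration, here is a minimal sketch of how the GL-wrapper path could look, using legacy immediate-mode OpenGL. The tkxgl* entry points are the hypothetical API from above; everything else (queue size, the tkxglBegin2D helper, etc.) is made up for the example:

#include <GL/gl.h>

#define TKXGL_MAX_RECTS 256  /* arbitrary queue size for this sketch */

typedef struct {
    float x0, y0, x1, y1;    /* destination rectangle */
    float s0, t0, s1, t1;    /* texture coordinates */
    float cr, cg, cb, ca;    /* modulating color */
} TkxglRect;

static TkxglRect tkxgl_rects[TKXGL_MAX_RECTS];
static int tkxgl_nrects;

void tkxglFinishRects(void)
{
    int i;
    glBegin(GL_QUADS);
    for (i = 0; i < tkxgl_nrects; i++) {
        TkxglRect *r = &tkxgl_rects[i];
        glColor4f(r->cr, r->cg, r->cb, r->ca);
        glTexCoord2f(r->s0, r->t0); glVertex2f(r->x0, r->y0);
        glTexCoord2f(r->s1, r->t0); glVertex2f(r->x1, r->y0);
        glTexCoord2f(r->s1, r->t1); glVertex2f(r->x1, r->y1);
        glTexCoord2f(r->s0, r->t1); glVertex2f(r->x0, r->y1);
    }
    glEnd();
    tkxgl_nrects = 0;
}

void tkxglEmitTexturedRect(
    float x0, float y0, float x1, float y1,
    float s0, float t0, float s1, float t1,
    float cr, float cg, float cb, float ca)
{
    TkxglRect *r;
    if (tkxgl_nrects >= TKXGL_MAX_RECTS)
        tkxglFinishRects();          /* queue full: flush now */
    r = &tkxgl_rects[tkxgl_nrects++];
    r->x0 = x0; r->y0 = y0; r->x1 = x1; r->y1 = y1;
    r->s0 = s0; r->t0 = t0; r->s1 = s1; r->t1 = t1;
    r->cr = cr; r->cg = cg; r->cb = cb; r->ca = ca;
}

void tkxglBind(int target, int tex)
{
    tkxglFinishRects();              /* queued rects used the old texture */
    glBindTexture((GLenum)target, (GLuint)tex);
}

/* hypothetical 2D-transform setup: ortho projection, identity modelview */
void tkxglBegin2D(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, height, 0, -1, 1);  /* y-down, pixel coordinates */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}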
Would have to decide though whether to treat it like a 2D extension API, or like a separate API that just so happens to overlap with GL (but would need to provide some amount of semi-redundant functionality, such as for uploading textures and similar).
Could maybe also provide for other UI-relevant things, like API calls for drawing text using OS provided fonts. And, stuff for drawing various sorts of UI widgets and dealing with user-input and input focus and similar.
At the moment, mostly working on getting TKRA-GL able to work with 32-bit pixels.
Currently, this works by a mode change:
Basic mode:
Uses RGB555A for everything (framebuffer/textures);
Uses a 16-bit Z-buffer (Z12.S4);
Uses UTX2 for compressed textures;
...
RGBA32 Mode:
Uses RGBA32 for framebuffer and textures;
Uses 32-bit Z-buffer (Z24.S8);
Uses UTX3 for compressed textures;
...
Could then add an HDR32 mode:
Basically RGBA32 Mode, but with FP8U texels and modified blending ops.
While quality will be worse in the Basic Mode, it still has the advantage (the reason I went this route originally) of using less memory bandwidth and being faster.
Current approach is that the mode will be a global setting for the GL context, mostly for sake of "implementation sanity" (while, say, RGB555 and UTX2 textures could still make sense in RGBA32 mode, it means more code paths to deal with for now).
Similarly, the HDR32 mode will convert everything over to FP8U internally (say, when one tries to upload an LDR texture, it will be auto-converted). The same goes for uploading DXT1 or DXT5 textures (they will be converted, with values mapped to the 0.0 to 1.0 range).
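For reference, a possible FP8U encode/decode, assuming a hypothetical unsigned E4M4 layout with bias 7 (so 1.0 encodes as 0x70, and the max value is 496); the actual bit layout used in TKRA-GL may differ:

#include <math.h>

/* decode FP8U: high 4 bits exponent, low 4 bits mantissa (assumed layout) */
float fp8u_to_float(unsigned char v)
{
    int e = (v >> 4) & 15;
    int m = v & 15;
    if (e == 0)
        return (m / 16.0f) * (1.0f / 64.0f);   /* denormal: 2^(1-7) = 1/64 */
    return (1.0f + m / 16.0f) * ldexpf(1.0f, e - 7);
}

unsigned char float_to_fp8u(float f)
{
    float frac;
    int e, m;
    if (f <= 0.0f)
        return 0;
    frac = frexpf(f, &e);            /* f = frac * 2^e, frac in [0.5, 1) */
    e += 6;                          /* rebias for storage (bias 7) */
    if (e <= 0)
        return 0;                    /* underflow: flush to zero */
    if (e > 15)
        return 0xFF;                 /* overflow: clamp to max (496) */
    m = (int)(frac * 32.0f) - 16;    /* frac*2 in [1,2) -> 4-bit mantissa */
    return (unsigned char)((e << 4) | m);
}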
For sake of HDR32 mode, could consider allowing FP8U images to be drawn via TKGDI, would just sort of need to come up with a good way to express it in the "BITMAPINFOHEADER" structure.
I guess, one possibility:
biCompression: "hdru";
biBitCount: 32;
biClrUsed: Repurposed as an HDR gamma-adjust vector (2x Binary16).
biClrImportant: Reserved / MBZ.
Where, pixel transfer is:
Temp = FP8U RGB * GammaScale + GammaBias;
With the values between 0.0 and 1.0 mapped to the LDR range
(ye olde RGB555).
If biBitCount is 64, pixels are 4x Binary16.
If "hdr " is used, would likely assume signed components (probably FP8); TBD if a case could be made for allowing A-Law for pixels here.
Maybe allow using the DIB BMP format, in such a configuration, as a poor man's OpenEXR (would allow mostly reusing the existing BMP handling code; and TestKern already uses some non-standard BMP variants, so this is nothing new).
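The pixel-transfer step could then look something like this (purely illustrative; fp8u_to_float() is the decoder sketched above, and the RGB555 bit layout and rounding are assumptions):

/* convert one "hdru" pixel (FP8U R/G/B components) to RGB555, applying
   the gamma-adjust scale/bias unpacked from biClrUsed */
unsigned short hdru_to_rgb555(unsigned char r8, unsigned char g8,
                              unsigned char b8,
                              float gamma_scale, float gamma_bias)
{
    float r = fp8u_to_float(r8) * gamma_scale + gamma_bias;
    float g = fp8u_to_float(g8) * gamma_scale + gamma_bias;
    float b = fp8u_to_float(b8) * gamma_scale + gamma_bias;

    /* clamp 0.0 .. 1.0 to the LDR range */
    r = (r < 0) ? 0 : (r > 1) ? 1 : r;
    g = (g < 0) ? 0 : (g > 1) ? 1 : g;
    b = (b < 0) ? 0 : (b > 1) ? 1 : b;

    return (unsigned short)(
        ((unsigned)(r * 31 + 0.5f) << 10) |
        ((unsigned)(g * 31 + 0.5f) <<  5) |
         (unsigned)(b * 31 + 0.5f));
}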
>
DirectX 12 and Vulkan.
>
Those are strictly for on-screen use, like OpenGL before them.
Nothing is stopping them from being used for offline rendering.
If it looks good enough for what one wants, all is well.
FWIW: There is a trick one can use to get softer lighting (and shadows) with GL:
One can build a cube of bytes around a light source, which can be used to map how much of the light from that source is visible at each point in 3D space.
If a point in this cube is directly reachable from the light source (non-occluded), it is given a value of 255; if not, it is initially given a value of 0. This part would likely be done with ray casts.
If an occluded point can instead reach a surface that has line of sight to the light source, it is given a value based on the albedo of that surface and the relative distances (in effect, a single indirect bounce).
Afterwards, one assumes that the light behaves like a fluid that can diffuse into adjacent cells by a certain percentage. Each cell is then updated to the maximum of the light that could diffuse into it from adjacent cells (excluding cases where solid geometry occludes light propagation). A rough sketch of this diffusion pass is given below.
The lighting model can then use this 3D cube when figuring out the light visible at each vertex.
This wouldn't normally be used directly in fragment shaders though, as at least in the past, trying to use 3D textures tended to cause performance to go out the window. Rather, it could be handled as an additional vertex attribute (interpolated from the grid points).
...
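To make the diffusion step concrete: a minimal sketch of the propagation pass, where each cell is repeatedly set to the max of its own light and the attenuated light of its six neighbors, with solid cells blocking propagation. Grid size, falloff, and all names here are hypothetical:

#define LC_N       32
#define LC_FALLOFF 224   /* ~7/8 of the light survives one cell step */

static unsigned char light[LC_N][LC_N][LC_N];  /* 0..255 visibility */
static unsigned char solid[LC_N][LC_N][LC_N];  /* 1 = occluding geometry */

void light_cube_diffuse(int passes)
{
    int p, x, y, z, best, v;
    for (p = 0; p < passes; p++) {
        for (z = 0; z < LC_N; z++)
        for (y = 0; y < LC_N; y++)
        for (x = 0; x < LC_N; x++) {
            if (solid[z][y][x])
                continue;        /* light does not diffuse into solids */
            best = light[z][y][x];
            if (x > 0      && (v = light[z][y][x-1] * LC_FALLOFF >> 8) > best) best = v;
            if (x < LC_N-1 && (v = light[z][y][x+1] * LC_FALLOFF >> 8) > best) best = v;
            if (y > 0      && (v = light[z][y-1][x] * LC_FALLOFF >> 8) > best) best = v;
            if (y < LC_N-1 && (v = light[z][y+1][x] * LC_FALLOFF >> 8) > best) best = v;
            if (z > 0      && (v = light[z-1][y][x] * LC_FALLOFF >> 8) > best) best = v;
            if (z < LC_N-1 && (v = light[z+1][y][x] * LC_FALLOFF >> 8) > best) best = v;
            light[z][y][x] = (unsigned char)best;
        }
    }
}

Running enough passes to cover the grid diameter gives a stable result; the per-vertex lighting then samples (and interpolates) from this grid, as described.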