If a step has process tasks, any render or compute
pipelines/renderpasses are **not** run automatically: the idea is that
the process tasks need to run the relevant pipelines in a custom manner,
but still need the objects to be created.
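A minimal sketch of that rule, with illustrative names (not QF's actual
API):

    // sketch only: a step runs its process tasks if it has any,
    // otherwise its pipelines run automatically
    typedef struct step_s {
        int    num_process_tasks;
        void (**process_tasks) (struct step_s *step);
        void  (*run_pipelines) (struct step_s *step);
    } step_t;

    static void
    run_step (step_t *step)
    {
        if (step->num_process_tasks) {
            // pipelines/renderpasses exist but are NOT run here;
            // each task runs the ones it needs, in its own way
            for (int i = 0; i < step->num_process_tasks; i++) {
                step->process_tasks[i] (step);
            }
        } else {
            step->run_pipelines (step);
        }
    }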
The recent light changes highlighted that the renderer does not own the
efrags (segfault in qwaq when shutting down my test scene). After
digging through the history of efrag clearing, it turns out that the
renderer never owned them, I just didn't understand the concept of
scenes at the time that I moved efrags into the renderer.
This eliminates the O(N^2) (N = map leaf count) operation of finding
visible lights and will later allow for finer culling of the lights as
they can be tested against the leaf volume (which they currently are
not, as this was just getting things going). However, this has severely
hurt ad_tears' performance (I suspect due to the extreme number of
leafs), but the speed seems to be very steady. Hopefully, reconstructing
the vis clusters will help (I imagine it will help in many places, not
just lights).
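For reference, the finer culling would be the usual sphere-vs-box test
against the leaf's bounds, something like this sketch (illustrative
types; QF's real leaf and light structures differ):

    #include <stdbool.h>

    typedef struct { float mins[3], maxs[3]; } leafbounds_t;

    // true if a point light of the given radius can touch the leaf's
    // bounding box: accumulate squared distance along each axis on
    // which the light lies outside the box
    static bool
    light_touches_leaf (const float pos[3], float radius,
                        const leafbounds_t *leaf)
    {
        float dist2 = 0;
        for (int i = 0; i < 3; i++) {
            if (pos[i] < leaf->mins[i]) {
                float d = leaf->mins[i] - pos[i];
                dist2 += d * d;
            } else if (pos[i] > leaf->maxs[i]) {
                float d = pos[i] - leaf->maxs[i];
                dist2 += d * d;
            }
        }
        return dist2 <= radius * radius;
    }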
The grid calculations are modified from those of Inigo Quilez
(https://iquilezles.org/articles/filterableprocedurals/) and give very
nice results: when thin enough, the lines fade out nicely instead of
producing crazy moire patterns. Though currently disabled, the default
planes are the xy, yz and zx planes with colored axes.
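For the record, a scalar C transcription of the box-filtered grid from
the article (the real shader works on vec2s; N is the cell-to-line-width
ratio, and w must be kept positive):

    #include <math.h>

    // coverage of the "not on a line" region along one axis, box
    // filtered over a footprint of width w
    static float
    grid_axis (float p, float w, float N)
    {
        float a = p + 0.5f * w;
        float b = p - 0.5f * w;
        // analytic integral of the line pattern over [b, a], normalized
        float i = (floorf (a) + fminf ((a - floorf (a)) * N, 1)
                   - floorf (b) - fminf ((b - floorf (b)) * N, 1))
                  / (N * w);
        return 1 - i;
    }

    // 1 in cell interiors, falling towards 0 on grid lines; as the
    // footprint grows, the lines fade instead of aliasing into moire
    static float
    filtered_grid (float x, float y, float wx, float wy, float N)
    {
        return grid_axis (x, wx, N) * grid_axis (y, wy, N);
    }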
Based on the article
(https://developer.nvidia.com/content/depth-precision-visualized), this
should give nice precision behavior and remove the need to worry about
large maps getting clipped. If I'm doing my math correctly, near
precision is still crazy high despite being reversed, and (thanks to the
reversed depth) it's still about a quarter of a unit (for a near clip
of 4) out at 1M units distance.
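A quick sanity check of that figure, assuming the reversed-infinite
mapping d = near/z from the article (QF's exact projection may differ):

    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
        float nearclip = 4;      // near clip distance
        float z = 1e6f;          // view distance in units
        float d = nearclip / z;  // reversed depth: 1 at near, -> 0 far
        // spacing between adjacent representable floats at this depth
        float ulp = nextafterf (d, 1) - d;
        // d = near/z  =>  dz = ulp * z*z / near
        printf ("depth %g, ulp %g, step %g units\n",
                d, ulp, ulp * z * z / nearclip);
        // prints a step on the order of 0.1 units at 1M out
        return 0;
    }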
This seems to be more for legacy X11 (ie, without fixes etc), but
fullscreen really shouldn't affect grabbing directly (rather, it should
be up to the client whether grabbing (and thus warping) is enabled at
all).
Because I do most of my testing using the demos, I hadn't noticed the
double-draw until flying around with the debug camera (and it showed as
a weird shimmer behind the sky layers).
The lights debug is from the light splat experiment (this is why I kept
the code), and the bsp debug is based on that. Both are disabled for
now until I get UI controls in.
SCR_UpdateScreen_Legacy now takes only the screen functions pointer (it
didn't need camera or realtime), and the camera swizzle code has been
moved into one place to make cleaning it up easier (when I get around to
auditing AngleVectors etc).
There's still the problem with unused variables when building for
windows because of vulkan debug stuff, but this fixes the important
errors. It actually still works (at least under wine).
The biggest change was splitting up the job resources into
per-render-pass resources, allowing individual render passes to
reallocate their resources without affecting any others. After that, it
was just a matter of getting translucency and capture working after a
window resize.
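Roughly, the ownership change looks like this (hypothetical types; the
actual resource structs are more involved):

    #include <stdlib.h>

    // each render pass owns its resource block, so one pass can
    // reallocate without disturbing the others
    typedef struct pass_resources_s {
        size_t  size;
        void   *memory;
    } pass_resources_t;

    typedef struct job_s {
        int               num_passes;
        pass_resources_t *pass_resources;  // one block per pass
    } job_t;

    static void
    resize_pass (job_t *job, int pass, size_t new_size)
    {
        pass_resources_t *res = &job->pass_resources[pass];
        free (res->memory);           // only this pass is touched
        res->memory = malloc (new_size);
        res->size = new_size;
    }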
It looks horrible due to the lack of lighting etc, but it's good enough
for basic testing, especially of my render job design (which passed
with flying colors).
I'm not sure I like fontconfig (docs are...), but it is pretty standard,
and I was able to find some reasonable examples on Stack Overflow
(https://stackoverflow.com/questions/10542832/how-to-use-fontconfig-to-get-font-list-c-c).
Currently, only the one font is handled, no font sets for fallbacks
etc. It's meant for the debug UI I'm working on, so that shouldn't be a
big deal.
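The basic shape of the lookup, as in the Stack Overflow answer (single
match, no fallback sets; "Monospace" is just a placeholder family):

    #include <stdio.h>
    #include <fontconfig/fontconfig.h>

    int
    main (void)
    {
        FcConfig *config = FcInitLoadConfigAndFonts ();
        // build a pattern for the family and fill in defaults
        FcPattern *pattern = FcNameParse ((const FcChar8 *) "Monospace");
        FcConfigSubstitute (config, pattern, FcMatchPattern);
        FcDefaultSubstitute (pattern);

        FcResult result;
        FcPattern *match = FcFontMatch (config, pattern, &result);
        if (match) {
            FcChar8 *file = 0;
            if (FcPatternGetString (match, FC_FILE, 0, &file)
                == FcResultMatch) {
                printf ("font file: %s\n", file);
            }
            FcPatternDestroy (match);
        }
        FcPatternDestroy (pattern);
        return 0;
    }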
It's currently very simplistic (visible, not visible), but it gets
things started for making QF more usable in a windowed environment (not
having a visible cursor was fine in DOS, or when full screen, but not
when windowed and not actively playing).
This let me keep clearValue's simple default rgba float interpretation,
but also have full control over access to the float32, int32 and uint32
fields.
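For context, the Vulkan union being parsed (as in vulkan_core.h,
abridged): the default treats the value as color.float32, with the
other fields available when asked for explicitly.

    typedef union VkClearColorValue {
        float    float32[4];
        int32_t  int32[4];
        uint32_t uint32[4];
    } VkClearColorValue;

    typedef union VkClearValue {
        VkClearColorValue        color;
        VkClearDepthStencilValue depthStencil;
    } VkClearValue;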
This is necessary because fisheye rendering draws the scene up to 6
times per frame, which results in many of the limits being hit
prematurely, but updating r_framecount that often breaks dynamic lights.
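In other words, the per-frame and per-scene-render counters have to be
separate; a sketch (scene_rendercount is a hypothetical name):

    int r_framecount;       // once per frame: dynamic lights key off it
    int scene_rendercount;  // once per scene render (up to 6x for fisheye)

    static void
    render_frame (int fisheye_views)
    {
        r_framecount++;     // dlights stay correct
        int views = fisheye_views ? fisheye_views : 1;
        for (int i = 0; i < views; i++) {
            scene_rendercount++;    // per-render limits use this one
            // ... render one view of the scene ...
        }
    }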
Really? More to clean up before (vulkan) bsp rendering is thread-safe?
However, R_MarkLeaves was pretty close: just oldviewleaf and
visframecount, but that's still too much. Also, the reliance on
r_refdef.worldmodel irked me.
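The fix amounts to hoisting that state into something passed in
explicitly; roughly (hypothetical struct, not the actual code):

    // per-context visibility state instead of globals
    typedef struct visstate_s {
        struct model_s *worldmodel;   // no more r_refdef.worldmodel
        struct mleaf_s *oldviewleaf;
        int             visframecount;
    } visstate_t;

    static void
    mark_leaves (visstate_t *vis, struct mleaf_s *viewleaf)
    {
        if (viewleaf == vis->oldviewleaf) {
            return;     // same leaf: the marks are still valid
        }
        vis->visframecount++;
        vis->oldviewleaf = viewleaf;
        // ... walk the PVS for viewleaf, tagging visible nodes
        //     with visframecount ...
    }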
While there will be some GPU resources to sort out for multi-pass bsp
processing, I think this is the last piece required before shadow passes
can be implemented.
They were an interesting idea and might be useful in the future, but
they don't work as well as I had hoped for quake's maps due to the
overlapping light volumes causing contention while doing the additive
blends in the frame buffer. The cause was made obvious when testing in
the marcher map: most of its over 400 lights have infinite radius and
thus require full-screen passes, and all those passes fighting for the
frame buffer did very nasty things to performance. However, light splats
might be useful for many small, non-overlapping light volumes, so the
code is being kept (and I like the cleanups that came with it).
Move things around a bit so I can restore the previous behavior of
doing all lights in a single full-screen pass, but keep the code
improvements from trying to do splatted lighting.