Actually, only 29 are used because nvidia's drivers segfault when there
are more than 29 views (regardless of the exact bit pattern in the view
mask). This will allow rendering shadow maps in large batches, which
should make for better GPU utilization.
Even that's getting pretty big, but with the quanta at 128, that's a
maximum of 8 different image sizes (128 through 1024 pixels), which is
nice for my planned "staging image" idea.
Interestingly, this caused a reduction in memory use for some maps (it
did increase marcher's again, though not as much as the bogus rounding
did). The idea was to use sparse bindings to remap shadow map layers,
but it turns out sparse bindings are insanely slow (beyond unusable).
However, the reduction in the number of shadow map images seems to be
worth it.
Since switching to the 1.2 api as a requirement, might as well use the
relevant core structs instead of the extension structs (for multiview).
This came up when double-checking the max views property after running
into what appears to be an nvidia bug where more than 29 views (any bit
pattern) cause a segfault when creating the pipeline.
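
For reference, a sketch of what the core-struct version looks like: in
Vulkan 1.2, the view mask sits directly in VkSubpassDescription2 (used
with vkCreateRenderPass2) rather than in a chained
VkRenderPassMultiviewCreateInfo. The helper name and clamp constant here
are illustrative, not the actual QF code:

    #include <stdint.h>
    #include <vulkan/vulkan.h>

    /* nvidia's drivers segfault at pipeline creation with more than
       29 views, whatever the bit pattern, so clamp the view count */
    #define MAX_VIEWS 29

    static uint32_t
    shadow_view_mask (uint32_t num_views)
    {
        if (num_views > MAX_VIEWS) {
            num_views = MAX_VIEWS;
        }
        return (1u << num_views) - 1;
    }

    /* core 1.2 multiview: viewMask lives in VkSubpassDescription2
       (passed to vkCreateRenderPass2) instead of a chained
       VkRenderPassMultiviewCreateInfo extension struct */
    static VkSubpassDescription2
    shadow_subpass (uint32_t num_views)
    {
        return (VkSubpassDescription2) {
            .sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION_2,
            .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
            .viewMask = shadow_view_mask (num_views),
        };
    }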
I had missed that upping max lights to 2048 meant that up to 12288
matrices are needed for all the possible lights (2048 lights with up to
6 cube-face matrices each). This meant the light type could no longer be
encoded in id_data, but the shaders never used it anyway. This leaves
one bit free.
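
The arithmetic behind that, with the field width being my assumption:

    /* 2048 lights, up to 6 cube-face matrices each */
    #define MAX_LIGHTS          2048
    #define MATRICES_PER_LIGHT  6
    #define MAX_MATRICES        (MAX_LIGHTS * MATRICES_PER_LIGHT) /* 12288 */

    /* 12288 <= 2^14, so a matrix index needs 14 bits; with (assumed)
       15 bits available in id_data, the index fits but a light type
       alongside it no longer does, leaving exactly one bit free */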
I'd added some developer output to see how the layers were distributed
between images and found the image widths to be... odd. It turns out I
was double-adding the shadow_quanta. Oops. Results in ~164MB less memory
used by marcher (for 32 pixel quanta).
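
For reference, the intended round-up looks like the usual quantize
helper (the function name is hypothetical); the bug amounted to adding
shadow_quanta a second time on top of it:

    /* round a shadow map extent up to the next multiple of the
       quanta (32 or 128 pixels in the cases above) */
    static inline int
    quantize_size (int size, int quanta)
    {
        return ((size + quanta - 1) / quanta) * quanta;
    }

    /* the bug was effectively
           quantize_size (size, quanta) + quanta
       inflating every image width by one quantum */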
This allows "large" updates to be done in a single staging buffer packet
instead of one packet per quad (or slice). Currently, they're batched
into groups of 64 (not really enough for conchars, but that's only at
init-time, so not all that bad). Nicely, this seems to simplify the
staging code.
Fixes #65.
When looking at a struct and seeing "count" and "size", I had to hunt to
see what "size" really meant. Cherno is very much right about size vs
count being bytes vs number of objects.
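
The convention, in a hypothetical struct:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t count;     /* number of objects in the array */
        size_t   size;      /* number of bytes they occupy */
        float   *data;
    } example_array_t;
    /* invariant: size == count * sizeof (float) */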
load_conchars and load_crosshairs were using create_quad directly (due
to make_static_quad having the wrong parameters), but this spread the
handling of which buffer and index were used through the code. Thus fix
make_static_quad to take the x, y offsets (like make_dyn_quad) and then
use it in load_conchars and load_crosshairs.
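
A sketch of the shape of that change; the parameter lists and struct
fields are assumptions based on the description, not the actual
prototypes:

    /* before: each caller picked the buffer and index itself
           create_quad (static_vertex_buffer, index++, x, y, w, h);
       after: make_static_quad mirrors make_dyn_quad, taking the x, y
       offsets and keeping buffer/index selection in one place */
    static void
    make_static_quad (drawctx_t *dctx, int x, int y, int w, int h)
    {
        create_quad (dctx->static_verts, dctx->static_index++,
                     x, y, w, h);
    }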
While QFV_PacketScatterBuffer works on only one destination buffer, it
turns out it's still useful for scattering to multiple buffers, just
with multiple calls. This makes it pretty easy to combine multiple
buffer updates into a single staging buffer packet, resulting in
reducing lighting's packet use from up to 7 to just one, drastically
reducing the pressure on the staging buffer packet pool, and thus
reducing the chances of QFV_PacketAcquire stalling.
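
A sketch of the resulting pattern; the signatures (in particular
qfv_scatter_t and QFV_PacketScatterBuffer's parameters) are assumptions
from the description rather than copied from the actual headers:

    #include <string.h>

    static void
    upload_lighting (qfv_stagebuf_t *stage, VkBuffer light_buf,
                     VkBuffer mat_buf,
                     const void *lights, size_t light_size,
                     const void *mats, size_t mat_size)
    {
        qfv_packet_t *packet = QFV_PacketAcquire (stage);
        byte         *base = QFV_PacketExtend (packet,
                                               light_size + mat_size);

        /* all source data goes into one packet... */
        memcpy (base, lights, light_size);
        memcpy (base + light_size, mats, mat_size);

        /* ...then one scatter call per destination buffer, all
           sourced from that same staging packet */
        QFV_PacketScatterBuffer (packet, light_buf, 1, &(qfv_scatter_t) {
            .srcOffset = 0, .dstOffset = 0, .length = light_size,
        });
        QFV_PacketScatterBuffer (packet, mat_buf, 1, &(qfv_scatter_t) {
            .srcOffset = light_size, .dstOffset = 0, .length = mat_size,
        });
        QFV_PacketSubmit (packet);
    }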
This relies on my fork of tracy: https://github.com/taniwha/tracy
on the wip-c-vulkan branch. Everything is still rather flaky though.
This necessitated the jump to vulkan 1.2 as a requirement.
This gets the dynamic data closer to the gpu, so should make a
difference when there's a lot going on. However, for simple tests, it
made no difference.
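
"Closer to the gpu" presumably means host-visible device-local memory
(resizable BAR); a minimal sketch of selecting such a memory type, with
a fallback, though not necessarily what this commit actually does:

    #include <stdint.h>
    #include <vulkan/vulkan.h>

    /* find a memory type that is both device-local (close to the gpu)
       and host-visible (directly cpu-writable), falling back to plain
       host-visible when no such heap exists */
    static uint32_t
    find_dynamic_memory_type (VkPhysicalDevice physdev, uint32_t type_bits)
    {
        VkPhysicalDeviceMemoryProperties props;
        vkGetPhysicalDeviceMemoryProperties (physdev, &props);

        VkMemoryPropertyFlags want = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
                                   | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
                                   | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
        for (int pass = 0; pass < 2; pass++) {
            for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
                if ((type_bits & (1u << i))
                    && (props.memoryTypes[i].propertyFlags & want) == want) {
                    return i;
                }
            }
            /* second pass: give up on device-local */
            want &= ~VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
        }
        return UINT32_MAX;      /* no usable memory type */
    }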
This allows tracy to clean up properly. However, Sys_Quit will use the
jump buffer (sys_exit_jmpbuf) only if it has been set, so the use of
Sys_setjmp is optional.
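
A sketch of that pattern (details assumed):

    #include <setjmp.h>
    #include <stdlib.h>

    static jmp_buf *sys_exit_jmpbuf;    /* null until Sys_setjmp runs */

    void
    Sys_Quit (void)
    {
        /* ... normal shutdown work ... */
        if (sys_exit_jmpbuf) {
            /* return through main so tracy (and anything else
               relying on cleanup attributes) shuts down properly */
            longjmp (*sys_exit_jmpbuf, 1);
        }
        exit (0);               /* jump buffer never set: plain exit */
    }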
I'm still not happy with it being a compile time constant, but this
takes care of the interlock between frames in flight... for now: it's
fragile and really needs the excessive small-packet use in draw and
lighting to be cleaned up.
After discussion with Darian, I've decided to go with one big staging
buffer (with lots of packets) shared between FiF as the large size will,
in the end, be more flexible.
The novelty has rather worn off, I think, and the message has long since
lost its meaning. Sure things still break sometimes, but I've been
trying to keep at least the master branch functional at all times, and
even dev branches.
Host_Error and Host_EndGame use setjmp/longjmp to implement an exception
of sorts, but this messes with tracy's state even with cleanup
attributes. However, it turns out that those cleanup attributes are
exactly how gcc implements C++ destructors, and so the standard Unwind
api (part of libgcc) respects them (so long as -fexceptions is enabled
for C). Thus... replace longjmp with an implementation that uses Unwind
to unwind the stack and call the cleanup functions as needed. This is
actually important for more than just tracy, as the cleanup-attributed
vars can be thread locks.
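
Roughly, that means a forced unwind that runs the cleanups frame by
frame and only transfers control once it reaches the setjmp frame. A
minimal sketch under those assumptions (not the actual QF
implementation), built on libgcc's unwind.h and compiled with
-fexceptions:

    #include <setjmp.h>
    #include <unwind.h>

    typedef struct {
        struct _Unwind_Exception exc;   /* must be the first member */
        jmp_buf      *env;              /* target set up by the setjmp */
        _Unwind_Word  target_cfa;       /* stack frame of that setjmp */
        int           value;
    } unwind_jmp_t;

    static _Unwind_Reason_Code
    stop_fn (int version, _Unwind_Action actions,
             _Unwind_Exception_Class exc_class,
             struct _Unwind_Exception *exc,
             struct _Unwind_Context *context, void *param)
    {
        unwind_jmp_t *uj = param;
        /* the stack grows down on the usual targets: once the CFA
           reaches the recorded target frame, every cleanup attribute
           between the "throw" and the setjmp has already run, so the
           actual transfer of control is now safe */
        if ((actions & _UA_END_OF_STACK)
            || _Unwind_GetCFA (context) >= uj->target_cfa) {
            longjmp (*uj->env, uj->value);
        }
        return _URC_NO_REASON;      /* keep unwinding, running cleanups */
    }

    void
    unwind_longjmp (jmp_buf *env, _Unwind_Word target_cfa, int value)
    {
        static unwind_jmp_t uj;     /* never returns; static is fine
                                       for a single-threaded sketch */
        uj.exc.exception_class = 0;
        uj.exc.exception_cleanup = 0;
        uj.env = env;
        uj.target_cfa = target_cfa;
        uj.value = value;
        _Unwind_ForcedUnwind (&uj.exc, stop_fn, &uj);
    }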
Tracy is a frame profiler: https://github.com/wolfpld/tracy
This uses Tracy's C API to instrument the code (already added in several
places). It turns out there is something very weird with the fence
behavior between the staging buffers and render commands as the
inter-frame delay occurs in a very strange place (in the draw code's
packet acquisition rather than the fence waiter that's there for that
purpose). I suspect some tangled dependencies.
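
For reference, zone instrumentation with Tracy's C API looks like this
(TRACY_ENABLE must be defined for the macros to emit anything); the
function shown is illustrative:

    #include "tracy/TracyC.h"

    static void
    render_frame (void)
    {
        /* named zone covering this scope; the last argument is the
           "active" flag */
        TracyCZoneN (zone, "render_frame", 1);

        /* ... build and submit render commands ... */

        TracyCZoneEnd (zone);
        TracyCFrameMark;    /* mark the frame boundary for tracy */
    }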
It's there just to work around automake and libtool requirements for
EXEEXT but allowing qfcc to build menu.dat directly. Maybe someday I'll
come up with a better way.
Mostly just macro conflicts (and a little white space in passing).
Commits for integrating tracy will come later when I've come up with a
wrapper-api that I like (so non-tracy builds are easy even with tracy
available).
This fixes the weird sluggishness when running nq on windows. It turns
out it
was the "friendly neighbor" sleep code activating due to bitrot. In
addition, there are cvars for enabling unfocused sleep (defaults off)
and disabling minimized sleep (defaults on).
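
The resulting logic, sketched with hypothetical cvar names:

    #include <stdbool.h>
    #include <windows.h>

    /* hypothetical cvar values, mirroring the described defaults */
    static bool sys_sleep_unfocused = false;    /* unfocused sleep: off */
    static bool sys_sleep_minimized = true;     /* minimized sleep: on */

    static void
    friendly_neighbor_sleep (bool focused, bool minimized)
    {
        /* yield the cpu only when the relevant cvar allows it */
        if (minimized ? sys_sleep_minimized
                      : (!focused && sys_sleep_unfocused)) {
            Sleep (1);
        }
    }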
A lot is broken, especially direct input, but things are working. Better
yet, it seems the X11 and Windows key bindings are at least mostly
compatible.