I got tired of writing the same 13 or so lines of code over and over (it
actually put me off experimenting with Vulkan). Thus...
QFV_PacketCopyBuffer does the work of handling barriers and a (full
packet) copy from the staging buffer to a GPU buffer.
QFV_PacketCopyImage does a similar job for images. It still needs a lot
of work, but it already makes getting a basic texture onto the GPU much
less of a hassle.
Both functions should make staging data much less error-prone.
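For reference, a rough sketch of the raw Vulkan boilerplate a helper
like QFV_PacketCopyBuffer has to wrap for every upload (the packet and
staging-buffer types are QF-specific and omitted here; the stage and
access masks are just plausible choices, not necessarily the ones QF
uses):

    #include <vulkan/vulkan.h>

    // Illustrative only: the barrier/copy/barrier sequence a staging
    // helper has to emit for each buffer upload.
    static void
    staging_copy_buffer (VkCommandBuffer cmd, VkBuffer staging, VkBuffer dst,
                         VkDeviceSize offset, VkDeviceSize size)
    {
        VkBufferMemoryBarrier barrier = {
            .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
            .srcAccessMask = 0,
            .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .buffer = dst,
            .offset = offset,
            .size = size,
        };
        // order the transfer write against prior use of the range
        vkCmdPipelineBarrier (cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                              VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
                              0, NULL, 1, &barrier, 0, NULL);
        VkBufferCopy region = {
            .srcOffset = 0, .dstOffset = offset, .size = size,
        };
        vkCmdCopyBuffer (cmd, staging, dst, 1, &region);
        // then make the written data visible to shader reads
        barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
        barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
        vkCmdPipelineBarrier (cmd, VK_PIPELINE_STAGE_TRANSFER_BIT,
                              VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, 0,
                              0, NULL, 1, &barrier, 0, NULL);
    }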
This moves the qfv_resobj_t image initialization code from the IQM
loader into the resource management code. This will allow me to reuse
the code for setting up glyph data. As a bonus, it cleans up the IQM
code nicely.
UVs being 0 meant that lines were picking up the upper left pixel of
char 0 of conchars. With quake data, this meant a transparent pixel.
Fixes invisible debug lines :P.
It turns out that using the swapchain image for the size requirements is
unreliable: when running under RenderDoc, vkGetImageMemoryRequirements
sets the memory requirements fields to 0, leading eventually to a null
memory object being passed to vkMapMemory, which does not end well.
I had missed that vkCmdCopyImage requires the source and destination
images to have exactly the same size, and I guess I'd assumed that the
swapchain images would always be the size they said they were, but this
is not the case for tiled-optimal images. However,
vkCmdCopyImageToBuffer does the right thing regardless of the source
image size.
This fixes the skewed screenshots when the window size is not a multiple
of 8 (for me, might differ for others).
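A minimal sketch of the readback copy, assuming the source image has
already been transitioned to TRANSFER_SRC_OPTIMAL (not the actual QF
screenshot code):

    #include <vulkan/vulkan.h>

    // Copy a rendered image into a linear buffer for readback.  With
    // bufferRowLength = 0 the rows are tightly packed at the given
    // extent, so the result doesn't depend on the image's tiling.
    static void
    copy_image_to_buffer (VkCommandBuffer cmd, VkImage src, VkBuffer dst,
                          uint32_t width, uint32_t height)
    {
        VkBufferImageCopy copy = {
            .bufferOffset = 0,
            .bufferRowLength = 0,       // tightly packed
            .bufferImageHeight = 0,
            .imageSubresource = {
                .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
                .mipLevel = 0,
                .baseArrayLayer = 0,
                .layerCount = 1,
            },
            .imageOffset = { 0, 0, 0 },
            .imageExtent = { width, height, 1 },
        };
        vkCmdCopyImageToBuffer (cmd, src,
                                VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                                dst, 1, &copy);
    }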
There's a problem with screenshot capture: the image is sheared after a
window resize, but the screen view looks good, and Vulkan is happy with
the state changes.
As gbuf_base derives from the base pipeline, it inherits base's dynamic
setting, and thus doesn't need its own. I had a FIXME there as I wasn't
sure why I had a redundant setting, but I really can't see why I'd want
it different from any of the other main renderpass pipelines.
I've found and mostly isolated the parts of the code that will be
affected by window resizing, minus the pipelines, though those use
dynamic viewport and scissor settings and thus shouldn't be affected so
long as the swapchain format doesn't change (how does that happen?).
Finally, the model_funcs and render_funcs structs use designated
initializers. Not only are they good for ensuring correct
initialization, they're great for helping the programmer find the right
initializer.
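A generic illustration of the pattern (made-up struct and fields, not
the actual model_funcs/render_funcs layout):

    // Hypothetical example: fields are named at the initialization site,
    // omitted fields are zero-initialized, and reordering the struct
    // definition can't silently misassign function pointers.
    typedef struct {
        void (*load_model) (const char *name);
        void (*clear) (void);
        void (*render) (void);
    } demo_render_funcs_t;

    static void demo_render (void) { }

    static demo_render_funcs_t demo_funcs = {
        .render = demo_render,
        // .load_model and .clear left out: they become null
    };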
With the addition of dependencies on freetype and harfbuzz, it became
clear that the renderer plugins need to be explicitly linked against
external dependencies (and that I need to do more installed testing,
rather than just my static local builds). This fixes the unresolved
symbols when attempting to load any of the plugins.
qwaq doesn't supply a backtile pic, so Draw_TileClear in the gl and glsl
renderers would segfault when qwaq's window width changed due to some
back-tile being drawn.
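In essence the danger is an unchecked pic pointer; a guard along these
lines (hypothetical names, not the real per-renderer Draw_TileClear
code) illustrates the point:

    // Hypothetical sketch: bail out when no back tile pic exists instead
    // of dereferencing a null pointer.
    typedef struct qpic_s qpic_t;
    static qpic_t *draw_backtile;   // null when the game has no back tile

    void
    Demo_TileClear (int x, int y, int w, int h)
    {
        if (!draw_backtile) {
            return;             // nothing to tile with; avoid the crash
        }
        // ... tile draw_backtile over the x,y,w,h rectangle as before ...
    }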
As of a recent nvidia driver update, it became necessary to enable the
feature. I guess older drivers (or vulkan validation layers?) were a bit
slack in their checking (or perhaps I didn't actually get those lighting
changes working at the time despite having committed them).
This did involve changing some field names and a little bit of cleanup,
but I've got a better handle on what's going on (I think I was in one of
those coding trances where I quickly forget how things work).
This makes bsp traversal even more re-entrant (needed for shadows).
Everything needed for a "pass" is taken from bsp_pass_t (or indirectly
through bspctx_t (though that too might need some revising)).
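Roughly the shape of the idea, with invented fields (the real bsp_pass_t
is considerably larger):

    #include <stdint.h>

    // Hypothetical sketch: all mutable traversal state lives in the pass
    // object rather than in globals, so a main-view pass and a shadow
    // pass can walk the bsp independently.
    typedef struct demo_bsp_pass_s {
        const struct demo_brush_s *brush;   // the world model being walked
        uint32_t   *node_frames;            // per-node visit marks for this pass
        struct demo_entqueue_s *entqueue;   // entities queued by this pass
        int         vis_frame;
    } demo_bsp_pass_t;

    // every traversal entry point takes the pass explicitly
    void demo_visit_node (demo_bsp_pass_t *pass, int32_t node_id);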
Ambient lights are represented by a point at infinity and a zero
direction vector (spherical lights have a non-zero direction vector but
the cone angle is 360 degrees). This fixes what appeared to be mangled
light renderers: it was actually just an ambient light being treated as
a directional light (point at infinity, but non-zero direction vector).
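The classification boils down to something like this (field names are
invented for the sketch and don't match the actual light struct):

    // position[3] == 0 marks a point at infinity; the direction vector
    // and cone angle then pick the light type.
    typedef struct {
        float position[4];      // xyz, w == 0 => at infinity
        float direction[3];
        float cone_angle;       // degrees
    } demo_light_t;

    typedef enum {
        lt_ambient, lt_directional, lt_spherical, lt_spot,
    } demo_light_type;

    static demo_light_type
    classify_light (const demo_light_t *l)
    {
        float d2 = l->direction[0] * l->direction[0]
                 + l->direction[1] * l->direction[1]
                 + l->direction[2] * l->direction[2];
        if (l->position[3] == 0) {
            // at infinity: zero direction is ambient, else directional
            return d2 == 0 ? lt_ambient : lt_directional;
        }
        // finite position: a full 360 degree cone is a spherical light
        return l->cone_angle >= 360 ? lt_spherical : lt_spot;
    }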
There are some issues with the light renderers getting mangled, and only
the first light is even touched (just begin and end of render pass), but
this gets a lot of the framework into place.
Sounds odd, but it's part of the problem with calling two different
things with essentially the same name. The "high level" render pass in
question may be a compute pass, or a complex series of (Vulkan) render
passes and so won't create a Vulkan render pass for the "high level"
render pass (I do need to come up with a better name for it).
I really don't remember why I made it separate, though it may have been
to do with r_ent_queue. However, putting it together with the rest is
needed for the "render pass" rework.
It now lives in vulkan_renderpass.c and takes most of its parameters
from plist configs; only the name (which is used to find the config),
the output spec, and the draw function come from C. Even the debug
colors and names are taken from the config.
QFV_CreateRenderPass is no longer used, and QFV_CreateFramebuffer hasn't
been used for a long time. The C file is still there for now but is
basically empty.
The real reason for the delay in implementing support for pNext is that
I didn't know how to approach it at the time, but with the experience
I've
gained using and modifying vkparse, the solution turned out to be fairly
simple. This allows for the use of various extensions (eg, multiview,
which was used for testing, though none of the hookup is in this
commit). No checking is done on the struct type being valid other than
that it must be of a chainable type (ie, have its own pNext).
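As an illustration of what a pNext chain looks like on the C side
(standard Vulkan usage, not vkparse syntax):

    #include <vulkan/vulkan.h>

    // Standard pNext chaining: the extension struct hangs off the create
    // info, and could itself chain further structs through its own pNext.
    static VkPhysicalDeviceMultiviewFeatures multiview = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MULTIVIEW_FEATURES,
        .multiview = VK_TRUE,
    };
    static VkDeviceCreateInfo device_info = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .pNext = &multiview,
        // queue and extension setup omitted for brevity
    };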
The software renderer uses Bresenham's line slice algorithm as presented
by Michael Abrash in his Graphics Programming Black Book Special Edition
with the serial numbers filed off (as such, more just so *I* can read
the code easily), along with the Cohen-Sutherland line clipping
algorithm. The other renderers were more or less trivial in comparison.
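For reference, the outcode computation at the heart of Cohen-Sutherland
(the trivial accept/reject tests build on these; the run-slice line loop
itself is too long to reproduce here):

    // Each bit records which side of the clip rectangle the endpoint
    // lies on.  (code1 & code2) != 0 means trivial reject,
    // (code1 | code2) == 0 means trivial accept; otherwise clip the line
    // against one violated edge and recompute.
    enum { clip_left = 1, clip_right = 2, clip_bottom = 4, clip_top = 8 };

    static int
    line_outcode (float x, float y, float xmin, float ymin,
                  float xmax, float ymax)
    {
        int code = 0;
        if (x < xmin)      code |= clip_left;
        else if (x > xmax) code |= clip_right;
        if (y < ymin)      code |= clip_bottom;
        else if (y > ymax) code |= clip_top;
        return code;
    }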
Enabled by 'developer lighting'. It was good for confirming that the
lights in ad_e1m1 (Doom Hangar 16) were actually being output (over 600
of them sometimes, ouch). Turned out to be the color scale ambiguity.
Surfaces marked with SURF_DRAWALPHA but not SURF_DRAWTURB are put in a
separate queue for the water shader and run with a turb scale of 0.
Also, entities with colormod alpha < 1 are marked to go in the same
queue as SURF_DRAWALPHA surfaces (ie, no SURF_DRAWTURB unless the
model's texture indicated such).
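The routing is essentially a flag test; a self-contained sketch with
invented names and placeholder flag values:

    #define DEMO_DRAWTURB  0x010
    #define DEMO_DRAWALPHA 0x100

    typedef struct { int flags; } demo_surf_t;

    // Returns 1 for the water/alpha queue, 0 for the normal queue, and
    // sets the turb scale: 0 for alpha-only surfaces so the water shader
    // doesn't warp them.
    static int
    route_surface (const demo_surf_t *surf, float colormod_alpha,
                   float *turb_scale)
    {
        if (surf->flags & DEMO_DRAWTURB) {
            *turb_scale = 1;
            return 1;
        }
        if ((surf->flags & DEMO_DRAWALPHA) || colormod_alpha < 1) {
            *turb_scale = 0;
            return 1;
        }
        *turb_scale = 0;
        return 0;
    }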
A listener is used instead of (really, as well as) ie_app_window events
because systems that need to know about window sizes may not have
anything to do with input and the event system.
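Conceptually the listener side is just a list of callbacks that get told
the new size, independent of the input system (a generic sketch, not
QF's actual listener API):

    #include <stddef.h>

    typedef void (*viewsize_func) (int width, int height, void *data);

    typedef struct {
        viewsize_func func;
        void         *data;
    } viewsize_listener_t;

    #define MAX_LISTENERS 16
    static viewsize_listener_t listeners[MAX_LISTENERS];
    static size_t num_listeners;

    // subscribers register a callback and a context pointer
    static void
    viewsize_listen (viewsize_func func, void *data)
    {
        if (num_listeners < MAX_LISTENERS) {
            listeners[num_listeners++] = (viewsize_listener_t) { func, data };
        }
    }

    // the window system calls this whenever the size changes
    static void
    viewsize_changed (int width, int height)
    {
        for (size_t i = 0; i < num_listeners; i++) {
            listeners[i].func (width, height, listeners[i].data);
        }
    }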
This breaks console scaling for now (con_width and con_height are gone),
but is a major step towards window resize support as console stuff
should never have been in viddef_t in the first place.
The client screen init code now sets up a screen view (actually the
renderer's scr_view) that is passed to the client console so it can know
the size of the screen. The same view is used by the status bar code.
Also, the ram/cache/paused icon drawing is moved into the client screen
update code. A bit of duplication, but I do plan on merging that
eventually.