bsp_draw_queue isn't the right place, but it's just place-holder code to
help get the rest of the renderers up and running before I tackle bsp
rendering. Fixes the segfault in demo1 when the zombies get gibbed,
resulting in zombie entities.
This was necessary to get the 2d elements drawn after the fence had been
fired (thus indicating descriptors could be updated) but before actual
rendering of the 2d elements (which is how it was done before the switch
to the new system).
It turns out there was a bug in the old iqm push constants spec (I still
need to figure out how to use layouts in the new system so I can
completely delete the old).
The output system's update_input takes a parameter specifying the render
step from which to get that step's output view, and updates its
descriptors as necessary.
With this, the full render job is working for alias models (minus a few
glitches).
When creating a new command buffer and appending it to a queue, the
active buffer count needs to be incremented too, otherwise the new
command buffer will be reused prematurely. This wasn't noticed earlier
because only one buffer was being created.
Many thanks to Peter and Darian for clearing up my misunderstanding of
how vkResetCommandPool works. The manager creates command buffers from
the command pool on an as-needed basis (when the queue of available
buffers is empty), and keeps track of those buffers in a queue. When the
pool is reset, the queues (one each for primary and secondary command
buffers) are reset such that the tracked buffers are available again.
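As a rough sketch of the scheme (names and struct layout are
hypothetical, not the actual manager code; error handling and growing
the tracking array are elided):

    #include <vulkan/vulkan.h>

    // One queue each for primary and secondary buffers; only one shown.
    typedef struct bufqueue_s {
        VkCommandBuffer *buffers;   // every buffer allocated from the pool
        unsigned    count;          // how many buffers are tracked
        unsigned    next;           // next buffer to hand out
    } bufqueue_t;

    // Hand out the next available buffer, allocating from the pool only
    // when the queue of available buffers is empty.
    static VkCommandBuffer
    get_buffer (bufqueue_t *q, VkDevice dev, VkCommandPool pool)
    {
        if (q->next == q->count) {
            VkCommandBufferAllocateInfo info = {
                .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
                .commandPool = pool,
                .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
                .commandBufferCount = 1,
            };
            VkCommandBuffer cmd;
            vkAllocateCommandBuffers (dev, &info, &cmd);
            q->buffers[q->count++] = cmd;   // track it for reuse
        }
        return q->buffers[q->next++];
    }

    // Resetting the pool resets every buffer allocated from it, so the
    // tracked buffers all become available again.
    static void
    reset_buffers (bufqueue_t *q, VkDevice dev, VkCommandPool pool)
    {
        vkResetCommandPool (dev, pool, 0);
        q->next = 0;
    }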
Imageless framebuffers would probably be easier and cleaner, but this
takes care of the validation error attempting to present the second
frame (because rendering was being done to the first frame's swapchain
image instead of the second frame's).
Command buffer pools can't be reset until the commands have all been
executed. Having per-frame pools makes keeping track of pool lifetime
fairly easy.
Interleaving Vulkan objects with structs containing vec4f_t results in
the vectors becoming unaligned when there is an odd number of objects in
a set, thus producing a segfault. Putting all the structs first prevents
any such issue.
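A contrived illustration of the trap (vec4f_t is redeclared here just
for the sketch):

    // QF's vec4f_t is a 16-byte SIMD vector; redeclared for the sketch.
    typedef float vec4f_t __attribute__ ((vector_size (16)));

    typedef struct item_s {
        vec4f_t     color;      // forces 16-byte alignment on the struct
    } item_t;
    _Static_assert (_Alignof (item_t) == 16, "item_t wants 16-byte alignment");

    // Laying one block out as [handles][items]: with an odd number of
    // 8-byte handles (eg, 3 handles is a 24-byte prefix on 64-bit), the
    // item array starts on an 8-byte boundary and aligned vector loads
    // fault. Putting the vec4f_t-bearing structs first keeps them at
    // offset 0 and lets the handles trail without alignment concerns.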
The new render system now passes validation for the first frame (but
no drawing is done by the various subsystems yet). Something is wrong
with how swap chain semaphores are handled, so the second frame fails.
Frame buffer attachments can now be defined externally, with
"$swapchain" supported for now (in which case, the swap chain defines
the size of the frame buffer).
Also, render pass render areas and pipeline viewport and scissor rects
are updated when necessary.
I don't like the current name (update_framebuffer), but if the
referenced render pass doesn't have a framebuffer, one is created. The
renderpass is referenced via the active renderpass of the named render
step. Unfortunately, this has uncovered a bug in the setup of renderpass
objects: main.deferred has output's renderpass, and main.deferred_cube
and output have bogus renderpass objects.
The string type is useful for passing around strings (the only thing
that they can do, currently), particularly as arguments to functions.
The voidptr type is (currently) never generated by the core cexpr
system, but is useful for storing pointers via cexpr (probably a bit of
a hack, but it seems to work well in my current use).
Being able to specify the types in the push constant ranges makes it a
lot easier to get the specification correct. I never did like having to
do the offsets and sizes by hand as it was quite error prone. Right now,
float, int, uint, vec3, vec4 and mat4 are supported, adhering to std430
layout.
This allows the likes of:
qfv_pushconstantrangeinfo_s = {
    .name = qfv_pushconstantrangeinfo_t;
    .type = (QFDictionary);
    .dictionary = {
        .parse = {
            type = (labeledarray, qfv_pushconstantinfo_t, name);
            size = num_pushconstants;
            values = pushconstants;
        };
        stageFlags = $name.auto;
    };
    stageFlags = auto;
};
Leading to:
pushConstants = {
    vertex = { Model = mat4; blend = float; };
    fragment = { colors = uint; base_color = vec4; fog = vec4; };
};
Where the label of the labeled array (which pushConstants is) is
actually an enum flag and the dictionary value is another labeled array.
The upcoming changes to push constant handling have qfv_float etc. type
enum values, and using "float" instead of "qfv_float" is highly desirable
as the names match the glsl type names.
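As an aside, a sketch of the std430 rules applied to the example above,
assuming the fragment range packs directly after the vertex range (the
actual generator may differ):

    #include <stdint.h>

    // std430: float/int/uint align to 4 bytes; vec3 and vec4 align to 16;
    // a mat4 is four vec4 columns (64 bytes, 16-byte aligned).
    static uint32_t
    align_up (uint32_t offset, uint32_t alignment)
    {
        return (offset + alignment - 1) & ~(alignment - 1);
    }

    // Applied to the pushConstants example:
    //   vertex:   Model (mat4)       offset  0, size 64
    //             blend (float)      offset 64, size  4   -> range [0, 68)
    //   fragment: colors (uint)      offset 68, size  4
    //             base_color (vec4)  offset 80, size 16 (aligned up from 72)
    //             fog (vec4)         offset 96, size 16   -> range [68, 112)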
The creation of the render jobs doesn't really belong with the running
of those jobs. This makes the code a little easier to navigate as it was
too easy to get lost between load-time and run-time code.
This is with the new render job scheme. I very much doubt it actually
works (can't start testing until everything passes, and it's disabled
for the moment (define in vid_render_vulkan.c)), but it's helping iron
out what more is needed in the render system.
I never liked it, but with C2x coming out, it's best to handle bools
properly. I haven't gone through all the uses of int as bool (I'll leave
that for fixing when I encounter them), but this gets QF working with
c2x (really, gnu2x, because of raw strings).
Wrap the strtod, strtof, strtol, strtoul functions, supporting the end
pointer as well (if not nil, the int offset of the end pointer relative
to the string start is returned).
Also, str_unmutable creates a return string from a mutable string
(copying it).
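A minimal sketch of the end-pointer handling for the strto* wrappers
(the Ruamoko builtin plumbing is omitted; names are illustrative):

    #include <stdlib.h>

    // Parse a double and, if requested, report how far the parse got as
    // an int offset from the start of the string (strtof/strtol/strtoul
    // are handled the same way).
    static double
    wrap_strtod (const char *str, int *end_offset)
    {
        char   *end = 0;
        double  value = strtod (str, &end);
        if (end_offset) {
            *end_offset = end - str;
        }
        return value;
    }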
Meaning some leaks have been plugged, and some useful functions added:
loading a file (avoids polluting progs memory), setting the single
character lexeme string, and getting the line number.
Segfaulting when trying to produce an error message doesn't help get the
message out. Sure, `obj_error (nil...)` is a bit of an abuse, but it
shouldn't segfault the engine.
I don't know what I was thinking when I checked for 0 count for resizing
the set. Attempting to add/remove 0 elements results in adding/removing
4G elements. Oops.
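Most likely the usual 32-bit unsigned wrap (4G is the tell-tale value);
a generic illustration, not the actual set code:

    #include <stdint.h>
    #include <stdio.h>

    int
    main (void)
    {
        uint32_t count = 0;
        // An off-by-one on an unsigned zero count wraps to ~4G elements.
        printf ("%u\n", count - 1);     // prints 4294967295
        return 0;
    }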
set_while checks the iterator's current element membership and skips to
the first element with different membership. ie, if the current element
is in the set, then set_while returns the next element *not* in the set,
but if the current is not in the set, then set_while returns the next
element that *is* in the set. Rather handy for dealing with clusters of
set elements.
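A rough sketch of those semantics in isolation (not the actual set API
or implementation, just the behavior described):

    #include <stdbool.h>

    // Starting from the current element, advance to the first element
    // whose membership differs from the current element's: if current is
    // in the set, return the next element not in the set, and vice versa.
    static unsigned
    skip_cluster (bool (*is_member) (const void *set, unsigned element),
                  const void *set, unsigned current, unsigned max)
    {
        bool in = is_member (set, current);
        unsigned e = current + 1;
        while (e < max && is_member (set, e) == in) {
            e++;
        }
        return e;   // first element with different membership (or max)
    }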
I never liked the various hacks I had come up with for representing
resource handles in Ruamoko. Structs with an int were awkward to test,
pointers and ints could be modified, etc etc. The new @handle keyword (@
used to keep handle free for use) works just like struct, union and
enum in syntax, but creates an opaque type suitable for a 32-bit handle.
The backing type is a function, so v6 progs can use it (all the
necessary opcodes exist) and no modifications were needed for
type-checking in binary expressions, but only assignment and comparisons
are supported, and (of course) nil. Tested using cbuf_t and QFile: seems
to work as desired.
I had considered 64-bit handles, but really, if more than 4G resource
objects are needed, I'm not sure QF can handle the game. However, that
limit is per resource manager, not total.
Removed a bogus dependency from libQFecs, and fixed the order of ui
libraries. This takes care of some first-time make install issues.
Libtool needs the libraries to be specified in dependency order.
Carrying on as if the missing font had been loaded leads to way too many
issues for it to be a good thing (not that that really needs to be
said). Fixes the segfaults in my test scene.
Really, a bit more than a stub as the basic code is there, but nothing
works properly yet due to missing resources (especially descriptor sets
and pools), and the frame buffer creation is still disabled.
The step dependencies are not handled yet as threading isn't used at
this stage, but since I'll require dependencies to always come earlier,
this shouldn't cause a problem.
I always suspected the overflow conversions were UB, but with gcc doing
different things on arm, I thought it was about time to abandon those
particular tests. What I was not expecting was for the return value of
strcmp to be "UB" (in that there's no guarantee of the exact value, just
< = > 0). Fortunately, nothing actually relies on the value of the op
other than the tests, so modify the test to make the behavior well
defined.
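For example, since only the sign of strcmp's result is guaranteed, a
test can normalize the value before comparing (an illustrative sketch,
not the actual test code):

    #include <string.h>

    // Reduce a strcmp-style result to -1/0/1 so the expected value is
    // well defined regardless of what the implementation returns.
    static int
    sign (int x)
    {
        return (x > 0) - (x < 0);
    }
    // eg, compare sign (strcmp (a, b)) against -1, 0 or 1.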
I had somehow missed vkfieldignore in a consistency pass, or just messed
up its initialization (and thus deallocation) resulting in a double-free
of the strings.
This fixes a Sys_Error when loading the level for the first demo (and
probably many other times). It was mod_numknown getting set to 0 that
triggered the issue, but that seems to be necessary for the other
renderers. I think the whole model loading and caching system needs an
overhaul as this doesn't feel quite right due to removing part of the
advantage of caching the model data.
While the previous cleanup took care of the C side, it turns out vkgen
was leaking property list items all over the place, but they were
cleaned up by the shutdown code.
Requiring top-level {} or () for (usually) hand-written files is awkward
and even a little error prone, and certainly ugly at times. With this,
loaders that expect a particular format can specify the format a little
more directly.
The jobs will become the core of the renderer, with each job step being
one of a render pass, compute pass, or processor (CPU-only) task. The
steps support dependencies, which will allow for threading the system in
the future.
Currently, just the structures, parse support, and prototype job
specification (render.plist) have been implemented. No conversion to
working data is done yet, and many things, in particular resources, will
need to be reworked, but this gets the basic design in.
I had looked into doing reference counting on the strings, but didn't
like the implementation. However, it did make for better string handling
in the property list parser.
Flushing memory requires nonCoherentAtomSize alignment, but this can
cause the flush range to go out of bounds of an improperly sized buffer.
However, only host-visible memory (and probably really only cached, but
all three host-visible types are covered) needs flushing, so no rounding
up is done for device-local memory.
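A sketch of the kind of clamping involved (parameter names assumed; the
real code differs):

    #include <vulkan/vulkan.h>

    // Expand a flush range to nonCoherentAtomSize as Vulkan requires, but
    // never past the end of the allocation (the spec allows a final,
    // partial atom when offset + size reaches the allocation size).
    static void
    flush_range (VkDevice device, VkDeviceMemory memory,
                 VkDeviceSize offset, VkDeviceSize size,
                 VkDeviceSize alloc_size, VkDeviceSize atom_size,
                 VkMemoryPropertyFlags props)
    {
        if (!(props & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            return;     // device-local memory is never mapped or flushed
        }
        VkDeviceSize begin = (offset / atom_size) * atom_size;
        VkDeviceSize end = offset + size + atom_size - 1;
        end = (end / atom_size) * atom_size;
        if (end > alloc_size) {
            end = alloc_size;
        }
        VkMappedMemoryRange range = {
            .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
            .memory = memory,
            .offset = begin,
            .size = end - begin,
        };
        vkFlushMappedMemoryRanges (device, 1, &range);
    }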
I'm not sure just what was going on other than *other* components were
getting double-removed when the hierarchy reference component was
removed while the entity was being deleted. This might even prevent
issues with removing the hierarchy from an entity that's not being
deleted as the pre-invalidation prevents the removal from deleting the
entity.
It turns out that the fixes for other problems related to removing
hierarchy reference components fixed the problem that moving the entity
invalidation had fixed, and invalidating the entity late somehow broke the
sprite renderer (at least in glsl).
The hierarchy leak was particularly troublesome to fix, but now the
hierarchies get updated (and freed) automatically just by removing the
hierarchy reference component from the entity. I suspect there will be
issues with entities that are on multiple hierarchies, but I'll sort
that out later.
It turns out that the bsearch bug was hiding incorrect handling of
indices in the subpool beyond the last tracked subpool. In that case, a
correctly working bsearch (rightly) fails to find the range, but the
search can be skipped entirely.
And rename _bsearch to QF_bsearch_r since that's far less confusing.
Also, update the test to make it possible for valgrind to detect the
out-by-one. The problem was found when trying to remove components from
an entity when using subpools.
I'm not 100% sure this is the best fix for the issue, but the way the
cbuf interpreter stack works (especially in the console code) meant that
the stack was built in the order opposite to how it could be safely
deleted with the existing function. Yeah, more leaks :P
Some of them, especially in rua_obj, were quite legitimate and even a
problem for thread-safety (rua_cmd is currently not thread-safe, but it
needs a lock, which I don't feel like doing at this stage).
Uncovered by the memory leak cleanup: the nodes were all being "linked"
to the first node, so the nodes in between the first and last were
getting lost.
This was mainly for the shutdown functions, thus allowing Sys_Shutdown
(and Sys_RegisterShutdown) to be per-thread, but it seemed like a good
idea to make everything per-thread.
Finally, hash links can be freed when the hash context is no longer
relevant. The context is created automatically when needed, and the
owner can delete the context when it's done with the relevant hash
tables.
It should have been this way all along, and it seems I thought they were
when I did rua_gui.c as it already freed its resource block, which would
have been a double free (oops). Fixes an invalid write when shutting
down progs in qwaq-cmd (relevant change not committed).
I tried out -std=c2x (doesn't work due to typeof (gcc bug?)) and
-std=gnu2x (still no #embed) and found that only regex.c was a problem
(nice), and now it no longer is a problem.
Render passes and subpasses are now mostly initialized, just command
buffers and frame buffer related info to go (including view/scissor for
pipelines).
Due to a typo in the list of extra property list items to add to the
symbol table (corrected), subsequent symbols were pointing to the wrong
memory address.
Not only does this quieten the validation layers, it ensures that all
the object handles are named and where they need to be. Also fixes only
one pipeline being created instead of the 15 or so.
It turns out labeled arrays don't work if structs aren't declared in the
right order (no idea what that is, though) as the struct might not have
been processed when the labeled array field is initialized. Thus, do a
pre-processing pass to set up any parse data prior to writing the
tables.
Most importantly, this cleans up creation of self-referencing symbol
tables from property lists, but adds in C-defined symbols as well. While
QFV_ParseRenderInfo is currently the only function that uses it, it
might be helpful in the future, especially as I clean up the other parse
support code.
The render passes seem to be created successfully, but pipelines fail
due to not having layout set, resulting in a segfault (bug in validation
layers?).
I don't remember why I kept the abbreviated configs for images and image
views, but it has become necessary to be able to specify them
completely. In addition, image views support external images.
The rest was just cleaning up after the changes to qfv_resobj_t.
Vulkan requires color blend state only for color attachments (it's
ignored otherwise), but it shouldn't be necessary to actually specify
the blend state; it should just default to something reasonable.
Unfortunately, colorWriteMask affects the output even if blending is
disabled, so it must be initialized to something reasonable (r|g|b|a)
for when the default is used.
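For reference, a default along those lines (blending off, write mask
fully enabled) would look like:

    #include <vulkan/vulkan.h>

    // With blendEnable false the blend factors/ops are ignored, but
    // colorWriteMask still gates what reaches the attachment, so the
    // sensible default is all four channels enabled.
    static const VkPipelineColorBlendAttachmentState default_blend = {
        .blendEnable = VK_FALSE,
        .colorWriteMask = VK_COLOR_COMPONENT_R_BIT
                        | VK_COLOR_COMPONENT_G_BIT
                        | VK_COLOR_COMPONENT_B_BIT
                        | VK_COLOR_COMPONENT_A_BIT,
    };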
.dictionary can ask for standard parsing via a .parse key (value is
ignored currently).
Fields can use $auto to use standard parsing for that field.
If either is used, the plist field descriptors are written.
They're currently just stubs, but this gets the render info loading
working without any errors. The next step is to connect up pipelines and
create the image resources, then implementing the task functions will
have meaning.
This gets an empty (no tasks or pipelines connected) render context
initialized and available for other subsystems to register their task
functions. Nothing is using it yet, but the test parse of rp_main_def
fails gracefully (needs those tasks).
This just sets up the memory block and cexpr descriptors for the
parameters; parameter parsing is separate (and next). The parameters are
aligned to their size.
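As a minimal sketch of that size-based alignment (illustrative only;
assumes the parameter sizes are powers of two):

    #include <stddef.h>

    // Place a parameter at the next offset aligned to its own size and
    // advance the running offset past it; returns where it was placed.
    static size_t
    place_param (size_t *offset, size_t size)
    {
        size_t aligned = (*offset + size - 1) & ~(size - 1);
        *offset = aligned + size;
        return aligned;
    }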
Needed to add the render passes plitem to the cexpr symbol table, too.
All that remains is to figure out how to deal with multiview (or really
@next) and get task parsing working.
A bunch of missed struct members, incorrect parse types, and some logic
errors in the parse setup. Still not working due to problems with
vectors from plist string references and some other errors, but getting
there.
This is most useful when parsing a labeled array where the key/value
pairs go into a simple array:
    key = value;
going to:
    struct foo {
        const char *key;
        enumtype value;
    };
This treats dictionary items as arrays ordered by key creation (ie, the
order of the key/value pairs in the dictionary is preserved). The label
is written to the specified field when parsing the struct. Both actual
arrays and single element "arrays" are supported.