The goal is to get vulkan relying on the "renderpass" abstraction, but
this gets vulkan up and running again, and even fixes the rendering
issues (in the end, getting canvas working wasn't required, but is still
planned).
This is a bit of a hack to allow me to work on vulkan's screen update
"pipeline" without having to mess with the other renderers, since it
turns out they're (currently) fundamentally incompatible.
When a pic needs dynamic vertices (eg, for sub-pics), a descriptor set
is allocated and updated if one has not been created for the pic. This
is done each frame: the descriptor sets are recycled (there currently is
rarely a need for more than a small handful of dynamic descriptors, so
64 should be plenty for now).
Unfortunately, due to the order of operations issue between draw items
getting queued and submitting commands to vulkan (the cause of the pics
not rendering correctly per 8fff71ed4b),
the validation layers complain (correctly) about the command buffers
being executed with updated descriptor sets. Getting the canvas system
up and running will fix that.
The pic is scaled to fill the specified rect (then clipped to the
screen (effectively)). Done just for the console background for now, but
it will be used for slice-pics as well.
Not implemented for vulkan yet as I'm still thinking about the
descriptor management needed for the instanced rendering.
Making the conback rendering conditional gave an approximately 3% speed
boost to glsl with the GL stub (~12200fps to ~12550fps), for either
conback render method.
The wording might seem a little odd, but cl_screen is really the full 2D
client HUD while the console is completely independent of the client and
shouldn't know that the client even exists. Ideally, the resize events
would be handled by the canvas system, to which end this is a small
step.
This fixes the broken dynamic lighting in fisheye rendering. It does
mean that frustum culling of lit surfaces needed to be removed, but if
not doing frustum culling on lit surfaces was good enough for a P90,
it's probably good enough for an i7-6850K.
They are usually larger images (eg, the main menu graphic) and thus make
a mess of the atlas (thus, making them separate means a smaller atlas
can be used). All sorts of things are in a mess (descriptor management,
subpic rendering not supported, wrong alpha value for the transparent
pixel), but this gets the basic loader going.
This just takes advantage of the dynamic verts for doing subpics. It's
not really the most optimal code as it has to write both the vertices
(64 bytes per quad) and the instances (24 bytes per quad), but that's
still better than the old 128 bytes per quad (and having a single
pipeline is nice).
The problem was that I had mixed up the purpose of the per-frame vertex
buffers and used them for the core quad data when they were meant for
subpic and the like, and had forgotten about the static vertex buffer.
This gets at least conchars working (pics in general not tested yet).
Any performance gains will be utterly swamped by the deferred renderer,
but it will allow better control of quad render order by any client
code (and should be slightly better for simpler renderers when I get
support for them working).
Right now, plenty is broken (much of the higher level draw functions are
disabled, and pics don't render correctly), but this gets at least the
basics in so I'm not bouncing diffs around as much.
It turns out the slice pipeline is compatible with the glyph pipeline in
that its vertex attribute data is a superset (just the addition of the
offset attributes). While the queues have yet to be merged, this will
eventually get glyphs, sliced sprites, and general (static) quads into
the one pipeline. Although this is slightly slower for glyph rendering
(due to the need to pass an extra 8 bytes per glyph), this should be
faster for quad rendering (when done) as it will be 24 bytes per quad
instead of 32 bytes per vertex (ie, 128 bytes per quad), but this does
serve as a proof of concept for doing quads, glyphs and sprites in the
one pipeline.
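As a rough illustration of the byte counts mentioned above, hypothetical
layouts (not QF's actual attribute formats) might look like this:

    // Hypothetical instance layout matching the 24 bytes per quad noted
    // above (the real attribute layout in QF may differ).
    #include <stdint.h>

    typedef struct {
        float    position[2];   // 8 bytes: 2d base position
        uint32_t glyph_index;   // 4 bytes: index for vertex pulling
        uint32_t color;         // 4 bytes: packed RGBA
        float    offset[2];     // 8 bytes: the extra slice offset attributes
    } quad_instance_t;          // 24 bytes per instanced quad

    typedef struct {
        float position[2];      // 8 bytes
        float texcoord[2];      // 8 bytes
        float color[4];         // 16 bytes
    } quad_vertex_t;            // 32 bytes per vertex, 128 bytes per quad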
The main reason I had created it in the first place was that I hadn't thought of
using image view swizzles to handle coverage-alpha textures (for
monochrome glyphs), and for whatever reason also had the texture in a
different binding slot to the twod fragment shader. With both issues out
of the way, there's no reason to have an almost identical (just some
naming) shader just for glyphs.
With an eye towards merging the 2d pipelines as much as possible, I
found that the glyph and basic 2d quad texture descriptors were in
different slots for no reason I can think of. Having them in the same
slot would mean I could use the same fragment shader for all 2d
pipelines (though the plan is to get it down to two: (sliced) quads and
lines).
I hadn't noticed the problem until playing with early fragment tests for
the sprite fragment shaders, but passing data that expects triangle
strips to a pipeline that expects triangle lists doesn't work too well
when drawing quads.
This is the beginning of supporting 2d rendering in 3d space. The idea
is that a canvas can be in 2d orthographic space (not attached to any
entity with a 3d transform), or in 3d perspective space (attached to an
entity with a 3d transform, either as a child of the camera, or of some
object in 3d space).
It will replace the current HUD code when it's working.
I found I needed the subrange start as well as the end, but I liked that
the subpools themselves used only the end of the range, so switching to
just a uint32_t for the value and adding a function to return a tuple
made sense. I had kept the struct because I thought I might want to
store additional information (eg, the entity "owning" the subpool), but
found that I didn't need such information as the systems using subpools
that way would have access to the entity by other means.
Interestingly, the change found a bug in subpool creation: I really
don't know why things worked before, but they work better now :)
Subpools are for grouping components by some criterion. Any component
that has a rangeid callback will be grouped with other components that
return the same range id. Note that the ordering of components within a
group will be affected by adding a component into a group that comes
before that group (or removing a component).
Component pools can have multiple groups, added and removed dynamically,
but removing a group should (currently) be done only when empty.
While "set" is a tad strong (there's just the one component for now), I
had missed the changes when adding ECS systems. Fixes the segfault at
the end of demo1 (ie, when any center text is printed).
Instead of creating new entities for the text views. This approximately
halves the number of entities required to display flowed text, but also
tests the ability to have an entity in multiple hierarchies (the goal of
the ECS component and system changes).
Marking them as cached means that they'll be "uncached" instead of
destroyed when freed, which would not be a particularly good thing. I
have no memory as to how I found this as I found the change in my git
stash.
While this does require an extra call after registering components, it
allows for multiple component sets (ie, sub-systems) to be registered
before the component pools are created. The base id for the registered
component set is returned so it can be passed to the subsystem as
needed.
There's now a main ecs.h file that includes the sub-system headers,
removing the need to explicitly include several header files, and the
sub-system headers are a little less cluttered.
This means that the component id used for hierarchy references must be
passed to Hierarchy_New and Hierarchy_Copy, but it does allow an entity to
have more than one hierarchy, which is useful for canvases (hierarchies
of views) in the 3d world (the canvas root would have a 3d hierarchy
reference and a 2d (view) hierarchy reference).
It seems that the mouse escaping the barriers requires some combination
of hitting two at once, and holding your mouth just right (something
about sliding the mouse up and down one barrier near the other).
However, sending the mouse back to the center of the screen when it
touches a barrier makes such sliding impossible.
This seems to fix #38
I obviously need a better way to test legacy code because the fix for
unsigned-int behavior with clang broke mouse warping when using
XGrabPointer instead of XInput2's XIGrabEnter.
The separation now uses height above (right of) the base line, and depth
below (left of) the base line. This puts the text exactly where I want
it, but there's still the problem of uneven line spacing caused by
descenders and ascenders. However, I suspect that's more up to the
text/font handling code to get the boxes right (maybe set spaces to have
the right dimensions?).
The main problem was the confusion about the coordinates within a single
glyph, and thus the glyph's position within the view's box. With this,
flowed text works fairly well except for some issues with spacing
between lines (which I think is due to the flow code not having been
tested with offset boxes).
While Draw_Glyph does draw only one glyph at a time, it doesn't shape
the text every time, so is a major win for performance (especially
coupled with pre-shaped text).
Font cannot be overridden yet, but script attributes (language, script
type, direction) and features can be set at all three levels in a
passage. Attributes on the root level act as defaults for the paragraph
and word levels, and paragraph attributes act as defaults for the word
level.
Passage_Delete needs to check if the hierarchy is valid as no text may
have been added, which results in a null pointer for the hierarchy.
Text shaping needs to set language etc every time it resets the buffer.
This causes some problems with linking if libQFgui is linked with
libQFrenderer (which is necessary in the long run), but it seems
everything gets away with it for now (which, tbh, I don't like).
And add a function to process a passage into a set of views with glyphs.
The views can be flowed: they have flow gravity and their sizes set to
contain all the glyphs within each view (nominally, words). Nothing is
tested yet, and font rendering is currently broken completely.
Font and text handling is very much part of user interface and at least
partially independent of rendering, but it does fit better with GUI than
general UI (ie, both graphics and text mode), thus libQFgui as well as
libQFui are built in the ui directory.
The existing font related builtins have been moved into the ruamoko
client library.
I had done the loader for the GPU renderers, so the CPU renderer didn't
draw the characters transparently. Fixes the pink block in my ruamoko
test scene (due to the notify text area).
While it doesn't really make any difference to the texture upload (8-bit
is 8-bit), and the sampler is in control of the interpretation, this
makes vulkan more consistent with the specification of the glyph
texture.
In theory, it supports all the non-palette formats, but only luminance
and alpha (tex_l and tex_a) have been tested. Fixes the rather broken
glyph rendering.
World scale can only be approximate if non-uniform scales and
non-orthogonal rotations are involved, but it is still useful
information sometimes.
However, the calculation is expensive (it needs a square root), so world
scale is no longer a component; instead it is calculated on an as-needed
basis, since computing it for every transform is wasteful when it is
used only by the legacy-GL alias model renderer.
Thanks to the 3d frame buffer output being separate from the swap chain,
it's possible to have a different frame buffer size from the window
size, allowing for a smaller buffer and thus my laptop can cope (mostly)
with the vulkan renderer.
The escape was actually harmless as the buffers would not be read due to
the particle count being 0 (thus why the buffers were at the end of the
staging buffer: no space was allocated for them, only for the system
buffer, but their offsets were just past the system buffer). However,
the validation layers quite rightly did not like that. Thus, the two
buffers are pointed to the system buffer so all three descriptors are
always valid.
Where too far is 1024 units as that is the maximum supported, or the
radius. The change to using unsigned for the distances meant the simple
checks missed the effective max dist going negative, thus leading to a
segfault.
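The failure mode is the usual unsigned wraparound; a minimal
illustration with hypothetical names and values, not the actual light
code:

    // With unsigned distances, a subtraction that "goes negative" wraps
    // to a huge value, so the simple less-than check stops culling.
    #include <stdio.h>

    int
    main (void)
    {
        unsigned max_dist = 1024;
        unsigned radius = 2000;                     // exceeds the maximum
        unsigned effective_max = max_dist - radius; // wraps to ~4 billion
        unsigned dist = 3000;

        if (dist < effective_max) {                 // passes, incorrectly
            printf ("light at %u incorrectly accepted\n", dist);
        }
        return 0;
    }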
I had debated putting the blending in the compose subpass or a separate
pass and originally went with the separate pass, but it turns out that
removing the separate pass gains 1-3% (5-15/545 fps in a timedemo of
demo1).
viewstate's time is from cl.time which is not what's used to set
last_servermessage (that uses realtime). After careful investigation, I
found that cl.time is not at all suitable and that the original id code
used realtime (I think it was just me being lazy when I merged the
code). Fixes the stuck net icon.
quake changes rocket and grenade models to explosion models, but
quakeworld does not. This resulted in nq drawing two explosion sprites
instead of one. Separating the types allows nq to skip adding a sprite
for the explosion.
It's a bit flaky for particles, especially at higher frame rates, but
that's due to supporting only 64 overlapping pixels. A reasonable
solution is probably switching to a priority heap for the "sort" and
upping the limit.
This required making the texture set accessible to the vertex shader
(instead of using a dedicated palette set), which I don't particularly
like, but I don't feel like dealing with the texture code's hard-coded
use of the texture set. QF style particles need something mostly for the
smoke puffs as they expect a texture.
It doesn't want to work on my nvidia (or more recent sid?) and doesn't
seem to be necessary. The problem may be multiple event sets before the
first wait, but investigation can wait for now.
This is probably the biggest reason I had problems with particles not
updating correctly: the descriptors were generally not pointing to
where the data actually was in the staging buffer.
I don't yet know whether they actually work (not rendering yet), but the
system isn't locking up, and shutdown is clean, so at least resources
are handled correctly.
Although it works as intended (tested via hacking), it's not hooked up
as the current frame buffer handling in r_screen is not readily
compatible with how vulkan output is handled. This will need some
thought to get working.
This splits up render pass creation so that the creation of the various
resources can be tailored to the needs of the actual render pass
sub-system. In addition, it gets window resizing mostly working (just
some problems with incorrect rendering).
If the result object type pointer is null, then the parsed result type
and value pointers are written directly to the result object rather than
testing the parsed result type against the object type and copying the
parsed result value data to the location of the object value. It is then
up to the caller to check the type and copy the value data.
It turns out the semaphore used for vkAcquireNextImageKHR may be left in
a signaled state for VK_ERROR_OUT_OF_DATE_KHR. While it seems to be
possible to clear the semaphore using an empty queue submission,
destroying and recreating the semaphore works well.
Still have problems with the frame buffer after window resize, though.
Swap chain acquisition is part of final output handling. However, as the
correct frame buffers are required for the render passes, the
acquisition needs to be performed during the preoutput render pass.
Window resize is still broken, but this is a big step towards fixing it.
This is the minimum maximum count for sampled images, and with layered
shadow maps (with a minimum of 2048 layers supported), that's really way
more than enough.
I guess nvidia gives a non-srgb format as the first in the list, but my
laptop gives an srgb format first, thus the unexpected difference in
rendering brightness. Hard-coding BGRA isn't any better, but it will do
for now.
Things are a bit of a mess with interdependence between sub-module
initialization and render pass initialization, and window resizing is
broken, but the main render pass rendering to an image that is then
post-processed (currently just blitted) is working. This will make it
possible to implement fisheye and water warp (and other effects, of
course).
When working, this will handle the output to the swap-chain images and
any final post-processing effects (gamma correction, screen scaling,
etc). However, currently the screen is just black because the image
for getting the main render pass output isn't hooked up yet.
Now each (high level) render pass can have its own frame buffer. The
current goal is to get the final output render pass to just transfer the
composed output to the swap chain image, potentially with scaling (my
laptop might be able to cope).
While the HUD and status bar don't cut out a lot of screen (normally),
they might start to make a difference when I get transparency working
properly. The main thing is this is a step towards pulling the 2d
rendering into another render pass so the main deferred pass is
world-only.
Using swizzles in an image view allows the same shader to be used with
different image "types" (eg, color vs coverage).
Of course, this needed to abandon QFV_CreateImageView, but that is
likely for the best.
It turns out that nearest filtering doesn't need any offsets to avoid
texel leaks so long as the screen isn't also offset. With this, the 2d
rendering looks good at any scale (minus the inherent blockiness).
It seemed like a good idea at the time, but it exacerbates pixel leakage
in atlas textures that have no border pixels (even in nearest sampling
modes).
The rest of the system won't add one automatically (since entity
creation no longer does), but the alias and iqm rendering code expect
there to be one. Fixes a segfault when starting a scene (demo etc).
There's no API yet as I need to look into the handling of qpic_t before
I can get any of this into the other renderers (or even vulkan, for that
matter).
However, the current design for slice rendering is based on glyphs (ie,
using instances and vertex pulling), with 3 strips of 3 quads, 16 verts,
and 26 indices (2 reset). Hacky testing seems to work, but real tests
need the API.
I don't know why it didn't happen during the demo loop, but going from
the start map to e1m1 caused a segfault due to the efrags for a lava
ball getting double freed (however, I do think it might be because the
ball passed through at least two leafs, while entities in the demos did
not). The double free was because SCR_NewScene (indirectly) freed all
the efrags without removing them from entities, and then the client code
deleting the entities caused the visibility components to get deleted
and thus the efrags freed a second time. Using ECS_RemoveEntities on the
visibility component ensures the entities don't have a visibility
component to remove when they are later deleted.
While simple component pools can be cleared simply by zeroing their
counts, ones that have a delete function need that function to be called
for all the components in the pool otherwise leaks can happen.
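A minimal sketch of the idea, using hypothetical types rather than QF's
actual ECS structures:

    // Pools whose components have a destroy callback can't just have
    // their counts zeroed: each live component must be destroyed first.
    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        size_t size;                    // bytes per component
        void (*destroy) (void *comp);   // null for simple components
    } component_desc_t;

    typedef struct {
        uint32_t count;
        void    *data;
    } component_pool_t;

    void
    pool_clear (component_pool_t *pool, const component_desc_t *desc)
    {
        if (desc->destroy) {
            for (uint32_t i = 0; i < pool->count; i++) {
                desc->destroy ((char *) pool->data + i * desc->size);
            }
        }
        pool->count = 0;    // simple pools need only this
    }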
It's currently used only by the vulkan renderer, as it's the only
renderer that can make good use of it for alias models, but now vulkan
shows shirt/pants colors (finally).
This cuts down on the memory requirements for skins by 25%, and
simplifies the shader a bit more, too. While at it, I made alias skins
nominally compatible with bsp textures: layer 0 is color, 1 is emissive,
and 2 is the color map (emissive was on 3).
As the RGB curves for many of the color rows are not linearly related,
my idea of scaling the brightest color in the row just didn't work.
Using a masked palette lookup works much better as it allows any curves.
Also, because the palette is uploaded as a grid and the coordinates are
calculated on the CPU, the system is extendable beyond 8-bit palettes.
This isn't quite complete: the top and bottom colors are still in
separate layers even though their indices and masks could fit in just
one, but that requires reworking the texture setup (for another commit).
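As a sketch of the CPU-side coordinate calculation, assuming a 16x16
grid (the actual layout in QF may differ):

    // Map an 8-bit palette index to texture coordinates for a palette
    // uploaded as a 16x16 grid; larger palettes just need a larger grid.
    void
    palette_coords (unsigned index, float *u, float *v)
    {
        *u = ((index & 0x0f) + 0.5f) / 16.0f;   // column within the grid
        *v = ((index >> 4) + 0.5f) / 16.0f;     // row within the grid
    }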
For whatever reason, I had added an extra 4 bytes to the fragment
shader's push-constants. It took me a while to figure out why renderdoc
wouldn't stop complaining about me not writing enough data.
It turns out my approach to alias skin coloring just doesn't work for
the quake data due to the color curves not having a linear relationship,
especially the bottom colors.
It works on only one layer and one mip, and assumes the provided texture
data is compatible with the image, but does support sub-image updates
(x, y location as parameters, width and height in the texture data).
The bright end of the color map is actually twice the palette value, but
I didn't understand this when I came up with the shirt/pants color
scheme for vulkan. However, the skin texture can store only 0..1, so the
mapping to 0..2 needs to be done in the shader. It looks like it works
at least better: the gold key at the end of demo1 doesn't look as bleh,
though I do get some weird colors still on ogres etc.
Currently only for gl/glsl/vulkan. However, rather than futzing with
con_width and con_height (and trying to guess good values), con_scale
(currently an integer) gives consistent pixel scaling regardless of
window size.
Well, sort of: it's still really in the renderer, but now calling
R_AddEfrags automatically updates the visibility structure as necessary,
and deleting an entity cleans up the efrags automatically. I wanted this
over twenty years ago.
The support for the new vector types broke compiling code using
--advanced. Thus it's necessary to ensure vector constants are
float-type and vec3 and vec4 are treated as vector and quaternion, which
meant resurrecting the old vector expression code for v6p progs.
This fixes maplist showing only those maps in the user directory.
However, no checking is done for duplicate files due to earlier search
paths overriding later paths.
This involved disabling sigils for hipnotic and rogue (not used),
adjusting the number of items views, and moving the two keys views for
hipnotic. Rogue is not yet using the correct status bar pics.
The functionality of the hipnotic and rogue weapon power-ups is now done
by various mappings instead of separate functions. In theory, this
should make things more flexible, but most importantly, there's a lot
less code duplication.
Sigils can't be flashed as they don't have any animations provided, and
they're not normally as critical. I don't know why items weren't
flashed, but since the pics are there, might as well use them (and the
flashing keys do look pretty good).
I think this makes the purpose of the functions more clear and makes the
protocol logic less dependent on the meaning of some of the updates.
Most of the update functions are not fully implemented yet.
I had forgotten that the cl structs in nq and qw were different layouts,
which resulted in qw's sbar/hud being quite broken. Rather than messing
with the structs, I decided it would be far better in the long run to
clean up sbar's access to the cl struct and the few other nq/qw specific
globals it used. There are still plenty of bugs to fix, but now almost
everything is in the one place.
In the end, it was removal of the old entries that corrupted the parent
indices. Very nicely, most of the fixes involved removing code. Taking
advantage of the ECS to debug the hierarchies was fun, and the resulting
colorized entity names helped no end.
Even 37 objects is a lot, but it's a whole lot better than 180. Most
importantly, it reproduces the problem, which seems to be not all parent
indices getting updated. The child indices seem to be working nice, as
do the reference object indices (ie, the entity components). I suspect
it's the parent indices getting corrupted that cause problems on the
second switch of the hud/sbar cvar as the parent indices are used to
find the child indices that need to be updated.
This improves the behavior of hierarchies when self-inserting, but nq's
sbar still crashes when trying to do so. However, its tree is a fair bit
more complex than the test case (that does pass now), so I need to try
to replicate the important parts of the tree with fewer objects (180 is
too many to work with).
As expected, reparenting a sub-hierarchy such that it (and possibly its
children) move up the arrays fails (this is why sbar needs to first
remove the sub-hierarchy then insert it).
Since test_build_hierarchy2 already tested removal of a sub-hierarchy
(once fixed), it seems test_build_hierarchy3 testing parenting within
the same hierarchy would be a good idea. Reparenting such that
everything moves to later in the arrays works nicely (not very
surprising).
Its updates to the various indices were out, but this was missed due to
the tests being wrong. I wonder if I got interrupted while working on
them last and just assumed the removals were correct. This improves
sbar's behavior, but it's still wrong when pulling the armory view out
of the inventory. Very unsure what's going on, but the various indices
look ok, as do the view positions.
Ugh, things were quite bad, it turns out. It seems a lot of trouble
would have been saved if these tests had worked (however, something is
still not quite right as views are out of place).
This is the bug that sbar found when pulling a sub-hierarchy out of a
larger hierarchy: child indices not getting updated correctly for later
siblings and any niece objects.
The hierarchy-specific tests from the transform tests have been moved
into the ecs tests and the transform tests renamed appropriately. As
part of the process, hierarchies can now have a null type (ie, no
additional components maintained by the hierarchy). This should make
sorting out the issues highlighted by sbar a bit easier.
It should have always been here, but when I first created the hierarchy
and transform objects, I didn't know where things would go. Having two
chunks of code for setting an entity's parent was already too much,
and I expect to have other hierarchy types. Doesn't fix the issues
encountered with sbar, of course.
The text object covering the whole passage was not being initialized,
thus center print tried to print rubbish when (incorrectly) printing the
entire message.
I'm not particularly happy with the way onresize is handled, but at this
stage a better way of dealing with resizing views and getting the child
views to flow correctly hasn't come to mind. However, the system should
at least be usable.
I'm not sure when things broke on my laptop (I thought I got warp and
fisheye working on my laptop), but it turns out things weren't quite
right, thus warp (and presumably fisheye) weren't working properly due
to GLSL errors that I only just noticed. This fixes water warp (and
probably fisheye).
This includes moving the related cvars from both nq and qw into the
client hud code. In addition, the hud code supports update and
update-once function components. The update component is for updates
that occur every frame, but update-once components (not used yet) are
for one-shot updates (eg, when a value updates very infrequently).
Much of the nq/qw HUD system is quite broken, but the basic status bar
seems to be working nicely. As is the console (both client and server).
Possibly the biggest benefit is separating the rendering of HUD elements
from the updating of them, and much less traversing of invisible views
whose only purpose is to control the positioning of the visible views.
The view flow tests are currently disabled until I adapt the flow code
to ECS.
There seems to be a problem with view resizing in that some gravities
don't follow resizing correctly.
As the bookkeeping data is spread between three arrays, sorting a
component pool is not trivial and thus not something to duplicate around
the codebase.
It's not quite complete in that entities need to be created for the
objects, and the passage text object might get additional components in the
hierarchy, but the direct use of views has been replaced by the use of a
hierarchy object with the same tree structure, and now has text objects
for paragraphs and the entire passage.
As an implementation detail, inserting null hierarchies (src hierarchy
is null) is supported for ease of hierarchy construction from data. No
component data is copied, only the child and parent indices and counts
are updated.
The resource functions assume the requested layers is correct (really,
the lighting code assumes that the resource functions assume such), but
QFV_CreateImage multiplies the layer count by 6 for cube maps (really,
the issue is in QFV_CreateImage, but I want to move away from it
anyway).
The check for the entity being the view model was checking only the
view model id, which is not sufficient when the view model is invalid
because it was never set to anything other than 0s. A better system for
dealing with the
view model is needed.
Another step towards moving all resource creation into the one place.
The motivation for doing the change was getting my test scene to work
with only ambient lights or no lights at all.
It seems this isn't needed any more (not sure why) as both glsl and
vulkan are happy without it. Also unsure why moving to ECS made gl and
sw change behavior regarding rendering the test models in my scene.
While the libraries are probably getting a little out of hand, the
separation into its own directory is probably a good thing as an ECS
should not be tied to scenes. This should make the ECS more generally
useful.
This fixes the segfault due to the world entity not actually existing,
without adding a world entity. It takes advantage of the ECS in that the
edge renderer needs only the world matrix, brush model pointer, and the
animation frame number (which is just 0/1 for brush models), thus the
inherent SOA of ECS helps out, though benchmarking is needed to see if
it made any real difference.
With this, all 4 renderers are working again.
Since entity_t has a pointer to the registry owning the entity, there's
no need to access a global to get at the registry. Also move component
getting closer to where it's used.
It no longer initializes the new component. For that, use
Ent_SetComponent which will copy in the provided data or call the
component's create function if the data pointer is null (in which case,
Ent_SetComponent acts as Ent_AddComponent used to).
I realized I should check that the entity owns the component before
treating it as existing when adding an existing component, then noticed
that Ent_RemoveComponent didn't check before removing the component. I
imagine that would have been a fun debug session :P
While checking on how Ent_AddComponent behaved (I don't remember what I
was looking for, though), I realized that instead of treating adding the
same component to an entity as an error, Ent_AddComponent should just
return the existing component.
It was being set to -1 unconditionally due to forgetting to use id.
However, I decided I didn't like reusing the id var and did some
renaming while I was at it.
This puts the hierarchy (transform) reference, animation, visibility,
renderer, active, and old_origin data in separate components. There are
a few bugs (crashes on grenade explosions in gl/glsl/vulkan, immediately
in sw, reasons known, missing brush models in vulkan).
While quake doesn't really need an ECS, the direction I want to take QF
does, and it does seem to have improved memory bandwidth a little
(uncertain). However, there's a lot more work to go (especially fixing
the above bugs), but this seems to be a good start.
Hierarchies are now much closer to being more general in that they are
not tied to 3d transforms. This is a major step to moving the whole
entity/transform system into an ECS.
Not that anything is actually rendered yet, but the validation layers
don't like the null render pass. Came up now because ctf1 seems to make
the first light an ambient light.
It didn't really add anything of value as the glyph bitmap rects and the
bearings were never used together, and the rest of the fields were
entirely redundant. A small step towards using a component system for
text.
That does feel a little redundant, but I think the System in ECS is
referring to the systems that run on the components, while the other
system is the support code for the ECS. Anyway...
This is based heavily on the information provided by @skipjack in his
github blog about EnTT. Currently, entity recycling and sparse arrays
for component pools have been implemented, and adding components to an
entity has been tested.
The inconsistencies in clang's handling of casts was bad enough, and its
silliness with certain extensions, but discovering that it doesn't
support raw strings was just too much. Yes, it gives a 3s boost to qfvis
on gmsp3v2.bsp, but it's not worth the hassle.
While chatting about utf-8, I noticed that QF doesn't ensure the input
sequences are the shortest possible encodings. It turns out that the
check is easy in that only the second byte needs to be checked if the
first byte's data bits are 0, and the second byte must have a data value
larger than that representable by the next lower leading byte.
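The equivalent check, written as a general decode-and-compare rather
than the second-byte shortcut described above (a sketch, not QF's actual
validation code):

    // Reject overlong UTF-8 sequences: an n-byte sequence must decode to
    // a value that cannot be represented in n-1 bytes. Assumes the caller
    // has already determined the sequence length from the leading byte.
    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    bool
    utf8_is_shortest (const uint8_t *s, size_t len)
    {
        static const uint32_t min_value[] = {
            0,          // 1 byte: any ASCII value
            0x80,       // 2 bytes: at least U+0080
            0x800,      // 3 bytes: at least U+0800
            0x10000,    // 4 bytes: at least U+10000
        };
        if (len < 1 || len > 4) {
            return false;
        }
        if (len == 1) {
            return !(s[0] & 0x80);
        }
        uint32_t code = s[0] & (0x7f >> len);   // leading byte data bits
        for (size_t i = 1; i < len; i++) {
            if ((s[i] & 0xc0) != 0x80) {
                return false;                   // not a continuation byte
            }
            code = (code << 6) | (s[i] & 0x3f);
        }
        return code >= min_value[len - 1];
    }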
While the base of a memory object was aligned when calculating the
memory block size, the top end was not, which could result in the memory
block not getting enough bytes allocated to satisfy alignment
requirements (eg, for flushing).
While fixing that, I noticed the offsets of objects were not being
aligned when binding, so that is fixed as well.
Fixes Mr Fixit on my VersaPro.
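A minimal sketch of the sizing fix (simplified to a single alignment;
names are hypothetical):

    // Align both the base of each object and the top end of the block
    // when computing the total size, so the last object's alignment
    // requirements (eg, for flushing) can always be satisfied.
    #include <stddef.h>

    size_t
    align_up (size_t value, size_t alignment)
    {
        return (value + alignment - 1) & ~(alignment - 1);
    }

    size_t
    block_size (const size_t *object_sizes, int count, size_t alignment)
    {
        size_t size = 0;
        for (int i = 0; i < count; i++) {
            size = align_up (size, alignment);  // align the object's base
            size += object_sizes[i];
        }
        return align_up (size, alignment);      // align the top end too
    }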
This is the beginning of adding ECS to QF. While the previous iteration
of hierarchies was a start in the direction towards ECS, this pulls most
of the 3d-specific transform stuff out of the hierarchy "objects",
making all the matrices and vectors/quaternions actual components (in
the ECS sense). There's more work to be done with respect to the
transform and entity members of hierarchy_t (entity should probably go
away entirely, and transform should become hierref_t (or whatever its
final name becomes)), but I wanted to get things working sooner than
later.
The motivation for the effort was to allow views to use hierarchy_t,
which should be possible once I get entity and transform sorted out.
I am really glad I already had automated tests for hierarchies, as
things proved to be a little tricky to get working due to forgetting why
certain things were there.
With the improved atlas allocation, 2x is no longer needed and 1.2x
seems to be sufficient. Most importantly, it reduced the texture for
amiri-regular.ttf at 72 pix height from 8k to 4k (the staging buffer
isn't big enough for 8k textures).
Currently, only 16 fonts can be loaded (I need to sort out descriptor
set pools), but glyphs are grouped into batches of the same font. While
not quite optimal as it can result in bouncing between descriptor sets a
lot, it's still reasonably efficient.
Line rendering now has its own pipeline (removing the texture issue).
Glyph rendering (for fonts) has been reworked to use instanced quad
rendering, with the geometry (position and texture coords) in a static
buffer (uniform texture buffer), and each instance has a glyph index,
color, and 2d base position.
Multiple fonts can be loaded, but aren't used yet: still just the one
(work needs to be done on the queues to support multiple
textures/fonts).
Quads haven't changed much, but buffer creation and destruction has been
cleaned up to use the resource functions.
Its value on input is ignored. QFV_CreateResource writes the resource
object's offset relative to the beginning of the shared memory block.
Needed for the Draw overhaul.
I got tired of writing the same 13 or so lines of code over and over (it
actually put me off experimenting with Vulkan). Thus...
QFV_PacketCopyBuffer does the work of handling barriers and a (full
packet) copy from the staging buffer to a GPU buffer.
QFV_PacketCopyImage does a similar job, but for images. However, it
still needs a lot of work, but it does make getting a basic texture onto
the GPU much less of a hassle.
Both functions should make staging data much less error-prone.
This moves the qfv_resobj_t image initialization code from the IQM
loader into the resource management code. This will allow me to reuse
the code for setting up glyph data. As a bonus, it cleans up the IQM
code nicely.
A passage object has a list of all the text objects in the given string,
where the objects represent either white space or "words", as well as a
view_t object representing the entire passage, with paragraphs split
into child views of the passage view, and each paragraph has a child
view for every text/space object in the paragraph.
Paragraphs are split by '\n' (not included in any object).
White space is grouped into clumps such that multiple adjacent spaces
form a single object. The standard ASCII space (0x20) and all of the
Unicode characters marked "WS;<compat> 0020" are counted as white space.
Unless a white space object is the first in the paragraph, its view is
marked for suppression by the view flow code.
Contiguous non-white space characters are grouped into single objects,
and their views are not suppressed.
All text object views (both white space and "word") have their data
pointer set to the psg_text_t object representing the text for that
view. This should be suitable for simple text-mode unattributed display.
More advanced rendering would probably want to create suitable objects
and set the view data pointers to those objects.
No assumption is made about text direction.
Passage and paragraph views need to have their primary axis sizes set
appropriately, as well as their resize flags. Their xlen and ylen are
both set to 10, and xpos,ypos is 0,0. Paragraph views need their
setgeometry pointer set to the appropriate view_flow_* function.
However, they are set up to have their secondary axis set automatically
when flowed.
Text object views are set up for automatic flowing: grav_flow, 0,0 for
xpos,ypos. However, xlen and ylen are also both 0, so need to be set by
the renderer before attempting to flow the text.
Adjusting the size of the parent (container) view to the views it
contains will be useful for automatic layout and knowing how large the
view is for scrolling. New tests added so testing both with and without
the option is still possible.
This should be suitable for laying out text objects with word-wrap,
where each view is a "word" or break between "words". This should be
useful for any other objects that could benefit from similar layout
rules. All eight flows are supported: left-right-top-down (English and
most European languages), right-left-top-down (Arabic and similar),
top-down-right-left (Chinese, Japanese, Korean), top-down-left-right,
as well as bottom-up variants of those four.
More work is needed for support of things like views being centered on
the flow line rather than on one edge (depends on flow direction),
offset views, and others. Suppression of "spaces" at the beginning of a
line is supported but not tested.
Due to the changes related to console views, the console was either
fully visible or not at all visible, so it took several seconds to
disappear whenever closed.
Taking the screen data from the event fixes the console size being out
due to screen_view updating after the app_window event fires. Really,
this makes it independent of the order.
UVs being 0 meant that lines were picking up the upper left pixel of
char 0 of conchars. With quake data, this meant a transparent pixel.
Fixes invisible debug lines :P.
It turns out that using the swapchain image for the size requirements is
unreliable: when running under renderdoc, vkGetImageMemoryRequirements
sets the memory requirements fields to 0, leading eventually to a null
memory object being passed to vkMapMemory, which does not end well.
I had missed that vkCmdCopyImage requires the source and destination
images to have exactly the same size, and I guess assumed that the
swapchain images would always be the size they said they were, but this
is not the case for tiled-optimal images. However,
vkCmdCopyImageToBuffer does the right thing regardless of the source
image size.
This fixes the skewed screenshots when the window size is not a multiple
of 8 (for me, might differ for others).
There's a problem with screenshot capture in that the image is sheared
after window resize, but the screen view looks good, and vulkan is happy
with the state changes.
As gbuf_base derives from the base pipeline, it inherits base's dynamic
setting, and thus doesn't need its own. I had a FIXME there as I wasn't
sure why I had a redundant setting, but I really can't see why I'd want
it different from any of the other main renderpass pipelines.
I've found and mostly isolated the parts of the code that will be
affected by window resizing, minus pipelines, but those use dynamic
viewport and scissor settings and thus shouldn't be affected so long as
the swapchain format doesn't change (how does that happen?).
Finally, the model_funcs and render_funcs structs use designated
initializers. Not only are they good for ensuring correct
initialization, they're great for the programmer finding the right
initializer.
I must have forgotten about the SYS_DeveloperID_... enum values when I
wrote that code, because relying on the line number is not really for
the best.
Due to design issues in the console API that I don't feel like
addressing at this stage, the console view is not a child of the
client's screen view (not even sure it should be in the first place), so
it won't get resized automatically when the client's screen view
resizes. However, ie_app_window is sent when the screen size changes,
and the console has to process input events anyway, so it's quite
reasonable to handle the event.
With the addition of dependencies on freetype and harfbuzz, it became
clear that the renderer plugins need to be explicitly linked against
external dependencies (and that I need to do more installed testing,
rather than just my static local builds). This fixes the unresolved
symbols when attempting to load any of the plugins.
qwaq doesn't supply a backtile pic, so Draw_TileClear in the gl and glsl
renderers would segfault when qwaq's window width changed due to some
back-tile being drawn.
As of a recent nvidia driver update, it became necessary to enable the
feature. I guess older drivers (or vulkan validation layers?) were a bit
slack in their checking (or perhaps I didn't actually get those lighting
changes working at the time despite having committed them).
This did involve changing some field names and a little bit of cleanup,
but I've got a better handle on what's going on (I think I was in one of
those coding trances where I quickly forget how things work).
Just head and tail are atomic, but it seems to work nicely (at least on
intel). I actually had more trouble with gcc (due to accidentally
testing lock-free with the wrong ring buffer... oops, but yup, gcc will
happily optimize your loop to spin really really fast). Also served as a
nice test for C11 threading.
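A minimal single-producer/single-consumer sketch of the idea using C11
atomics (illustrative only, not the actual QF ring buffer):

    // Only head and tail are atomic; the producer owns head, the
    // consumer owns tail, and release/acquire pairs publish the data.
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define RB_SIZE 256     // power of two

    typedef struct {
        _Atomic size_t head;    // written by producer
        _Atomic size_t tail;    // written by consumer
        int         data[RB_SIZE];
    } ringbuf_t;

    bool
    rb_push (ringbuf_t *rb, int value)
    {
        size_t head = atomic_load_explicit (&rb->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit (&rb->tail, memory_order_acquire);
        if (head - tail >= RB_SIZE) {
            return false;       // full
        }
        rb->data[head & (RB_SIZE - 1)] = value;
        atomic_store_explicit (&rb->head, head + 1, memory_order_release);
        return true;
    }

    bool
    rb_pop (ringbuf_t *rb, int *value)
    {
        size_t tail = atomic_load_explicit (&rb->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit (&rb->head, memory_order_acquire);
        if (head == tail) {
            return false;       // empty
        }
        *value = rb->data[tail & (RB_SIZE - 1)];
        atomic_store_explicit (&rb->tail, tail + 1, memory_order_release);
        return true;
    }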
This makes bsp traversal even more re-entrant (needed for shadows).
Everything needed for a "pass" is taken from bsp_pass_t (or indirectly
through bspctx_t (though that too might need some revising)).
Ambient lights are represented by a point at infinity and a zero
direction vector (spherical lights have a non-zero direction vector but
the cone angle is 360 degrees). This fixes what appeared to be mangled
light renderers (it was actually just an ambient light being treated as
a directional light (point at infinity, but non-zero direction vector)).
There are some issues with the light renderers getting mangled, and only
the first light is even touched (just begin and end of render pass), but
this gets a lot of the framework into place.
Sounds odd, but it's part of the problem with calling two different
things with essentially the same name. The "high level" render pass in
question may be a compute pass, or a complex series of (Vulkan) render
passes and so won't create a Vulkan render pass for the "high level"
render pass (I do need to come up with a better name for it).
I really don't remember why I made it separate, though it may have been
to do with r_ent_queue. However, putting it together with the rest is
needed for the "render pass" rework.
It now lives in vulkan_renderpass.c and takes most of its parameters
from plist configs (just the name (which is used to find the config),
output spec, and draw function from C). Even the debug colors and names
are taken from the config.
QFV_CreateRenderPass is no longer used, and QFV_CreateFramebuffer hasn't
been used for a long time. The C file is still there for now but is
basically empty.
The real reason for the delay in implementing support for pNext is I
didn't know how to approach it at the time, but with the experience I've
gained using and modifying vkparse, the solution turned out to be fairly
simple. This allows for the use of various extensions (eg, multiview,
which was used for testing, though none of the hookup is in this
commit). No checking is done on the struct type being valid other than
it must be of a chainable type (ie, have its own pNext).
The software renderer uses Bresenham's line slice algorithm as presented
by Michael Abrash in his Graphics Programming Black Book Special Edition
with the serial numbers filed off (as such, more just so *I* can read
the code easily), along with the Cohen-Sutherland line clipping
algorithm. The other renderers were more or less trivial in comparison.
Enabled by 'developer lighting'. It was good for confirming that the
lights in ad_e1m1 (Doom Hangar 16) were actually being output (over 600
of them sometimes, ouch). Turned out to be the color scale ambiguity.
The pitch cvars are taken from quakespasm because I ran into a button I
couldn't shoot with the 80 degree limit, but I figured I'd add roll
limits while I was at it.
Surfaces marked with SURF_DRAWALPHA but not SURF_DRAWTURB are put in a
separate queue for the water shader and run with a turb scale of 0.
Also, entities with colormod alpha < 1 are marked to go in the same
queue as SURF_DRAWALPHA surfaces (ie, no SURF_DRAWTURB unless the
model's texture indicated such).
Textures whose names start with a { are meant to be rendered with
transparency. Surfaces using those textures are marked with
SURF_DRAWALPHA.
Unfortunately, the mip levels of ad_tears' transparent textures use the
wrong color so only the highest LOD works properly, but those textures
are meant to be loaded from external files anyway, it seems.
A listener is used instead of (really, as well as) ie_app_window events
because systems that need to know about windows sizes may not have
anything to do with input and the event system.
This breaks console scaling for now (con_width and con_height are gone),
but is a major step towards window resize support as console stuff
should never have been in viddef_t in the first place.
The client screen init code now sets up a screen view (actually the
renderer's scr_view) that is passed to the client console so it can know
the size of the screen. The same view is used by the status bar code.
Also, the ram/cache/paused icon drawing is moved into the client screen
update code. A bit of duplication, but I do plan on merging that
eventually.
Things seem to be at least close to the right place now.
Input line handling has been made more object-oriented in that the
collection of objects required for a single input line (command, say,
say_team) are bundled into one object with just one set of handlers for
resize and draw. Much tidier.
view_new sets the geometry, but any setgeometry that needs a valid data
pointer would get null. It might be better to always have the data
pointer, but I didn't feel like doing such a change at this stage as
there are quite a lot of calls to view_new. Thus view_new_data which
sets the data pointer before calling setgeometry.
More tuning is needed on the actual splits as it falls over when the
lower rect gets too low for the subrects being allocated. However, the
scrap allocator itself will prefer exact width/height fits with larger
cutoff over inexact cuts with smaller cutoff. Many thanks to tdb for the
suggestions.
Fixes the fps dropping from ~3700fps down to ~450fps (cumulative due to
loss of POT rounding and very poor splitting layout), with a bonus boost
to about 4900fps (all speeds at 800x450). The 2d sprites were mostly ok,
but the lightmaps forming a capital gamma shape in a 4k texture really
hurt. Now the lightmaps are a nice dense bar at the top of the texture,
and 2d sprites are pretty good (slight improvement coming next).
This replaces old_console_t with con_buffer_t for managing scrollback,
and draw_charbuffer_t for actual character drawing, reducing the number
of calls into the renderer. There are numerous issues with placement and
sizing, but the basics are working nicely.
I really don't know why I tried to do ring-buffers without gaps, the
code complication is just not worth the tiny savings in memory. In fact,
just the switch from pointers to 32-bit offsets saves more memory than
not having gaps (on 64-bit systems, no change on 32-bit).
I've decided that appending to a full single-line buffer should simply
scroll through the existing text. Unsurprisingly, the existing code
doesn't handle the situation all that well. While I've already got a fix
for it, I think I've got a better idea that will handle full buffers
more gracefully.
This fixes the current line object getting corrupted by the tail line
update when the buffer is filled with a single line. There are probably
more tests to write and bugs to fix :)
I was looking through the code for Con_BufferAddText trying to figure
out what it was doing (answer: ring buffer for both text and lines) and
got suspicious about its handling of the line objects. I decided an
automated test was in order. It turns out I was right: filling the
buffer with a single long line causes the tail line to trample the
current line, setting its pointer and length to 0 when the final
character is put in the buffer.
It handles basic cursor motion respecting \r \n \f and \t (might be a
problem for id chars), wraps at the right edge, and automatically
scrolls when the cursor tries to pass the bottom of the screen.
Clearing the buffer resets its cursor to the upper left.
QFS_LoadFile closes its file argument (this is a design error resulting
from changing QFS_LoadFile to take a file instead of a path and not
completing the update), resulting in the call to Qfilesize accessing
freed memory.
This is intended for the built-in 8x8 bitmap characters and quake's
"conchars", but could potentially be used for any simple (non-composed
characters) mono-spaced font. Currently, the buffers can be created,
destroyed, cleared, scrolled vertically in either direction, and
rendered to the screen in a single blast.
One of the reasons for creating the buffer is to make it so scaling can
be supported in the sw renderer.
PR_Debug_ValueString prints the value at the given offset using the
provided type to format the string. The formatted string is appended to
the provided dstring.
For whatever reason, building under MXE (for windows) causes FLAC to try
to use dll import references, but setting FLAC__NO_DLL before including
FLAC/export.h fixes the issue.
While this does pull the grovelling for the subpic out to the callers,
the real problem is the excessive use of qpic_t in the internal code:
qpic_t is really just the image format in wad files, and shouldn't be
used as a generic image handle.
Cleans up more of the icky code in the font drawing functions.
This makes working with quads, implied alpha quads, and lines much
cleaner (and gets rid of the bulk of the "eww" fixme), and will probably
make it easier to support multiple scraps and fonts, and potentially
more flexible ordering between pipelines.
-describe is sent to the object, and the returned string passed back.
There is a worry about the lifetime of the returned string as there's
currently no way of both ensuring it doesn't get freed prematurely and
ensuring it does eventually get freed.
If no handler has been registered, then the corresponding parameter is
printed as a pointer but with surrounding brackets (eg, [0xfc48]). This
will allow the ruamoko runtime to implement object printing.
This means that QF should support more exotic fonts without any issue
(once the rest of the text handling system is up to snuff) as HarfBuzz
does all the hard work of handling OpenType, Graphite, etc text shaping,
including kerning (when enabled).
Also, font loading now loads all the glyphs into the atlas (preload is
gone).
While the results are a little surprising (tends to alternate between
left side and top for allocations), there is much less wasted space in
the partially allocated regions, and the main free region seems to
always be quite big.
While VRect_Difference worked for subrect allocation, it wasn't ideal as
it tended to produce a lot of long, narrow strips that were difficult to
reuse and thus wasted a lot of the super-rectangle's area. This is
because it always does horizontal splits first. However, rewriting
VRect_Difference didn't seem to be the best option.
VRect_SubRect (the new function) takes only width and height, and splits
the given rectangle such that if there are two off-cuts, one will have
the minimum and the other the maximum possible area. This does seem to make for
much better utilization of the available area. It's also faster as it
does only the two splits, rather than four.
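A sketch of the split choice (not the actual VRect_SubRect
implementation): since the total off-cut area is fixed, making one
off-cut as large as possible makes the other as small as possible.

    typedef struct { int x, y, w, h; } rect_t;

    // Cut a w x h sub-rectangle from the corner of r, choosing the cut
    // order that maximizes the larger off-cut (and so minimizes the other).
    void
    subrect_offcuts (rect_t r, int w, int h, rect_t *cut1, rect_t *cut2)
    {
        int right_area  = (r.w - w) * r.h;   // full-height strip on the right
        int bottom_area = r.w * (r.h - h);   // full-width strip on the bottom

        if (right_area > bottom_area) {
            *cut1 = (rect_t) { r.x + w, r.y,     r.w - w, r.h     };
            *cut2 = (rect_t) { r.x,     r.y + h, w,       r.h - h };
        } else {
            *cut1 = (rect_t) { r.x,     r.y + h, r.w,     r.h - h };
            *cut2 = (rect_t) { r.x + w, r.y,     r.w - w, h       };
        }
    }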
It is currently an ugly hack for dealing with the separate quad queue,
and the pipeline handling code needs a lot of cleanup, but it works
quite well, though I do plan on moving to HarfBuzz for text shaping. One
nice development is I got updating of descriptor sets working (just need
to ensure the set is no longer in use by the command queue, which the
multiple frames in flight makes easy).
It's implemented only in the Vulkan renderer, partly because there's a
lot of experimenting going on with it, but the glyphs do get transferred
to the GPU (checked in render doc). No rendering is done yet: still
thinking about whether to do a quick-and-dirty test, or to add HarfBuzz
immediately, and the design surrounding that.
R8G8B8A8 was hard-coded by accident when creating Vulkan_LoadTexArray
(or probably even the original Vulkan_LoadTex). This wasn't a problem
while everything was loaded in that format, but attempting to load an R8
texture didn't go so well. The same format as the image itself is used
now (correctly so).
I have recently learned that pre-multiplied alpha is the correct way to
do compositing, which is pretty much what the 2d pass does (actually,
all passes, but...). This required ensuring the color factor passed to
the fragment shader is pre-multiplied (a little silly for cshifts as
they used to be pre-multiplied but were un-pre-multiplied early in QF's
history and I don't feel like fixing that right now as it affects all
renderers), and also pre-multiplying alpha when converting from 8-bit
palette to rgba as the palette entry for transparent has that funky pink
(which is used in full-brights).
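As a sketch of the palette conversion, assuming the usual quake
convention of index 255 being the transparent entry (not QF's actual
conversion code):

    // Convert an 8-bit palette index to pre-multiplied RGBA: the funky
    // pink of the transparent entry is multiplied to black so it can't
    // bleed into neighboring texels when filtered.
    #include <stdint.h>

    void
    palette_to_premul_rgba (const uint8_t palette[256][3], uint8_t index,
                            uint8_t out[4])
    {
        uint8_t alpha = (index == 255) ? 0 : 255;
        out[0] = palette[index][0] * alpha / 255;
        out[1] = palette[index][1] * alpha / 255;
        out[2] = palette[index][2] * alpha / 255;
        out[3] = alpha;
    }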
I will need to do more work to improve the 2d allocation, but rounding
up the requested sizes to the next power of two proved to be excessively
wasteful: I was able to allocate spots for only half of the sub-pics I
needed (though I did still need to double the number of pixels in the
end).
Due to the mis-initialization of the union used to parse the color
vector, the intensity was incorrectly set to zero and thus the light
dropped, meaning that all lights in ad_tears were lost.
The extend instruction is for loading narrower data types into wider
data types, eg, single element into 2, 3, or 4 element types, with a
small set of extension schemes: 0, 1, -1, copy (for 1->any and 2 -> 4).
Possibly most importantly, it works with unaligned data.
Progress towards #30
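A sketch of the fill schemes, shown for the 1-to-4 case with
hypothetical names (the actual instruction operates on progs data, not C
arrays):

    // Widen a 1-element source into a 4-element destination using one of
    // the extension schemes: fill with 0, 1, or -1, or copy the source.
    typedef enum { EXT_ZERO, EXT_ONE, EXT_NEG_ONE, EXT_COPY } ext_scheme_t;

    void
    extend_1_to_4 (const int *src, int *dst, ext_scheme_t scheme)
    {
        static const int fill[] = { 0, 1, -1 };
        dst[0] = src[0];
        for (int i = 1; i < 4; i++) {
            dst[i] = (scheme == EXT_COPY) ? src[0] : fill[scheme];
        }
    }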
Most were pretty easy and fairly logical, but gib's regex was a bit of a
pain until I figured out the real problem was the conditional
assignments.
However, libs/gamecode/test/test-conv4 fails when optimizing due to gcc
using vcvttps2dq (which is nice, actually) for vector forms, but not the
single equivalent other times. I haven't decided what to do with the
test (I might abandon it as it does seem to be UD).
This gets ambient sounds (in particular, water and sky) working again
for quakeworld after the recent sound changes, and again for nq after I
don't know how long.
Because the calculation didn't take the hunk header size (which is not
included in the hunk size) into account, the conversion to MB was one
short and thus the rounding up to the next 8 MB boundary was giving the
current total hunk size (ie, the already given size). Most confusing to
a user ("But I already asked for 128MB!").
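A minimal sketch of the corrected calculation (names hypothetical):

    // Include the hunk header in the byte count before converting to MB
    // and rounding up to an 8 MB boundary; without the header the result
    // rounds back to the size that was already given.
    #include <stddef.h>

    size_t
    suggested_hunk_mb (size_t hunk_size, size_t header_size)
    {
        size_t total = hunk_size + header_size; // header not in hunk_size
        size_t mb = (total + (1024 * 1024) - 1) / (1024 * 1024);
        return (mb + 7) & ~(size_t) 7;          // round up to 8 MB boundary
    }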
It turns out that copying just "unknown" is a significant performance
hit when doing over 100M allocations. Making Hunk_RawAlloc the core and
initializing the name field with a single 0 shaved about a second off
`qfvis gmsp3v2.bsp` (from about 39s to about 38s).
My reason for using Hunk_HighAlloc for allocating cache blocks was to
lock them down so they were safe for the sound mixer to access when
running in a real-time thread. However, I had never tested under tight
memory constraints, which proved that the design (or maybe just
implementation) just wasn't robust. However, now that sounds are loaded
into a completely separate region, it's safe to put the cache back to
its original behaviour (still with 64-byte alignment and such, of
course). This will even allow the high hunk to be used again, though it
effectively was anyway with Hunk_TempAlloc.