The real reason for the delay in implementing support for pNext is I
didn't know how to approach it at the time, but with the experience I've
gained using and modifying vkparse, the solution turned out to be fairly
simple. This allows for the use of various extensions (eg, multiview,
which was used for testing, though none of the hookup is in this
commit). No checking is done that the struct type is valid, other than
that it must be a chainable type (ie, have its own pNext).
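The chaining itself is just the standard Vulkan pattern; a minimal sketch
(not the actual vkparse code) of hanging a parsed struct off an existing
chain:
    #include <vulkan/vulkan.h>
    // Any struct with its own sType/pNext header can be treated as a
    // VkBaseOutStructure for the purposes of walking/extending the chain.
    static void
    chain_append (void *head, void *ext)
    {
        VkBaseOutStructure *s = head;
        while (s->pNext) {
            s = s->pNext;
        }
        s->pNext = ext;
    }
    // eg, enabling multiview on device creation:
    //   VkPhysicalDeviceMultiviewFeatures mv = {
    //       .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MULTIVIEW_FEATURES,
    //       .multiview = VK_TRUE };
    //   chain_append (&device_create_info, &mv);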
The software renderer uses Bresenham's line slice algorithm as presented
by Michael Abrash in his Graphics Programming Black Book Special Edition
with the serial numbers filed off (as such, more just so *I* can read
the code easily), along with the Cohen-Sutherland line clipping
algorithm. The other renderers were more or less trivial in comparison.
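For reference, the clipping side is the classic outcode test; a minimal
sketch independent of the renderer's actual types:
    // Cohen-Sutherland outcodes: which edges of the clip rect a point is
    // outside of. Both codes 0 => trivially accept; codes AND != 0 =>
    // trivially reject; otherwise clip against one offending edge, retry.
    enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };
    static int
    outcode (int x, int y, int xmin, int ymin, int xmax, int ymax)
    {
        int code = INSIDE;
        if (x < xmin)      code |= LEFT;
        else if (x > xmax) code |= RIGHT;
        if (y < ymin)      code |= BOTTOM;
        else if (y > ymax) code |= TOP;
        return code;
    }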
Enabled by 'developer lighting'. It was good for confirming that the
lights in ad_e1m1 (Doom Hangar 16) were actually being output (over 600
of them sometimes, ouch). Turned out to be the color scale ambiguity.
The pitch cvars are taken from quakespasm because I ran into a button I
couldn't shoot with the 80 degree limit, but I figured I'd add roll
limits while I was at it.
Surfaces marked with SURF_DRAWALPHA but not SURF_DRAWTURB are put in a
separate queue for the water shader and run with a turb scale of 0.
Also, entities with colormod alpha < 1 are marked to go in the same
queue as SURF_DRAWALPHA surfaces (ie, no SURF_DRAWTURB unless the
model's texture indicated such).
Textures whose names start with a { are meant to be rendered with
transparency. Surfaces using those textures are marked with
SURF_DRAWALPHA.
Unfortunately, the mip levels of ad_tears' transparent textures use the
wrong color so only the highest LOD works properly, but those textures
are meant to be loaded from external files anyway, it seems.
A listener is used instead of (really, as well as) ie_app_window events
because systems that need to know about window sizes may not have
anything to do with input and the event system.
This breaks console scaling for now (con_width and con_height are gone),
but is a major step towards window resize support as console stuff
should never have been in viddef_t in the first place.
The client screen init code now sets up a screen view (actually the
renderer's scr_view) that is passed to the client console so it can know
the size of the screen. The same view is used by the status bar code.
Also, the ram/cache/paused icon drawing is moved into the client screen
update code. A bit of duplication, but I do plan on merging that
eventually.
Things seem to be at least close to the right place now.
Input line handling has been made more object-oriented in that the
collection of objects required for a single input line (command, say,
say_team) are bundled into one object with just one set of handlers for
resize and draw. Much tidier.
view_new sets the geometry, but any setgeometry callback that needs a valid
data pointer would get null. It might be better to always have the data
pointer, but I didn't feel like doing such a change at this stage as
there are quite a lot of calls to view_new. Thus view_new_data which
sets the data pointer before calling setgeometry.
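In effect (signatures and internals here are illustrative, not the actual
view code):
    // view_new calls setgeometry while view->data is still null;
    // view_new_data just reorders things so the callback sees valid data.
    view_t *
    view_new_data (view_t *parent, int x, int y, int w, int h,
                   grav_t grav, void *data)
    {
        view_t *view = view_create (parent, x, y, w, h, grav); // no callback yet
        view->data = data;
        if (view->setgeometry) {
            view->setgeometry (view);
        }
        return view;
    }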
More tuning is needed on the actual splits as it falls over when the
lower rect gets too low for the subrects being allocated. However, the
scrap allocator itself will prefer exact width/height fits with larger
cutoff over inexact cuts with smaller cutoff. Many thanks to tdb for the
suggestions.
Fixes the fps dropping from ~3700fps down to ~450fps (cumulative due to
loss of POT rounding and very poor splitting layout), with a bonus boost
to about 4900fps (all speeds at 800x450). The 2d sprites were mostly ok,
but the lightmaps forming a capital gamma shape in a 4k texture really
hurt. Now the lightmaps are a nice dense bar at the top of the texture,
and 2d sprites are pretty good (slight improvement coming next).
This replaces old_console_t with con_buffer_t for managing scrollback,
and draw_charbuffer_t for actual character drawing, reducing the number
of calls into the renderer. There are numerous issues with placement and
sizing, but the basics are working nicely.
I really don't know why I tried to do ring-buffers without gaps, the
code complication is just not worth the tiny savings in memory. In fact,
just the switch from pointers to 32-bit offsets saves more memory than
not having gaps (on 64-bit systems, no change on 32-bit).
I've decided that appending to a full single-line buffer should simply
scroll through the existing text. Unsurprisingly, the existing code
doesn't handle the situation all that well. While I've already got a fix
for it, I think I've got a better idea that will handle full buffers
more gracefully.
This fixes the current line object getting corrupted by the tail line
update when the buffer is filled with a single line. There are probably
more tests to write and bugs to fix :)
I was looking through the code for Con_BufferAddText trying to figure
out what it was doing (answer: ring buffer for both text and lines) and
got suspicious about its handling of the line objects. I decided an
automated test was in order. It turns out I was right: filling the
buffer with a single long line causes the tail line to trample the
current line, setting its pointer and length to 0 when the final
character is put in the buffer.
It handles basic cursor motion respecting \r \n \f and \t (might be a
problem for id chars), wraps at the right edge, and automatically
scrolls when the cursor tries to pass the bottom of the screen.
Clearing the buffer resets its cursor to the upper left.
QFS_LoadFile closes its file argument (this is a design error resulting
from changing QFS_LoadFile to take a file instead of a path and not
completing the update), so the call to Qfilesize was accessing freed
memory.
This is intended for the built-in 8x8 bitmap characters and quake's
"conchars", but could potentially be used for any simple (non-composed
characters) mono-spaced font. Currently, the buffers can be created,
destroyed, cleared, scrolled vertically in either direction, and
rendered to the screen in a single blast.
One of the reasons for creating the buffer is to make it so scaling can
be supported in the sw renderer.
PR_Debug_ValueString prints the value at the given offset using the
provided type to format the string. The formatted string is appended to
the provided dstring.
For whatever reason, building under MXE (for windows) causes FLAC to try
to use dll import references, but setting FLAC__NO_DLL before including
FLAC/export.h fixes the issue.
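ie, the workaround amounts to:
    // MXE cross-builds otherwise pick up __declspec(dllimport) references;
    // defining this before the first FLAC header forces static linkage.
    #define FLAC__NO_DLL
    #include <FLAC/export.h>
    #include <FLAC/stream_decoder.h>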
While this does pull the grovelling for the subpic out to the callers,
the real problem is the excessive use of qpic_t in the internal code:
qpic_t is really just the image format in wad files, and shouldn't be
used as a generic image handle.
Cleans up more of the icky code in the font drawing functions.
This makes working with quads, implied alpha quads, and lines much
cleaner (and gets rid of the bulk of the "eww" fixme), and will probably
make it easier to support multiple scraps and fonts, and potentially
more flexible ordering between pipelines.
-describe is sent to the object, and the returned string passed back.
There is a worry about the lifetime of the returned string as there's
currently no way of both ensuring it doesn't get freed prematurely and
ensuring it does eventually get freed.
If no handler has been registered, then the corresponding parameter is
printed as a pointer but with surrounding brackets (eg, [0xfc48]). This
will allow the ruamoko runtime to implement object printing.
This means that QF should support more exotic fonts without any issue
(once the rest of the text handling system is up to snuff) as HarfBuzz
does all the hard work of handling OpenType, Graphite, etc text shaping,
including kerning (when enabled).
Also, font loading now loads all the glyphs into the atlas (preload is
gone).
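The heart of what HarfBuzz takes over is the shaping call itself; a minimal
sketch of the usual flow (assuming an hb_font_t created elsewhere for the
loaded face, eg via hb-ft):
    #include <hb.h>
    static void
    shape_text (hb_font_t *font, const char *text)
    {
        hb_buffer_t *buffer = hb_buffer_create ();
        hb_buffer_add_utf8 (buffer, text, -1, 0, -1);
        hb_buffer_guess_segment_properties (buffer); // direction/script/language
        hb_shape (font, buffer, 0, 0);               // kerning, ligatures, etc
        unsigned count = hb_buffer_get_length (buffer);
        hb_glyph_info_t     *info = hb_buffer_get_glyph_infos (buffer, 0);
        hb_glyph_position_t *pos  = hb_buffer_get_glyph_positions (buffer, 0);
        // info[i].codepoint is now a glyph index (maps into the atlas),
        // pos[i].x_advance etc give layout (26.6 fixed point with hb-ft).
        (void) count; (void) info; (void) pos;
        hb_buffer_destroy (buffer);
    }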
While the results are a little surprising (tends to alternate between
left side and top for allocations), there is much less wasted space in
the partially allocated regions, and the main free region seems to
always be quite big.
While VRect_Difference worked for subrect allocation, it wasn't ideal as
it tended to produce a lot of long, narrow strips that were difficult to
reuse and thus wasted a lot of the super-rectangle's area. This is
because it always does horizontal splits first. However, rewriting
VRect_Difference didn't seem to be the best option.
VRect_SubRect (the new function) takes only width and height, and splits
the given rectangle such that if there are two off-cuts, they will be
both the minimum and maximum possible area. This does seem to make for
much better utilization of the available area. It's also faster as it
does only the two splits, rather than four.
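A sketch of the idea (the real VRect_SubRect works on vrect_t and returns
its results differently, so this is illustrative only):
    // Carve a w x h corner out of rect r with two cuts. Since the total
    // off-cut area is fixed (r.w*r.h - w*h), keeping the larger candidate
    // whole automatically makes the other off-cut as small as possible.
    typedef struct { int x, y, w, h; } rect_t;
    static void
    split_rect (rect_t r, int w, int h, rect_t *big, rect_t *small)
    {
        if ((r.w - w) * r.h >= r.w * (r.h - h)) {
            *big   = (rect_t) { r.x + w, r.y,     r.w - w, r.h     };
            *small = (rect_t) { r.x,     r.y + h, w,       r.h - h };
        } else {
            *big   = (rect_t) { r.x,     r.y + h, r.w,     r.h - h };
            *small = (rect_t) { r.x + w, r.y,     r.w - w, h       };
        }
    }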
It is currently an ugly hack for dealing with the separate quad queue,
and the pipeline handling code needs a lot of cleanup, but it works
quite well, though I do plan on moving to HarfBuzz for text shaping. One
nice development is I got updating of descriptor sets working (just need
to ensure the set is no longer in use by the command queue, which the
multiple frames in flight makes easy).
It's implemented only in the Vulkan renderer, partly because there's a
lot of experimenting going on with it, but the glyphs do get transferred
to the GPU (checked in RenderDoc). No rendering is done yet: still
thinking about whether to do a quick-and-dirty test, or to add HarfBuzz
immediately, and the design surrounding that.
R8G8B8A8 was hard-coded by accident when creating Vulkan_LoadTexArray
(or probably even the original Vulkan_LoadTex). This wasn't a problem
while everything was loaded in that format, but attempting to load an R8
texture didn't go so well. The same format as the image itself is used
now (correctly so).
I have recently learned that pre-multiplied alpha is the correct way to
do compositing, which is pretty much what the 2d pass does (actually,
all passes, but...). This required ensuring the color factor passed to
the fragment shader is pre-multiplied (a little silly for cshifts as
they used to be pre-multiplied but were un-pre-multiplied early in QF's
history and I don't feel like fixing that right now as it affects all
renderers), and also pre-multiplying alpha when converting from 8-bit
palette to rgba as the palette entry for transparent has that funky pink
(which is used in full-brights).
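The palette side of that boils down to multiplying the color channels by
alpha during conversion (simplified sketch, with 255 as the transparent
index):
    // Premultiplying zeroes the transparent entry's funky pink so it can
    // no longer bleed into filtered or blended results.
    static void
    palette_to_rgba_premult (unsigned char index,
                             const unsigned char palette[256][3],
                             unsigned char out[4])
    {
        unsigned char alpha = (index == 255) ? 0 : 255;
        out[0] = palette[index][0] * alpha / 255;
        out[1] = palette[index][1] * alpha / 255;
        out[2] = palette[index][2] * alpha / 255;
        out[3] = alpha;
    }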
I will need to do more work to improve the 2d allocation, but rounding
up the requested sizes to the next power of two proved to be excessively
wasteful: I was able to allocate spots for only half of the sub-pics I
needed (though I did still need to double the number of pixels in the
end).
Due to the mis-initialization of the union used to parse the color
vector, the intensity was incorrectly set to zero and thus the light
dropped, meaning that all lights in ad_tears were lost.
The extend instruction is for loading narrower data types into wider
data types, eg, single element into 2, 3, or 4 element types, with a
small set of extension schemes: 0, 1, -1, copy (for 1->any and 2 -> 4).
Possibly most importantly, it works with unaligned data.
Progress towards #30
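Illustrative semantics of the extension schemes described above (not the
interpreter's actual implementation):
    typedef enum { EXT_ZERO, EXT_ONE, EXT_NEG_ONE, EXT_COPY } extend_t;
    // Widen src_width elements to dst_width, filling the new elements
    // according to the extension scheme. "copy" repeats the source
    // (1 -> any, or 2 -> 4: xy becomes xyxy).
    static void
    extend (const float *src, int src_width, float *dst, int dst_width,
            extend_t ext)
    {
        float fill = (ext == EXT_ONE) ? 1 : (ext == EXT_NEG_ONE) ? -1 : 0;
        for (int i = 0; i < src_width; i++) {
            dst[i] = src[i];
        }
        for (int i = src_width; i < dst_width; i++) {
            dst[i] = (ext == EXT_COPY) ? src[i % src_width] : fill;
        }
    }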
Most were pretty easy and fairly logical, but gib's regex was a bit of a
pain until I figured out the real problem was the conditional
assignments.
However, libs/gamecode/test/test-conv4 fails when optimizing due to gcc
using vcvttps2dq (which is nice, actually) for vector forms, but not the
single equivalent other times. I haven't decided what to do with the
test (I might abandon it as it does seem to be UD).
This gets ambient sounds (in particular, water and sky) working again
for quakeworld after the recent sound changes, and again for nq after I
don't know how long.
Because the calculation didn't take the hunk header size (which is not
included in the hunk size) into account, the conversion to MB was one
short and thus the rounding up to the next 8 MB boundary was giving the
current total hunk size (ie, the already given size). Most confusing to
a user ("But I already asked for 128MB!").
It turns out that copying just "unknown" is a significant performance
hit when doing over 100M allocations. Making Hunk_RawAlloc the core and
initializing the name field with a single 0 shaved about a second off
`qfvis gmsp3v2.bsp` (from about 39s to about 38s).
My reason for using Hunk_HighAlloc for allocating cache blocks was to
lock them down so they were safe for the sound mixer to access when
running in a real-time thread. However, I had never tested under tight
memory constraints, which proved that the design (or maybe just
implementation) just wasn't robust. However, now that sounds are loaded
into a completely separate region, it's safe to put the cache back to
its original behaviour (still with 64-byte alignment and such, of
course). This will even allow the high hunk to be used again, though it
effectively was anyway with Hunk_TempAlloc.
I never liked "cache" as a name because it said where the sound was
stored rather than how it was loaded/played, but "stream" is ok, since
that's pretty much spot on. I'm not sure "block" is the best, but it at
least makes sense because the sounds are loaded as a single block (as
opposed to being streamed). And now, neither uses the cache system.
Nuclear powered audio ;)
More seriously, use _Atomic on a few fields that very obviously need it.
That is, channel's buffer pointer (used to signal to the mixer that the
channel is ready for use) and "flow control" flags (stop, done and
pause), and head and tail in the buffer itself. Since QF has been
working without _Atomic (admittedly, thanks to luck and x86's strong
memory model), this should do until proven otherwise. I imagine getting
stream reading out of the RT thread will highlight any issues.
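The shape of the change (field names illustrative, not the exact structs):
    #include <stdatomic.h>
    typedef struct sfxbuffer_s {
        _Atomic unsigned head;      // advanced by whoever fills the buffer
        _Atomic unsigned tail;      // advanced by the mixer
        // ... sample data ...
    } sfxbuffer_t;
    typedef struct channel_s {
        sfxbuffer_t *_Atomic buffer; // non-null signals "ready to mix"
        _Atomic int stop;            // flow control flags
        _Atomic int done;
        _Atomic int pause;
    } channel_t;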
Turned out the channels simply weren't being freed by SND_ScanChannels
when they should have been (probably a good thing, too, as it wasn't
being told to wait for the mixer).
Care needs to be taken when freeing channels as doing so while an
asynchronous mixer is using them is unlikely to end well. However,
whether the mixer is asynchronous depends on the output driver. This
lets the driver inform the rest of the system that the output and mixer
are running asynchronously.
SYS_dev is a holdover from when we had only the one flag and is not
meant to be used for tests (I seem to remember mentioning an audit was
necessary, but it was obviously forgotten). One step at a time, I guess :)
This improves the locality of reference when mixing and removes the
proxy sfx for streamed sounds.
The buffer for streamed sounds is allocated when the stream is opened
(since streamed sounds can't share buffers), and freed when the stream
is closed.
For block sounds, the buffer is reference counted (with the sfx holding
one reference, so currently block buffers never get freed), with their
reference count getting incremented on open and decremented on close.
That the reference counts get to 1 has been confirmed, so all that
should be needed is proper destruction of the sfx instances.
Still need to sort out just why channels leak across level changes.
Getting the tag is possibly useful in general and definitely in
debugging. Setting, I'm not so sure as it should be done when allocated,
but that's not always possible.
Also, correct the return type of z_block_size, though it affected only
Z_Print. While an allocation larger than 4GB is... big for zone, the
blocks do support it, so printing should too.
They're currently treated as non-fatal, those sounds just won't ever
play. This allows ad_tears to at least load with only 32MB of locked
memory (it needs somewhere between 64 and 96).
Since Ruamoko got vector types, zone's 8-byte alignment was no longer
sufficient due to hardware-enforced alignment requirements of the
underlying vector operations.
Fixes #28.
And use it for Ruamoko object reference counts.
I need reference counts for dealing with block sound buffers since they
can be shared by many channels. I figured I'd take care of Ruamoko's
reference count location at the same time.
Fixes #27.
Sounds no longer use the cache, which is good for multi-threaded, but a
pain for memory management: the buffers are shared between channels that
play back the sounds, but when the sounds were cached, they were
automagically (thus problematically) freed when the space was needed.
That no longer happens, so they leak. I think the solution is to use
reference counting and retain/release in sfx->open() and sfx->close().
Streams are the easy one as they were never in the cache. As a side
effect, sfxstream_t is much smaller as it no longer has the buffer
embedded in the struct.
SND_AllocChannel is a little too aggressive in freeing channels that
have finished as the channel may be externally owned (eg, by cd_file).
Get bgm looping working again.
More shrinkage. It turned out the mixer uses the phase fields, so they
couldn't be removed, but even at 192kHz, +/- 127 samples produces
sufficient phase separation for a 21cm head (which is, actually, pretty
big: mine is about 15cm across), but that change can come later.
The ambient sound loading has been removed from snd_channels because 1)
it doesn't work for nq, 2) it should never have been there in the first
place (it belongs in the client, but that needs some more API).
This is part of a process to shrink channel_t so it doesn't waste locked
memory when it gets moved there. Eventually, only the fields the mixer
needs will be in channel_t itself: those needed for spatialization will
be moved into a separate array.
In the process, I found that channels leak across level changes, but
this appears to be due to the cached sounds being removed during loading
and the mixer never marking them as done (it sees the null sfx pointer
and assumes the channel was never in use). Having the mixer mark the
channel as done seems to fix the leak but causes a free channel list
overflow. Rather than fight with that, I'll leave the leak for now and
fix it at its root cause: the management of the sound samples
themselves.
Sys_DoubleTime starts at 4Gs in order to keep its precision fixed for a
nice long time (about 120 years, iirc).
This fixes an instant watchdog trigger when first starting up in
testsound. I'm not sure why it didn't happen with nq, but I guess that
doesn't really matter.
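Rough numbers behind that choice:
    // A double has a 52-bit mantissa, so starting the clock at 2^32 s pins
    // the resolution at 2^(32-52) s and keeps it there until the exponent
    // next changes at 2^33 s, ie 2^32 s after start.
    double start = 4294967296.0;             // 2^32 seconds ("4Gs")
    double resolution = 1.0 / (1 << 20);     // 2^-20 s ~= 0.95 microseconds
    double headroom_years = start / (365.25 * 24 * 3600);   // ~136 years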
The scaling up of the volumes when setting a channel's volume bothered
me. The biggest issue being it hasn't been necessary for over a decade
since the conversion to a float-mixer. Now the volume and attenuation
scaling from protocol bytes is entirely in the client's hands.
This does mean that the gl and sw renderers can no longer call
S_ExtraUpdate, but really, they shouldn't be anyway. And I seem to
remember it not really helping (been way too long since quake ran that
slowly for me).
sfx_t is now private, and cd_file no longer accesses channel_t's
internals. This is necessary for hiding the code needed to make mixing
and channel management *properly* lock-free (I've been getting away with
murder thanks to x86's strong memory model and just plain luck with
gcc).
The tests fail as they exercise how the cache *SHOULD* work rather than
how it does now.
The tests do currently pass for the pending work I've done on the cache
system, but while working on it, I remembered why I reworked cache
allocation...
The essential problem is that sounds are loaded into the cache, which is
fine for synchronous output targets, but has proven to be a minefield
for asynchronous output targets (JACK, ALSA).
The reason for the minefield is the hunk takes priority over the cache,
and is free to move cache blocks around, and *even dispose of them
entirely* in order to satisfy memory allocations from either end of the
hunk. Doing this in an entirely single-threaded process (as DOS Quake
was) is perfectly safe, as the users of the cache just reload the
pointer each time, and bail if it's null (meaning the block has been
freed), or even cause the data to be reloaded if possible (I'm a little
fuzzy on the details for that as I didn't write that code). However, in
multi-threaded code, especially real-time (JACK, possibly ALSA), it's a
recipe for disaster. The 4cab5b90e6 commit was a (mostly) successful
attempt to mitigate the problem by allocating the cache blocks from the
high-hunk (thus minimizing any movement caused by low-hunk allocations),
but it resulted in cache allocations and regular high-hunk allocations
somehow getting intertwined: while investigating just how much memory
ad_tears needs (somewhere between 192MB and 256MB), I got "trashed
sentinel" errors and upon investigation, I found what looks very
suspiciously like
audio data written across a hunk control block.
I've decided that the cache allocation *algorithm* should be reverted to
how it was originally designed by Id (details will remain "modern"), but
while working on the tests, I remembered why I had done the changes in
the first place (above story). Thus the work on reverting the cache
allocation can't go in until I get sound memory management independent
of the cache. The tests are going in now so I have a constant reminder :)
And make Sys_MaskPrintf take the developer enum rather than just a raw
int.
It was actually getting some nasty hunk corruption errors when under
memory pressure that made it clear the sound system needs some work.
I had been trimming for the solid leaf, but not the empty leafs. I had
assumed the vis tool would trim the bits, but it seems to not be
reliable (though it could be a bug in qfvis, I think the map in question
is one of my test maps).
The texture animation data is compacted into a small struct for each
texture, resulting in much less data access when animating the texture.
More importantly, no looping over the list of frames. I plan on
migrating this to at least the other hardware renderers.
I found a test map with no texture data. Even after fixing the bsp
loader, vulkan didn't like it. Now vulkan is happy. The "Missing" text
is full-bright magenta on a dim grey background so it should be visible
in any lighting conditions.
Conflagrant Rodent has a sub-model with 0 faces (double bit error?)
causing simply counting faces to get out of sync with actual model
starts thus breaking *all* brush models that come after it (including
other maps). Thus be a little less lazy in figuring out model start
faces.
The models are broken up into N sub-(sub-)models, one for each texture,
but all faces using the same texture are drawn as an instance, making
for both reduced draw calls and reduced index buffer use (and thus,
hopefully, reduced bandwidth). While texture animations are broken, this
does mark a significant milestone towards implementing shadows as it
should now be possible to use multiple threads (with multiple index and
entid buffers) to render the depth buffers for all the lights.
This allows the use of an entity id to index into the entity data and
fetch the transform and colormod data in the vertex shader, thus making
instanced rendering possible. Non-world brush entities are still not
rendered, but the world entity is using both the entity data buffer and
entid buffer.
Sub-models and instance models need an instance data buffer, but this
gets the basics working (and the proof of concept). Using arrays like
this actually simplified a lot of the code, and will make it easy to get
transparency without turbulence (just another queue).
The gl water warp ones have been useless since very early on due to not
doing water warp in gl (vertex warping just didn't work well), and the
recent water warp implementation doesn't need those hacks. The rest of
the removed flags just aren't needed for anything. SURF_DRAWNOALPHA
might get renamed, but should be useful for translucent bsp surfaces
(eg, vines in ad_tears).
One more step towards BSP thread-safety. This one brought with it a very
noticeable speed boost (ie, not lost in the noise) thanks to the face
visframes being in tightly packed groups instead of 128 bytes apart,
though the sw renderer's boost is lost in the noise (but it's very
fill-rate limited).
This is next critical step to making BSP rendering thread-safe.
visframe was replaced with cluster (not used yet) in anticipation of BSP
cluster reconstruction (which will be necessary for dealing with large
maps like ad_tears).
The main goal was to get visframe out of mnode_t to make it thread-safe
(each thread can have its own visframe array), but moving the plane info
into mnode_t made for better data access patterns when traversing the bsp
tree as the plane is right there with the child indices. Nicely, the
size of mnode_t is the same as before (64 bytes due to alignment), with
4 bytes wasted.
Performance-wise, there seems to be very little difference. Maybe
slightly slower.
The unfortunate thing about the change is the plane distance is negated,
possibly leading to some confusion, particularly since the box and
sphere culling functions were affected. However, this is so point-plane
distance calculations can be done with a single 4d dot product.
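ie, with the distance stored negated, the plane is just a 4-component
vector and the test against a point becomes one dot product (types here
are illustrative):
    typedef struct { float x, y, z, w; } vec4;
    // plane = (n.x, n.y, n.z, -dist), point.w = 1: the signed distance is
    // a single 4d dot (one SIMD op) instead of a 3d dot plus a subtract.
    static inline float
    plane_distance (vec4 plane, vec4 point)
    {
        return plane.x * point.x + plane.y * point.y
             + plane.z * point.z + plane.w * point.w;
    }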
The map uses 41% of a 4k light map scrap, and 512 texture descriptors
wasn't enough for vulkan. Ouch. I do need to get cvars on these things,
but this will do for now (decades later...)
Sounds in Arcane Dimensions (at least those used by ad_tears) specify
start and end cue points. The code was using only the final point in the
list and thus breaking looped sounds. Now, the first cue point is used
as the loop start, and the second (if present), the sample length. Both
are bounds-checked against the wav's sample count. Fixes sound locking
up during the first seconds in ad_tears.
This one is ancient: the code was essentially unmodified since release
(just some formatting). Malformed vectors could sneak through due to map
bugs (eg, "angles -90" instead of "angle -90" as in ad_tears) and the
vector parsing code would continue past the end of the string and write
into unowned memory, potentially messing up the libc allocation
records. Replacing with the obvious sscanf works nicely.
Sometimes, Quake code is brilliant. Other times, it's a real face-palm.
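The "obvious sscanf" amounts to (str standing in for the entity value
string):
    // sscanf stops at the end of the string instead of walking off it,
    // and its return value says how many components actually parsed.
    float v[3] = { 0, 0, 0 };
    if (sscanf (str, "%f %f %f", &v[0], &v[1], &v[2]) != 3) {
        // malformed value (eg, "angles -90" where "angle -90" was meant):
        // the components that did parse are kept, the rest stay 0.
    }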
This fixes the annoying persistence of inputs when respawning and
changing levels. Axis input clearing is hooked up but does nothing as of
yet. Active device input clearing has always been hooked up, but also
does nothing in the evdev and x11 drivers.
It was added only because FitzQuake used it in its pre-bsp2 large-map
support. That support has been hidden in bspfile.c for some time now.
This doesn't gain much other than having one less type to worry about.
Well tested on Conflagrant Rodent (the map that caused the need for
mclipnode_t in the first place).
This was one of the biggest reasons I had trouble understanding the bsp
display list code, but it turns out it was for dealing with GLES's
16-bit limit on vertex indices. Since vulkan uses 32-bit indices,
there's no need for the extra layer of indirection. I'm pretty sure it
was that lack of understanding that prevented me from removing it when I
first converted the glsl bsp code to vulkan (ie, that 16-bit indices
were the only reason for elements_t).
It's hard to tell whether the change makes much difference to
performance, though it seems it might (noisy stats even over 50 timedemo
loops) and the better data localization indicates it should at least be
just as good if not better. However, the reason for the change is
simplifying the data structures so I can make bsp rendering thread-safe
in preparation for rendering shadow maps.
And maybe a nano-optimization. Switching from (~side + 1) to (-side)
seems to give glsl a very tiny speed boost, but certainly doesn't hurt.
Looking at some assembly output for the three cases, the two hacks seem
to generate the same code as each other, but 3 instructions vs 6 for ?:.
While ?: is more generically robust, the hacks are tuned for the knowledge
that side is either 0 or 1. The later xor might alter things, but
at least I now know that the hack (either version) is worthwhile.
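For the record, the three variants (what the value feeds is beside the
point here; side is known to be 0 or 1):
    // all three give 0 or 0xffffffff for side == 0 or 1 respectively
    unsigned mask;
    mask = ~side + 1;      // two's complement negation spelled out
    mask = -side;          // same result, one less operation to read
    mask = side ? ~0u : 0; // the generic version: ~2x the instructions here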
This is a particularly ancient bug, sort of introduced by rhamph when he
optimized temp entity model handling and later exacerbated by me.
However, I suspect the actual problem is limited to nq as qw's gamedir
handling would have caused the models to be reloaded, but nq doesn't
ever change game directories once running.
With experience, I have found that trying to continue after a validation
error tends to result in a segfault or some other nastiness, and
Sys_Shutdown (and the full shutdown sequence) is triggered for any error
signal (segfault, abort, etc) so just exit(1).
Although the skin pointer was being advanced after recording the
information for the batch array, it was being reset the next time
around the loop (due to a mistranslation of the previous code). This
fixes the segfault while loading (gl, glsl, vulkan) or rendering (sw)
the sphere model from Rogue.
Some very much needed comments :P Still, nicely, I now have a much
better understanding of how the display lists are created (10 years
is a long time to remember how intricate code works (I do remember
fighting to get it working back then))
This makes it much easier to see just what is being done to build a
polygon to be passed to the GPU, and it served as a test for the
lightmap st changes since Vulkan currently never used them.
Many modders use negative lights for interesting effects, but vulkan
doesn't like the result of a negative int treated as unsigned when it
comes to texture sizes.
However, this time it doesn't modify the light array when it sorts the
lights by size since the lights are now located before the renderer gets
to see them, and having to fix up the light leafs array would be too
painful (and probably the completely wrong thing to do anyway: the light
array should be treated as constant by the renderer). 1.6GB of memory
for gmsp3v2's lights (a little better than marcher: more smaller lights?).
For reference:
gmsp3v2: shadow maps: 8330 layers in 29 images: 1647706112
marcher: shadow maps: 2440 layers in 11 images: 2358575104
While it wasn't the root cause of the disappearing lights (even after
sorting out the light limit issue), because the cause of that was
everything working as designed, I suspect sunlight wasn't reaching as
far as it should. Even if it was, this should be slightly faster
(especially for larger maps) as leafs can be tested 32 or 64 at a time
rather than individually.
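ie, the test works on whole words of the leaf bit array (illustrative
sketch only):
    #include <stdint.h>
    // Skip runs of 32 unreachable leafs with one compare instead of 32.
    static void
    or_leaf_bits (uint32_t *dst, const uint32_t *src, unsigned numwords)
    {
        for (unsigned i = 0; i < numwords; i++) {
            if (src[i]) {           // usually 0: whole word rejected at once
                dst[i] |= src[i];   // (use uint64_t for 64 leafs at a time)
            }
        }
    }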
For now, at least (I have some ideas to possibly reduce the numbers and
also to avoid the need for actual limits). I've seen gmsp3v2 use over
500 lights at once (it has over 1300), and I spent too long figuring out
that weird light behavior was due to the limit being hit and lights
getting dropped (and even longer figuring out that more weird behavior
was due to the lack of shadows and the world being too bright in the
first place).
Moving the negation into the calculation of the sun angle prevents -0
getting into the vector (not that it makes much difference other than
minor confusion when reading the light data).
Since the staging buffer allocates the command buffers it uses, it
needs to free them when it is freed. I think I was confused by the
validation layers not complaining about unfreed buffers when shutting
down, but that's because destroying the pool (during program shutdown,
when the validation layers would complain) frees all the buffers. Thus,
due to staging buffers being created and destroyed during the level load
process, (rather large) command buffers were piling up like imps in a
Doom level.
In the process, it was necessary to rearrange some of the shutdown code
because vulkan_vid_render_shutdown destroys the shared command pool, but
the pool is required for freeing the command buffers, and there was a
minor mess of long-lived staging buffers being freed afterwards. That
didn't end particularly well.
This ensures that the plugin's shutdown function won't get called twice
in the event of an error in the plugin's unload sequence triggering a
second Sys_Shutdown, especially if the plugin is being unloaded as a
part of another sub-system's shutdown sequence (which is probably in
itself a design mistake, need to look into that).
While gcc was quite correct in its warning, all I needed was to
explicitly truncate the string. I don't remember why I didn't do that
back when I made the changes in 4f58429137, but it works now, and the
surrounding code does expect the string to be no more than 15 chars
long. This fixes yet another memory leak (but timedemo over multiple
runs still leaks like a sieve).
This is meant for a "permanent" tear-down before freeing the memory
holding the VM state or at program shutdown. As a consequence, builtin
sub-systems registering resources are now required to pass a "destroy"
function pointer that will be called just before the memory holding
those resources is freed by the VM resource manager (ie, the manager
owns the resource memory block, but each subsystem is responsible for
cleaning up any resources held within that block).
This even enhances thread-safety in rua_obj (there are some problems
with cmd, cvar, and gib).
This gives a rather significant speed boost to timedemo demo1: from
about 2300-2360fps up to 2520-2600fps, at least when using
multi-texture.
Since it was necessary for testing the scrap, gl got the ability to set
the console background texture, too.
While it takes one extra step to grab the marksurface pointer,
R_MarkLeaves and R_MarkLights (the two actual users) seem to be either
the same speed or fractionally faster (by a few microseconds). I imagine
the loss from the extra fetch is made up for by better bandwidth
while traversing the leafs array (mleaf_t now fits in a single cache
line, so leafs are cache-aligned since hunk allocations are aligned).
Unfortunately, the animations are pre-baked (by the loader) blocking
run-time determined animations (IK etc). However, this at least gets
everything working so the basics can be verified (the shader posed some
issue resulting in horror movies ;).
It copies an entire hierarchy (minus actual entities, but I'm as yet
unsure how to proceed with them), even across scenes as the source scene
is irrelevant and the destination scene is used for creating the new
transforms.
Brush models looked a little too tricky due to the very different style
of command queue, so that's left for now, but alias, iqm and sprite
entities are now labeled. The labels are made up of the lower 5 hex
digits of the entity address, the position, and colored by the
normalized position vector. Not sure that's the best choice as it does
mean the color changes as the entity moves, and can be quite subtle
between nearby entities, but it still helps identify the entities in the
command buffer.
And, as I suspected, I've got multiple draw calls for the one ogre. Now
to find out why.
The bones aren't animated yet (and I realized I made the mistake of
thinking the bone buffer was per-model when it's really per-instance (I
think this mistake is in the rest of QF, too)), skin rendering is a
mess, need to default vertex attributes that aren't in the model...
Still, it's quite satisfying seeing Mr Fixit on screen again :)
I wound up moving the pipeline spec in with the rest of the pipelines as
the system isn't really ready for separating them.
The plists can now be accessed by name and the forward render pass
config is available (but not used, or tested beyond syntax). I was going
to have the IQM pipeline spec separate but ran into limitations in the
system (which needs a lot of polish, really).
That @inherit is pretty useful :) This makes it much easier to see how
different pipelines differ or how they are similar. It also makes it
much clearer which sub-pass they're for.
I was wondering why scaled-down quake-guy was dimmer than full-size
quake-guy. And the per-fragment normalization gives the illusion of
smoothness if you don't look at his legs (and even then...).
Maps specify sunlight as shining in a specific direction, but the
lighting system wants the direction to the sun as it's used directly in
shading calculations. Direction correctness confirmed by disabling other
lights and checking marcher's outside scene (ensuring the flat ground
was lit). As a bonus, I've finally confirmed I actually have the skybox
in the correct orientation (sunlight vector more or less matched the
position of the sun in marcher's sky).
I'm not sure what's up with the weird lighting that results from dynamic
lights being directional (sunlight works nicely in marcher, but it has a
unit vector for position).
Abyss of Pandemonium uses global ambient light a lot, but doesn't
specify it in every map (nothing extracting entities and adding a
reasonable value can't fix). I imagine some further tweaking will be
needed.
The parsing of light data from maps is now in the client library, and
basic light management is in scene. Putting the light loading code into
the Vulkan renderer was a mistake I've wanted to correct for a while.
The client code still needs a bit of cleanup, but the basics are working
nicely.
This replaces *_NewMap with *_NewScene and adds SCR_NewScene to handle
loading a new map (for quake) in the renderer, and will eventually be
how any new scene is loaded.
This leaves only the one conditional in the shader code, that being the
distance check. It doesn't seem to make any noticeable difference to
performance, but other than explosion sprites being blue, lighting
quality seems to have improved. However, I really need to get shadows
working: marcher is just silly-bright without them, and light levels
changing as I move around is a bit disconcerting (but reasonable as
those lights' leaf nodes go in and out of visibility).
Id Software had pretty much nothing to do with the vulkan renderer (they
still get credit for code that's heavily based on the original quake
code, of course).
It's not used yet, and thus may have some incorrect settings, but I
decided that I will probably want it at some stage for qwaq. It's
essentially was was in the original spec, but updated for some of the
niceties added to parsing since I removed it back then. It's also in its
own file.
Just "loading" and "unloading" (both really just hints due to the
caching system), and an internal function for converting a handle to a
model pointer, but it let me test IQM loading and unloading in Vulkan.
The model system is rather clunky as it is focused around caching, so
unloading is more of a suggestion than anything, but it was good enough
for testing loading and unloading of IQM models in Vulkan.
Despite the base IQM specification not supporting blend-shapes, I think
IQM will become the basis for QF's generic model representation (at
least for the more advanced renderers). After my experience with .mu
models (KSP) and unity mesh objects (both normal and skinned), and
reviewing the IQM spec, it looks like with the addition of support for
blend-shapes, IQM is actually pretty good.
This is just the preliminary work to get standard IQM models loading in
vulkan (seems to work, along with unloading), and the very basics into
the renderer (most likely not working: not tested yet). The rest of the
renderer seems to be unaffected, though, which is good.
The resource subsystem creates buffers, images, buffer views and image
views in a single batch operation, using a single memory object to back
all the buffers and images. I had been doing this by hand for a while,
but got tired of jumping through all those vulkan hoops. While it's
still a little tedious to set up the arrays for QFV_CreateResource (and
they need to be kept around for QFV_DestroyResource), it really eases
calculation of memory object size and sub-resource offsets. And
destroying all the objects is just one call to QFV_DestroyResource.
I might need to do similar for other formats, but I ran into the problem
of the texture type being tex_palette instead of the expected tex_rgba
when pre-(no-)loading a tga image, resulting in Vulkan not liking my
attempt at generating mipmaps.
This allows the fuzzy bsearch used to find a def by address to work
properly (ie, find the actual def instead of giving some other def +
offset). Makes for a much more readable instruction stream.
The scene id is in the lower 32-bits for all objects (upper 32-bits are
0 for actual scene objects) and entity/transform ids are in the upper
32-bits. Saves having to pass around a second parameter in progs code.
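ie, one uint64_t carries both (helper names illustrative):
    #include <stdint.h>
    static inline uint64_t obj_id (uint32_t scene, uint32_t ent)
    {
        return (uint64_t) ent << 32 | scene;   // ent == 0 for the scene itself
    }
    static inline uint32_t obj_scene (uint64_t id)  { return id; }       // low 32
    static inline uint32_t obj_entity (uint64_t id) { return id >> 32; } // high 32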
pr_type_t now contains only the one "value" field, and all the access
macros now use their PACKED variant for base access, making access to
larger types more consistent with the smaller types.
Vulkan doesn't appreciate the empty buffers that result from the model
not having any textures or surfaces that can be rendered (rightfully so,
for such a bare-metal api).
I doubt the calls were ever actually made in a normal map due to the
node actually being a node when breaking out of the loop, but when I
experimented with an empty world model (no nodes, one infinite empty
leaf) I found that visit_leaf was getting called twice instead of once.
Since it is updated every frame, it needs to be as fast as possible for
the cpu code. This seems to make a difference of about 10us (~130 ->
~120) when testing in marcher. Not a huge change, but the timing
calculation was wrapped around the entire base world pass, so there was
a fair bit of overhead from bsp traversal etc.
It makes a significant difference to level load times (approximately
halves them for demo1 and demo2). Nicely, it turns out I had implemented
the rest of the staging buffer code (in particular, flushing) correctly
in that it seems there's no corruption in any of the data.
They're really redundant, and removing the next pointer makes for a
slightly smaller cvar struct. Cvar_Select was added to allow finding
lists of matching cvars.
The tab-completion and config saving code was reworked to use the hash
table DO functions. Comments removed since the code was completely
rewritten, but still many thanks to EvilTypeGuy and Fett.
Hash_Select returns a list of elements that match a given criterion
(select callback returning non-0).
Hash_ForEach simply calls a function for every element.
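The shape of the usage (callback and function signatures here are
assumptions from the description, not the actual prototypes):
    #include <stdio.h>
    typedef struct { const char *name; int flags; } element_t;
    static int
    is_archived (void *ele)        // select callback: non-0 means "match"
    {
        return ((element_t *) ele)->flags & 1;   // illustrative flag bit
    }
    static void
    print_name (void *ele)
    {
        printf ("%s\n", ((element_t *) ele)->name);
    }
    // list of matching elements vs. a simple visit of every element:
    void **matches = Hash_Select (table, is_archived);
    Hash_ForEach (table, print_name);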
And use it for hud_scoreboard_gravity. Putting the enum def in view made
the most sense as view does own the base type and the enum is likely to
be useful for other settings.
I think I'd gotten distracted while making the changes to the server,
then simply copied the partial changes to the client. It didn't blow up
thanks to the backing store being char * and the type sized for int, so
safe on any platform, but useless as it wasn't connected properly.
It's actually pretty neat being able to directly, but safely, control a
function pointer via a cvar :)
The misinterpretations were due to either the cvar not being accessed
directly by the engine, but via only the callback, or the cvars were
accessed only by progs (in which case, they should be float). The
remainder are a potential enum (hud gravity) and a "too hard basket"
(rcon password: need to figure out how I want to handle secret strings).
Other parts of quakefs treat an empty path as an error, so fs_sharepath
and fs_userpath must never be empty or they will effectively be
rejected. While the user explicitly setting them to empty strings is one
way for them to become empty, another is QFS_CompressPath compressing
'.' to an empty path, which makes it rather difficult to set up the
traditional quake directory tree (ie, operate from the current
directory).
My script didn't know what type to make the cvars since they're not used
directly by the code, so they got treated as strings instead of ints or
floats.
This is an extremely extensive patch as it hits every cvar, and every
usage of the cvars. Cvars no longer store the value they control,
instead, they use a cexpr value object to reference the value and
specify the value's type (currently, a null type is used for strings).
Non-string cvars are passed through cexpr, allowing expressions in the
cvars' settings. Also, cvars have returned to an enhanced version of the
original (id quake) registration scheme.
As a minor benefit, relevant code having direct access to the
cvar-controlled variables is probably a slight optimization as it
removed a pointer dereference, and the variables can be located for data
locality.
The static cvar descriptors are made private as an additional safety
layer, though there's nothing stopping external modification via
Cvar_FindVar (which is needed for adding listeners).
While not used yet (partly due to working out the design), cvars can
have a validation function.
Registering a cvar allows a primary listener (and its data) to be
specified: it will always be called first when the cvar is modified. The
combination of proper listeners and direct access to the controlled
variable greatly simplifies the more complex cvar interactions as much
less null checking is required, and there's no need for one cvar's
callback to call another's.
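Very roughly, the shape this gives user code (field and function names
here are guesses from the description, not the actual API; the renderer
update call is a stand-in):
    // The subsystem owns the variable; the descriptor points at it, gives
    // cexpr its type, and names a primary listener that always runs first.
    static float fov;
    static cvar_t fov_cvar = {
        .name = "fov",
        .description = "field of view",
        .default_value = "90",
        .value = { .type = &cexpr_float, .value = &fov },
    };
    static void
    fov_listener (void *data, const cvar_t *cvar)
    {
        update_projection (fov);  // direct access: no lookups, no null checks
    }
    // id-style registration of a static descriptor, plus the listener:
    Cvar_Register (&fov_cvar, fov_listener, 0);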
nq-x11 is known to work at least well enough for the demos. More testing
will come.
The prefix gives more context to the error messages, making the system a
lot easier to use (it was especially helpful when getting my cvar revamp
into shape).
Based on the flags type used in vkparse (difference is the lack of
support for plists). Having this will make supporting named flags in
cvars much easier (though setting up the enum type is a bit of a chore).
This allows for easy (and safe) printing of cexpr values where the type
supports it. Types that don't support printing would be due to being too
complex or possibly write-only (eg, password strings, when strings are
supported directly).
Surprisingly, only two, but they were caught by the different value
fields being used, thus the cvar was checked in multiple places. I
imagine that's not really all that common, so there may be some
inconsistencies between default value and use.
This is progress towards #23. There are still some references to
host_time and host_client (via nq's server.h), and a lot of references
to sv and svs, but this is definitely a step in the right direction.
This allows a single render pass description to be used for both
on-screen and off-screen targets. While Vulkan does allow a VkRenderPass
to be used with any compatible frame buffer, and vkparse caches a
VkRenderPass created from the same description, this allows the same
description to be used for a compatible off-screen target without any
dependence on the swapchain. However, there is a problem in the caching
when it comes to targeting outputs with different formats.
As I had suspected, it's due to a synchronization problem between the
scrap and drawing. There's actually a double problem in that data
uploaded to the scrap isn't flushed until the first frame is rendered
causing a quick init-shutdown sequence to take at least five seconds due
to the staging buffer waiting (and timing out) on a stuck fence.
Rendering just one frame "fixes" the problem (draw was one of the
earliest subsystems to get going in vulkan).