If no handler has been registered, then the corresponding parameter is
printed as a pointer but with surrounding brackets (eg, [0xfc48]). This
will allow the ruamoko runtime to implement object printing.
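For illustration, the fallback amounts to something like the following (the handler type, names, and output target are hypothetical, not the actual progs printing code):

    #include <stdio.h>

    typedef void (*print_handler_t) (FILE *out, const void *param);

    // Hypothetical sketch: if a handler was registered for the parameter,
    // let it format the object; otherwise fall back to printing the pointer
    // wrapped in brackets (eg, [0xfc48]).
    static void
    print_param (FILE *out, print_handler_t handler, const void *param)
    {
        if (handler) {
            handler (out, param);
        } else {
            fprintf (out, "[%p]", param);
        }
    }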
Most were pretty easy and fairly logical, but gib's regex was a bit of a
pain until I figured out the real problem was the conditional
assignments.
However, libs/gamecode/test/test-conv4 fails when optimizing due to gcc
using vcvttps2dq (which is nice, actually) for the vector forms, but not
for the scalar equivalent in other cases. I haven't decided what to do
with the test (I might abandon it as it does seem to be UD).
This gets ambient sounds (in particular, water and sky) working again
for quakeworld after the recent sound changes, and again for nq after I
don't know how long.
I never liked "cache" as a name because it said where the sound was
stored rather than how it was loaded/played, but "stream" is ok, since
that's pretty much spot on. I'm not sure "block" is the best, but it at
least makes sense because the sounds are loaded as a single block (as
opposed to being streamed). And now, neither uses the cache system.
Nuclear powered audio ;)
More seriously, use _Atomic on a few fields that very obviously need it.
That is, the channel's buffer pointer (used to signal to the mixer that the
channel is ready for use) and "flow control" flags (stop, done and
pause), and head and tail in the buffer itself. Since QF has been
working without _Atomic (admittedly, thanks to luck and x86's strong
memory model), this should do until proven otherwise. I imagine getting
stream reading out of the RT thread will highlight any issues.
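Roughly, the idea is along these lines (field names purely illustrative, not the actual channel_t or buffer layout):

    #include <stdatomic.h>

    typedef struct {
        _Atomic(float *) buffer;  // non-null tells the mixer the channel is ready
        _Atomic int      stop;    // "flow control" flags shared between the
        _Atomic int      done;    // client side and the mixer
        _Atomic int      pause;
    } channel_sketch_t;

    typedef struct {
        _Atomic unsigned head;    // read/write cursors shared between the
        _Atomic unsigned tail;    // streaming reader and the mixer
        float            data[];
    } ringbuffer_sketch_t;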
Care needs to be taken when freeing channels as doing so while an
asynchronous mixer is using them is unlikely to end well. However,
whether the mixer is asynchronous depends on the output driver. This
lets the driver inform the rest of the system that the output and mixer
are running asynchronously.
This improves the locality of reference when mixing and removes the
proxy sfx for streamed sounds.
The buffer for streamed sounds is allocated when the stream is opened
(since streamed sounds can't share buffers), and freed when the stream
is closed.
For block sounds, the buffer is reference counted (with the sfx holding
one reference, so currently block buffers never get freed), with their
reference count getting incremented on open and decremented on close.
That the reference counts get to 1 has been confirmed, so all that
should be needed is proper destruction of the sfx instances.
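A minimal sketch of the open/close reference counting (names hypothetical, not the actual sound API):

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct {
        _Atomic int refcount;   // sfx holds one reference; each open() adds one
        // sample data follows
    } buffer_sketch_t;

    static buffer_sketch_t *
    buffer_retain (buffer_sketch_t *buf)
    {
        atomic_fetch_add (&buf->refcount, 1);
        return buf;
    }

    static void
    buffer_release (buffer_sketch_t *buf)
    {
        // free only when the last reference (normally the sfx's own) goes away
        if (atomic_fetch_sub (&buf->refcount, 1) == 1) {
            free (buf);
        }
    }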
Still need to sort out just why channels leak across level changes.
Getting the tag is possibly useful in general and definitely in
debugging. Setting, I'm not so sure about, as it should be done at
allocation time, but that's not always possible.
Also, correct the return type of z_block_size, though it affected only
Z_Print. While an allocation larger than 4GB is... big for zone, the
blocks do support it, so printing should too.
And use it for Ruamoko object reference counts.
I need reference counts for dealing with block sound buffers since they
can be shared by many channels. I figured I'd take care of Ruamoko's
reference count location at the same time.
Fixes #27.
Sounds no longer use the cache, which is good for multi-threading, but a
pain for memory management: the buffers are shared between channels that
play back the sounds, but when the sounds were cached, they were
automagically (thus problematically) freed when the space was needed.
That no longer happens, so they leak. I think the solution is to use
reference counting and retain/release in sfx->open() and sfx->close().
Streams are the easy one as they were never in the cache. As a side
effect, sfxstream_t is much smaller as it no longer has the buffer
embedded in the struct.
More shrinkage. It turned out the mixer uses the phase fields, so they
couldn't be removed, but even at 192kHz, +/- 127 samples produces
sufficient phase separation for a 21cm head (which is, actually, pretty
big: mine is about 15cm across), so shrinking them can come later.
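(As a rough check, assuming ~343m/s for the speed of sound: 127 samples
at 192kHz is about 0.66ms, in which sound travels roughly 23cm, so
+/- 127 samples covers a 21cm ear separation with a little to spare.)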
The ambient sound loading has been removed from snd_channels because 1)
it doesn't work for nq, 2) it should never have been there in the first
place (it belongs in the client, but that needs some more API).
This is part of a process to shrink channel_t so it doesn't waste locked
memory when it gets moved there. Eventually, only the fields the mixer
needs will be in channel_t itself: those needed for spatialization will
be moved into a separate array.
In the process, I found that channels leak across level changes, but
this appears to be due to the cached sounds being removed during loading
and the mixer never marking them as done (it sees the null sfx pointer
and assumes the channel was never in use). Having the mixer mark the
channel as done seems to fix the leak, but causes a free channel list
overflow. Rather than fight with that, I'll leave the leak for now and
fix it at its root cause: the management of the sound samples
themselves.
The scaling up of the volumes when setting a channel's volume bothered
me, the biggest issue being that it hasn't been necessary for over a
decade, since the conversion to a float mixer. Now the volume and
attenuation scaling from protocol bytes is entirely in the client's
hands.
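As a sketch, the client-side conversion amounts to something like this
(assuming the stock Quake protocol encoding of volume as a 0-255 byte and
attenuation as attenuation*64; the function names are illustrative):

    #include <stdint.h>

    static inline float
    decode_volume (uint8_t vol_byte)
    {
        return vol_byte / 255.0f;     // protocol sends volume as 0-255
    }

    static inline float
    decode_attenuation (uint8_t atten_byte)
    {
        return atten_byte / 64.0f;    // protocol sends attenuation * 64
    }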
sfx_t is now private, and cd_file no longer accesses channel_t's
internals. This is necessary for hiding the code needed to make mixing
and channel management *properly* lock-free (I've been getting away with
murder thanks to x86's strong memory model and just plain luck with
gcc).
And make Sys_MaskPrintf take the developer enum rather than just a raw
int.
It was actually getting some nasty hunk corruption errors under memory
pressure, which made it clear the sound system needs some work.
I always wanted it there, but there were dependency issues at the time. I
guess they got cleaned up for the most part since then (other than
cd_file, but it's on my hit-list).
The texture animation data is compacted into a small struct for each
texture, resulting in much less data access when animating the texture.
More importantly, no looping over the list of frames. I plan on
migrating this to at least the other hardware renderers.
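Something along these lines (illustrative only, not the actual struct in
the renderer):

    #include <stdint.h>

    typedef struct {
        uint16_t base;    // index of the group's first frame
        uint16_t count;   // frames in the group (1 for non-animated textures)
        uint16_t offset;  // this texture's position within the group
    } texanim_sketch_t;

    // Picking the current frame is then a little arithmetic instead of
    // walking the per-texture frame list every time:
    static inline uint16_t
    texanim_frame (const texanim_sketch_t *anim, unsigned frame_counter)
    {
        return anim->base + (anim->offset + frame_counter) % anim->count;
    }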
The models are broken up into N sub-(sub-)models, one for each texture,
but all faces using the same texture are drawn as an instance, making
for both reduced draw calls and reduced index buffer use (and thus,
hopefully, reduced bandwidth). While texture animations are broken, this
does mark a significant milestone towards implementing shadows as it
should now be possible to use multiple threads (with multiple index and
entid buffers) to render the depth buffers for all the lights.
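Conceptually, each sub-(sub-)model boils down to one draw record per
texture, something like this (hypothetical layout, not the actual data):

    #include <stdint.h>

    // Illustrative only: all faces in the model that share a texture are
    // merged into one index range, and the entities using the model become
    // instances, so the whole set goes out as a single instanced draw call.
    typedef struct {
        uint32_t texture;         // texture/descriptor index
        uint32_t index_base;      // first index in the shared index buffer
        uint32_t index_count;     // covers every face with this texture
        uint32_t instance_base;   // first entry in the entid buffer
        uint32_t instance_count;  // one per entity using the model
    } texdraw_sketch_t;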
This allows the use of an entity id to index into the entity data and
fetch the transform and colormod data in the vertex shader, thus making
instanced rendering possible. Non-world brush entities are still not
rendered, but the world entity is using both the entity data buffer and
entid buffer.
Sub-models and instance models need an instance data buffer, but this
gets the basics working (and the proof of concept). Using arrays like
this actually simplified a lot of the code, and will make it easy to get
transparency without turbulence (just another queue).
The gl water warp ones have been useless since very early on due to not
doing water warp in gl (vertex warping just didn't work well), and the
recent water warp implementation doesn't need those hacks. The rest of
the removed flags just aren't needed for anything. SURF_DRAWNOALPHA
might get renamed, but should be useful for translucent bsp surfaces
(eg, vines in ad_tears).
One more step towards BSP thread-safety. This one brought with it a very
noticeable speed boost (ie, not lost in the noise) thanks to the face
visframes being in tightly packed groups instead of 128 bytes apart,
though the sw render's boost is lost in the noise (but it's very
fill-rate limited).
This is the next critical step to making BSP rendering thread-safe.
visframe was replaced with cluster (not used yet) in anticipation of BSP
cluster reconstruction (which will be necessary for dealing with large
maps like ad_tears).
The main goal was to get visframe out of mnode_t to make it thread-safe
(each thread can have its own visframe array), but moving the plane info
into mnode_t made for better data access patterns when traversing the bsp
tree as the plane is right there with the child indices. Nicely, the
size of mnode_t is the same as before (64 bytes due to alignment), with
4 bytes wasted.
Performance-wise, there seems to be very little difference. Maybe
slightly slower.
The unfortunate thing about the change is the plane distance is negated,
possibly leading to some confusion, particularly since the box and
sphere culling functions were affected. However, this is so point-plane
distance calculations can be done with a single 4d dot product.
GCC does a nice enough job compiling the more readable form (though
admittedly, hadd might be even more readable than what's there for
dot[fd], but hadd is supposedly slower than the shuffles and adds, and
qfvis seems to support that).
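For illustration, the distance calculation is roughly this (not the
actual QF code):

    typedef float vec4_sketch[4];

    static inline float
    dot4 (const vec4_sketch a, const vec4_sketch b)
    {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
    }

    // With the plane stored as (nx, ny, nz, -dist) and the point as
    // (x, y, z, 1), the signed point-plane distance is a single 4d dot:
    // n.p + (-dist) == n.p - dist.
    static inline float
    plane_distance (const vec4_sketch plane, const vec4_sketch point)
    {
        return dot4 (plane, point);
    }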
This fixes the annoying persistence of inputs when respawning and
changing levels. Axis input clearing is hooked up but does nothing as of
yet. Active device input clearing has always been hooked up, but also
does nothing in the evdev and x11 drivers.
It was added only because FitzQuake used it in its pre-bsp2 large-map
support. That support has been hidden in bspfile.c for some time now.
This doesn't gain much other than having one less type to worry about.
Well tested on Conflagrant Rodent (the map that caused the need for
mclipnode_t in the first place).
This was one of the biggest reasons I had trouble understanding the bsp
display list code, but it turns out it was for dealing with GLES's
16-bit limit on vertex indices. Since vulkan uses 32-bit indices,
there's no need for the extra layer of indirection. I'm pretty sure it
was that lack of understanding that prevented me from removing it when I
first converted the glsl bsp code to vulkan (ie, that 16-bit indices
were the only reason for elements_t).
It's hard to tell whether the change makes much difference to
performance, though it seems it might (the stats are noisy even over 50
timedemo loops), and the better data locality indicates it should be at
least as good if not better. However, the reason for the change is
simplifying the data structures so I can make bsp rendering thread-safe
in preparation for rendering shadow maps.
This is a particularly ancient bug, sort of introduced by rhamph when he
optimized temp entity model handling and later exacerbated by me.
However, I suspect the actual problem is limited to nq as qw's gamedir
handling would have caused the models to be reloaded, but nq doesn't
ever change game directories once running.
They should probably be called leafsurfaces since they are the actual
surfaces of the leaf: ie, the faces of the leaf mesh if each leaf were a
sub-sub-model.
For now, at least (I have some ideas to possibly reduce the numbers and
also to avoid the need for actual limits). I've seen gmsp3v2 use over
500 lights at once (it has over 1300), and I spent too long figuring out
that weird light behavior was due to the limit being hit and lights
getting dropped (and even longer figuring out that more weird behavior
was due to the lack of shadows and the world being too bright in the
first place).
Since the staging buffer allocates the command buffers it uses, it
needs to free them when it is freed. I think I was confused by the
validation layers not complaining about unfreed buffers when shutting
down, but that's because destroying the pool (during program shutdown,
when the validation layers would complain) frees all the buffers. Thus,
due to staging buffers being created and destroyed during the level load
process, (rather large) command buffers were piling up like imps in a
Doom level.
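The fix boils down to something like this (the struct and function names
are hypothetical; vkFreeCommandBuffers is the real Vulkan call):

    #include <vulkan/vulkan.h>

    typedef struct {
        VkCommandPool    cmdpool;      // the shared pool the buffers came from
        uint32_t         num_cmdbufs;
        VkCommandBuffer *cmdbufs;      // handles kept so they can be freed
    } staging_sketch_t;

    static void
    staging_sketch_destroy (VkDevice device, staging_sketch_t *stage)
    {
        // Free the command buffers explicitly: the shared pool outlives many
        // staging buffers (they come and go with level loads), so waiting for
        // vkDestroyCommandPool at shutdown lets them pile up in the meantime.
        vkFreeCommandBuffers (device, stage->cmdpool,
                              stage->num_cmdbufs, stage->cmdbufs);
        // ...the rest of the staging buffer teardown would go here...
    }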
In the process, it was necessary to rearrange some of the shutdown code:
vulkan_vid_render_shutdown destroys the shared command pool, but the pool
is required for freeing the command buffers, and there was a minor mess
of long-lived staging buffers being freed afterwards. That didn't end
particularly well.
While gcc was quite correct in its warning, all I needed was to
explicitly truncate the string. I don't remember why I didn't do that
back when I made the changes in 4f58429137, but it works now, and the
surrounding code does expect the string to be no more than 15 chars
long. This fixes yet another memory leak (but timedemo over multiple
runs still leaks like a sieve).
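A minimal sketch of the explicit truncation (the field name and size are
illustrative; the point is forcing the terminator rather than relying on
strncpy alone):

    #include <string.h>

    // The destination is a fixed 16-byte field, so at most 15 chars plus the
    // terminator; truncating explicitly keeps gcc (and readers) happy.
    static void
    copy_name (char dst[16], const char *src)
    {
        strncpy (dst, src, 15);
        dst[15] = 0;
    }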