The problem was that I had mixed up the purpose of the per-frame vertex
buffers and used them for the core quad data when they were meant for
subpic and the like, and had forgotten about the static vertex buffer.
This gets at least conchars working (pics in general are not tested yet).
Any performance gains will be utterly swamped by the deferred renderer,
but it will allow better control of quad render order by any client
code (and should be slightly better for simpler renderers when I get
support for them working).
Right now, plenty is broken (many of the higher-level draw functions are
disabled, and pics don't render correctly), but this gets at least the
basics in so I'm not bouncing diffs around as much.
It turns out the slice pipeline is compatible with the glyph pipeline in
that its vertex attribute data is a superset (just the addition of the
offset attributes). While the queues have yet to be merged, this will
eventually get glyphs, sliced sprites, and general (static) quads into
the one pipeline. Although this is slightly slower for glyph rendering
(due to the need to pass an extra 8 bytes per glyph), it should be
faster for quad rendering (when done) as it will be 24 bytes per quad
instead of 32 bytes per vertex (ie, 128 bytes per quad). Either way, it
serves as a proof of concept for doing quads, glyphs and sprites in the
one pipeline.
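Roughly, the per-instance layout being aimed for looks something like
this (the field names and exact types here are illustrative guesses, not
the actual structs):

    #include <stdint.h>

    /* Hypothetical per-instance quad data, 24 bytes per quad: one
       instance drives all four corners, instead of four 32-byte
       vertices (128 bytes per quad). */
    typedef struct qpic_inst_s {
        float    position[2];  /* 8 bytes: quad origin on screen */
        float    offset[2];    /* 8 bytes: the extra offset attributes the
                                  slice layout adds over the glyph layout */
        uint32_t index;        /* 4 bytes: glyph/pic lookup index */
        uint32_t color;        /* 4 bytes: packed tint color */
    } qpic_inst_t;             /* sizeof == 24 */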
The main reason I had created the separate glyph shader in the first
place was that I hadn't thought of using image view swizzles to handle
coverage-alpha textures (for monochrome glyphs), and for whatever reason
I also had the texture in a different binding slot from the one used by
the twod fragment shader. With both issues out
of the way, there's no reason to have an almost identical (just some
naming) shader just for glyphs.
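For reference, the swizzle trick amounts to something like this (a
sketch assuming an R8 coverage texture; the helper and image names are
placeholders):

    #include <vulkan/vulkan.h>

    /* View a single-channel coverage texture so the shader sees
       RGB = 1.0 and A = coverage, removing the need for a
       glyph-specific shader. */
    static VkImageView
    make_coverage_alpha_view (VkDevice device, VkImage glyph_image)
    {
        VkImageViewCreateInfo view_info = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
            .image = glyph_image,
            .viewType = VK_IMAGE_VIEW_TYPE_2D,
            .format = VK_FORMAT_R8_UNORM,
            .components = {
                .r = VK_COMPONENT_SWIZZLE_ONE,
                .g = VK_COMPONENT_SWIZZLE_ONE,
                .b = VK_COMPONENT_SWIZZLE_ONE,
                .a = VK_COMPONENT_SWIZZLE_R,  /* coverage becomes alpha */
            },
            .subresourceRange = {
                .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
                .levelCount = 1,
                .layerCount = 1,
            },
        };
        VkImageView view;
        vkCreateImageView (device, &view_info, 0, &view);
        return view;
    }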
With an eye towards merging the 2d pipelines as much as possible, I
found that the glyph and basic 2d quad texture descriptors were in
different slots for no reason I can think of. Having them in the same
slot would mean I could use the same fragment shader for all 2d
pipelines (though the plan is to get it down to two: (sliced) quads and
lines).
I hadn't noticed the problem until playing with early fragment tests for
the sprite fragment shaders, but passing data that expects triangle
strips to a pipeline that expects triangle lists doesn't work too well
when drawing quads.
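For a quad with corners numbered 0 1 / 2 3, the difference boils down to
the index pattern (illustrative only, not the actual buffer contents):

    #include <stdint.h>

    /* A strip covers the quad with four indices; a list needs six, with
       the second triangle rewound to keep consistent winding. */
    static const uint16_t quad_strip_inds[] = { 0, 1, 2, 3 };
    static const uint16_t quad_list_inds[]  = { 0, 1, 2,  2, 1, 3 };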
Marking them as cached means that they'll be "uncached" instead of
destroyed when freed, which would not be a particularly good thing. I
have no memory of how I found this; the change was sitting in my git
stash.
It seems that the mouse escaping the barriers requires some combination
of hitting two at once, and holding your mouth just right (something
about sliding the mouse up and down one barrier near the other).
However, sending the mouse back to the center of the screen when it
touches a barrier makes such sliding impossible.
This seems to fix #38.
I obviously need a better way to test legacy code because the fix for
unsigned-int behavior with clang broke mouse warping when using
XGrabPointer instead of XInput2's XIGrabEnter.
While Draw_Glyph does draw only one glyph at a time, it doesn't shape
the text every time, so it is a major win for performance (especially
coupled with pre-shaped text).
And add a function to process a passage into a set of views with glyphs.
The views can be flowed: they have flow gravity and their sizes set to
contain all the glyphs within each view (nominally, words). Nothing is
tested yet, and font rendering is currently broken completely.
Font and text handling is very much part of the user interface and at
least partially independent of rendering, but it fits better with GUI
than with general UI (ie, both graphics and text mode), thus libQFgui as
well as libQFui are built in the ui directory.
The existing font related builtins have been moved into the ruamoko
client library.
I had done the loader only for the GPU renderers, so the CPU renderer didn't
draw the characters transparently. Fixes the pink block in my ruamoko
test scene (due to the notify text area).
While it doesn't really make any difference to the texture upload (8-bit
is 8-bit), and the sampler is in control of the interpretation, this
makes vulkan more consistent with the specification of the glyph
texture.
In theory, it supports all the non-palette formats, but only luminance
and alpha (tex_l and tex_a) have been tested. Fixes the rather broken
glyph rendering.
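The mapping is roughly as below (a sketch: the header path and the
formats chosen for the untested cases are assumptions):

    #include <vulkan/vulkan.h>
    #include "QF/image.h"    /* tex_l, tex_a, etc. (assumed location) */

    /* Illustrative mapping of QF's non-palette texture formats to Vulkan
       formats; only the single-channel cases (tex_l, tex_a) are tested. */
    static VkFormat
    tex_vkformat (int format)
    {
        switch (format) {
            case tex_l:
            case tex_a:    return VK_FORMAT_R8_UNORM;
            case tex_la:   return VK_FORMAT_R8G8_UNORM;
            case tex_rgb:  return VK_FORMAT_R8G8B8_UNORM;
            case tex_rgba: return VK_FORMAT_R8G8B8A8_UNORM;
            default:       return VK_FORMAT_UNDEFINED;  /* palette formats */
        }
    }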
Thanks to the 3d frame buffer output being separate from the swap chain,
it's possible to have a different frame buffer size from the window
size, allowing for a smaller buffer and thus my laptop can cope (mostly)
with the vulkan renderer.
The escape was actually harmless as the buffers would not be read due to
the particle count being 0 (thus why the buffers were at the end of the
staging buffer: no space was allocated for them, only for the system
buffer, but their offsets were just past the system buffer). However,
the validation layers quite rightly did not like that. Thus, the two
buffers are pointed to the system buffer so all three descriptors are
always valid.
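In outline, the idea is just a fallback when no particle space was
allocated (names and structure here are placeholders, not the actual
code):

    #include <vulkan/vulkan.h>

    /* If no space was allocated for the particle data this frame, aim
       those descriptors at the (always allocated) system buffer so every
       VkDescriptorBufferInfo refers to valid memory, even though it
       won't be read. */
    static void
    set_particle_buffer_infos (VkBuffer staging,
                               VkDeviceSize sys_offset, VkDeviceSize sys_size,
                               VkDeviceSize part_offset, VkDeviceSize part_size,
                               VkDescriptorBufferInfo info[3])
    {
        info[0] = (VkDescriptorBufferInfo) { staging, sys_offset, sys_size };
        if (part_size) {
            info[1] = (VkDescriptorBufferInfo) { staging, part_offset, part_size };
            info[2] = (VkDescriptorBufferInfo) { staging, part_offset, part_size };
        } else {
            /* zero particles: reuse the system buffer region */
            info[1] = info[0];
            info[2] = info[0];
        }
    }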
Where "too far" is 1024 units (the maximum supported) or the radius.
The change to using unsigned for the distances meant the simple
checks missed the effective max dist going negative, thus leading to a
segfault.
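The trap is the usual one with unsigned subtraction; a minimal
illustration (not the actual culling code):

    #include <stdio.h>

    int
    main (void)
    {
        unsigned max_dist = 1024;  /* maximum supported distance */
        unsigned dist = 1500;      /* object is beyond the maximum */

        /* Signed, this would go negative and the object would be skipped;
           unsigned, it wraps to a huge value and the simple check passes. */
        unsigned effective = max_dist - dist;
        printf ("effective max dist: %u\n", effective);  /* 4294966820 */
        if (effective > 0) {
            puts ("kept despite being out of range");
        }
        return 0;
    }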
I had debated putting the blending in the compose subpass or a separate
pass and originally went with the separate pass, but it turns out that
removing the separate pass gains 1-3% (5-15/545 fps in a timedemo of
demo1).
It's a bit flaky for particles, especially at higher frame rates, but
that's due to supporting only 64 overlapping pixels. A reasonable
solution is probably switching to a priority heap for the "sort" and
upping the limit.
This required making the texture set accessible to the vertex shader
(instead of using a dedicated palette set), which I don't particularly
like, but I don't feel like dealing with the texture code's hard-coded
use of the texture set. QF-style particles need this mostly for the
smoke puffs, as they expect a texture.
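The gist of the descriptor change, as a sketch (binding number and names
are made up):

    #include <vulkan/vulkan.h>

    /* Make the texture binding visible to the vertex stage as well,
       rather than keeping a separate palette-only descriptor set. */
    static const VkDescriptorSetLayoutBinding texture_binding = {
        .binding = 0,
        .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .descriptorCount = 1,
        .stageFlags = VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT,
    };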
It doesn't want to work on my nvidia (or more recent sid?) and doesn't
seem to be necessary. The problem may be multiple event sets before the
first wait, but investigation can wait for now.
This is probably the biggest reason I had problems with particles not
updating correctly: the descriptors were generally not pointing to
where the data actually was in the staging buffer.
I don't yet know whether they actually work (not rendering yet), but the
system isn't locking up, and shutdown is clean, so at least resources
are handled correctly.
Although it works as intended (tested via hacking), it's not hooked up
as the current frame buffer handling in r_screen is not readily
compatible with how vulkan output is handled. This will need some
thought to get working.
This splits up render pass creation so that the creation of the various
resources can be tailored to the needs of the actual render pass
sub-system. In addition, it gets window resizing mostly working (just
some problems with incorrect rendering).
It turns out the semaphore used for vkAcquireNextImageKHR may be left in
a signaled state for VK_ERROR_OUT_OF_DATE_KHR. While it seems to be
possible to clear the semaphore using an empty queue submission,
destroying and recreating the semaphore works well.
Still have problems with the frame buffer after window resize, though.
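For reference, the acquire path ends up looking something like this (a
sketch of the standard Vulkan calls involved, not the exact code):

    #include <vulkan/vulkan.h>

    /* Acquire the next swapchain image. On VK_ERROR_OUT_OF_DATE_KHR the
       wait semaphore may have been left signaled, so destroy and
       recreate it before retrying with the rebuilt swapchain. */
    static VkResult
    acquire_image (VkDevice device, VkSwapchainKHR swapchain,
                   VkSemaphore *image_sem, uint32_t *image_index)
    {
        VkResult res = vkAcquireNextImageKHR (device, swapchain, UINT64_MAX,
                                              *image_sem, VK_NULL_HANDLE,
                                              image_index);
        if (res == VK_ERROR_OUT_OF_DATE_KHR) {
            vkDestroySemaphore (device, *image_sem, 0);
            VkSemaphoreCreateInfo ci = {
                .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
            };
            vkCreateSemaphore (device, &ci, 0, image_sem);
            /* the caller recreates the swapchain and tries again */
        }
        return res;
    }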