Vulkan validation (quite rightly) doesn't like it when the flush range
goes past the end of the buffer, but also doesn't like it when the flush
range isn't cache-line aligned, so align the size of the buffer, too.
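For reference, the sort of clamping involved looks something like this (a sketch only, assuming the alignment in question is VkPhysicalDeviceLimits::nonCoherentAtomSize (a power of two) and that the buffer size has already been rounded up to a multiple of it; not the actual QF code):

    // needs <vulkan/vulkan.h>
    static void
    flush_range (VkDevice device, VkDeviceMemory memory,
                 VkDeviceSize offset, VkDeviceSize size,
                 VkDeviceSize atom, VkDeviceSize buffer_size)
    {
        VkDeviceSize start = offset & ~(atom - 1);                   // round down
        VkDeviceSize end = (offset + size + atom - 1) & ~(atom - 1); // round up
        if (end > buffer_size) {
            end = buffer_size;  // never flush past the (aligned) buffer end
        }
        VkMappedMemoryRange range = {
            .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
            .memory = memory,
            .offset = start,
            .size = end - start,
        };
        vkFlushMappedMemoryRanges (device, 1, &range);
    }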
I had originally planned on mixing the stage management with general
texture support code like I did in glsl, but I think that was a mistake,
as I kept looking for scrap.[ch] whenever I wanted to edit something to
do with the scrap...
This cleans up texture_t and possibly even improves locality of
reference when running through texture chains (not profiled, and not
actually the goal).
It optionally generates mipmaps, and supports the main texture types
(especially for texture packs), including palettes, but is otherwise
rather unsophisticated code. Needs a lot of work, but testing first.
Cleans up global space and makes it usable in multiple contexts. Also,
max quads dropped to 32k as each frame now has its own vertex buffer to
avoid issues with vertex overwrites (which I have seen). However, all
vertex buffers are in the one memory/buffer object (using offsets) and
the index buffer has been moved into a device-local memory object.
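The binding side of that layout is roughly as follows (a sketch;
vertex_t, MAX_QUADS and frame_index are placeholder names):

    // One VkBuffer holds a vertex region per frame; each frame binds its
    // region by offset rather than using a separate buffer object.
    static void
    bind_frame_vertices (VkCommandBuffer cmd, VkBuffer vertex_buffer,
                         uint32_t frame_index)
    {
        VkDeviceSize frame_size = MAX_QUADS * 4 * sizeof (vertex_t);
        VkDeviceSize offset = frame_index * frame_size;
        vkCmdBindVertexBuffers (cmd, 0, 1, &vertex_buffer, &offset);
    }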
I think I did two as a bit of a ring buffer, but the new ring buffer
system used inside a staging buffer makes it less necessary. Also, the
staging buffer is now a fair bit bigger (4M is probably not really
enough).
This allows the array in which the command buffers are allocated to be
allocated on the stack using alloca and thus removes the need to
malloc/free relatively small chunks.
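The pattern is simply (illustrative, not the exact code):

    // needs <alloca.h>; the VkCommandBuffer array lives on the stack for
    // the duration of the call, so there is nothing to free afterwards
    static void
    get_cmd_buffers (VkDevice device, VkCommandPool pool, uint32_t count)
    {
        VkCommandBuffer *cmds = alloca (count * sizeof (VkCommandBuffer));
        VkCommandBufferAllocateInfo info = {
            .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
            .commandPool = pool,
            .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
            .commandBufferCount = count,
        };
        vkAllocateCommandBuffers (device, &info, cmds);
        // ... record/submit cmds here ...
    }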
The scrap texture did very good things for the glsl renderer and the
better control over data copying might help it do even better things for
vulkan, especially with lots of little icons.
It's never actually used (the texture can be fetched using
GLSL_ScrapTexture) and gets in the way of sharing the scrap system with
the vulkan renderer.
r_screen because of SCR_UpdateScreen, and r_cvar because the cvars
really should never have been in a plugin in the first place (and
r_screen needed access).
First pixels! This was a nightmare of little issues that the validation
layers couldn't help with: incorrect input assembly, incorrect vertex
attribute specs. Though the layers did help with getting the queues
working. Still, lots of work to go but this is a major breakthrough as
I now have access to visual debugging for textures and the like.
Short wrappers for Draw functions are in vid_render_vulkan.c so the
vulkan context can be passed on to the actual functions. The 2D shaders
are set up similarly to those in glsl, but with full 32-bit color (rgba)
support instead of paletted. However, the textures are not loaded yet,
nor is anything bound.
The prototypes for handle parsers needed to be changed because it turned
out "single" was inappropriate for handles as "single" allocates memory
for the parsed object, but handles must be written directly.
The way I wound up using the field meant that exprctx should not "own"
the hashlinks chain, but rather just point to it. This fixes the nasty
access errors I had.
Dependencies on vkparse.hinc were spreading through the code which I
didn't want as that removes a lot of the automation from the automake
files. This keeps all parser code internal to vkparse.c's scope, and any
accesses required for enum and struct (not yet) definitions can be
fetched by name.
I want to be able to use name references, but that requires string
items, so anything that would normally be dictionary or array (or
binary, even) would also need to accept string. This seemed to be the
cleanest solution. Any custom parser would then need to check the type
and act appropriately, but any inappropriate types have already been
pre-filtered by the standard parsers.
Care needs to be taken to ensure the right function is used with the
right arguments, but with these, the need to use qconj(d|f) for a
one-off inverse rotation is removed.
They're binned by powers of two (with in-between sizes going to the
smaller bin should I make cache-line allocations NPOT, which I think
might be worthwhile). However, there seems to still be a bug somewhere
causing a nasty leak, as now my hacked qfvis consumes 40G in less than a
minute.
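If NPOT cache-line allocations do happen, the binning would amount to a
floor-log2 (an illustrative sketch, not the actual cmem code):

    // Map a request size to its power-of-two bin; in-between sizes fall
    // to the smaller (lower) bin.
    static int
    size_bin (size_t size)
    {
        int bin = 0;
        while (size >>= 1)
            bin++;
        return bin;     // floor(log2(size))
    }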
The idea is to not search through blocks for an available allocation.
While the goal was to speed up allocation of cache lines of varying
cluster sizes, it's not enough due to fragmentation.
They take advantage of gcc's vector_size attribute and so only cross,
dot, qmul, qvmul and qrot (create rotation quaternion from two vectors)
are needed at this stage as basic (per-component) math is supported
natively by gcc.
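As a rough illustration of what that buys (dot4f is a made-up name;
vec4f_t/vec4d_t are the types described here, but this is not the actual
QF header):

    // gcc's vector_size gives per-component +, -, *, / for free; only
    // the genuinely horizontal operations need explicit code.
    typedef float  vec4f_t __attribute__ ((vector_size (16)));  // 4 floats
    typedef double vec4d_t __attribute__ ((vector_size (32)));  // 4 doubles

    static inline float
    dot4f (vec4f_t a, vec4f_t b)
    {
        vec4f_t c = a * b;          // component-wise multiply is native
        return c[0] + c[1] + c[2] + c[3];
    }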
The provided functions work on horizontal (array-of-structs) data, ie a
vec4d_t or vec4f_t represents a single vector, or traditional vector
layout. Vertical layout (struct-of-arrays) does not need any special
functions as the regular math can be used to operate on four vectors at
a time.
Functions are provided for loading a vec4 from a vec3 (4th element set
to 0) and storing a vec4 into a vec3 (discarding the 4th element).
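Something along these lines (names are placeholders; the real functions
may differ):

    typedef float vec4f_t __attribute__ ((vector_size (16)));  // as above

    // widen float[3] to vec4f_t with the 4th element zeroed
    static inline vec4f_t
    loadvec3f (const float v3[3])
    {
        return (vec4f_t) { v3[0], v3[1], v3[2], 0 };
    }

    // narrow vec4f_t back to float[3], discarding the 4th element
    static inline void
    storevec3f (float v3[3], vec4f_t v)
    {
        v3[0] = v[0];
        v3[1] = v[1];
        v3[2] = v[2];
    }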
With this, QF will require AVX2 support (needed for vec4d_t). Without
support for doubles, SSE is possible, but may not be worthwhile for
horizontal data.
Fused-multiply-add is NOT used because it alters the results between
unoptimized and optimized code, resulting in -mfma really meaning
-mfast-math-anyway. I really do not want to have to debug issues that
occur only in optimized code.
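The difference is just the extra rounding step (a sketch; assumes the
compiler doesn't contract the separate multiply-add on its own):

    // needs <math.h>: fma() rounds once, a * b + c rounds twice, so the
    // two can differ in the last bit -- which is exactly the
    // optimized-vs-unoptimized drift being avoided.
    static double
    fma_drift (double a, double b, double c)
    {
        double fused    = fma (a, b, c);    // single rounding
        double separate = a * b + c;        // two roundings
        return fused - separate;            // usually 0, sometimes 1 ulp
    }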
The static variable meant that Fog_GetColor was not thread-safe (though
multiple calls in the one thread look to be ok for now). However, this
change takes it one step closer to being more generally usable.
Patch found in an old stash.
Shaders can be built as spv files and installed into
$libdir/quakeforge/shaders or as spvc files and compiled into the
engine. Loading supports $builtin/name to access builtin shaders,
$shader/path to access external standard shaders and quake filesystem
access for all other paths.
I had forgotten that the msaa sample count was governed by the driver
(as a max) and the renderpass setup code simply took the max, which is
why 1 vs 8 caused the display to render incorrectly.
It is capable of parsing single expressions with fairly simple
operations. It currently supports ints, enums, cvars and (external) data
structs. It is also thread-safe (in theory, needs proper testing) and
the memory it uses can be mass-freed.
This was inspired by "Hoard: A Scalable Memory Allocator for
Multithreaded Applications" by Emery D. Berger, Kathryn S. McKinley,
Robert D. Blumofe, and Paul R. Wilson.
It's not anywhere near the same implementation, but it did take a few
basic concepts. The idea is twofold:
1) A pool of memory from which blocks can be allocated and then freed
en masse, and which is fairly efficient for small (4-16 byte) blocks.
2) Thread safety for use with the Vulkan renderer (and any other
multi-threaded tasks).
However, based on the Hoard paper, small allocations are cache-line
aligned. On top of that, larger allocations are page aligned.
I suspect it would help qfvis somewhat if I ever get around to tweaking
qfvis to use cmem.
The calculation fails (produces NaN) if the vectors are anti-parallel,
but works for all other combinations. I came up with this implementation
when I discovered that Unity's Quaternion.FromToRotation did not work
with very small angles. This implementation will produce a usable
quaternion below 0.00255 degrees (though it will be slightly larger than
unit). Unity's failed such that I could see KSP's skybox snap while it
rotated around my test vessel.
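For context, the usual shortest-arc construction, which shares the
anti-parallel failure mode described above (a sketch, not necessarily
the exact QF implementation):

    // needs <math.h>; quat_t here is an illustrative type
    typedef struct { float x, y, z, w; } quat_t;

    // q = (a x b, a.b + |a||b|), normalized.  When a and b are
    // anti-parallel both the vector part and the scalar part vanish, so
    // the normalization divides by zero and produces NaNs.
    static quat_t
    rotation_between (const float a[3], const float b[3])
    {
        quat_t q = {
            a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0],
            0,
        };
        float la = sqrtf (a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
        float lb = sqrtf (b[0]*b[0] + b[1]*b[1] + b[2]*b[2]);
        q.w = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + la * lb;
        float len = sqrtf (q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
        return (quat_t) { q.x/len, q.y/len, q.z/len, q.w/len };
    }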
PL_ParseDictionary itself does only one level, but it takes care of the
key-field mappings and property list item type checking leaving the
actual parsing to a helper specified by the field. That helper is free
to call PL_ParseDictionary recursively.
The first line of the parsed item is stored and can be retrieved using
PL_Line. Line numbers not stored for dictionary keys yet. Will be 0 for
any items generated by code rather than parsed from a file or string.
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (eg, using qfcc and
ruamoko programs for code-gen).
This should keep things nicely extensible, since additional data can be
stored in the data space and found using defs. This gets the compilation
units into the sym file.
The compilation unit stores the directory from which qfcc was run and
any source files mentioned. This is similar to dwarf's compilation unit.
Right now, this is the only data in the new debug space, but more might
come in the future so it seems best to treat the debug space separately
in the object files.
I decided that stopping in between function calls that are on the same
line is a good thing as it gives a chance to skip over the first but
step into the second.
This allows a debugger to do any symbol lookups and other preparations
between loading progs and the first code execution. .ctors are called as
per normal if debug_handler is not set.
In testing variable fw/precision in PR_Sprintf, I got a nasty reminder
of the limitations of the current progs ABI: passing @args to another QC
function does not work because the args list gets trampled by the
called function's locals. Thus, the need for a va_copy. It's not quite
the same as C's as it returns the destination args instead of copying
like memcpy, but it does copy the list from the source args to a
temporary buffer that is freed when the calling function returns.
When a type is aliased, the alias has two type chains: the simple type
chain with all other aliases stripped, and the full type chain. There
are still plenty of bugs in it, but having the clean type chain takes
care of the major issue that was in the previous attempt as only the
head of the type-chain needs to be skipped for type comparison.
Most of the bugs are in finding the locations where the head needs to be
skipped.
And rename prd_exit to prd_terminate (the idea is the host will
terminate the VM). This makes it possible for the debugger to pause the
VM before any code, even a builtin function, is executed. Breaks the
debugger source window, but only because it's not updating on file
change (I think).
I decided I want events for VM enter/exit but enter needs to somehow
pass the function which will be executed (even if a builtin). A generic
void * param seemed the best idea, which meant the error string could be
passed via the param instead of a "global" string in the progs struct.
Not sure when I'll use it, but as it avoids the null pointer check, it
should make for faster access to structs when reading or writing
multiple fields.
They take a pointer to a free-list used for hashlinks so the hashlink
pools can be per-thread. However, hash tables that are not updated are
always thread-safe, so this affects only updates. progs_t has been set
up such that it is easy for multiple progs within one thread to share
hashlinks.
While there was a breakpoint hook, it was only for breakpoints and more
was needed. Now there's a generic hook that is called for tracing,
breakpoints, watch points, runtime errors and VM errors, with the
"event" type passed as the first parameter and a data pointer in the
second.
This allows for the four combinations of shift and control. Not
bothering with alt because alt-f4 closes my xterm (bbkeys from the looks
of it: it grabs a bunch of Mod1-* keys).
I left the ones below 256 alone (some were necessary anyway), but above
256 they were almost entirely sequential, which rather defeats the
purpose of using an enum, especially since it makes it difficult to
expand sections.
The idea is to find the def that contains the address. Had to write my
own bsearch (well... lifted from wikipedia) because libc's is exact. The
defs are assumed to be sorted (which qfcc now ensures when it writes
progs and sym files).
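The search is a lower-bound variant rather than an exact match
(illustrative sketch; the type and field names are placeholders):

    typedef struct { unsigned start, size; } defrec_t;

    // Return the def whose [start, start+size) range contains addr, or
    // NULL.  Unlike libc's bsearch, this finds the last def whose start
    // is <= addr and then range-checks it.
    static const defrec_t *
    find_def (const defrec_t *defs, size_t count, unsigned addr)
    {
        size_t lo = 0, hi = count;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (defs[mid].start <= addr)
                lo = mid + 1;       // defs[mid] is a candidate; look right
            else
                hi = mid;
        }
        if (!lo)
            return 0;               // addr is below the first def
        const defrec_t *d = &defs[lo - 1];
        return addr < d->start + d->size ? d : 0;
    }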
Type encodings are used whenever they are available. For now, if they
are not, then everything is treated as void (which prints <void>, not
very useful). Most return statements and references to .return are now
very readable (excluding structs), and only params going through "..."
are a messy union.
The memset instructions now match the move* instructions other than the
first operand (always int). Probably breaks much, but fixed in next few
commits.
This "pushes" a temp string onto the callee's stack frame after removing
it from the caller's stack frame. This is so builtins can pass
auto-freed memory to called progs code. No checking is done, but mayhem
is likely to ensue if a string is pushed that was allocated in an
earlier frame.
PR_AllocTempBlock() works the same way as PR_SetTempString(), except
that it takes a size parameter and always allocates (never tries to
merge). This is, in a way, abusing the string system, but I needed a way
to allocate a block of progs memory that would be automatically freed
when the current frame ended. The biggest abuse is the need to cast away
the const of PR_GetString()'s return value.
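The usage pattern boils down to something like this (a hedged sketch;
the string-handle type and exact signatures are from memory and may not
match the headers):

    // allocate a frame-scoped block via the temp-string machinery; it is
    // freed automatically when the current progs frame returns, and the
    // const cast is the abuse mentioned above
    pr_string_t block = PR_AllocTempBlock (pr, size);
    char       *data = (char *) PR_GetString (pr, block);
    memcpy (data, src, size);       // src and size are placeholders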
This should speed up ruamoko code somewhat as hash table lookups have
been replaced with direct array indexing. As a bonus, support for
message forwarding has been added (though not tested).
As well as $prefix/include, of course. This fixes the problem with
external ruamoko builds failing due to keys.h and qfcc's "lockdown" on
system headers.
It proved to be too fragile in its current implementation. It broke
pointers to incomplete structs and switch enum checking, and getting it
to work for other things was overly invasive. I still want the encoding,
but need to come up with something more robust.
Other than consistency with printf(), I'm not sure why we went with the
printed size as the return value; returning the resultant string makes
much more sense, as dsprintf() (etc) can then be used as a safe va().
Other than its blocking of access to certain files, it really wasn't
that useful compared to the functions in qfs, and pointless with access
to qfs anyway.
The progs execution code will call a breakpoint handler just before
executing an instruction with the flag set. This means there's no need
for the breakpoint handler to mess with execution state or even the
instruction in order to continue past the breakpoint.
The flag being set in a progs file is invalid.