Id Software had pretty much nothing to do with the Vulkan renderer (they
still get credit for code that's heavily based on the original Quake
code, of course).
It's not used yet, and thus may have some incorrect settings, but I
decided that I will probably want it at some stage for qwaq. It's
essentially what was in the original spec, but updated for some of the
niceties added to parsing since I removed it back then. It's also in its
own file.
Just "loading" and "unloading" (both really just hints due to the
caching system), and an internal function for converting a handle to a
model pointer, but it let me test IQM loading and unloading in Vulkan.
The model system is rather clunky as it is focused around caching, so
unloading is more of a suggestion than anything, but it was good enough
for testing loading and unloading of IQM models in Vulkan.
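Conceptually, the handle conversion is just a table lookup. A
hypothetical sketch (names and the 0-means-invalid convention are
assumptions, not the actual internal function):

    // Hypothetical handle-to-model lookup: handles index a table of
    // loaded models, with 0 reserved as "invalid".
    typedef struct model_s model_t;
    typedef struct {
        int        count;
        model_t  **models;
    } handle_table_t;

    static model_t *
    handle_to_model (handle_table_t *table, int handle)
    {
        if (handle < 1 || handle > table->count) {
            return 0;
        }
        return table->models[handle - 1];
    }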
Despite the base IQM specification not supporting blend-shapes, I think
IQM will become the basis for QF's generic model representation (at
least for the more advanced renderers). After my experience with .mu
models (KSP) and Unity mesh objects (both normal and skinned), and
reviewing the IQM spec, it looks like with the addition of support for
blend-shapes, IQM is actually pretty good.
This is just the preliminary work to get standard IQM models loading in
Vulkan (seems to work, along with unloading), and the very basics into
the renderer (most likely not working: not tested yet). The rest of the
renderer seems to be unaffected, though, which is good.
The resource subsystem creates buffers, images, buffer views and image
views in a single batch operation, using a single memory object to back
all the buffers and images. I had been doing this by hand for a while,
but got tired of jumping through all those Vulkan hoops. While it's
still a little tedious to set up the arrays for QFV_CreateResource (and
they need to be kept around for QFV_DestroyResource), it really eases
calculation of memory object size and sub-resource offsets. And
destroying all the objects is just one call to QFV_DestroyResource.
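For context, this is roughly the Vulkan boilerplate being wrapped (a
minimal sketch, not the actual QFV_CreateResource implementation;
images follow the same pattern via vkGetImageMemoryRequirements and
vkBindImageMemory):

    #include <vulkan/vulkan.h>

    // Bind a set of buffers to a single VkDeviceMemory allocation at
    // computed offsets. Error checking and memory-type selection are
    // omitted, and all buffers are assumed to share a compatible
    // memory type.
    static VkDeviceMemory
    back_buffers_with_one_allocation (VkDevice device, uint32_t memory_type,
                                      VkBuffer *buffers, int num_buffers,
                                      VkDeviceSize *offsets)
    {
        VkDeviceSize size = 0;
        for (int i = 0; i < num_buffers; i++) {
            VkMemoryRequirements req;
            vkGetBufferMemoryRequirements (device, buffers[i], &req);
            // alignment is a power of two, so round up with a mask
            size = (size + req.alignment - 1) & ~(req.alignment - 1);
            offsets[i] = size;
            size += req.size;
        }
        VkMemoryAllocateInfo alloc = {
            .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
            .allocationSize = size,
            .memoryTypeIndex = memory_type,
        };
        VkDeviceMemory memory;
        vkAllocateMemory (device, &alloc, 0, &memory);
        for (int i = 0; i < num_buffers; i++) {
            vkBindBufferMemory (device, buffers[i], memory, offsets[i]);
        }
        return memory;
    }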
I might need to do something similar for other formats, but I ran into
the problem of the texture type being tex_palette instead of the
expected tex_rgba when pre-(no-)loading a tga image, resulting in
Vulkan not liking my attempt at generating mipmaps.
Having to re-figure out what values are going into the vectors got old
very fast. The comments don't help with verifying the values, but at
least I can tell at a glance where 2(xy - wz) goes and thus determine
the "orientation" of the matrix.
Defs and symbols benefit from swizzling as that's one instruction vs 2-3
for loading a scalar into a vector component by component. Constants are
ok because the result gets converted to a vector constant.
qfcc is putting two temps in the same location due to
defspace_alloc_aligned_loc returning the same address when there was a
hole caused by an earlier aligned alloc: specifically, a size-3 hole and
a size-2 allocation with alignment-2.
The destination operand must be a full four-component vector, but the
source can be smaller and small sources do not need to be aligned: the
offset of the source operand and the swizzle indices are adjusted. The
adjustments are done during final statement emission in order to avoid
confusing the data flow analyser (and that's when def offsets are known).
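One plausible shape for that adjustment (an assumption about the
details, not necessarily what qfcc does): align the source offset down
to the previous four-component boundary and shift each swizzle index by
the amount skipped.

    // Hypothetical sketch of the adjustment. Assuming scalars can be
    // unaligned, vec2s 2-aligned and vec3/vec4 4-aligned, the shifted
    // indices stay within 0..3.
    static void
    adjust_swizzle (int *src_offset, int indices[4])
    {
        int delta = *src_offset % 4;    // distance into the aligned slot
        *src_offset -= delta;           // 4-aligned source offset
        for (int i = 0; i < 4; i++) {
            indices[i] += delta;        // compensate in the swizzle
        }
    }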
This reverts commit 2904c619c1.
In order to support swizzle operations, I need to be able to alias defs
to larger types (eg, float to vec4), but alias_def rightly won't allow
this. However, as the plan is to do this in the final steps before
emitting the instruction, I plan on creating an alias to a float then
adjusting the type in the alias, but to do so without extra shenanigans,
I need alias_def to allow aliases to the same type. As a fringe benefit,
it makes the code agree with the comment in def.h :P
This allows the fuzzy bsearch used to find a def by address to work
properly (ie, find the actual def instead of giving some other def +
offset). Makes for a much more readable instruction stream.
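For illustration, a generic version of the idea (not qfcc's actual
code): the search succeeds when the address falls anywhere inside a
def's range rather than only on an exact offset match.

    // Generic fuzzy binary search: find the def whose
    // [offset, offset + size) range contains the given address.
    // Assumes defs are sorted by offset and do not overlap.
    typedef struct {
        int offset;
        int size;
    } defrange_t;

    static const defrange_t *
    find_def (const defrange_t *defs, int count, int addr)
    {
        int lo = 0, hi = count - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (addr < defs[mid].offset) {
                hi = mid - 1;
            } else if (addr >= defs[mid].offset + defs[mid].size) {
                lo = mid + 1;
            } else {
                return &defs[mid];      // addr falls inside this def
            }
        }
        return 0;                       // no containing def
    }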
The scene id is in the lower 32 bits for all objects (the upper 32 bits
are 0 for actual scene objects) and entity/transform ids are in the
upper 32 bits. Saves having to pass around a second parameter in progs
code.
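A sketch of the packing as described (function and field names here are
illustrative, not the actual progs API):

    #include <stdint.h>

    // object_id is 0 for the scene itself, an entity/transform id
    // otherwise. Names are illustrative only.
    static uint64_t
    make_id (uint32_t scene_id, uint32_t object_id)
    {
        return ((uint64_t) object_id << 32) | scene_id;
    }

    static uint32_t id_scene (uint64_t id)  { return id & 0xffffffff; }
    static uint32_t id_object (uint64_t id) { return id >> 32; }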
This came up when investigating an internal error from the line above.
It turned out the error was correct (problem with converting scalars to
vectors), but the break was not.
Currently, only vector/vec3 and quaternion/vec4 can be printed anyway,
but I plan on making explicit format strings for the types, so there
should be no need to promote any vector types (and really, any hidden
promotion is a bit of a pain, but standards...).
While the code would handle int vector types, there aren't any such
instructions, and the expression code shouldn't generate them, but all
float (32 and 64 bit) vector types do have a dot product instruction, so
check width rather than just vector/quaternion.
This fixes an error that's been lurking for over two years (since I made
parameters unlimited internally). The problem was the array was being
allocated on the stack and a simple struct copy was used to store the
type, resulting in a dangling pointer onto the stack. I'm surprised it
didn't cause more problems.
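The general shape of the bug, reconstructed for illustration (not the
actual qfcc code):

    // The struct copy keeps a pointer to a stack-allocated array,
    // which dangles once the function returns.
    typedef struct {
        int        num_params;
        const int *param_types;     // points at caller-provided storage
    } functype_t;

    static functype_t
    build_type (int num_params)
    {
        int params[num_params];     // stack storage
        for (int i = 0; i < num_params; i++) {
            params[i] = i;
        }
        functype_t type = { num_params, params };
        return type;                // type.param_types now dangles
    }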
This allows all the tests to build and pass. I'll need to add tests to
ensure warnings happen when they should and that all vec operations are
correct (ouch, that'll be a lot of work), but vectors and quaternions
are working again.
Vector expressions no longer auto-widen due to the new vector types (I
might add such later, but for now this lets the tests try to build
(minus actual fixes in qfcc)).
With this, all vector widths and types are supported: 2, 3, 4 and int,
uint, long, ulong, float and double, along with support for suffixes to
make the type explicit: '1 2'd specifies a dvec2 constant, while '1 2 3'u
is a uivec3 constant. Default types are double (dvec2, dvec3, dvec4) for
literals with float-type components, and int (ivec2...) for those with
integer-type components.
Having three very similar sets of code for outputting values (just for
debug purposes even) got to be a tad annoying. Now there's only one, and
in the right place, too (with the other value code).
I'd created new_value_expr some time ago, but never used it...
Also, replace convert_* with cast_expr to the appropriate type (removes
a pile of value check and create code).
Use with quaternions and vectors is a little broken in that
vec4/quaternion and vec3/vector are not the same types (by design) and
thus a cast is needed (not what I want, though). However, creating
vectors (that happen to be int due to int constants) does seem to be
working nicely otherwise.
Nicely, I was able to reuse the generated conversion code used by the
progs engine to do the work in qfcc, just needed appropriate definitions
for the operand macros, and to set up the conversion code. Helped
greatly by the new value load/store functions.
pr_type_t now contains only the one "value" field, and all the access
macros now use their PACKED variant for base access, making access to
larger types more consistent with the smaller types.
The goal is to have somewhere to put entities so they can be rendered. I
suspect I could have used a bsp tree to partition space for frustum
culling, but it's probably not worth it at this stage (but shouldn't
require any changes to the engine: just the model).
Vulkan doesn't appreciate the empty buffers that result from the model
not having any textures or surfaces that can be rendered (rightfully so,
for such a bare-metal API).
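A minimal guard along those lines (illustrative only; Vulkan requires
VkBufferCreateInfo.size to be greater than 0):

    #include <vulkan/vulkan.h>

    // Skip buffer creation entirely when the model has nothing to put
    // in it, since VkBufferCreateInfo.size must be greater than 0.
    static VkBuffer
    create_vertex_buffer (VkDevice device, VkDeviceSize size)
    {
        VkBuffer buffer = VK_NULL_HANDLE;
        if (size > 0) {
            VkBufferCreateInfo info = {
                .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
                .size = size,
                .usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT,
                .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
            };
            vkCreateBuffer (device, &info, 0, &buffer);
        }
        return buffer;
    }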