The dual operator is implemented as the Hodge dual, which is probably reasonable until
people complain. Both ⋆ and ! are supported, though the former is a
little hard to see in Consola.
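For example, a rough sketch (using the @algebra machinery described further
down; the scope-block syntax here is my reading of that description, not
verified):

    typedef @algebra(float(3,0,1)) PGA;
    @algebra (PGA) {
        auto p = e123;    // a trivector basis blade
        auto a = ⋆p;      // Hodge dual
        auto b = !p;      // same operation, easier to see in most fonts
    }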
The singleton alias resulted in the adjusted swizzles being corrupted
when they were for the same def. Other than adding properly sized
swizzles (planned), the simplest solution is to (separately) allow
aliases that stick out from the def.
Currently this works only via the group mask (which is really horrible
to work with: it requires too much knowledge of implementation details,
but it does the job for testing). Still, it got some basics working.
Also, correct the handling of scalars in dot and wedge products: it
turns out s.v and s^v both simply scale v. However, it seems the CSE
code loses things sometimes.
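That is, for scalar s and vector v, both products reduce to scaling
(a sketch; operator spellings taken from the sentence above):

    auto s = 2;
    auto v = 3*e1;
    auto a = s . v;   // dot product with a scalar: scales v
    auto b = s ^ v;   // wedge with a scalar: also scales v
    // a == b == 6*e1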
This makes working with them much easier, and the type system reflects
what's in the multi-vector. Unfortunately, that does mean that large
algebras will wind up having a LOT of types, but it allows for efficient
storage of sparse multi-vectors:
auto v = 4*(e1 + e032 + e123);
results in:
0005 0213 1:0008<00000008>4:void 0:0000<00000000>?:invalid
0:0044<00000044>4:void assign (<void>), v
0006 0213 1:000c<0000000c>4:void 0:0000<00000000>?:invalid
0:0048<00000048>4:void assign (<void>), {v + 4}
Where the two source vectors are:
44:1 0 .imm float:18e [4, 0, 0, 0]
48:1 0 .imm float:1aa [4, 0, 0, 4]
They just happen to be adjacent, but don't need to be.
This gets only some very basics working:
* Algebra (multi-vector) types: eg @algebra(float(3,0,1)).
* Algebra scopes (using either the above or @algebra(TYPE_NAME) where
the above was used in a typedef).
* Basis blades (eg, e12) done via procedural symbols that evaluate to
suitable constants based on the basis group for the blade.
* Addition and subtraction of multi-vectors (only partially tested).
* Assignment of sub-algebra multi-vectors to full-algebra multi-vectors
(missing elements zeroed); see the sketch below.
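A sketch pulling the above together (the scope form is illustrative;
exact syntax may differ):

    typedef @algebra(float(3,0,1)) PGA;
    @algebra (PGA) {
        auto a = e12 + e032;   // basis blades via procedural symbols
        auto b = a - 2*e123;   // addition/subtraction of multi-vectors
        PGA full = b;          // sub-algebra -> full algebra, rest zeroed
    }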
There's still much work to be done, but I thought it time to get
something into git.
If a symbol is not found in the table and a callback is provided, the
callback will be used to check for a valid procedural symbol before
moving on to the next table in the chain. This allows for both tight
scoping of the procedural symbols and caching.
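A hypothetical sketch of that lookup order (names and signatures are
illustrative, not the actual qfcc API):

    symbol_t *
    lookup (symtab_t *tab, const char *name)
    {
        for (; tab; tab = tab->parent) {
            symbol_t   *sym = Hash_Find (tab->tab, name);
            if (!sym && tab->procsymbol)
                // the callback may also cache the symbol in this table
                sym = tab->procsymbol (name, tab->procsymbol_data);
            if (sym)
                return sym;
        }
        return 0;
    }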
Due to the joys of pointers and the like, it's a bit of a bolt-on for
now, but it works nicely for basic math ops, which is what I wanted, and
the code is generated from the expression.
I never liked it, but with C2x coming out, it's best to handle bools
properly. I haven't gone through all the uses of int as bool (I'll leave
that for fixing when I encounter them), but this gets QF working with
C2x (really, gnu2x, because of raw strings).
The warning flag check worked too well: it enabled the warning and
autoconf's default main wanted the const attribute. The bug has been
floating around for a while, it seems.
I'm not certain this is correct, but it seems to me that du-chains are
the same information as ud-chains, but from the defining statement's
point of view instead of that of the using statement.
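For example, in

    x = 1;        // def d1 of x
    y = x + 2;    // use u1 of x
    z = x * 3;    // use u2 of x

the ud-chains give u1 -> {d1} and u2 -> {d1}, while the du-chain for d1
is {u1, u2}: the same reaching-definition facts, indexed the other way
around.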
I think the current build_element_chain implementation does a reasonable
job, but I'm in the process of getting designated initializers working,
thus it will become important to ensure uninitialized members get
initialized.
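For example (standard C semantics, which is the behavior to match):

    typedef struct { int x; int y; int z; } point_t;
    point_t    p = { .y = 2 };   // x and z must still be zero-initialized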
I never liked the various hacks I had come up with for representing
resource handles in Ruamoko. Structs with an int were awkward to test,
pointers and ints could be modified, etc etc. The new @handle keyword
(the @ keeps `handle` itself free for use as an identifier) works just
like struct, union and enum in syntax, but creates an opaque type
suitable for a 32-bit handle. The backing type is a function so v6 progs
can use it without modification (all the necessary opcodes exist) and no
changes were needed for
type-checking in binary expressions, but only assignment and comparisons
are supported, and (of course) nil. Tested using cbuf_t and QFile: seems
to work as desired.
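A sketch of intended usage (the declarations mirror struct-style syntax
per the above; exact spelling unverified):

    @handle cbuf_h;            // opaque 32-bit handle type (tag, like struct)
    @handle cbuf_h a = nil;    // nil works
    @handle cbuf_h b = a;      // assignment works
    if (a == b) a = nil;       // comparisons work
    // a + 1;                  // error: the type is opaque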
I had considered 64-bit handles, but really, if more than 4G resource
objects are needed, I'm not sure QF can handle the game. However, that
limit is per resource manager, not total.
The first use will be pointer analysis for function arguments: when the
argument points to an array, the array is marked as live. I'm sure
there'll be plenty of other uses, though.
Needed for proper analysis (ud-chains etc). Of course, it was then
necessary to remove the parameter defs from the uninitialized defs.
Also, plug a couple of memory leaks (forgot to free some temporary
sets).
I had messed up the handling of declarators for combinations of pointer,
function, and array: the pointer would get lost (and presumably arrays
of functions etc). I think I had gotten confused and thought things were
a tree rather than a simple list, but Holub set me straight once again
(I've never regretted getting that book). Once I understood that, it was
just a matter of finding all the places that needed to be fixed. Nicely,
most of the duplicated code has been refactored and should be easier to
debug in the future.
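The sort of declarations that exercise this (standard C):

    int *f (void);             // function returning pointer to int
    int (*pf) (void);          // pointer to function returning int
    int *(*table[4]) (void);   // array of pointers to functions returning int *

Reading the declarator as a simple list of pointer/function/array
modifiers applied in order, rather than as a tree, is what makes these
come out right.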
The type system rewrite had lost some of the checks for function fields.
This puts the actual code in the one place and covers parameters as well
as globals.
The symtab code itself cares only about global/not global for the size
of the hash table, but other code can use the symtab type for various
checks (eg, parameter shadowing).
Along with QuakeC's, of course. This fixes the typeredef2 test (a lot
of work for one little syntax error). Unfortunately, it came at the cost
of requiring `>>` in front of state expressions on C-style functions
(QuakeC-style functions are unaffected). Also, there are now two
shift/reduce conflicts with structs and unions (but these same conflicts
are in gcc 3.4).
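For example (frame and function names are illustrative; the >> placement
follows the description above):

    // QuakeC-style: unchanged
    void () walk1 = [$walk1, walk2] { };
    // C-style: the state expression must now be introduced by >>
    void walk2 (void) >> [$walk2, walk1] { }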
This has highlighted the need for having the equivalent of the
expression tree for the declaration system as there are now several
hacks to deal with the separation of types and declarators. But that's a
job for another week.
The grammar constructs for declarations come from gcc 3.4's parser (I
think it's the last version of gcc that used bison. Also, 3.4 is still
GPL 2, so no chance of an issue there).
While swizzle does work, it requires the source to be properly aligned
and thus is not really the best choice. The extend instruction has no
alignment requirements (at all) and thus is much better suited to
converting a scalar to a vector type.
Fixes #30
As a class's ivars are built up by inheritance, but with only that
class's ivars in the symbol table, it is necessary to include an offset
based on the super class's ivars in order to ensure alignments are
respected. This is achieved via the new `base` parameter to
build_struct(), which is used to offset the current size while
calculating the aligned offset of the symbols. The parameter is ignored
for unions, as they always start at 0. The ivars for the current class
still have a base offset of 0 until they are actually added to the
class.
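A sketch of the computation (ALIGN_UP is an illustrative helper, not the
actual code):

    // smallest multiple of align >= x, for power-of-two align
    #define ALIGN_UP(x, align)  (((x) + (align) - 1) & ~((align) - 1))
    // base is the size of the super class's ivars, all in words:
    offset = ALIGN_UP (base + size, alignment) - base;
    size = offset + symbol_size;

The offset stays relative to the class's own ivars (base 0) while
base + offset respects the member's alignment.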
Fixes #29
The alignment is specified as a power of 2 (ie, actual alignment = 1 <<
alignment) allowing old object files to be compatible (as their
alignment is 0). This is necessary (in part for #30) as it turned
out even global vectors were not aligned correctly.
Currently, only data spaces even vaguely respect alignment. This may
need to be fixed in the future.
Most were pretty easy and fairly logical, but gib's regex was a bit of a
pain until I figured out the real problem was the conditional
assignments.
However, libs/gamecode/test/test-conv4 fails when optimizing due to gcc
using vcvttps2dq (which is nice, actually) for vector forms, but not the
scalar equivalent other times. I haven't decided what to do with the
test (I might abandon it as it does seem to rely on undefined behavior).
The destination operand must be a full four component vector, but the
source can be smaller and small sources do not need to be aligned: the
offset of the source operand and the swizzle indices are adjusted. The
adjustments are done during final statement emission in order to avoid
confusing the data flow analyser (and that's when def offsets are known).
Having three very similar sets of code for outputting values (just for
debug purposes even) got to be a tad annoying. Now there's only one, and
in the right place, too (with the other value code).
I'd created new_value_expr some time ago, but never used it...
Also, replace the convert_* functions with cast_expr to the appropriate
type (removes a pile of value check-and-create code).
Use with quaternions and vectors is a little broken in that
vec4/quaternion and vec3/vector are not the same types (by design) and
thus a cast is needed (not what I want, though). However, creating
vectors (that happen to be int due to int constants) does seem to be
working nicely otherwise.
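For example (a sketch of the current state; variable names are
illustrative):

    quaternion  q;
    vec4        v = (vec4) q;    // explicit cast required: distinct types
    vector      v3;
    vec3        u = (vec3) v3;   // likewise for vector vs vec3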
Nicely, I was able to reuse the generated conversion code used by the
progs engine to do the work in qfcc, just needed appropriate definitions
for the operand macros, and to set up the conversion code. Helped
greatly by the new value load/store functions.
pr_type_t now contains only the one "value" field, and all the access
macros now use their PACKED variant for base access, making access to
larger types more consistent with the smaller types.
In working with vectors and matrices while testing the scene wrappers, I
found that there was a fair bit of confusion about how large something
could be. Return values can be up to 32 words (but qfcc wasn't aware of
that), parameters were limited to 4 words still (and possibly should be
for varargs), and temp defs were limited to 8 words (1 lvec4). Temps are
used for handling return values (at least when not optimizing) and thus
must be capable of holding a return value, and passing large arguments
through *formal* parameters should be allowed. It seems reasonable to
limit parameter sizes to return value sizes.
A temp and a move are still used for large return values (4x4 matrix),
but that's an optimization issue: the code itself is at least correct.